Data Reduction Functions for the Langley 14- by 22-Foot Subsonic Tunnel
NASA Technical Reports Server (NTRS)
Boney, Andy D.
2014-01-01
The Langley 14- by 22-Foot Subsonic Tunnel's data reduction software uses six major functions to process the acquired data. These functions compute engineering units, tunnel parameters, flowmeter quantities, jet exhaust measurements, balance loads/model attitudes, and model/wall pressures. The input (required) variables, the output (computed) variables, and the equations and/or subfunctions associated with each major function are discussed.
Estuaries in the Pacific Northwest have major intraannual and within estuary variation in sources and magnitudes of nutrient inputs. To develop an approach for setting nutrient criteria for these systems, we conducted a case study for Yaquina Bay, OR based on a synthesis of resea...
A Longitudinal Study of Occupational Aspirations and Attainments of Iowa Young Adults.
ERIC Educational Resources Information Center
Yoesting, Dean R.; And Others
The causal linkage between socioeconomic status, occupational and educational aspiration, and attainment was examined in this attempt to test an existing theoretical model which used socioeconomic status as a major input variable, with significant other influence as a crucial intervening variable between socioeconomic status and aspiration. The…
Aitkenhead, Matt J; Black, Helaina I J
2018-02-01
Using the International Centre for Research in Agroforestry-International Soil Reference and Information Centre (ICRAF-ISRIC) global soil spectroscopy database, models were developed to estimate a number of soil variables using different input data types. These input types included: (1) site data only; (2) visible-near-infrared (Vis-NIR) diffuse reflectance spectroscopy only; (3) combined site and Vis-NIR data; (4) red-green-blue (RGB) color data only; and (5) combined site and RGB color data. The models produced variable estimation accuracy, with RGB only being generally worst and spectroscopy plus site being best. However, we showed that for certain variables, estimation accuracy levels achieved with the "site plus RGB input data" were sufficiently good to provide useful estimates (r² > 0.7). These included major elements (Ca, Si, Al, Fe), organic carbon, and cation exchange capacity. Estimates for bulk density, carbon-to-nitrogen ratio (C/N), and P were moderately good, but K was not well estimated using this model type. For the "spectra plus site" model, many more variables were well estimated, including many that are important indicators for agricultural productivity and soil health. Sum of cations, electrical conductivity, Si, Ca, and Al oxides, and C/N ratio were estimated using this approach with r² values > 0.9. This work provides a mechanism for identifying the cost-effectiveness of using different model input data, with associated costs, for estimating soil variables to required levels of accuracy.
ERIC Educational Resources Information Center
Chai, Ching Sing; Wong, Lung-Hsiang; Sim, Seok Hwa; Deng, Feng
2012-01-01
Computer-based writing is already a norm to a large extent in social communication for any major language around the world. From this perspective, it would be pedagogically sound for students to master the Chinese input system as early as possible. This poses some challenges to students in Singapore, most of which are learning Chinese as a second…
Inputs and Student Achievement: An Analysis of Latina/o-Serving Urban Elementary Schools
ERIC Educational Resources Information Center
Heilig, Julian Vasquez; Williams, Amy; Jez, Su Jin
2010-01-01
One of the most pressing problems in the United States is improving student academic performance, especially the nation's burgeoning Latina/o student population. There is a dearth of research on variables associated with student achievement in Latina/o majority schools in urban districts. As the majority of Latina/o students are segregated into…
Jackson, B Scott
2004-10-01
Many different types of integrate-and-fire models have been designed in order to explain how it is possible for a cortical neuron to integrate over many independent inputs while still producing highly variable spike trains. Within this context, the variability of spike trains has been almost exclusively measured using the coefficient of variation of interspike intervals. However, another important statistical property that has been found in cortical spike trains and is closely associated with their high firing variability is long-range dependence. We investigate the conditions, if any, under which such models produce output spike trains with both interspike-interval variability and long-range dependence similar to those that have previously been measured from actual cortical neurons. We first show analytically that a large class of high-variability integrate-and-fire models is incapable of producing such outputs based on the fact that their output spike trains are always mathematically equivalent to renewal processes. This class of models subsumes a majority of previously published models, including those that use excitation-inhibition balance, correlated inputs, partial reset, or nonlinear leakage to produce outputs with high variability. Next, we study integrate-and-fire models that have (non-Poissonian) renewal point process inputs instead of the Poisson point process inputs used in the preceding class of models. The confluence of our analytical and simulation results implies that the renewal-input model is capable of producing high variability and long-range dependence comparable to that seen in spike trains recorded from cortical neurons, but only if the interspike intervals of the inputs have infinite variance, a physiologically unrealistic condition. Finally, we suggest a new integrate-and-fire model that does not suffer any of the previously mentioned shortcomings.
By analyzing simulation results for this model, we show that it is capable of producing output spike trains with interspike-interval variability and long-range dependence that match empirical data from cortical spike trains. This model is similar to the other models in this study, except that its inputs are fractional-Gaussian-noise-driven Poisson processes rather than renewal point processes. In addition to this model's success in producing realistic output spike trains, its inputs have long-range dependence similar to that found in most subcortical neurons in sensory pathways, including the inputs to cortex. Analysis of output spike trains from simulations of this model also shows that a tight balance between the amounts of excitation and inhibition at the inputs to cortical neurons is not necessary for high interspike-interval variability at their outputs. Furthermore, in our analysis of this model, we show that the superposition of many fractional-Gaussian-noise-driven Poisson processes does not approximate a Poisson process, which challenges the common assumption that the total effect of a large number of inputs on a neuron is well represented by a Poisson process.
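The interspike-interval (ISI) statistic central to this abstract, the coefficient of variation, can be illustrated with a minimal leaky integrate-and-fire simulation. All parameters below are hypothetical and chosen only to show how the CV is computed, not to reproduce any of the study's models:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters (not from the study).
dt = 0.1           # time step (ms)
tau = 20.0         # membrane time constant (ms)
v_th, v_reset = 1.0, 0.0
rate = 2.0         # Poisson input events per ms
w = 0.03           # depolarization per input event

v = 0.0
spike_times = []
for step in range(200_000):            # 20 s of simulated time
    n_in = rng.poisson(rate * dt)      # Poisson-distributed input drive
    v += dt * (-v / tau) + w * n_in    # leaky integration
    if v >= v_th:
        spike_times.append(step * dt)
        v = v_reset

isi = np.diff(spike_times)
cv = isi.std() / isi.mean()            # coefficient of variation of ISIs
print(f"n_spikes={len(spike_times)}, CV={cv:.2f}")
```

A mean-driven model like this one produces a fairly regular train (low CV); the abstract's point is that matching the high CV and long-range dependence of real cortical data requires specific input statistics, which is what the balanced, renewal-input, and fractional-Gaussian-noise variants attempt.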
Petroleum Release Assessment and Impacts of Weather Extremes
Contaminated ground water and vapor intrusion are two major exposure pathways of concern at petroleum release sites. EPA has recently developed a model for petroleum vapor intrusion, called PVIScreen, which incorporates variability and uncertainty in input parameters. This ap...
Decina, Stephen M; Templer, Pamela H; Hutyra, Lucy R; Gately, Conor K; Rao, Preeti
2017-12-31
Atmospheric deposition of nitrogen (N) is a major input of N to the biosphere and is elevated beyond preindustrial levels throughout many ecosystems. Deposition monitoring networks in the United States generally avoid urban areas in order to capture regional patterns of N deposition, and studies measuring N deposition in cities usually include only one or two urban sites in an urban-rural comparison or as an anchor along an urban-to-rural gradient. Describing patterns and drivers of atmospheric N inputs is crucial for understanding the effects of N deposition; however, little is known about the variability and drivers of atmospheric N inputs or their effects on soil biogeochemistry within urban ecosystems. We measured rates of canopy throughfall N as a measure of atmospheric N inputs, as well as soil net N mineralization and nitrification, soil solution N, and soil respiration at 15 sites across the greater Boston, Massachusetts area. Rates of throughfall N are 8.70 ± 0.68 kg N ha⁻¹ yr⁻¹, vary 3.5-fold across sites, and are positively correlated with rates of local vehicle N emissions. Ammonium (NH₄⁺) composes 69.9 ± 2.2% of inorganic throughfall N inputs and is highest in late spring, suggesting a contribution from local fertilizer inputs. Soil solution NO₃⁻ is positively correlated with throughfall NO₃⁻ inputs. In contrast, soil solution NH₄⁺, net N mineralization, nitrification, and soil respiration are not correlated with rates of throughfall N inputs. Rather, these processes are correlated with soil properties such as soil organic matter. Our results demonstrate high variability in rates of urban throughfall N inputs, correlation of throughfall N inputs with local vehicle N emissions, and a decoupling of urban soil biogeochemistry and throughfall N inputs.
Whiteway, Matthew R; Butts, Daniel A
2017-03-01
The activity of sensory cortical neurons is not only driven by external stimuli but also shaped by other sources of input to the cortex. Unlike external stimuli, these other sources of input are challenging to experimentally control, or even observe, and as a result contribute to variability of neural responses to sensory stimuli. However, such sources of input are likely not "noise" and may play an integral role in sensory cortex function. Here we introduce the rectified latent variable model (RLVM) in order to identify these sources of input using simultaneously recorded cortical neuron populations. The RLVM is novel in that it employs nonnegative (rectified) latent variables and is much less restrictive in the mathematical constraints on solutions because of the use of an autoencoder neural network to initialize model parameters. We show that the RLVM outperforms principal component analysis, factor analysis, and independent component analysis, using simulated data across a range of conditions. We then apply this model to two-photon imaging of hundreds of simultaneously recorded neurons in mouse primary somatosensory cortex during a tactile discrimination task. Across many experiments, the RLVM identifies latent variables related to both the tactile stimulation as well as nonstimulus aspects of the behavioral task, with a majority of activity explained by the latter. These results suggest that properly identifying such latent variables is necessary for a full understanding of sensory cortical function and demonstrate novel methods for leveraging large population recordings to this end. NEW & NOTEWORTHY The rapid development of neural recording technologies presents new opportunities for understanding patterns of activity across neural populations. Here we show how a latent variable model with appropriate nonlinear form can be used to identify sources of input to a neural population and infer their time courses. 
Furthermore, we demonstrate how these sources are related to behavioral contexts outside of direct experimental control.
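The RLVM itself is not sketched here, but its core idea of nonnegative (rectified) latent variables can be illustrated with an off-the-shelf nonnegative matrix factorization on simulated population activity. The data, dimensions, and parameters below are invented for illustration and are not the RLVM's autoencoder-based fitting procedure:

```python
import numpy as np
from sklearn.decomposition import NMF, PCA

rng = np.random.default_rng(1)

# Simulated activity: two nonnegative latent sources mixed into 50 "neurons".
T, N, K = 500, 50, 2
latents = rng.gamma(shape=2.0, scale=1.0, size=(T, K))   # rectified sources
mixing = rng.uniform(0, 1, size=(K, N))
activity = latents @ mixing + 0.1 * rng.random((T, N))

# Nonnegative factorization keeps inferred latents >= 0 by construction.
nmf = NMF(n_components=K, init="nndsvda", max_iter=500, random_state=0)
Z = nmf.fit_transform(activity)
print("all NMF latents nonnegative:", bool((Z >= 0).all()))

# PCA scores, by contrast, are centered and take negative values,
# which is hard to interpret as a firing-rate-like source.
pca_scores = PCA(n_components=K).fit_transform(activity)
print("PCA scores go negative:", bool((pca_scores < 0).any()))
```

The nonnegativity constraint is one reason rectified latent variables can be more interpretable than PCA components when the latent sources represent input drives that cannot be negative.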
Tainio, Marko; Tuomisto, Jouni T; Hänninen, Otto; Ruuskanen, Juhani; Jantunen, Matti J; Pekkanen, Juha
2007-01-01
Background The estimation of health impacts often involves uncertain input variables and assumptions which have to be incorporated into the model structure. These uncertainties may have significant effects on the results obtained with the model and, thus, on decision making. Fine particles (PM2.5) are believed to cause major health impacts, and, consequently, uncertainties in their health impact assessment have clear relevance to policy-making. We studied the effects of various uncertain input variables by building a life-table model for fine particles. Methods Life-expectancy of the Helsinki metropolitan area population and the change in life-expectancy due to fine particle exposures were predicted using a life-table model. A number of parameter and model uncertainties were estimated. Sensitivity analysis for input variables was performed by calculating rank-order correlations between input and output variables. The studied model uncertainties were (i) plausibility of mortality outcomes and (ii) lag, and the parameter uncertainties were (iii) exposure-response coefficients for different mortality outcomes and (iv) exposure estimates for different age groups. The monetary value of the years-of-life-lost and the relative importance of the uncertainties related to monetary valuation were predicted to compare the relative importance of the monetary valuation on the health effect uncertainties. Results The magnitude of the health effects costs depended mostly on the discount rate, the exposure-response coefficient, and the plausibility of cardiopulmonary mortality. Other mortality outcomes (lung cancer, other non-accidental, and infant mortality) and lag had only minor impact on the output. The results highlight the importance of the uncertainties associated with cardiopulmonary mortality in fine particle impact assessment when compared with other uncertainties.
Conclusion When estimating life-expectancy, the estimates used for cardiopulmonary exposure-response coefficient, discount rate, and plausibility require careful assessment, while complicated lag estimates can be omitted without this having any major effect on the results. PMID:17714598
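The sensitivity analysis described in the Methods, rank-order correlations between sampled inputs and model output, can be sketched as follows. The toy output function and all input distributions are invented stand-ins for the life-table model, not the study's actual parameters:

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n = 5000

# Hypothetical Monte Carlo input samples (distributions are illustrative only).
discount_rate = rng.uniform(0.0, 0.05, n)
exposure_resp = rng.normal(0.006, 0.002, n)   # mortality coefficient
plausibility = rng.uniform(0.5, 1.0, n)       # weight on the outcome
lag_years = rng.integers(0, 15, n)

# Toy stand-in for the model output: discounted cost of life-years lost.
cost = plausibility * exposure_resp * 1e6 / (1 + discount_rate) ** lag_years

# Rank-order (Spearman) correlation of each input with the output
# indicates how strongly that input drives output uncertainty.
for name, x in [("discount_rate", discount_rate),
                ("exposure_resp", exposure_resp),
                ("plausibility", plausibility),
                ("lag_years", lag_years)]:
    rho, _ = spearmanr(x, cost)
    print(f"{name:14s} rank correlation: {rho:+.2f}")
```

Inputs with rank correlations near zero contribute little to output uncertainty, which is the kind of evidence the authors use to argue that complicated lag estimates can be omitted.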
New control concepts for uncertain water resources systems: 1. Theory
NASA Astrophysics Data System (ADS)
Georgakakos, Aris P.; Yao, Huaming
1993-06-01
A major complicating factor in water resources systems management is handling unknown inputs. Stochastic optimization provides a sound mathematical framework but requires that enough data exist to develop statistical input representations. In cases where data records are insufficient (e.g., extreme events) or atypical of future input realizations, stochastic methods are inadequate. This article presents a control approach where input variables are only expected to belong in certain sets. The objective is to determine sets of admissible control actions guaranteeing that the system will remain within desirable bounds. The solution is based on dynamic programming and derived for the case where all sets are convex polyhedra. A companion paper (Yao and Georgakakos, this issue) addresses specific applications and problems in relation to reservoir system management.
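The idea of admissible control sets under set-bounded inputs can be shown in its simplest one-reservoir, one-step form. The mass-balance model and every number below are hypothetical; the full method generalizes this to multistep dynamic programming over convex polyhedra:

```python
# One-step admissible-control set for a single reservoir with
# storage dynamics s_next = s + u + w, where the inflow w is only
# known to lie in a set [w_min, w_max] (all numbers hypothetical).
def admissible_controls(s, s_bounds, u_bounds, w_bounds):
    s_min, s_max = s_bounds
    u_min, u_max = u_bounds
    w_min, w_max = w_bounds
    # Guarantee s_next stays in bounds for EVERY inflow in the set:
    lo = max(u_min, s_min - s - w_min)   # binding when inflow is lowest
    hi = min(u_max, s_max - s - w_max)   # binding when inflow is highest
    return (lo, hi) if lo <= hi else None  # None: no guaranteeing control

print(admissible_controls(s=50, s_bounds=(20, 100),
                          u_bounds=(-30, 30), w_bounds=(5, 25)))  # (-30, 25)
```

Any control in the returned interval keeps the next storage within bounds no matter which inflow in the set occurs; an empty set (None) signals that the current state cannot be guaranteed safe, which is the kind of information this control approach is designed to surface.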
NITRATE VARIABILITY ALONG THE OREGON COAST: ESTUARINE-COASTAL EXCHANGE
Coastal upwelling along the Eastern Pacific provides a major source of nutrients to nearby bays and estuaries during the summer months. To quantify the coastal ocean nitrogen input to Yaquina Bay, Oregon, nitrate concentrations were measured hourly from a moored sensor during sum...
Monitoring landscape influence on nearshore condition
A major source of stress to the Great Lakes comes from tributary and landscape run-off. The large number of watersheds and the disparate landuse within them create variability in the tributary input along the extent of the nearshore. Identifying the local or regional response t...
Data Processing Aspects of MEDLARS
Austin, Charles J.
1964-01-01
The speed and volume requirements of MEDLARS necessitate the use of high-speed data processing equipment, including paper-tape typewriters, a digital computer, and a special device for producing photo-composed output. Input to the system is of three types: variable source data, including citations from the literature and search requests; changes to such master files as the medical subject headings list and the journal record file; and operating instructions such as computer programs and procedures for machine operators. MEDLARS builds two major stores of data on magnetic tape. The Processed Citation File includes bibliographic citations in expanded form for high-quality printing at periodic intervals. The Compressed Citation File is a coded, time-sequential citation store which is used for high-speed searching against demand request input. Major design considerations include converting variable-length, alphanumeric data to mechanical form quickly and accurately; serial searching by the computer within a reasonable period of time; high-speed printing that must be of graphic quality; and efficient maintenance of various complex computer files. PMID:14119287
Nonlinear Dynamic Models in Advanced Life Support
NASA Technical Reports Server (NTRS)
Jones, Harry
2002-01-01
To facilitate analysis, ALS systems are often assumed to be linear and time invariant, but they usually have important nonlinear and dynamic aspects. Nonlinear dynamic behavior can be caused by time varying inputs, changes in system parameters, nonlinear system functions, closed loop feedback delays, and limits on buffer storage or processing rates. Dynamic models are usually cataloged according to the number of state variables. The simplest dynamic models are linear, using only integration, multiplication, addition, and subtraction of the state variables. A general linear model with only two state variables can produce all the possible dynamic behavior of linear systems with many state variables, including stability, oscillation, or exponential growth and decay. Linear systems can be described using mathematical analysis. Nonlinear dynamics can be fully explored only by computer simulations of models. Unexpected behavior is produced by simple models having only two or three state variables with simple mathematical relations between them. Closed loop feedback delays are a major source of system instability. Exceeding limits on buffer storage or processing rates forces systems to change operating mode. Different equilibrium points may be reached from different initial conditions. Instead of one stable equilibrium point, the system may have several equilibrium points, oscillate at different frequencies, or even behave chaotically, depending on the system inputs and initial conditions. The frequency spectrum of an output oscillation may contain harmonics and the sums and differences of input frequencies, but it may also contain a stable limit cycle oscillation not related to input frequencies. We must investigate the nonlinear dynamic aspects of advanced life support systems to understand and counter undesirable behavior.
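The claim that a two-state linear model already exhibits the full range of linear dynamic behaviors can be checked from the eigenvalues of the system matrix for dx/dt = A x: a positive real part means growth, negative means decay, and a nonzero imaginary part means oscillation. The matrices below are illustrative examples, not ALS system models:

```python
import numpy as np

def classify(A, tol=1e-9):
    """Classify the behavior of dx/dt = A @ x from the eigenvalues of A."""
    eig = np.linalg.eigvals(A)
    re = eig.real.max()
    osc = "oscillatory " if (np.abs(eig.imag) > tol).any() else ""
    if re > tol:
        return osc + "growth"
    if re < -tol:
        return osc + "decay"
    return osc + "neutral"

print(classify(np.array([[-0.1, 0.0], [0.0, -0.2]])))  # exponential decay
print(classify(np.array([[0.0, 1.0], [-1.0, 0.0]])))   # sustained oscillation
print(classify(np.array([[0.1, 1.0], [-1.0, 0.1]])))   # growing oscillation
```

Nonlinear behaviors such as limit cycles and chaos have no such closed-form classification, which is why the abstract argues they can be fully explored only by simulation.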
NASA Astrophysics Data System (ADS)
Wang, Lijuan; Yan, Yong; Wang, Xue; Wang, Tao
2017-03-01
Input variable selection is an essential step in the development of data-driven models for environmental, biological and industrial applications. Input variable selection eliminates irrelevant or redundant variables, identifying a suitable subset of variables as the input of a model; it also simplifies the model structure and improves computational efficiency. This paper describes the procedures of input variable selection for data-driven models for the measurement of liquid mass flowrate and gas volume fraction under two-phase flow conditions using Coriolis flowmeters. Three advanced input variable selection methods, including partial mutual information (PMI), genetic algorithm-artificial neural network (GA-ANN) and tree-based iterative input selection (IIS), are applied in this study. Typical data-driven models incorporating support vector machines (SVMs) are established individually based on the input candidates resulting from the selection methods. The validity of the selection outcomes is assessed through an output performance comparison of the SVM-based data-driven models and sensitivity analysis. The validation and analysis results suggest that the input variables selected with the PMI algorithm provide more effective information for the models to measure liquid mass flowrate, while the IIS algorithm provides fewer but more effective variables for the models to predict gas volume fraction.
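A simplified sketch of information-based input selection is shown below, using plain mutual information rather than the PMI algorithm itself, with invented candidate variables standing in for flowmeter observables:

```python
import numpy as np
from sklearn.feature_selection import mutual_info_regression

rng = np.random.default_rng(0)
n = 2000

# Hypothetical candidates: two informative inputs, one redundant
# copy, and one pure-noise variable.
f1 = rng.normal(size=n)                # informative
f2 = rng.normal(size=n)                # informative
f3 = f1 + 0.01 * rng.normal(size=n)    # near-duplicate of f1 (redundant)
f4 = rng.normal(size=n)                # irrelevant
target = 2.0 * f1 + f2 + 0.1 * rng.normal(size=n)

X = np.column_stack([f1, f2, f3, f4])
mi = mutual_info_regression(X, target, random_state=0)
ranked = np.argsort(mi)[::-1]
print("MI scores:", np.round(mi, 2), "best first:", ranked)
```

Note that plain MI scores the redundant copy nearly as high as the original, which is precisely the motivation for *partial* mutual information: PMI discounts candidates whose information is already carried by variables selected earlier.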
Liu, Xiaona; Zhang, Qiao; Wu, Zhisheng; Shi, Xinyuan; Zhao, Na; Qiao, Yanjiang
2015-01-01
Laser-induced breakdown spectroscopy (LIBS) was applied to perform a rapid elemental analysis and provenance study of Blumea balsamifera DC. Principal component analysis (PCA) and partial least squares discriminant analysis (PLS-DA) were implemented to exploit the multivariate nature of the LIBS data. Scores and loadings of computed principal components visually illustrated the differing spectral data. The PLS-DA algorithm showed good classification performance. The PLS-DA model using complete spectra as input variables had similar discrimination performance to using selected spectral lines as input variables. The down-selection of spectral lines was specifically focused on the major elements of B. balsamifera samples. Results indicated that LIBS could be used to rapidly analyze elements and to perform provenance study of B. balsamifera. PMID:25558999
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dubois, P.F.
1989-05-16
This paper discusses the Basis system. Basis is a program development system for scientific programs. It has been developed over the last five years at Lawrence Livermore National Laboratory (LLNL), where it is now used in about twenty major programming efforts. The Basis System includes two major components, a program development system and a run-time package. The run-time package provides the Basis Language interpreter, through which the user does input, output, plotting, and control of the program's subroutines and functions. Variables in the scientific packages are known to this interpreter, so that the user may arbitrarily print, plot, and calculate with any major program variables. Also provided are facilities for dynamic memory management, terminal logs, error recovery, text-file I/O, and the attachment of non-Basis-developed packages.
Community models for wildlife impact assessment: a review of concepts and approaches
Schroeder, Richard L.
1987-01-01
The first two sections of this paper are concerned with defining and bounding communities, and describing those attributes of the community that are quantifiable and suitable for wildlife impact assessment purposes. Prior to the development or use of a community model, it is important to have a clear understanding of the concept of a community and a knowledge of the types of community attributes that can serve as outputs for the development of models. Clearly defined, unambiguous model outputs are essential for three reasons: (1) to ensure that the measured community attributes relate to the wildlife resource objectives of the study; (2) to allow testing of the outputs in experimental studies, to determine accuracy, and to allow for improvements based on such testing; and (3) to enable others to clearly understand the community attribute that has been measured. The third section of this paper described input variables that may be used to predict various community attributes. These input variables do not include direct measures of wildlife populations. Most impact assessments involve projects that result in drastic changes in habitat, such as changes in land use, vegetation, or available area. Therefore, the model input variables described in this section deal primarily with habitat related features. Several existing community models are described in the fourth section of this paper. A general description of each model is provided, including the nature of the input variables and the model output. The logic and assumptions of each model are discussed, along with data requirements needed to use the model. The fifth section provides guidance on the selection and development of community models. Identification of the community attribute that is of concern will determine the type of model most suitable for a particular application. 
This section provides guidelines on selecting an existing model, as well as a discussion of the major steps to be followed in modifying an existing model or developing a new model. Considerations associated with the use of community models with the Habitat Evaluation Procedures are also discussed. The final section of the paper summarizes major findings of interest to field biologists and provides recommendations concerning the implementation of selected concepts in wildlife community analyses.
Metamodeling and mapping of nitrate flux in the unsaturated zone and groundwater, Wisconsin, USA
NASA Astrophysics Data System (ADS)
Nolan, Bernard T.; Green, Christopher T.; Juckem, Paul F.; Liao, Lixia; Reddy, James E.
2018-04-01
Nitrate contamination of groundwater in agricultural areas poses a major challenge to the sustainability of water resources. Aquifer vulnerability models are useful tools that can help resource managers identify areas of concern, but quantifying nitrogen (N) inputs in such models is challenging, especially at large spatial scales. We sought to improve regional nitrate (NO₃⁻) input functions by characterizing unsaturated zone NO₃⁻ transport to groundwater through use of surrogate, machine-learning metamodels of a process-based N flux model. The metamodels used boosted regression trees (BRTs) to relate mappable landscape variables to parameters and outputs of a previous "vertical flux method" (VFM) applied at sampled wells in the Fox, Wolf, and Peshtigo (FWP) river basins in northeastern Wisconsin. In this context, the metamodels upscaled the VFM results throughout the region, and the VFM parameters and outputs are the metamodel response variables. The study area encompassed the domain of a detailed numerical model that provided additional predictor variables, including groundwater recharge, to the metamodels. We used a statistical learning framework to test a range of model complexities to identify suitable hyperparameters of the six BRT metamodels corresponding to each response variable of interest: NO₃⁻ source concentration factor (which determines the local NO₃⁻ input concentration); unsaturated zone travel time; NO₃⁻ concentration at the water table in 1980, 2000, and 2020 (three separate metamodels); and NO₃⁻ "extinction depth", the eventual steady-state depth of the NO₃⁻ front. The final metamodels were trained to 129 wells within the active numerical flow model area, and considered 58 mappable predictor variables compiled in a geographic information system (GIS). These metamodels had training and cross-validation testing R² values of 0.52-0.86 and 0.22-0.38, respectively, and predictions were compiled as maps of the above response variables.
Testing performance was reasonable, considering that we limited the metamodel predictor variables to mappable factors as opposed to using all available VFM input variables. Relationships between metamodel predictor variables and mapped outputs were generally consistent with expectations, e.g. with greater source concentrations and NO₃⁻ at the groundwater table in areas of intensive crop use and well drained soils. Shorter unsaturated zone travel times in poorly drained areas likely indicated preferential flow through clay soils, and a tendency for fine grained deposits to collocate with areas of shallower water table. Numerical estimates of groundwater recharge were important in the metamodels and may have been a proxy for N input and redox conditions in the northern FWP, which had shallow predicted NO₃⁻ extinction depth. The metamodel results provide proof-of-concept for regional characterization of unsaturated zone NO₃⁻ transport processes in a statistical framework based on readily mappable GIS input variables.
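The core metamodeling workflow, a boosted regression tree fit to mappable predictors and scored with cross-validated R², can be sketched as follows. The predictors and the response are invented stand-ins for the GIS layers and VFM outputs, and scikit-learn's gradient boosting is used as a generic BRT implementation:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 400

# Hypothetical mappable predictors standing in for GIS layers.
crop_frac = rng.uniform(0, 1, n)     # fraction of cropland
drainage = rng.uniform(0, 1, n)      # soil drainage index
recharge = rng.uniform(5, 50, n)     # modeled recharge (cm/yr)

# Toy response: a "source concentration" that rises with crop
# intensity and drainage, plus noise.
y = 10 * crop_frac * drainage + 0.05 * recharge + rng.normal(0, 0.5, n)

X = np.column_stack([crop_frac, drainage, recharge])
brt = GradientBoostingRegressor(n_estimators=300, learning_rate=0.05,
                                max_depth=3, random_state=0)
r2_cv = cross_val_score(brt, X, y, cv=5, scoring="r2")
print("cross-validated R2:", np.round(r2_cv.mean(), 2))
```

Reporting cross-validated rather than training R² is what distinguishes the paper's honest 0.22-0.38 testing range from its much higher 0.52-0.86 training range.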
When Can Information from Ordinal Scale Variables Be Integrated?
ERIC Educational Resources Information Center
Kemp, Simon; Grace, Randolph C.
2010-01-01
Many theoretical constructs of interest to psychologists are multidimensional and derive from the integration of several input variables. We show that input variables that are measured on ordinal scales cannot be combined to produce a stable weakly ordered output variable that allows trading off the input variables. Instead a partial order is…
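The impossibility result can be made concrete with a two-option, two-judge example: summing ordinal ratings yields a ranking that a legitimate monotone relabeling of one scale can reverse. All labels and numbers below are invented:

```python
# Two options rated by two judges on ordinal scales.
a = {"judge1": 3, "judge2": 1}
b = {"judge1": 1, "judge2": 2}

# "Integrating" by summing assumes interval information the
# ordinal scales do not carry.
total = lambda opt: opt["judge1"] + opt["judge2"]
print(total(a) > total(b))       # A wins under the raw labels

# A monotone relabeling of judge2's scale (order preserved:
# 1 -> 1, 2 -> 10, 3 -> 11) is legitimate for ordinal data...
relabel = {1: 1, 2: 10, 3: 11}
total2 = lambda opt: opt["judge1"] + relabel[opt["judge2"]]
print(total2(a) > total2(b))     # ...yet now B wins
```

Because the integrated ranking depends on which monotone labeling is chosen, no stable weakly ordered output that trades off the inputs exists, which is the paper's claim.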
Evaluating the effectiveness of intercultural teachers.
Cox, Kathleen
2011-01-01
With globalization and major immigration flows, intercultural teaching encounters are likely to increase, along with the need to assure intercultural teaching effectiveness. Thus, the purpose of this article is to present a conceptual framework for nurse educators to consider when anticipating an intercultural teaching experience. Kirkpatrick's and Bushnell's models provide a basis for the conceptual framework. Major concepts of the model include input, process, output, and outcome. The model may possibly be used to guide future research to determine which variables are most influential in explaining intercultural teaching effectiveness.
Morphological properties of vestibulospinal neurons in primates
NASA Technical Reports Server (NTRS)
Boyle, Richard; Johanson, Curt
2003-01-01
The lateral and medial vestibulospinal tracts constitute the major descending pathways controlling extensor musculature of the body. We examined the axon morphology, synaptic input patterns, and targets of these tract cells in the cervical spinal segments using intracellular recording and biocytin labeling in the squirrel monkey. Lumbosacral projecting cells represent a private, and mostly rapid, communication pathway between the dorsal Deiters' nucleus and the motor circuits controlling the lower limbs and tail. The cervical projecting cells provide both redundant and variable synaptic input to spinal cell groups, suggesting both general and specific control of the head and neck reflexes.
Wind effect on salt transport variability in the Bay of Bengal
NASA Astrophysics Data System (ADS)
Sandeep, K. K.; Pant, V.
2017-12-01
The Bay of Bengal (BoB) exhibits large spatial variability in its sea surface salinity (SSS) pattern, caused by its unique hydrological, meteorological, and oceanographic characteristics. This SSS variability is largely controlled by the seasonally reversing monsoon winds and the associated currents. Further, the BoB receives substantial freshwater inputs through excess precipitation over evaporation and river discharge. Rivers such as the Ganges, Brahmaputra, Mahanadi, Krishna, Godavari, and Irrawaddy annually discharge a freshwater volume in the range of 1.5 x 10^12 to 1.83 x 10^13 m^3 into the bay. A major portion of this freshwater input occurs during the southwest monsoon (June-September) period. In the present study, the relative role of winds in the SSS variability of the bay is investigated using an eddy-resolving, three-dimensional Regional Ocean Modeling System (ROMS) numerical model. The model is configured with realistic bathymetry and coastline for the study region and forced with daily climatology of atmospheric variables. River discharges from the major rivers are distributed among the model grid points representing their respective geographic locations. Salt transport estimates from the model simulation for the realistic case are compared with standard reference datasets. Further, different experiments were carried out with idealized surface wind forcing representing normal, low, high, and very high wind speed conditions in the bay, while retaining the realistic daily varying directions in all cases. The experimental simulations exhibit distinct dispersal patterns of the freshwater plume and SSS in response to the idealized winds. Comparison of the meridional and zonal surface salt transport estimated for each experiment showed strong seasonality with varying magnitude, with maximum spatial and temporal variability in the western and northern parts of the BoB.
ERIC Educational Resources Information Center
Finch, Harold L.; Tatham, Elaine L.
This document presents a modified cohort survival model which can be of use in making enrollment projections. The model begins by analytically profiling an area's residents. Each person's demographic characteristics--sex, age, place of residence--are recorded in the computer memory. Four major input variables are then incorporated into the model:…
Metamodeling and mapping of nitrate flux in the unsaturated zone and groundwater, Wisconsin, USA
Nolan, Bernard T.; Green, Christopher T.; Juckem, Paul F.; Liao, Lixia; Reddy, James E.
2018-01-01
Nitrate contamination of groundwater in agricultural areas poses a major challenge to the sustainability of water resources. Aquifer vulnerability models are useful tools that can help resource managers identify areas of concern, but quantifying nitrogen (N) inputs in such models is challenging, especially at large spatial scales. We sought to improve regional nitrate (NO3−) input functions by characterizing unsaturated zone NO3− transport to groundwater through use of surrogate, machine-learning metamodels of a process-based N flux model. The metamodels used boosted regression trees (BRTs) to relate mappable landscape variables to parameters and outputs of a previous “vertical flux method” (VFM) applied at sampled wells in the Fox, Wolf, and Peshtigo (FWP) river basins in northeastern Wisconsin. In this context, the metamodels upscaled the VFM results throughout the region, and the VFM parameters and outputs are the metamodel response variables. The study area encompassed the domain of a detailed numerical model that provided additional predictor variables, including groundwater recharge, to the metamodels. We used a statistical learning framework to test a range of model complexities and identify suitable hyperparameters for the six BRT metamodels corresponding to each response variable of interest: NO3− source concentration factor (which determines the local NO3− input concentration); unsaturated zone travel time; NO3− concentration at the water table in 1980, 2000, and 2020 (three separate metamodels); and NO3− “extinction depth”, the eventual steady-state depth of the NO3− front. The final metamodels were trained on 129 wells within the active numerical flow model area and considered 58 mappable predictor variables compiled in a geographic information system (GIS). These metamodels had training and cross-validation testing R2 values of 0.52-0.86 and 0.22-0.38, respectively, and predictions were compiled as maps of the above response variables.
Testing performance was reasonable, considering that we limited the metamodel predictor variables to mappable factors as opposed to using all available VFM input variables. Relationships between metamodel predictor variables and mapped outputs were generally consistent with expectations, e.g. with greater source concentrations and NO3− at the groundwater table in areas of intensive crop use and well drained soils. Shorter unsaturated zone travel times in poorly drained areas likely indicated preferential flow through clay soils, and a tendency for fine grained deposits to collocate with areas of shallower water table. Numerical estimates of groundwater recharge were important in the metamodels and may have been a proxy for N input and redox conditions in the northern FWP, which had shallow predicted NO3− extinction depth. The metamodel results provide proof-of-concept for regional characterization of unsaturated zone NO3− transport processes in a statistical framework based on readily mappable GIS input variables.
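As a rough illustration of the BRT metamodel idea, the sketch below boosts depth-1 regression stumps on synthetic data. The predictor names and the data-generating rule are hypothetical stand-ins, not the study's 58 GIS predictors or its VFM response variables.

```python
# Miniature boosted-regression-tree (BRT) metamodel: gradient boosting on
# depth-1 stumps, fit to synthetic data. Predictors and the response rule
# are hypothetical, standing in for the study's mappable GIS variables.
import random

def fit_stump(X, y):
    """Best single-feature threshold split by summed squared error."""
    best = None
    for j in range(len(X[0])):
        for t in sorted({row[j] for row in X})[:-1]:
            left = [yi for row, yi in zip(X, y) if row[j] <= t]
            right = [yi for row, yi in zip(X, y) if row[j] > t]
            ml, mr = sum(left) / len(left), sum(right) / len(right)
            sse = (sum((yi - ml) ** 2 for yi in left)
                   + sum((yi - mr) ** 2 for yi in right))
            if best is None or sse < best[0]:
                best = (sse, j, t, ml, mr)
    return best[1:]

def fit_brt(X, y, n_trees=100, lr=0.1):
    """Each boosting round fits a stump to the current residuals."""
    base = sum(y) / len(y)
    pred = [base] * len(y)
    stumps = []
    for _ in range(n_trees):
        resid = [yi - pi for yi, pi in zip(y, pred)]
        j, t, ml, mr = fit_stump(X, resid)
        stumps.append((j, t, ml, mr))
        pred = [pi + lr * (ml if row[j] <= t else mr)
                for row, pi in zip(X, pred)]
    return base, lr, stumps

def brt_predict(model, row):
    base, lr, stumps = model
    return base + sum(lr * (ml if row[j] <= t else mr)
                      for j, t, ml, mr in stumps)

random.seed(0)
X = [[random.random() for _ in range(3)] for _ in range(60)]
# hypothetical response: crop-intensity effect plus a drainage-class step
y = [2.0 * x[0] + (1.0 if x[1] > 0.5 else 0.0) + random.gauss(0.0, 0.1)
     for x in X]
model = fit_brt(X, y)
preds = [brt_predict(model, x) for x in X]
ybar = sum(y) / len(y)
r2 = 1.0 - (sum((a - b) ** 2 for a, b in zip(y, preds))
            / sum((a - ybar) ** 2 for a in y))
```

In the study's framework, hyperparameters such as the number of trees and the learning rate would be tuned by cross-validation rather than fixed as here.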
ERIC Educational Resources Information Center
Csizér, Kata; Tankó, Gyula
2017-01-01
Apart from L2 motivation, self-regulation is also increasingly seen as a key variable in L2 learning in many foreign language learning contexts because classroom-centered instructive language teaching might not be able to provide sufficient input for students. Therefore, taking responsibility and regulating the learning processes and positive…
NASA Astrophysics Data System (ADS)
Dumedah, Gift; Walker, Jeffrey P.
2017-03-01
The sources of uncertainty in land surface models are numerous and varied, from inaccuracies in forcing data to uncertainties in model structure and parameterizations. The majority of these uncertainties are strongly tied to the overall makeup of the model, but the input forcing data set is independent of the model, with its accuracy usually defined by the monitoring or observation system. The impact of input forcing data on model estimation accuracy is widely acknowledged to be significant, yet its quantification and the level of uncertainty that is acceptable in the context of the land surface model remain mostly unknown. A better understanding is needed of how models respond to input forcing data and of what changes in these forcing variables can be accommodated without deteriorating optimal estimation of the model. As a result, this study determines the level of forcing data uncertainty that is acceptable in the Joint UK Land Environment Simulator (JULES) to competitively estimate soil moisture in the Yanco area in south eastern Australia. The study employs hydro-genomic mapping to examine the temporal evolution of model decision variables from an archive of values obtained from soil moisture data assimilation. The data assimilation (DA) was undertaken using the advanced Evolutionary Data Assimilation. Our findings show that the input forcing data have a significant impact on model output: 35% in root mean square error (RMSE) for soil moisture at 5 cm depth and 15% in RMSE for soil moisture at 15 cm depth. This quantification is crucial to illustrate the significance of input forcing data spread. The acceptable uncertainty, determined from the dominant pathway, has been validated and shown to be reliable for all forcing variables, so as to provide optimal soil moisture. These findings are crucial for DA in order to account for uncertainties that are meaningful from the model standpoint. Moreover, our results point to the need for proper treatment of input forcing data in land surface and hydrological model estimation in general.
Annual variability of PAH concentrations in the Potomac River watershed
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maher, I.L.; Foster, G.D.
1995-12-31
The dynamics of organic contaminant transport in a large river system are influenced by annual variability in organic contaminant concentrations. Surface runoff and groundwater input control the flow of river waters; they are also the two major inputs of contaminants to river waters. The annual variability of contaminant concentrations in rivers may or may not follow the flow changes of river waters. The purpose of this research is to define the annual variability in concentrations of polycyclic aromatic hydrocarbons (PAH) in a riverine environment. To accomplish this, from March 1992 to March 1995, samples of Potomac River water were collected monthly or bimonthly downstream of the Chesapeake Bay fall line (Chain Bridge) during base flow and main storm flow hydrologic conditions. Concentrations of selected PAHs were measured in the dissolved and particulate phases via GC/MS. The annual variability of PAH concentrations will be examined through seasonal and annual comparisons of PAH concentrations and through study of their dependence on river discharge and rainfall. For selected PAHs, monthly and annual loadings will be estimated from their measured concentrations and average daily river discharge. The monthly loadings of selected PAHs will be compared by season and annually.
NASA Astrophysics Data System (ADS)
Shoaib, Syed Abu; Marshall, Lucy; Sharma, Ashish
2018-06-01
Every model used to characterise a real-world process is affected by uncertainty. Selecting a suitable model is a vital aspect of engineering planning and design. Observation and input errors make the prediction of modelled responses more uncertain. Using a recently developed attribution metric, this study develops a method for analysing variability in model inputs together with model structure variability, to quantify their relative contributions in typical hydrological modelling applications. The Quantile Flow Deviation (QFD) metric is used to assess these alternate sources of uncertainty. The Australian Water Availability Project (AWAP) precipitation data for four different Australian catchments are used to analyse the impact of spatial rainfall variability on simulated streamflow variability via the QFD. The QFD metric attributes the variability in flow ensembles to uncertainty associated with the selection of model structure and input time series. For the case study catchments, the relative contribution of input uncertainty due to rainfall is higher than that due to potential evapotranspiration, and overall input uncertainty is significant compared to model structure and parameter uncertainty. Overall, this study investigates the propagation of input uncertainty in a daily streamflow modelling scenario and demonstrates how input errors manifest across different streamflow magnitudes.
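The abstract does not reproduce the QFD formula, so the sketch below uses an assumed stand-in: the average spread of ensemble members' flow quantiles across several quantile levels, which captures the idea of attributing ensemble variability across flow magnitudes.

```python
# Quantile-based spread of a flow ensemble: an assumed, simplified stand-in
# for the Quantile Flow Deviation (QFD) metric described in the abstract.
def quantile(xs, q):
    """Linear-interpolation quantile of a sample (0 <= q <= 1)."""
    s = sorted(xs)
    idx = q * (len(s) - 1)
    lo = int(idx)
    hi = min(lo + 1, len(s) - 1)
    frac = idx - lo
    return s[lo] * (1.0 - frac) + s[hi] * frac

def quantile_flow_deviation(ensemble, levels=(0.1, 0.25, 0.5, 0.75, 0.9)):
    """Average, over quantile levels, of the spread in member quantiles.
    Each element of `ensemble` is one simulated streamflow series."""
    total = 0.0
    for q in levels:
        member_q = [quantile(series, q) for series in ensemble]
        total += max(member_q) - min(member_q)
    return total / len(levels)

# two-member toy ensembles: similar models vs. structurally different ones
tight = quantile_flow_deviation([[1.0, 2.0, 3.0, 4.0], [1.1, 2.1, 3.1, 4.1]])
wide = quantile_flow_deviation([[1.0, 2.0, 3.0, 4.0], [2.0, 4.0, 6.0, 8.0]])
```

A larger value for the second ensemble reflects greater flow variability attributable to structural or input differences between members.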
Zanotti-Fregonara, Paolo; Hines, Christina S; Zoghbi, Sami S; Liow, Jeih-San; Zhang, Yi; Pike, Victor W; Drevets, Wayne C; Mallinger, Alan G; Zarate, Carlos A; Fujita, Masahiro; Innis, Robert B
2012-11-15
Quantitative PET studies of neuroreceptor tracers typically require that the arterial input function be measured. The aim of this study was to explore the use of a population-based input function (PBIF) and an image-derived input function (IDIF) for [11C](R)-rolipram kinetic analysis, with the goal of reducing, and possibly eliminating, the number of arterial blood samples needed to measure parent radioligand concentrations. A PBIF was first generated using [11C](R)-rolipram parent time-activity curves from 12 healthy volunteers (Group 1). Both invasive (blood samples) and non-invasive (body weight, body surface area, and lean body mass) scaling methods for PBIF were tested. The scaling method that gave the best estimate of the Logan-V(T) values was then used to determine the test-retest variability of PBIF in Group 1 and then prospectively applied to another population of 25 healthy subjects (Group 2), as well as to a population of 26 patients with major depressive disorder (Group 3). Results were also compared to those obtained with an image-derived input function (IDIF) from the internal carotid artery. In some subjects, we measured arteriovenous differences in [11C](R)-rolipram concentration to see whether venous samples could be used instead of arterial samples. Finally, we assessed the ability of IDIF and PBIF to discriminate depressed patients (MDD) and healthy subjects. Arterial blood-scaled PBIF gave better results than any non-invasive scaling technique. Excellent results were obtained when the blood-scaled PBIF was prospectively applied to the subjects in Group 2 (V(T) ratio 1.02±0.05; mean±SD) and Group 3 (V(T) ratio 1.03±0.04). Equally accurate results were obtained for two subpopulations of subjects drawn from Groups 2 and 3 who had very differently shaped (i.e. "flatter" or "steeper") input functions compared to PBIF (V(T) ratio 1.07±0.04 and 0.99±0.04, respectively).
Results obtained via PBIF were equivalent to those obtained via IDIF (V(T) ratio 0.99±0.05 and 1.00±0.04 for healthy subjects and MDD patients, respectively). Retest variability of PBIF was equivalent to that obtained with the full input function and IDIF (14.5%, 15.2%, and 14.1%, respectively). Due to [11C](R)-rolipram arteriovenous differences, venous samples could not be substituted for arterial samples. With both IDIF and PBIF, depressed patients had a 20% reduction in [11C](R)-rolipram binding compared to controls (two-way ANOVA: p=0.008 and 0.005, respectively). These results were almost equivalent to those obtained using 23 arterial samples. Although some arterial samples are still necessary, both PBIF and IDIF are accurate and precise alternatives to the full arterial input function for [11C](R)-rolipram PET studies. Both techniques give accurate results with low variability, even for clinically different groups of subjects and those with very differently shaped input functions.
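One plausible way to implement a "blood-scaled" PBIF, assumed here purely for illustration (the abstract does not give the exact procedure), is to multiply the population curve by the mean ratio of an individual's few arterial samples to the population curve at the sample times.

```python
# Hedged sketch of a blood-scaled population-based input function (PBIF):
# the population curve is rescaled so it matches an individual's sparse
# arterial samples. The scaling rule and linear interpolation are assumed.
def interp(times, values, t):
    """Piecewise-linear interpolation of the population curve."""
    for (t0, v0), (t1, v1) in zip(zip(times, values),
                                  zip(times[1:], values[1:])):
        if t0 <= t <= t1:
            return v0 + (v1 - v0) * (t - t0) / (t1 - t0)
    return values[-1]

def scale_pbif(pop_times, pop_curve, sample_times, sample_values):
    ratios = [v / interp(pop_times, pop_curve, t)
              for t, v in zip(sample_times, sample_values)]
    k = sum(ratios) / len(ratios)   # one subject-specific scale factor
    return [k * v for v in pop_curve]

# toy population curve (activity vs. minutes) and two arterial samples
pop_t = [0.0, 1.0, 5.0, 30.0, 60.0]
pop_c = [0.0, 40.0, 10.0, 4.0, 2.0]
scaled = scale_pbif(pop_t, pop_c, [5.0, 30.0], [20.0, 8.0])
```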
How sensitive are estimates of carbon fixation in agricultural models to input data?
2012-01-01
Background: Process-based vegetation models are central to understanding the hydrological and carbon cycles. To achieve useful results at regional to global scales, such models require various input data from a wide range of earth observations. Since the geographical extent of these datasets varies from local to global scale, data quality and validity are of major interest when they are chosen for use. It is important to assess the effect of input dataset quality on model outputs. In this article, we consider both the uncertainty in input data and the reliability of model results. For our case study analysis we selected the Marchfeld region in Austria. We used independent meteorological datasets from the Central Institute for Meteorology and Geodynamics and the European Centre for Medium-Range Weather Forecasts (ECMWF). Land cover / land use information was taken from the GLC2000 and the CORINE 2000 products. Results: For our case study analysis we selected two different process-based models: the Environmental Policy Integrated Climate (EPIC) model and the Biosphere Energy Transfer Hydrology (BETHY/DLR) model. Both models show a congruent pattern in response to changes in input data. The annual variability of NPP reaches 36% for BETHY/DLR and 39% for EPIC when changing major input datasets. However, EPIC is less sensitive to meteorological input data than BETHY/DLR. The ECMWF maximum temperatures show a systematic pattern: temperatures above 20°C are overestimated, whereas temperatures below 20°C are underestimated, resulting in an overall underestimation of NPP in both models. In addition, BETHY/DLR is sensitive to the choice and accuracy of the land cover product. Discussion: This study shows that the impact of input data uncertainty on modelling results needs to be assessed: whenever the models are applied under new conditions, local data should be used for both input and result comparison. PMID:22296931
A neural circuit mechanism for regulating vocal variability during song learning in zebra finches.
Garst-Orozco, Jonathan; Babadi, Baktash; Ölveczky, Bence P
2014-12-15
Motor skill learning is characterized by improved performance and reduced motor variability. The neural mechanisms that couple skill level and variability, however, are not known. The zebra finch, a songbird, presents a unique opportunity to address this question because production of learned song and induction of vocal variability are instantiated in distinct circuits that converge on a motor cortex analogue controlling vocal output. To probe the interplay between learning and variability, we made intracellular recordings from neurons in this area, characterizing how their inputs from the functionally distinct pathways change throughout song development. We found that inputs that drive stereotyped song-patterns are strengthened and pruned, while inputs that induce variability remain unchanged. A simple network model showed that strengthening and pruning of action-specific connections reduces the sensitivity of motor control circuits to variable input and neural 'noise'. This identifies a simple and general mechanism for learning-related regulation of motor variability.
Bottom-up and Top-down Input Augment the Variability of Cortical Neurons
Nassi, Jonathan J.; Kreiman, Gabriel; Born, Richard T.
2016-01-01
Neurons in the cerebral cortex respond inconsistently to a repeated sensory stimulus, yet they underlie our stable sensory experiences. Although the nature of this variability is unknown, its ubiquity has encouraged the general view that each cell produces random spike patterns that noisily represent its response rate. In contrast, here we show that reversibly inactivating distant sources of either bottom-up or top-down input to cortical visual areas in the alert primate reduces both the spike train irregularity and the trial-to-trial variability of single neurons. A simple model in which a fraction of the pre-synaptic input is silenced can reproduce this reduction in variability, provided that there exist temporal correlations primarily within, but not between, excitatory and inhibitory input pools. A large component of the variability of cortical neurons may therefore arise from synchronous input produced by signals arriving from multiple sources. PMID:27427459
Troutman, Brent M.
1982-01-01
Errors in runoff prediction caused by input data errors are analyzed by treating precipitation-runoff models as regression (conditional expectation) models. Independent variables of the regression consist of precipitation and other input measurements; the dependent variable is runoff. In models using erroneous input data, prediction errors are inflated and estimates of expected storm runoff for given observed input variables are biased. This bias in expected runoff estimation results in biased parameter estimates if these parameter estimates are obtained by a least squares fit of predicted to observed runoff values. The problems of error inflation and bias are examined in detail for a simple linear regression of runoff on rainfall and for a nonlinear U.S. Geological Survey precipitation-runoff model. Some implications for flood frequency analysis are considered. A case study using a set of data from Turtle Creek near Dallas, Texas, illustrates the problems of model input errors.
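The inflation-and-bias effect for the simple linear case can be demonstrated with a toy errors-in-variables simulation (the linear relation and all numbers are hypothetical, not the paper's data or model): measurement noise in the input attenuates the least squares slope by roughly var(x) / (var(x) + var(error)).

```python
# Toy errors-in-variables demonstration: regressing runoff on
# error-corrupted rainfall biases the fitted slope toward zero
# (hypothetical linear model; the USGS model in the paper is nonlinear).
import random

def ols_slope(x, y):
    """Ordinary least squares slope of y on x."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    var = sum((xi - mx) ** 2 for xi in x)
    return cov / var

random.seed(1)
true_slope = 0.6
rain = [random.gauss(50.0, 10.0) for _ in range(5000)]           # true input
runoff = [true_slope * r + random.gauss(0.0, 2.0) for r in rain]
noisy_rain = [r + random.gauss(0.0, 10.0) for r in rain]         # measured input

slope_clean = ols_slope(rain, runoff)
slope_noisy = ols_slope(noisy_rain, runoff)
# expected attenuation factor: 10^2 / (10^2 + 10^2) = 0.5
```

Calibrating the model to the noisy input thus yields biased parameter estimates, which is the mechanism the paper analyzes for flood frequency implications.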
Atmospheric Transport and Input of Iron to the Southern Ocean
NASA Astrophysics Data System (ADS)
Tindale, N. W.
2002-12-01
While Australia is not generally considered to be a major source of mineral dust to the atmosphere, at least compared to Asian and African desert regions, it does appear to be the main source of mineral material to the Southern Ocean region south of Australia and New Zealand. In common with most of the greater Southern Ocean, this region contains high nitrate, low chlorophyll (HNLC) waters. Recent open ocean iron enrichment experiments in this region have demonstrated that phytoplankton growth and biomass are limited by iron availability. However the flux of atmospheric iron to this open ocean region is poorly known with very few direct measurements of mineral aerosol levels and input. Using mineral aerosol samples collected on Macquarie Island and at Cape Grim, together with other chemical data, air mass trajectories and satellite data, the spatial and temporal variability of aerosol iron transport and input to the Southern Ocean region south of Australia is estimated.
Estimating the Uncertain Mathematical Structure of Hydrological Model via Bayesian Data Assimilation
NASA Astrophysics Data System (ADS)
Bulygina, N.; Gupta, H.; O'Donell, G.; Wheater, H.
2008-12-01
The structure of hydrological model at macro scale (e.g. watershed) is inherently uncertain due to many factors, including the lack of a robust hydrological theory at the macro scale. In this work, we assume that a suitable conceptual model for the hydrologic system has already been determined - i.e., the system boundaries have been specified, the important state variables and input and output fluxes to be included have been selected, and the major hydrological processes and geometries of their interconnections have been identified. The structural identification problem then is to specify the mathematical form of the relationships between the inputs, state variables and outputs, so that a computational model can be constructed for making simulations and/or predictions of system input-state-output behaviour. We show how Bayesian data assimilation can be used to merge both prior beliefs in the form of pre-assumed model equations with information derived from the data to construct a posterior model. The approach, entitled Bayesian Estimation of Structure (BESt), is used to estimate a hydrological model for a small basin in England, at hourly time scales, conditioned on the assumption of 3-dimensional state - soil moisture storage, fast and slow flow stores - conceptual model structure. Inputs to the system are precipitation and potential evapotranspiration, and outputs are actual evapotranspiration and streamflow discharge. Results show the difference between prior and posterior mathematical structures, as well as provide prediction confidence intervals that reflect three types of uncertainty: due to initial conditions, due to input and due to mathematical structure.
Gravity dependence of subjective visual vertical variability.
Tarnutzer, A A; Bockisch, C; Straumann, D; Olasagasti, I
2009-09-01
The brain integrates sensory input from the otolith organs, the semicircular canals, and the somatosensory and visual systems to determine self-orientation relative to gravity. Only the otoliths directly sense the gravito-inertial force vector and therefore provide the major input for perceiving static head-roll relative to gravity, as measured by the subjective visual vertical (SVV). Intraindividual SVV variability increases with head roll, which suggests that the effectiveness of the otolith signal is roll-angle dependent. We asked whether SVV variability reflects the spatial distribution of the otolithic sensors and the otolith-derived acceleration estimate. Subjects were placed in different roll orientations (0-360 degrees in 15-degree steps) and asked to align an arrow with perceived vertical. Variability was minimal in upright, increased with head-roll peaking around 120-135 degrees, and decreased to intermediate values at 180 degrees. Otolith-dependent variability was modeled by taking into consideration the nonuniform distribution of the otolith afferents and their nonlinear firing rate. The otolith-derived estimate was combined with an internal bias shifting the estimated gravity vector toward the body-longitudinal axis. Assuming an efficient otolith estimator at all roll angles, peak variability of the model matched our data; however, modeled variability in upside-down and upright positions was very similar, which is at odds with our findings. By decreasing the effectiveness of the otolith estimator with increasing roll, simulated variability matched our experimental findings better. We suggest that modulations of SVV precision in the roll plane are related to the properties of the otolith sensors and to central computational mechanisms that are not optimally tuned for roll angles distant from upright.
Nitrate in groundwater of the United States, 1991-2003
Burow, Karen R.; Nolan, Bernard T.; Rupert, Michael G.; Dubrovsky, Neil M.
2010-01-01
An assessment of nitrate concentrations in groundwater in the United States indicates that concentrations are highest in shallow, oxic groundwater beneath areas with high N inputs. During 1991-2003, 5101 wells were sampled in 51 study areas throughout the U.S. as part of the U.S. Geological Survey National Water-Quality Assessment (NAWQA) program. The well networks reflect the existing used resource represented by domestic wells in major aquifers (major aquifer studies), and recently recharged groundwater beneath dominant land-surface activities (land-use studies). Nitrate concentrations were highest in shallow groundwater beneath agricultural land use in areas with well-drained soils and oxic geochemical conditions. Nitrate concentrations were lowest in deep groundwater where groundwater is reduced, or where groundwater is older and hence concentrations reflect historically low N application rates. Classification and regression tree analysis was used to identify the relative importance of N inputs, biogeochemical processes, and physical aquifer properties in explaining nitrate concentrations in groundwater. Factors ranked by reduction in sum of squares indicate that dissolved iron concentrations explained most of the variation in groundwater nitrate concentration, followed by manganese, calcium, farm N fertilizer inputs, percent well-drained soils, and dissolved oxygen. Overall, nitrate concentrations in groundwater are most significantly affected by redox conditions, followed by nonpoint-source N inputs. Other water-quality indicators and physical variables had a secondary influence on nitrate concentrations.
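The "reduction in sum of squares" ranking used in the CART analysis can be sketched with a single-split scorer on synthetic data; the variable names and the data-generating rule below are hypothetical stand-ins for the NAWQA predictors.

```python
# Rank predictors of groundwater nitrate by the sum-of-squares reduction
# of their best single regression-tree split (synthetic, hypothetical data).
import random

def sse(ys):
    m = sum(ys) / len(ys)
    return sum((y - m) ** 2 for y in ys)

def best_split_reduction(x, y):
    """Largest drop in total SSE over all single thresholds on x."""
    parent = sse(y)
    best = 0.0
    for t in sorted(set(x))[:-1]:
        left = [yi for xi, yi in zip(x, y) if xi <= t]
        right = [yi for xi, yi in zip(x, y) if xi > t]
        best = max(best, parent - sse(left) - sse(right))
    return best

random.seed(2)
n = 300
iron = [random.random() for _ in range(n)]   # dissolved Fe, a redox proxy
fert = [random.random() for _ in range(n)]   # farm N fertilizer input
# hypothetical rule: nitrate persists only in oxic (low-Fe) water;
# fertilizer input has a smaller, additive effect
nitrate = [(8.0 if fe < 0.3 else 1.0) + 2.0 * f + random.gauss(0.0, 0.5)
           for fe, f in zip(iron, fert)]

importance = {"iron": best_split_reduction(iron, nitrate),
              "fertilizer": best_split_reduction(fert, nitrate)}
```

Consistent with the abstract's ranking, the redox proxy dominates the fertilizer input in this toy setting.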
NASA Astrophysics Data System (ADS)
Genereux, David P.; Jordan, Michael
2006-04-01
This paper reviews work related to interbasin groundwater flow (naturally occurring groundwater flow beneath watershed topographic divides) into lowland rainforest watersheds at La Selva Biological Station in Costa Rica. Chemical mixing calculations (based on dissolved chloride) have shown that up to half the water in some streams and up to 84% of the water in some riparian seeps and wells is due to high-solute interbasin groundwater flow (IGF). The contribution is even greater for major ions; IGF accounts for well over 90% of the major ions at these sites. Proportions are highly variable both among watersheds and with elevation within the same watershed (there is greater influence of IGF at lower elevations). The large proportion of IGF found in water in some riparian wetlands suggests that IGF is largely responsible for maintaining these wetlands. δ 18O data support the conclusions from the major ion data. Annual water and major ion budgets for two adjacent watersheds, one affected by IGF and the other not, showed that IGF accounted for two-thirds of the water input and 92-99% of the major ion input (depending on the major ion in question) to the former watershed. The large (in some cases, dominating) influence of IGF on watershed surface water quantity and quality has important implications for stream ecology and watershed management in this lowland rainforest. Because of its high phosphorus content, IGF increases a variety of ecological variables (algal growth rates, leaf decay rate, fungal biomass, invertebrate biomass, microbial respiration rates on leaves) in streams at La Selva. The significant rates of IGF at La Selva also suggest the importance of regional (as opposed to small-scale local) water resource planning that links lowland watersheds with regional groundwater. IGF is a relatively unexplored and potentially critical factor in the conservation of lowland rainforest.
Variance-based interaction index measuring heteroscedasticity
NASA Astrophysics Data System (ADS)
Ito, Keiichi; Couckuyt, Ivo; Poles, Silvia; Dhaene, Tom
2016-06-01
This work is motivated by the need to deal with models with high-dimensional input spaces of real variables. One way to tackle high-dimensional problems is to identify interaction or non-interaction among input parameters. We propose a new variance-based sensitivity interaction index that can detect and quantify interactions among the input variables of mathematical functions and computer simulations. The computation is very similar to first-order sensitivity indices by Sobol'. The proposed interaction index can quantify the relative importance of input variables in interaction. Furthermore, detection of non-interaction for screening can be done with as few as 4n + 2 function evaluations, where n is the number of input variables. Using the interaction indices based on heteroscedasticity, the original function may be decomposed into a set of lower dimensional functions which may then be analyzed separately.
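For reference, the first-order Sobol' index the paper builds on can be estimated by brute force: bin Monte Carlo samples on one input and measure the variance of the conditional means. This is an illustrative estimator, not the paper's 4n + 2 screening design.

```python
# Brute-force first-order Sobol' index: S_i = Var(E[Y | X_i]) / Var(Y),
# estimated by binning Monte Carlo samples on input i (inputs ~ U(0, 1)).
import random

def first_order_index(f, i, n_dim, n=10000, bins=20):
    random.seed(0)
    X = [[random.random() for _ in range(n_dim)] for _ in range(n)]
    Y = [f(x) for x in X]
    mean_y = sum(Y) / n
    var_y = sum((y - mean_y) ** 2 for y in Y) / n
    sums, counts = [0.0] * bins, [0] * bins
    for x, y in zip(X, Y):
        b = min(int(x[i] * bins), bins - 1)
        sums[b] += y
        counts[b] += 1
    means = [s / c for s, c in zip(sums, counts) if c]
    m = sum(means) / len(means)
    var_cond = sum((cm - m) ** 2 for cm in means) / len(means)
    return var_cond / var_y

# additive (non-interacting) test function: x0 dominates x1
f = lambda x: x[0] + 0.2 * x[1]
s0 = first_order_index(f, 0, 2)
s1 = first_order_index(f, 1, 2)
```

For an additive function like this the first-order indices sum to roughly one; an interaction index of the kind the paper proposes would flag the shortfall when they do not.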
Input-variable sensitivity assessment for sediment transport relations
NASA Astrophysics Data System (ADS)
Fernández, Roberto; Garcia, Marcelo H.
2017-09-01
A methodology to assess input-variable sensitivity for sediment transport relations is presented. The Mean Value First Order Second Moment Method (MVFOSM) is applied to two bed load transport equations showing that it may be used to rank all input variables in terms of how their specific variance affects the overall variance of the sediment transport estimation. In sites where data are scarce or nonexistent, the results obtained may be used to (i) determine what variables would have the largest impact when estimating sediment loads in the absence of field observations and (ii) design field campaigns to specifically measure those variables for which a given transport equation is most sensitive; in sites where data are readily available, the results would allow quantifying the effect that the variance associated with each input variable has on the variance of the sediment transport estimates. An application of the method to two transport relations using data from a tropical mountain river in Costa Rica is implemented to exemplify the potential of the method in places where input data are limited. Results are compared against Monte Carlo simulations to assess the reliability of the method and validate its results. For both of the sediment transport relations used in the sensitivity analysis, accurate knowledge of sediment size was found to have more impact on sediment transport predictions than precise knowledge of other input variables such as channel slope and flow discharge.
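The MVFOSM idea, propagating input variances through first-order derivatives evaluated at the input means, can be sketched for a hypothetical power-law transport relation. The coefficients and uncertainties below are invented for illustration, not the paper's equations or the Costa Rica data.

```python
# MVFOSM sketch: output variance ~ sum_i (df/dx_i)^2 * var(x_i), with
# derivatives taken at the input means via central differences.
def mvfosm_contributions(f, means, sds, h=1e-6):
    """Per-input variance contributions (df/dx_i * sd_i)^2 at the means."""
    contrib = []
    for i in range(len(means)):
        up, dn = list(means), list(means)
        up[i] += h
        dn[i] -= h
        deriv = (f(up) - f(dn)) / (2.0 * h)
        contrib.append((deriv * sds[i]) ** 2)
    return contrib

def bed_load(x):
    """Hypothetical power-law relation q = 0.05 * S^1.5 * Q^1.2 / D^0.8."""
    S, Q, D = x   # slope (-), discharge (m^3/s), grain size (m)
    return 0.05 * S ** 1.5 * Q ** 1.2 / D ** 0.8

means = [0.01, 20.0, 0.002]
sds = [0.002, 3.0, 0.0008]   # grain size is relatively the most uncertain
contrib = mvfosm_contributions(bed_load, means, sds)
total_var = sum(contrib)
```

Ranking the contributions reproduces the kind of conclusion the paper reaches: with these assumed uncertainties, the grain-size term contributes most to the variance of the transport estimate, ahead of slope and discharge.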
A Multifactor Approach to Research in Instructional Technology.
ERIC Educational Resources Information Center
Ragan, Tillman J.
In a field such as instructional design, explanations of educational outcomes must necessarily consider multiple input variables. To adequately understand the contribution made by the independent variables, it is helpful to have a visual conception of how the input variables interrelate. Two variable models are adequately represented by a two…
Impact of Spatial Soil and Climate Input Data Aggregation on Regional Yield Simulations
Hoffmann, Holger; Zhao, Gang; Asseng, Senthold; Bindi, Marco; Biernath, Christian; Constantin, Julie; Coucheney, Elsa; Dechow, Rene; Doro, Luca; Eckersten, Henrik; Gaiser, Thomas; Grosz, Balázs; Heinlein, Florian; Kassie, Belay T.; Kersebaum, Kurt-Christian; Klein, Christian; Kuhnert, Matthias; Lewan, Elisabet; Moriondo, Marco; Nendel, Claas; Priesack, Eckart; Raynal, Helene; Roggero, Pier P.; Rötter, Reimund P.; Siebert, Stefan; Specka, Xenia; Tao, Fulu; Teixeira, Edmar; Trombi, Giacomo; Wallach, Daniel; Weihermüller, Lutz; Yeluripati, Jagadeesh; Ewert, Frank
2016-01-01
We show the error in water-limited yields simulated by crop models which is associated with spatially aggregated soil and climate input data. Crop simulations at large scales (regional, national, continental) frequently use input data of low resolution. Therefore, climate and soil data are often generated via averaging and sampling by area majority. This may bias simulated yields at large scales, varying largely across models. Thus, we evaluated the error associated with spatially aggregated soil and climate data for 14 crop models. Yields of winter wheat and silage maize were simulated under water-limited production conditions. We calculated this error from crop yields simulated at spatial resolutions from 1 to 100 km for the state of North Rhine-Westphalia, Germany. Most models showed yields biased by <15% when aggregating only soil data. The relative mean absolute error (rMAE) of most models using aggregated soil data was in the range or larger than the inter-annual or inter-model variability in yields. This error increased further when both climate and soil data were aggregated. Distinct error patterns indicate that the rMAE may be estimated from few soil variables. Illustrating the range of these aggregation effects across models, this study is a first step towards an ex-ante assessment of aggregation errors in large-scale simulations. PMID:27055028
Benoy, Glenn A.; Jenkinson, R. Wayne; Robertson, Dale M.; Saad, David A.
2016-01-01
Excessive phosphorus (TP) and nitrogen (TN) inputs from the Red–Assiniboine River Basin (RARB) have been linked to eutrophication of Lake Winnipeg; therefore, it is important for the management of water resources to understand where and from what sources these nutrients originate. The RARB straddles the Canada–United States border and includes portions of two provinces and three states. This study represents the first binationally focused application of SPAtially Referenced Regressions on Watershed attributes (SPARROW) models to estimate loads and sources of TP and TN by jurisdiction and basin at multiple spatial scales. Major hurdles overcome to develop these models included: (1) harmonization of geospatial data sets, particularly construction of a contiguous stream network; and (2) use of novel calibration steps to accommodate limitations in spatial variability across the model extent and in the number of calibration sites. Using nutrient inputs for a 2002 base year, a RARB TP SPARROW model was calibrated that included inputs from agriculture, forests and wetlands, wastewater treatment plants (WWTPs) and stream channels, and a TN model was calibrated that included inputs from agriculture, WWTPs and atmospheric deposition. At the RARB outlet, downstream from Winnipeg, Manitoba, the majority of the delivered TP and TN came from the Red River Basin (90%), followed by the Upper Assiniboine River and Souris River basins. Agriculture was the single most important TP and TN source for each major basin, province and state. In general, stream channels (historically deposited nutrients and from bank erosion) were the second most important source of TP. Performance metrics for the RARB SPARROW model are similarly robust compared to other, larger US SPARROW models, making it a potentially useful tool to address questions of where nutrients originate and their relative contributions to loads delivered to Lake Winnipeg.
Flight dynamics analysis and simulation of heavy lift airships, volume 4. User's guide: Appendices
NASA Technical Reports Server (NTRS)
Emmen, R. D.; Tischler, M. B.
1982-01-01
This table contains all of the input variables to the three programs. The variables are arranged according to the namelist groups in which they appear in the data files. The program name, subroutine name, definition, and, where appropriate, a default input value and any restrictions are listed with each variable. The default input values are user supplied, not generated by the computer; these values remove a specific effect from the calculations, as explained in the table. The phrase "not used" indicates that a variable is not used in the calculations and is listed for identification purposes only. The engineering symbol, where it exists, is listed to assist the user in correlating these inputs with the discussion in the Technical Manual.
Production Function Geometry with "Knightian" Total Product
ERIC Educational Resources Information Center
Truett, Dale B.; Truett, Lila J.
2007-01-01
Authors of principles and price theory textbooks generally illustrate short-run production using a total product curve that displays first increasing and then diminishing marginal returns to employment of the variable input(s). Although it seems reasonable that a temporary range of increasing returns to variable inputs will likely occur as…
Applications of information theory, genetic algorithms, and neural models to predict oil flow
NASA Astrophysics Data System (ADS)
Ludwig, Oswaldo; Nunes, Urbano; Araújo, Rui; Schnitman, Leizer; Lepikson, Herman Augusto
2009-07-01
This work introduces a new information-theoretic methodology for choosing variables and their time lags in a prediction setting, particularly when neural networks are used in non-linear modeling. The first contribution of this work is the Cross Entropy Function (XEF), proposed to select input variables and their lags in order to compose the input vector of black-box prediction models. The proposed XEF method is more appropriate than the usually applied Cross Correlation Function (XCF) when the relationship between the input and output signals comes from a non-linear dynamic system. The second contribution is a method that minimizes the Joint Conditional Entropy (JCE) between the input and output variables by means of a Genetic Algorithm (GA). The aim is to take into account the dependence among the input variables when selecting the most appropriate set of inputs for a prediction problem. In short, these methods can be used to assist the selection of input training data that carry the information needed to predict the target data. The proposed methods are applied to a petroleum engineering problem: predicting oil production. Experimental results obtained with a real-world dataset are presented, demonstrating the feasibility and effectiveness of the method.
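The advantage of an information-theoretic criterion over cross-correlation for lag selection can be sketched with a simple histogram estimate of mutual information (this is a generic plug-in estimator under an assumed quadratic test relationship, not the XEF/JCE machinery of the paper):

```python
import math
import random
from collections import Counter

def mutual_information(x, y, bins=8):
    """Histogram (plug-in) estimate of mutual information, in nats."""
    def discretize(v):
        lo, hi = min(v), max(v)
        return [min(bins - 1, int((u - lo) / (hi - lo + 1e-12) * bins))
                for u in v]
    bx, by = discretize(x), discretize(y)
    n = len(x)
    cxy, cx, cy = Counter(zip(bx, by)), Counter(bx), Counter(by)
    return sum((c / n) * math.log(c * n / (cx[a] * cy[b]))
               for (a, b), c in cxy.items())

def best_lag(series, target, max_lag):
    """Choose the lag of `series` carrying the most information about `target`."""
    return max(range(1, max_lag + 1),
               key=lambda L: mutual_information(series[:-L], target[L:]))

rng = random.Random(1)
x = [rng.uniform(-1, 1) for _ in range(2000)]
# Nonlinear dependence on lag 3: the cross-correlation of x and y is ~0
# (x is symmetric about zero), but the mutual information at lag 3 is large.
y = [0.0] * 3 + [x[t - 3] ** 2 + 0.05 * rng.gauss(0, 1) for t in range(3, 2000)]
lag = best_lag(x, y, max_lag=6)
```

Because the dependence is quadratic, a correlation-based criterion would see nothing at any lag, while the entropy-based score still identifies the informative lag.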
Structural tailoring of engine blades (STAEBL)
NASA Technical Reports Server (NTRS)
Platt, C. E.; Pratt, T. K.; Brown, K. W.
1982-01-01
A mathematical optimization procedure was developed for the structural tailoring of engine blades and was used to structurally tailor two engine fan blades constructed of composite materials without midspan shrouds. The first was a solid blade made from superhybrid composites, and the second was a hollow blade with metal matrix composite inlays. Three major computerized functions were needed to complete the procedure: approximate analysis with the established input variables, optimization of an objective function, and refined analysis for design verification.
NASA Astrophysics Data System (ADS)
Mishra, H.; Karmakar, S.; Kumar, R.
2016-12-01
Risk assessment is no longer simple when it involves multiple uncertain variables. Uncertainties in risk assessment arise mainly from (1) lack of knowledge of the input variables (mostly random), and (2) data obtained from expert judgment or subjective interpretation of available information (non-random). An integrated probabilistic-fuzzy health risk approach is proposed for the simultaneous treatment of random and non-random uncertainties associated with the input parameters of a health risk model. LandSim 2.5, a landfill simulator, has been used to simulate the activities of the Turbhe landfill (Navi Mumbai, India) over various time horizons. The LandSim-simulated concentrations of six heavy metals in ground water were then used in the health risk model. Water intake, exposure duration, exposure frequency, bioavailability, and averaging time are treated as fuzzy variables, while the heavy metal concentrations and body weight are considered probabilistic variables. Identical alpha-cut and reliability levels are considered for the fuzzy and probabilistic variables, respectively, and uncertainty in the non-carcinogenic human health risk is estimated using ten thousand Monte Carlo simulations (MCS). This is the first effort in which all the health risk variables have been considered non-deterministic for the estimation of uncertainty in the risk output. The non-exceedance probability of the Hazard Index (HI), the sum of the hazard quotients of Co, Cu, Mn, Ni, Zn and Fe, was quantified for the male and female populations and found to be high (HI > 1) for all the time horizons considered, which indicates the possibility of adverse health effects on the population residing near the Turbhe landfill.
Nonequilibrium air radiation (Nequair) program: User's manual
NASA Technical Reports Server (NTRS)
Park, C.
1985-01-01
A supplement to the data relating to the calculation of nonequilibrium radiation in flight regimes of aeroassisted orbital transfer vehicles contains the listings of the computer code NEQAIR (Nonequilibrium Air Radiation), its primary input data, and explanation of the user-supplied input variables. The user-supplied input variables are the thermodynamic variables of air at a given point, i.e., number densities of various chemical species, translational temperatures of heavy particles and electrons, and vibrational temperature. These thermodynamic variables do not necessarily have to be in thermodynamic equilibrium. The code calculates emission and absorption characteristics of air under these given conditions.
Analytic uncertainty and sensitivity analysis of models with input correlations
NASA Astrophysics Data System (ADS)
Zhu, Yueying; Wang, Qiuping A.; Li, Wei; Cai, Xu
2018-03-01
Probabilistic uncertainty analysis is a common means of evaluating mathematical models. In mathematical modeling, the uncertainty in input variables is specified through distribution laws. Its contribution to the uncertainty in the model response is usually analyzed by assuming that the input variables are independent of each other. However, correlated parameters often occur in practical applications. In the present paper, an analytic method is developed for the uncertainty and sensitivity analysis of models in the presence of input correlations. With the method, it is straightforward to identify the importance of the independence and correlations of input variables in determining the model response. This allows one to decide whether or not the input correlations should be considered in practice. Numerical examples demonstrate the effectiveness and validity of our analytic method in the analysis of general models. A practical application of the method to the uncertainty and sensitivity analysis of a deterministic HIV model is also presented.
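The effect of input correlations on output uncertainty can be seen already in the textbook linear case, Var(aX + bY) = a²σx² + b²σy² + 2abρσxσy. The sketch below (an illustration with assumed Gaussian inputs, not the paper's analytic method) compares the analytic variance with a Monte Carlo check:

```python
import math
import random

def analytic_var(a, b, sx, sy, rho):
    """Var(a*X + b*Y) for inputs with std devs sx, sy and correlation rho."""
    return a * a * sx * sx + b * b * sy * sy + 2 * a * b * rho * sx * sy

def mc_var(a, b, sx, sy, rho, n=200000, seed=0):
    """Monte Carlo check: draw correlated Gaussians and estimate the variance."""
    rng = random.Random(seed)
    vals = []
    for _ in range(n):
        x = rng.gauss(0, 1)
        y = rho * x + math.sqrt(1 - rho * rho) * rng.gauss(0, 1)
        vals.append(a * sx * x + b * sy * y)
    m = sum(vals) / n
    return sum((v - m) ** 2 for v in vals) / n

# With rho = 0.5 the true variance is 3; wrongly assuming independence gives 2,
# i.e. ignoring the correlation understates the output uncertainty by a third.
v_corr = analytic_var(1, 1, 1, 1, 0.5)
v_indep = analytic_var(1, 1, 1, 1, 0.0)
v_mc = mc_var(1, 1, 1, 1, 0.5)
```

The gap between `v_corr` and `v_indep` is exactly the kind of discrepancy that tells a modeler whether the input correlations need to be carried through the analysis.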
Harmonize input selection for sediment transport prediction
NASA Astrophysics Data System (ADS)
Afan, Haitham Abdulmohsin; Keshtegar, Behrooz; Mohtar, Wan Hanna Melini Wan; El-Shafie, Ahmed
2017-09-01
In this paper, three modeling approaches, a Neural Network (NN), the Response Surface Method (RSM), and a response surface method based on Global Harmony Search (GHS), are applied to predict the daily time series of suspended sediment load. Generally, the input variables for forecasting suspended sediment load are selected manually based on the maximum correlations of the input variables in NN- and RSM-based modeling approaches. Here, the RSM is improved to select the input variables using the error terms of the training data based on the GHS, yielding the response surface method with global harmony search (RSM-GHS) modeling method. A second-order polynomial function with cross terms is applied to calibrate the time series of suspended sediment load with three, four, and five input variables in the proposed RSM-GHS. The linear, squared, and cross terms of twenty input variables, antecedent values of suspended sediment load and water discharge, are investigated to achieve the best predictions of the RSM based on the GHS method. The performances of the NN, RSM, and proposed RSM-GHS, including both accuracy and simplicity, are compared through several comparative prediction and error statistics. The results illustrate that the proposed RSM-GHS is as uncomplicated as the RSM but performs better, with fewer errors and better correlation (R = 0.95, MAE = 18.09 ton/day, RMSE = 25.16 ton/day) compared to the ANN (R = 0.91, MAE = 20.17 ton/day, RMSE = 33.09 ton/day) and RSM (R = 0.91, MAE = 20.06 ton/day, RMSE = 31.92 ton/day) for all types of input variables.
Akimoto, Yuki; Yugi, Katsuyuki; Uda, Shinsuke; Kudo, Takamasa; Komori, Yasunori; Kubota, Hiroyuki; Kuroda, Shinya
2013-01-01
Cells use common signaling molecules for the selective control of downstream gene expression and cell-fate decisions. The relationship between signaling molecules and downstream gene expression and cellular phenotypes is a multiple-input and multiple-output (MIMO) system and is difficult to understand due to its complexity. For example, it has been reported that, in PC12 cells, different types of growth factors activate MAP kinases (MAPKs) including ERK, JNK, and p38, and CREB, for selective protein expression of immediate early genes (IEGs) such as c-FOS, c-JUN, EGR1, JUNB, and FOSB, leading to cell differentiation, proliferation and cell death; however, how multiple-inputs such as MAPKs and CREB regulate multiple-outputs such as expression of the IEGs and cellular phenotypes remains unclear. To address this issue, we employed a statistical method called partial least squares (PLS) regression, which involves a reduction of the dimensionality of the inputs and outputs into latent variables and a linear regression between these latent variables. We measured 1,200 data points for MAPKs and CREB as the inputs and 1,900 data points for IEGs and cellular phenotypes as the outputs, and we constructed the PLS model from these data. The PLS model highlighted the complexity of the MIMO system and growth factor-specific input-output relationships of cell-fate decisions in PC12 cells. Furthermore, to reduce the complexity, we applied a backward elimination method to the PLS regression, in which 60 input variables were reduced to 5 variables, including the phosphorylation of ERK at 10 min, CREB at 5 min and 60 min, AKT at 5 min and JNK at 30 min. The simple PLS model with only 5 input variables demonstrated a predictive ability comparable to that of the full PLS model. The 5 input variables effectively extracted the growth factor-specific simple relationships within the MIMO system in cell-fate decisions in PC12 cells.
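The dimensionality-reduction-plus-regression core of PLS can be sketched with a single latent variable (one NIPALS component). This is a generic PLS1 illustration on synthetic data with an assumed two-input linear relationship, not the authors' MIMO model:

```python
import random

def pls1_one_component(X, y):
    """One-latent-variable PLS1 (single NIPALS step): project the centered
    inputs onto the covariance-weighted direction w, then regress y on the
    latent score t = Xc w."""
    n, p = len(X), len(X[0])
    xm = [sum(row[j] for row in X) / n for j in range(p)]
    ym = sum(y) / n
    Xc = [[row[j] - xm[j] for j in range(p)] for row in X]
    yc = [v - ym for v in y]
    # Weight vector: covariance of each input with the output
    w = [sum(Xc[i][j] * yc[i] for i in range(n)) for j in range(p)]
    norm = sum(v * v for v in w) ** 0.5
    w = [v / norm for v in w]
    t = [sum(Xc[i][j] * w[j] for j in range(p)) for i in range(n)]
    q = sum(ti * yi for ti, yi in zip(t, yc)) / sum(ti * ti for ti in t)

    def predict(x):
        return ym + q * sum((x[j] - xm[j]) * w[j] for j in range(p))
    return predict

rng = random.Random(3)
X = [[rng.uniform(0, 1) for _ in range(2)] for _ in range(500)]
y = [2 * a + 2 * b for a, b in X]  # output driven by one latent direction
model = pls1_one_component(X, y)
pred = [model(x) for x in X]
```

Because the output here depends on a single linear combination of the inputs, one latent variable already captures nearly all of the variance, which is the same logic by which the study's backward elimination could shrink 60 inputs to 5 with little loss of predictive ability.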
Automated sequence-specific protein NMR assignment using the memetic algorithm MATCH.
Volk, Jochen; Herrmann, Torsten; Wüthrich, Kurt
2008-07-01
MATCH (Memetic Algorithm and Combinatorial Optimization Heuristics) is a new memetic algorithm for automated sequence-specific polypeptide backbone NMR assignment of proteins. MATCH employs local optimization for tracing partial sequence-specific assignments within a global, population-based search environment, where the simultaneous application of local and global optimization heuristics guarantees high efficiency and robustness. MATCH thus makes combined use of the two predominant concepts in use for automated NMR assignment of proteins. Dynamic transition and inherent mutation are new techniques that enable automatic adaptation to variable quality of the experimental input data. The concept of dynamic transition is incorporated in all major building blocks of the algorithm, where it enables switching between local and global optimization heuristics at any time during the assignment process. Inherent mutation restricts the intrinsically required randomness of the evolutionary algorithm to those regions of the conformation space that are compatible with the experimental input data. Using intact and artificially deteriorated APSY-NMR input data of proteins, MATCH performed sequence-specific resonance assignment with high efficiency and robustness.
Spatial variability of metals in the inter-tidal sediments of the Medway Estuary, Kent, UK.
Spencer, Kate L
2002-09-01
Concentrations of major and trace metals were determined in eight sediment cores collected from the inter-tidal zone of the Medway Estuary, Kent, UK. Metal associations and potential sources have been investigated using principal component analysis. These data provide the first detailed geochemical survey of recent sediments in the Medway Estuary. Metal concentrations in surface sediments lie in the mid to lower range for UK estuarine sediments, indicating that the Medway receives low but appreciable contaminant inputs. Vertical metal distributions reveal variable redox zonation across the estuary and historically elevated anthropogenic inputs. Peak concentrations of Cu, Pb and Zn can be traced laterally across the estuary and their positions indicate periods of past erosion and/or non-deposition. However, low rates of sediment accumulation do not allow these subsurface maxima to be used as accurate geochemical marker horizons. The salt marshes and inter-tidal mud flats in the Medway Estuary are experiencing erosion; however, the erosion of historically contaminated sediments is unlikely to re-release significant amounts of heavy metals to the estuarine system.
19 CFR 351.407 - Calculation of constructed value and cost of production.
Code of Federal Regulations, 2014 CFR
2014-04-01
... production. (See section 773(f) of the Act.) (b) Determination of value under the major input rule. For purposes of section 773(f)(3) of the Act, the Secretary normally will determine the value of a major input... to the affiliated person for the major input; (2) The amount usually reflected in sales of the major...
19 CFR 351.407 - Calculation of constructed value and cost of production.
Code of Federal Regulations, 2011 CFR
2011-04-01
... production. (See section 773(f) of the Act.) (b) Determination of value under the major input rule. For purposes of section 773(f)(3) of the Act, the Secretary normally will determine the value of a major input... to the affiliated person for the major input; (2) The amount usually reflected in sales of the major...
19 CFR 351.407 - Calculation of constructed value and cost of production.
Code of Federal Regulations, 2012 CFR
2012-04-01
... production. (See section 773(f) of the Act.) (b) Determination of value under the major input rule. For purposes of section 773(f)(3) of the Act, the Secretary normally will determine the value of a major input... to the affiliated person for the major input; (2) The amount usually reflected in sales of the major...
19 CFR 351.407 - Calculation of constructed value and cost of production.
Code of Federal Regulations, 2013 CFR
2013-04-01
... production. (See section 773(f) of the Act.) (b) Determination of value under the major input rule. For purposes of section 773(f)(3) of the Act, the Secretary normally will determine the value of a major input... to the affiliated person for the major input; (2) The amount usually reflected in sales of the major...
19 CFR 351.407 - Calculation of constructed value and cost of production.
Code of Federal Regulations, 2010 CFR
2010-04-01
... production. (See section 773(f) of the Act.) (b) Determination of value under the major input rule. For purposes of section 773(f)(3) of the Act, the Secretary normally will determine the value of a major input... to the affiliated person for the major input; (2) The amount usually reflected in sales of the major...
Using Natural Language to Enhance Mission Effectiveness
NASA Technical Reports Server (NTRS)
Trujillo, Anna C.; Meszaros, Erica
2016-01-01
The availability of highly capable, yet relatively cheap, unmanned aerial vehicles (UAVs) is opening up new areas of use for hobbyists and for professional-related activities. The driving function of this research is allowing a non-UAV pilot, an operator, to define and manage a mission. This paper describes the preliminary usability measures of an interface that allows an operator to define the mission using speech inputs. An experiment was conducted to begin to enumerate the efficacy and user acceptance of using voice commands to define a multi-UAV mission and to provide high-level vehicle control commands such as "takeoff." The primary independent variable was input type (voice or mouse). The primary dependent variables consisted of the correctness of the mission parameter inputs and the time needed to make all inputs. Other dependent variables included NASA-TLX workload ratings and subjective ratings on a final questionnaire. The experiment required each subject to fill in an online form that contained comparable required information that would be needed for a package dispatcher to deliver packages. For each run, subjects typed in a simple numeric code for the package. They then defined the initial starting position, the delivery location, and the return location using either pull-down menus or voice input. Voice input was accomplished using CMU Sphinx4-5prealpha for speech recognition. They then input the length of the package; these were the optional fields. The subject had the system "Calculate Trajectory" and then "Takeoff" once the trajectory was calculated. Later, the subject used "Land" to finish the run. After the blocks of voice and mouse input runs, subjects completed a NASA-TLX. At the conclusion of all runs, subjects completed a questionnaire asking them about their experience in inputting the mission parameters and starting and stopping the mission using mouse and voice input. In general, the usability of voice commands is acceptable.
With a relatively well-defined and simple vocabulary, the operator can input the vast majority of the mission parameters using simple, intuitive voice commands. However, voice input may be more applicable to initial mission specification than to critical commands, such as the need to land immediately, due to time and feedback constraints. It would also be convenient to retrieve relevant mission information using voice input. Therefore, ongoing research is examining the use of intent from operator utterances to provide relevant mission information to the operator. The information displayed will be inferred from the operator's utterances just before key phrases are spoken. Linguistic analysis of the context of verbal communication provides insight into the intended meaning of commonly heard phrases such as "What's it doing now?" Analyzing the semantic sphere surrounding these common phrases enables us to predict the operator's intent and supply the operator's desired information to the interface. This paper also describes preliminary investigations into the generation of the semantic space of UAV operation and the success of providing information to the interface based on the operator's utterances.
Essential Role of the m2R-RGS6-IKACh Pathway in Controlling Intrinsic Heart Rate Variability
Posokhova, Ekaterina; Ng, David; Opel, Aaisha; Masuho, Ikuo; Tinker, Andrew; Biesecker, Leslie G.; Wickman, Kevin; Martemyanov, Kirill A.
2013-01-01
Normal heart function requires generation of a regular rhythm by sinoatrial pacemaker cells and the alteration of this spontaneous heart rate by the autonomic input to match physiological demand. However, the molecular mechanisms that ensure consistent periodicity of cardiac contractions and fine tuning of this process by the autonomic system are not completely understood. Here we examined the contribution of the m2R-IKACh intracellular signaling pathway, which mediates the negative chronotropic effect of parasympathetic stimulation, to the regulation of the cardiac pacemaking rhythm. Using isolated heart preparations and single-cell recordings, we show that the m2R-IKACh signaling pathway controls the excitability and firing pattern of the sinoatrial cardiomyocytes and determines variability of cardiac rhythm in a manner independent from the autonomic input. Ablation of the major regulator of this pathway, Rgs6, in mice results in irregular cardiac rhythmicity and increases susceptibility to atrial fibrillation. We further identify several human subjects with variants in the RGS6 gene and show that the loss of function in RGS6 correlates with increased heart rate variability. These findings identify the essential role of the m2R-IKACh signaling pathway in the regulation of cardiac sinus rhythm and implicate RGS6 in arrhythmia pathogenesis. PMID:24204714
Influential input classification in probabilistic multimedia models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maddalena, Randy L.; McKone, Thomas E.; Hsieh, Dennis P.H.
1999-05-01
Monte Carlo analysis is a statistical simulation method that is often used to assess and quantify the outcome variance in complex environmental fate and effects models. Total outcome variance of these models is a function of (1) the uncertainty and/or variability associated with each model input and (2) the sensitivity of the model outcome to changes in the inputs. To propagate variance through a model using Monte Carlo techniques, each variable must be assigned a probability distribution. The validity of these distributions directly influences the accuracy and reliability of the model outcome. To efficiently allocate resources for constructing distributions, one should first identify the most influential set of variables in the model. Although existing sensitivity and uncertainty analysis methods can provide a relative ranking of the importance of model inputs, they fail to identify the minimum set of stochastic inputs necessary to sufficiently characterize the outcome variance. In this paper, we describe and demonstrate a novel sensitivity/uncertainty analysis method for assessing the importance of each variable in a multimedia environmental fate model. Our analyses show that for a given scenario, a relatively small number of input variables influence the central tendency of the model and an even smaller set determines the shape of the outcome distribution. For each input, the level of influence depends on the scenario under consideration. This information is useful for developing site-specific models and improving our understanding of the processes that have the greatest influence on the variance in outcomes from multimedia models.
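A minimal version of this kind of Monte Carlo influence screening can be sketched by sampling all inputs, running the model, and ranking inputs by their correlation with the output (the toy "fate model" and uniform distributions below are assumptions for illustration, not the paper's method):

```python
import random

def influence_ranking(model, samplers, n=4000, seed=0):
    """Monte Carlo screening: sample all inputs, run the model, and rank the
    inputs by |Pearson correlation| between each input column and the output."""
    rng = random.Random(seed)
    rows = [[s(rng) for s in samplers] for _ in range(n)]
    out = [model(r) for r in rows]

    def pearson(xs, ys):
        mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        vx = sum((x - mx) ** 2 for x in xs)
        vy = sum((y - my) ** 2 for y in ys)
        return cov / (vx * vy) ** 0.5

    scores = [abs(pearson([r[i] for r in rows], out))
              for i in range(len(samplers))]
    return sorted(range(len(samplers)), key=lambda i: -scores[i])

# Toy fate model: the output is dominated by input 0 and barely
# affected by input 2, so the ranking should be [0, 1, 2].
u = lambda rng: rng.uniform(0, 1)
rank = influence_ranking(lambda x: 10 * x[0] + x[1] + 0.1 * x[2], [u, u, u])
```

In practice such a ranking identifies the small set of inputs whose distributions are worth characterizing carefully; the remaining inputs can often be fixed at nominal values with little effect on the outcome distribution.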
Climate effects on phytoplankton floral composition in Chesapeake Bay
NASA Astrophysics Data System (ADS)
Harding, L. W.; Adolf, J. E.; Mallonee, M. E.; Miller, W. D.; Gallegos, C. L.; Perry, E. S.; Johnson, J. M.; Sellner, K. G.; Paerl, H. W.
2015-09-01
Long-term data on floral composition of phytoplankton are presented to document seasonal and inter-annual variability in Chesapeake Bay related to climate effects on hydrology. Source data consist of the abundances of major taxonomic groups of phytoplankton derived from algal photopigments (1995-2004) and cell counts (1985-2007). Algal photopigments were measured by high-performance liquid chromatography (HPLC) and analyzed using the software CHEMTAX to determine the proportions of chlorophyll-a (chl-a) in major taxonomic groups. Cell counts determined microscopically provided species identifications, enumeration, and dimensions used to obtain proportions of cell volume (CV), plasma volume (PV), and carbon (C) in the same taxonomic groups. We drew upon these two independent data sets to take advantage of the unique strengths of each method, using comparable quantitative measures to express floral composition for the main stem bay. Spatial and temporal variability of floral composition was quantified using data aggregated by season, year, and salinity zone. Both time-series were sufficiently long to encompass the drought-flood cycle with commensurate effects on inputs of freshwater and solutes. Diatoms emerged as the predominant taxonomic group, with significant contributions by dinoflagellates, cryptophytes, and cyanobacteria, depending on salinity zone and season. Our analyses revealed increased abundance of diatoms in wet years compared to long-term average (LTA) or dry years. Results are presented in the context of long-term nutrient over-enrichment of the bay, punctuated by inter-annual variability of freshwater flow that strongly affects nutrient loading, chl-a, and floral composition. 
Statistical analyses generated flow-adjusted diatom abundance and showed significant trends late in the time series, suggesting current and future decreases of nutrient inputs may lead to a reduction of the proportion of biomass comprised by diatoms in an increasingly diverse flora.
Peer Educators and Close Friends as Predictors of Male College Students' Willingness to Prevent Rape
ERIC Educational Resources Information Center
Stein, Jerrold L.
2007-01-01
Astin's (1977, 1991, 1993) input-environment-outcome (I-E-O) model provided a conceptual framework for this study which measured 156 male college students' willingness to prevent rape (outcome variable). Predictor variables included personal attitudes (input variable), perceptions of close friends' attitudes toward rape and rape prevention…
The Effects of a Change in the Variability of Irrigation Water
NASA Astrophysics Data System (ADS)
Lyon, Kenneth S.
1983-10-01
This paper examines the short-run effects upon several variables of an increase in the variability of an input. The measure of an increase in the variability is the "mean preserving spread" suggested by Rothschild and Stiglitz (1970). The variables examined are real income (utility), expected profits, expected output, the quantity used of the controllable input, and the shadow price of the stochastic input. Four striking features of the results follow: (1) The concepts that have been useful in summarizing deterministic comparative static results are nearly absent when an input is stochastic. (2) Most of the signs of the partial derivatives depend upon more than concavity of the utility and production functions. (3) If the utility function is not "too" risk averse, then the risk-neutral results hold for the risk-aversion case. (4) If the production function is Cobb-Douglas, then definite results are achieved if the utility function is linear or if the "degree of risk-aversion" is "small."
Optimal allocation of testing resources for statistical simulations
NASA Astrophysics Data System (ADS)
Quintana, Carolina; Millwater, Harry R.; Singh, Gulshan; Golden, Patrick
2015-07-01
Statistical estimates from simulation involve uncertainty caused by the variability in the input random variables due to limited data. Allocating resources to obtain more experimental data on the input variables, to better characterize their probability distributions, can reduce the variance of statistical estimates. The proposed methodology determines the optimal number of additional experiments required to minimize the variance of the output moments given single or multiple constraints. The method uses the multivariate t-distribution and the Wishart distribution to generate realizations of the population mean and covariance of the input variables, respectively, given the amount of available data, and it handles both independent and correlated random variables. A particle swarm method is used for the optimization. The optimal number of additional experiments per variable depends on the number and variance of the initial data, the influence of the variable on the output function, and the cost of each additional experiment. The methodology is demonstrated using a fretting fatigue example.
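The parameter-realization step described above can be sketched as follows. This is a minimal illustration with invented synthetic data for three correlated input variables (sample size, means, and covariances are assumptions for the example, not values from the paper):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical initial data: n observations of 3 correlated input variables.
n, d = 25, 3
data = rng.multivariate_normal([1.0, 2.0, 3.0],
                               [[1.0, 0.3, 0.0],
                                [0.3, 2.0, 0.5],
                                [0.0, 0.5, 1.5]], size=n)
xbar = data.mean(axis=0)
S = np.cov(data, rowvar=False)

def draw_population_parameters(xbar, S, n, rng):
    """Draw one plausible (mean, covariance) pair for the population,
    reflecting the uncertainty left by having only n data points."""
    # Covariance realization: inverse-Wishart centred on the sample covariance.
    cov = stats.invwishart(df=n - 1, scale=S * (n - 1)).rvs(random_state=rng)
    # Mean realization: multivariate t around the sample mean.
    mean = stats.multivariate_t(loc=xbar, shape=S / n, df=n - 1).rvs(random_state=rng)
    return mean, cov

means = np.array([draw_population_parameters(xbar, S, n, rng)[0]
                  for _ in range(500)])
# The spread of the drawn means shrinks roughly like 1/sqrt(n) as data are added.
print(means.std(axis=0))
```

Adding experiments increases n, which tightens both distributions and hence the variance of any output moment computed from these realizations.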
Three-input majority logic gate and multiple input logic circuit based on DNA strand displacement.
Li, Wei; Yang, Yang; Yan, Hao; Liu, Yan
2013-06-12
In biomolecular programming, the properties of biomolecules such as proteins and nucleic acids are harnessed for computational purposes. The field has gained considerable attention due to the possibility of exploiting the massive parallelism that is inherent in natural systems to solve computational problems. DNA has already been used to build complex molecular circuits, where the basic building blocks are logic gates that produce single outputs from one or more logical inputs. We designed and experimentally realized a three-input majority gate based on DNA strand displacement. One of the key features of a three-input majority gate is that the three inputs have equal priority, and the output will be true if at least two of the inputs are true. Our design consists of a central, circular DNA strand with three unique domains between which are identical joint sequences. Before inputs are introduced to the system, each domain and half of each joint is protected by one complementary ssDNA that displays a toehold for subsequent displacement by the corresponding input. With this design the relationship between any two domains is analogous to the relationship between inputs in a majority gate. Displacing two or more of the protection strands will expose at least one complete joint and return a true output; displacing none or only one of the protection strands will not expose a complete joint and will return a false output. Further, we designed and realized a complex five-input logic gate based on the majority gate described here. By controlling two of the five inputs the complex gate can realize every combination of OR and AND gates of the other three inputs.
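The logical behaviour described above (not the strand-displacement chemistry) can be sketched in a few lines; note in particular how fixing two control inputs of a five-input majority gate recovers OR and AND of the remaining three:

```python
def majority(*inputs: bool) -> bool:
    """True when more than half of the inputs are true."""
    return sum(inputs) > len(inputs) // 2

# Three-input gate: all inputs have equal priority; any two true
# inputs yield a true output.
assert majority(True, True, False)
assert not majority(True, False, False)

def complex_gate(c1: bool, c2: bool, x: bool, y: bool, z: bool) -> bool:
    """Five-input majority gate with two control lines, mirroring the
    complex gate described above: both controls True -> OR(x, y, z);
    both controls False -> AND(x, y, z)."""
    return majority(c1, c2, x, y, z)

assert complex_gate(True, True, False, False, True)       # OR behaviour
assert not complex_gate(False, False, True, True, False)  # AND behaviour
```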
Input variable selection and calibration data selection for storm water quality regression models.
Sun, Siao; Bertrand-Krajewski, Jean-Luc
2013-01-01
Storm water quality models are useful tools in storm water management. Interest has been growing in analyzing existing data to develop models for urban storm water quality evaluation. It is important to select appropriate model inputs when many candidate explanatory variables are available, and model calibration and verification are essential steps in any storm water quality modeling. This study investigates input variable selection and calibration data selection in storm water quality regression models. Because the two selection problems interact, a procedure is developed to address them in sequence. The procedure first selects model input variables using a cross-validation method; an appropriate number of variables is identified as model inputs to ensure that the model is neither overfitted nor underfitted. Based on the input selection results, calibration data selection is then studied. Uncertainty in model performance due to calibration data selection is investigated with a random selection method, and a cluster-based approach is applied to enhance calibration practice by selecting representative data for calibration. Comparison between the cluster selection and random selection results shows that the former can significantly improve the performance of calibrated models. The information content of the calibration data is found to be important in addition to its size.
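The input-selection step can be sketched as a greedy forward search scored by cross validation. The data below are synthetic and the stopping rule (stop when no candidate lowers the CV error) is one simple possibility, not necessarily the paper's exact procedure:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data set: a pollutant load explained by 5 candidate
# explanatory variables, only two of which (0 and 2) are informative.
n = 120
X = rng.normal(size=(n, 5))
y = 2.0 * X[:, 0] - 1.0 * X[:, 2] + rng.normal(scale=0.5, size=n)

def cv_mse(X, y, cols, k=5):
    """Mean squared prediction error of ordinary least squares on the
    given columns, estimated by k-fold cross validation."""
    idx = np.arange(len(y))
    errs = []
    for fold in np.array_split(idx, k):
        train = np.setdiff1d(idx, fold)
        A = np.column_stack([np.ones(len(train)), X[np.ix_(train, cols)]])
        coef, *_ = np.linalg.lstsq(A, y[train], rcond=None)
        At = np.column_stack([np.ones(len(fold)), X[np.ix_(fold, cols)]])
        errs.append(np.mean((y[fold] - At @ coef) ** 2))
    return float(np.mean(errs))

# Greedy forward selection: add the variable that most reduces CV error;
# stop when no candidate improves it (guards against overfitting).
selected, remaining = [], list(range(X.shape[1]))
best = cv_mse(X, y, [])          # intercept-only baseline
while remaining:
    scores = {j: cv_mse(X, y, selected + [j]) for j in remaining}
    j_best = min(scores, key=scores.get)
    if scores[j_best] >= best:
        break
    selected.append(j_best)
    remaining.remove(j_best)
    best = scores[j_best]

print(sorted(selected))
```

With strong signals like these, the search recovers the informative variables and rejects spurious ones because adding noise columns does not reduce cross-validated error.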
European nitrogen policies, nitrate in rivers and the use of the INCA model
NASA Astrophysics Data System (ADS)
Skeffington, R.
This paper is concerned with nitrogen inputs to European catchments, how they are likely to change in future, and the implications for the INCA model. National N budgets show that the fifteen countries currently in the European Union (the EU-15 countries) probably have positive N balances - that is, N inputs exceed outputs. The major sources are atmospheric deposition, fertilisers and animal feed, the relative importance of which varies between countries. The magnitude of the fluxes which determine the transport and retention of N in catchments is also very variable in both space and time. The most important of these fluxes are parameterised directly or indirectly in the INCA Model, though it is doubtful whether the present version of the model is flexible enough to encompass short-term (daily) variations in inputs or longer-term (decadal) changes in soil parameters. As an aid to predicting future changes in deposition, international legislation relating to atmospheric N inputs and nitrate in rivers is reviewed briefly. Atmospheric N deposition and fertiliser use are likely to decrease over the next 10 years, but probably not sufficiently to balance national N budgets.
Artificial neural network model for ozone concentration estimation and Monte Carlo analysis
NASA Astrophysics Data System (ADS)
Gao, Meng; Yin, Liting; Ning, Jicai
2018-07-01
Air pollution in the urban atmosphere directly affects public health; it is therefore essential to predict air pollutant concentrations. Air quality is a complex function of emissions, meteorology, and topography, and artificial neural networks (ANNs) provide a sound framework for relating these variables. In this study, we investigated the feasibility of using an ANN model with meteorological parameters as input variables to predict ozone concentration in the urban area of Jinan, a metropolis in northern China. We first found that the architecture of the network of neurons had little effect on the predictive capability of the ANN model. A parsimonious ANN model was identified with 6 routinely monitored meteorological parameters and one temporal covariate (the category of day, i.e., working day, legal holiday, or regular weekend) as input variables, the 7 input variables being selected by a forward selection procedure. Compared with a benchmark ANN model using 9 meteorological and photochemical parameters as input variables, the predictive capability of the parsimonious ANN model was acceptable; it was also verified in terms of the warning success ratio during pollution episodes. Finally, uncertainty and sensitivity analyses were performed based on Monte Carlo simulations (MCS). It was concluded that the ANN can properly predict the ambient ozone level. Maximum temperature, atmospheric pressure, sunshine duration, and maximum wind speed were identified as the predominant input variables influencing the prediction of ambient ozone concentrations.
NASA Astrophysics Data System (ADS)
Levi, L.; Cvetkovic, V.; Destouni, G.
2015-12-01
This study compiles estimates of waterborne nutrient concentrations and loads in the Sava River Catchment (SRC). Based on this compilation, we investigate hotspots of nutrient inputs and retention along the river, as well as concentration and load correlations with river discharge and various human drivers of excess nutrient inputs to the SRC. For cross-regional assessment and possible generalization, we also compare corresponding results between the SRC and the Baltic Sea Drainage Basin (BSDB). In the SRC, one small incremental subcatchment, which is located just downstream of Zagreb and has the highest population density among the SRC subcatchments, is identified as a major hotspot for net loading (input minus retention) of both total nitrogen (TN) and total phosphorus (TP) to the river and through it to downstream areas of the SRC. The other SRC subcatchments exhibit relatively similar characteristics with smaller net nutrient loading. The annual loads of both TN and TP along the Sava River exhibit dominant temporal variability with considerably higher correlation with annual river discharge (R2 = 0.51 and 0.28, respectively) than that of annual average nutrient concentrations (R2 = 0.0 versus discharge for both TN and TP). Nutrient concentrations exhibit instead dominant spatial variability with relatively high correlation with population density among the SRC subcatchments (R2 = 0.43-0.64). These SRC correlation characteristics compare well with corresponding ones for the BSDB, even though the two regions are quite different in their hydroclimatic, agricultural and wastewater treatment conditions. Such cross-regional consistency in dominant variability type and explanatory catchment characteristics may be a useful generalization basis, worthy of further investigation, for at least first-order estimation of nutrient concentration and load conditions in less data-rich regions.
Correction of I/Q channel errors without calibration
Doerry, Armin W.; Tise, Bertice L.
2002-01-01
A method of providing a balanced demodulator output for a signal, such as a Doppler radar signal having an analog pulsed input, includes adding a variable phase shift as a function of time to the input signal, applying the phase-shifted input signal to a demodulator, and generating a baseband signal from the input signal. The baseband signal is low-pass filtered and converted to a digital output signal. By removing the variable phase shift from the digital output signal, a complex data output is formed that is representative of the output of a balanced demodulator.
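A numerical sketch of the idea, assuming an invented channel imbalance and a constant-phase input: a per-pulse phase shift is added before the (unbalanced) demodulator and removed digitally afterwards, so the imbalance-induced image term acquires a random phase and averages away across pulses:

```python
import numpy as np

rng = np.random.default_rng(2)

def imbalanced_demod(signal_phase):
    """Hypothetical unbalanced I/Q demodulator: the Q channel has small
    gain and phase errors, which create an 'image' artifact."""
    g, phi = 1.05, np.deg2rad(3.0)   # assumed channel imbalance
    i = np.cos(signal_phase)
    q = g * np.sin(signal_phase + phi)
    return i + 1j * q

pulses = 256
theta = 0.7                              # true (constant) phase of the input
psi = 2 * np.pi * rng.random(pulses)     # variable phase shift, one per pulse

# Without the trick: every pulse carries the same biased complex sample.
plain = imbalanced_demod(np.full(pulses, theta))

# With the trick: shift the input phase by psi before demodulation, then
# remove psi from the digital output.
corrected = imbalanced_demod(theta + psi) * np.exp(-1j * psi)

ideal = np.exp(1j * theta)
err_plain = abs(plain.mean() - ideal)
err_corr = abs(corrected.mean() - ideal)
print(err_plain, err_corr)   # the averaged corrected output is far closer
```

Writing the unbalanced output as A·e^{jp} + B·e^{-jp}, the digital de-rotation turns the image term B into B·e^{-2jψ}, which averages toward zero over pulses, without any calibration of g or phi.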
Tracking contaminants down the Mississippi
Swarzenski, P.; Campbell, P.
2004-01-01
The Mississippi River and its last major downstream distributary, the Atchafalaya River, provide approximately 90 percent of the freshwater input to the Gulf of Mexico. Analyses of sediment cores using organic and inorganic tracers as well as benthic foraminifera appear to provide a reliable record of the historic variability of hypoxia in the northern Gulf of Mexico over the past few centuries. Natural variability in hypoxic events may have been driven largely by El Niño/La Niña flooding cycles prior to recent increases in nutrient loading. Specifically, large floods in 1979, 1983, 1993 and 1998, compounded with the widespread use of fertilizers, also appear at least partially responsible for the recent (post-1980) dramatic increase of hypoxic events in the Mississippi Bight.
NASA Astrophysics Data System (ADS)
Ruggieri, Nicoletta; Kaiser, Jérôme; Arz, Helge W.; Hefter, Jens; Siegel, Herbert; Mollenhauer, Gesine; Lamy, Frank
2014-05-01
A series of molecular organic markers were determined in surface sediments from the Gulf of Genoa (Ligurian Sea) in order to evaluate their potential for palaeo-environmental reconstructions. The interest in the Gulf of Genoa lies in its contrasting coastal and central areas in terms of terrestrial input, oligotrophy, primary production and surface temperature gradient. Moreover, the Gulf of Genoa holds a large potential for climate reconstruction, as it is one of the four major Mediterranean centres of cyclogenesis and the ultra-high sedimentation rates on the shelf make this area suitable for high-resolution environmental reconstruction. Initial results from sediment cores in the coastal area indeed reveal the potential for Holocene environmental reconstruction on up to decadal timescales (see Poster "Reconstruction of late Holocene flooding events in the Gulf of Genoa, Ligurian Sea" by Lamy et al.). During R/V Poseidon cruise P413 (May 2011), ca. 60 sediment cores were taken along the Ligurian shelf and continental slope, and in the basin offshore between Livorno and the French border. Results based on surface sediments suggest that some biomarker-based proxies are well suited to reconstruct sea surface temperature (SST), the input of terrestrial organic material (TOM), and marine primary productivity (PP). The estimated UK'37 SST reflects very closely the autumnal mean satellite-based SST distribution, while TEXH86 SSTs correspond to summer SST at offshore sites and to winter SST at the nearshore sites. Using both SST proxies together may thus allow reconstructing past seasonality changes. Proxies for TOM input (terrestrial n-alkane and n-alkanol concentrations, BIT index) have higher values close to the major river mouths and decrease offshore, suggesting that they may be used as proxies for the variability in TOM input by runoff. Interestingly, high n-alkane average chain length at the most offshore sites may result from aeolian input from northern Africa.
Finally, high concentrations of crenarchaeol and isoprenoid GDGTs in the open basin illustrate the preference of Thaumarchaeota for oligotrophic waters. This study represents a major prerequisite for the future application of lipid biomarkers on sediment cores from the Gulf of Genoa.
Context effects on second-language learning of tonal contrasts.
Chang, Charles B; Bowles, Anita R
2015-12-01
Studies of lexical tone learning generally focus on monosyllabic contexts, while reports of phonetic learning benefits associated with input variability are based largely on experienced learners. This study trained inexperienced learners on Mandarin tonal contrasts to test two hypotheses regarding the influence of context and variability on tone learning. The first hypothesis was that increased phonetic variability of tones in disyllabic contexts makes initial tone learning more challenging in disyllabic than monosyllabic words. The second hypothesis was that the learnability of a given tone varies across contexts due to differences in tonal variability. Results of a word learning experiment supported both hypotheses: tones were acquired less successfully in disyllables than in monosyllables, and the relative difficulty of disyllables was closely related to contextual tonal variability. These results indicate limited relevance of monosyllable-based data on Mandarin learning for the disyllabic majority of the Mandarin lexicon. Furthermore, in the short term, variability can diminish learning; its effects are not necessarily beneficial but dependent on acquisition stage and other learner characteristics. These findings thus highlight the importance of considering contextual variability and the interaction between variability and type of learner in the design, interpretation, and application of research on phonetic learning.
Delpierre, Nicolas; Berveiller, Daniel; Granda, Elena; Dufrêne, Eric
2016-04-01
Although the analysis of flux data has increased our understanding of the interannual variability of carbon inputs into forest ecosystems, we still know little about the determinants of wood growth. Here, we aimed to identify which drivers control the interannual variability of wood growth in a mesic temperate deciduous forest. We analysed a 9-yr time series of carbon fluxes and aboveground wood growth (AWG), reconstructed at a weekly time-scale through the combination of dendrometer and wood density data. Carbon inputs and AWG anomalies appeared to be uncorrelated from the seasonal to interannual scales. More than 90% of the interannual variability of AWG was explained by a combination of the growth intensity during a first 'critical period' of the wood growing season, occurring close to the seasonal maximum, and the timing of the first summer growth halt. Both atmospheric and soil water stress exerted a strong control on the interannual variability of AWG at the study site, despite its mesic conditions, whilst not affecting carbon inputs. Carbon sink activity, not carbon inputs, determined the interannual variations in wood growth at the study site. Our results provide a functional understanding of the dependence of radial growth on precipitation observed in dendrological studies. © 2015 The Authors. New Phytologist © 2015 New Phytologist Trust.
Input Variability Facilitates Unguided Subcategory Learning in Adults
Eidsvåg, Sunniva Sørhus; Austad, Margit; Plante, Elena; Asbjørnsen, Arve E.
2015-01-01
Purpose: This experiment investigated whether input variability would affect initial learning of noun gender subcategories in an unfamiliar, natural language (Russian), as it is known to assist learning of other grammatical forms. Method: Forty adults (20 men, 20 women) were familiarized with examples of masculine and feminine Russian words. Half of the participants were familiarized with 32 different root words in a high-variability condition. The other half were familiarized with 16 different root words, each repeated twice for a total of 32 presentations, in a high-repetition condition. Participants were tested on untrained members of the category to assess generalization. Familiarization and testing were completed 2 additional times. Results: Only participants in the high-variability group showed evidence of learning after an initial period of familiarization. Participants in the high-repetition group were able to learn after additional input. Both groups benefited when words included 2 cues to gender compared to a single cue. Conclusions: The results demonstrate that the degree of input variability can influence learners' ability to generalize a grammatical subcategory (noun gender) from a natural language. In addition, the presence of multiple cues to linguistic subcategory facilitated learning independent of variability condition. PMID:25680081
Latin Hypercube Sampling (LHS) UNIX Library/Standalone
DOE Office of Scientific and Technical Information (OSTI.GOV)
2004-05-13
The LHS UNIX Library/Standalone software provides the capability to draw random samples from over 30 distribution types. It performs the sampling by a stratified sampling method called Latin Hypercube Sampling (LHS). Multiple distributions can be sampled simultaneously, with user-specified correlations amongst the input distributions, so LHS UNIX Library/Standalone provides a way to generate multivariate samples. The LHS samples can be generated either through a callable library (e.g., from within the DAKOTA software framework) or as a standalone capability. LHS is a constrained Monte Carlo sampling scheme: the range of each variable is divided into non-overlapping intervals on the basis of equal probability, and a sample is selected at random with respect to the probability density in each interval. If multiple variables are sampled simultaneously, the values obtained for each are paired in a random manner with the n values of the other variables; in some cases, the pairing is restricted to obtain specified correlations amongst the input variables. Many simulation codes have input parameters that are uncertain and can be specified by a distribution. To perform uncertainty analysis and sensitivity analysis, random values are drawn from the input parameter distributions, and the simulation is run with these values to obtain output values. If this is done repeatedly, with many input samples drawn, one can build up a distribution of the output as well as examine correlations between input and output variables.
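The stratification-and-pairing scheme described above can be sketched in a few lines. This is a minimal LHS over the unit hypercube for illustration, not the library's actual implementation:

```python
import numpy as np

def latin_hypercube(n_samples, n_vars, rng):
    """Basic LHS: split [0, 1) into n_samples equal-probability intervals
    per variable, draw one point at random inside each interval, then
    shuffle the pairing between variables."""
    u = np.empty((n_samples, n_vars))
    for j in range(n_vars):
        # one random point inside each of the n non-overlapping intervals
        points = (rng.random(n_samples) + np.arange(n_samples)) / n_samples
        u[:, j] = rng.permutation(points)   # random pairing across variables
    return u

rng = np.random.default_rng(0)
u = latin_hypercube(10, 3, rng)

# Map uniform samples to an input distribution via the inverse CDF,
# e.g. an exponential input parameter with mean 2 (illustrative only).
x = -2.0 * np.log(1.0 - u[:, 0])

# Stratification check: exactly one sample falls in each of the 10 bins.
counts = np.histogram(u[:, 0], bins=np.linspace(0, 1, 11))[0]
print(counts)   # → [1 1 1 1 1 1 1 1 1 1]
```

Restricted pairing for correlated inputs (as the library supports) would replace the plain permutation with a rank-correlation-controlled one; the sketch shows only the uncorrelated case.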
Ad hoc committee on global climate issues: Annual report
Gerhard, L.C.; Hanson, B.M.B.
2000-01-01
The AAPG Ad Hoc Committee on Global Climate Issues has studied the supposition of human-induced climate change since the committee's inception in January 1998. This paper details the progress and findings of the committee through June 1999. At that time there had been essentially no geologic input into the global climate change debate. The following statements reflect the current state of climate knowledge from the geologic perspective as interpreted by the majority of the committee membership. The committee recognizes that new data could change its conclusions. The earth's climate is constantly changing owing to natural variability in earth processes. Natural climate variability over recent geological time is greater than reasonable estimates of potential human-induced greenhouse gas changes. Because no tool is available to test the supposition of human-induced climate change and the range of natural variability is so great, there is no discernible human influence on global climate at this time.
NASA Technical Reports Server (NTRS)
Adler, R. F.; Gu, G.; Curtis, S.; Huffman, G. J.
2004-01-01
The Global Precipitation Climatology Project (GPCP) 25-year precipitation data set is used as a basis to evaluate the mean state, variability and trends (or inter-decadal changes) of global and regional scales of precipitation. The uncertainties of these characteristics of the data set are evaluated by examination of other, parallel data sets and examination of shorter periods with higher quality data (e.g., TRMM). The global and regional means are assessed for uncertainty by comparing with other satellite and gauge data sets, both globally and regionally. The GPCP global mean of 2.6 mm/day is divided into values of ocean and land and major latitude bands (Tropics, mid-latitudes, etc.). Seasonal variations globally and by region are shown and uncertainties estimated. The variability of precipitation year-to-year is shown to be related to ENSO variations and volcanoes and is evaluated in relation to the overall lack of a significant global trend. The GPCP data set necessarily has a heterogeneous time series of input data sources, so part of the assessment described above is to test the initial results for potential influence by major data boundaries in the record.
User's Guide to Handlens - A Computer Program that Calculates the Chemistry of Minerals in Mixtures
Eberl, D.D.
2008-01-01
HandLens is a computer program, written in Excel macro language, that calculates the chemistry of minerals in mineral mixtures (for example, in rocks, soils, and sediments) for related samples from inputs of quantitative mineralogy and chemistry. For best results, the related samples should contain minerals having the same chemical compositions; that is, the samples should differ only in the proportions of minerals present. This manual describes how to use the program, discusses the theory behind its operation, and presents test results of the program's accuracy. Required input for HandLens includes quantitative mineralogical data, obtained, for example, by RockJock analysis of X-ray diffraction (XRD) patterns, and quantitative chemical data, obtained, for example, by X-ray fluorescence (XRF) analysis of the same samples. Other quantitative data, such as sample depth, temperature, and surface area, can also be entered. The minerals present in the samples are selected from a list, and the program is started. The results of the calculation include: (1) a table of linear coefficients of determination (r2 values) that relate pairs of input data (for example, Si versus quartz weight percents); (2) a utility for plotting all input data, either as pairs of variables or as sums of up to eight variables; (3) a table that presents the calculated chemical formulae for minerals in the samples; (4) a table that lists the calculated concentrations of major, minor, and trace elements in the various minerals; and (5) a table that presents chemical formulae for the minerals that have been corrected for possible systematic errors in the mineralogical and/or chemical analyses. In addition, the program contains a method for testing the assumption of constant chemistry of the minerals within a sample set.
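The core computation can be illustrated with a toy two-mineral, two-element example: when samples differ only in mineral proportions, each element's bulk concentration is a linear mix of unknown per-mineral concentrations, recoverable by least squares. The mineral names, compositions, and sample counts below are invented for the sketch, and the program's error-correction features are not modeled:

```python
import numpy as np

rng = np.random.default_rng(3)

# Assumed "true" per-mineral chemistry (wt% Si, wt% Al) for two minerals.
true_mineral_chem = np.array([[46.7,  0.0],    # quartz-like phase
                              [30.3, 20.9]])   # kaolinite-like phase

n_samples = 12
# Mineral weight fractions for each sample (each row sums to 1).
props = rng.dirichlet([2.0, 2.0], size=n_samples)

# Bulk chemistry = mineralogy x per-mineral chemistry, plus analytical noise.
chem = props @ true_mineral_chem + rng.normal(scale=0.1, size=(n_samples, 2))

# Recover per-mineral chemistry from quantitative mineralogy + bulk chemistry.
est, *_ = np.linalg.lstsq(props, chem, rcond=None)
print(np.round(est, 1))
```

The quality of the fit per element (the r2 table the program reports) indicates how well the constant-chemistry assumption holds across the sample set.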
Speed control system for an access gate
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bzorgi, Fariborz M
2012-03-20
An access control apparatus for an access gate. The access gate typically has a rotator that is configured to rotate around a rotator axis at a first variable speed in a forward direction. The access control apparatus may include a transmission that typically has an input element that is operatively connected to the rotator. The input element is generally configured to rotate at an input speed that is proportional to the first variable speed. The transmission typically also has an output element that has an output speed that is higher than the input speed. The input element and the output element may rotate around a common transmission axis. A retardation mechanism may be employed. The retardation mechanism is typically configured to rotate around a retardation mechanism axis. Generally the retardation mechanism is operatively connected to the output element of the transmission and is configured to retard motion of the access gate in the forward direction when the first variable speed is above a control-limit speed. In many embodiments the transmission axis and the retardation mechanism axis are substantially co-axial. Some embodiments include a freewheel/catch mechanism that has an input connection that is operatively connected to the rotator. The input connection may be configured to engage an output connection when the rotator is rotated at the first variable speed in a forward direction and configured for substantially unrestricted rotation when the rotator is rotated in a reverse direction opposite the forward direction. The input element of the transmission is typically operatively connected to the output connection of the freewheel/catch mechanism.
Ryberg, Karen R.; Blomquist, Joel; Sprague, Lori A.; Sekellick, Andrew J.; Keisman, Jennifer
2018-01-01
Causal attribution of changes in water quality often consists of correlation, qualitative reasoning, listing references to the work of others, or speculation. To better support statements of attribution for water-quality trends, structural equation modeling was used to model the causal factors of total phosphorus loads in the Chesapeake Bay watershed. By transforming, scaling, and standardizing variables, grouping similar sites, grouping some causal factors into latent variable models, and using methods that correct for assumption violations, we developed a structural equation model to show how causal factors interact to produce total phosphorus loads. Climate (in the form of annual total precipitation and the Palmer Hydrologic Drought Index) and anthropogenic inputs are the major drivers of total phosphorus load in the Chesapeake Bay watershed. Increasing runoff due to natural climate variability is offsetting purposeful management actions that are otherwise decreasing phosphorus loading; consequently, management actions may need to be reexamined to achieve target reductions in the face of climate variability.
2016-11-01
DEFENSE INTELLIGENCE: Additional Steps Could Better Integrate Intelligence Input into DOD's Acquisition of Major Weapon Systems. United States Government Accountability Office, Highlights of GAO-17-10, a report to congressional committees, November 2016. What GAO Found: The Department of Defense (DOD
Gupta, Himanshu; Schiros, Chun G; Sharifov, Oleg F; Jain, Apurva; Denney, Thomas S
2016-08-31
The recently released American College of Cardiology/American Heart Association (ACC/AHA) guideline recommends the Pooled Cohort equations for evaluating the atherosclerotic cardiovascular risk of individuals. The impact of uncertainties in the clinical input variables on the estimates of ten-year cardiovascular risk based on the ACC/AHA guidelines is not known. Using the publicly available National Health and Nutrition Examination Survey dataset (2005-2010), we computed maximum and minimum ten-year cardiovascular risks by assuming clinically relevant variations/uncertainties in the age input (0-1 year) and ±10% variation in total cholesterol, high-density lipoprotein cholesterol, and systolic blood pressure, and by assuming a uniform distribution of the variance of each variable. We analyzed the changes in risk category compared to the actual inputs at the 5% and 7.5% risk limits, as these limits define the thresholds for consideration of drug therapy in the new guidelines. The Pooled Cohort equations for risk estimation were implemented in a custom software package. Based on our input variances, changes in risk category were possible in up to 24% of the population cohort at both the 5% and 7.5% risk boundary limits. This trend was noted consistently across all subgroups except African American males, where most of the cohort had ≥7.5% baseline risk regardless of the variation in the variables. The uncertainties in the input variables can alter the risk categorization. The impact of these variances on the ten-year risk needs to be incorporated into the patient/clinician discussion and clinical decision making. Incorporating good clinical practices for the measurement of critical clinical variables and robust standardization of laboratory parameters to more stringent reference standards is extremely important for successful implementation of the new guidelines.
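The bounding idea can be sketched generically: evaluate the risk function at every corner of the input-uncertainty box and check whether a treatment threshold falls inside the resulting interval. The risk function below is a hypothetical logistic stand-in with invented coefficients, NOT the published Pooled Cohort equations, so only the bounding logic carries over:

```python
import numpy as np
from itertools import product

def risk(age, total_chol, hdl, sbp):
    """Hypothetical stand-in for a ten-year risk equation (logistic in its
    inputs); the coefficients are invented for illustration."""
    z = -12.0 + 0.11 * age + 0.006 * total_chol - 0.02 * hdl + 0.02 * sbp
    return 1.0 / (1.0 + np.exp(-z))

def risk_bounds(age, tc, hdl, sbp):
    """Evaluate risk at the extremes of the input uncertainties:
    0-1 year on age, ±10% on the lab and blood-pressure measurements."""
    corners = product((age, age + 1.0),
                      (0.9 * tc, 1.1 * tc),
                      (0.9 * hdl, 1.1 * hdl),
                      (0.9 * sbp, 1.1 * sbp))
    vals = [risk(*c) for c in corners]
    return min(vals), max(vals)

lo, hi = risk_bounds(age=60, tc=200, hdl=50, sbp=130)
# If the 7.5% treatment threshold falls inside [lo, hi], input uncertainty
# alone can flip this (illustrative) patient's risk category.
print(lo < 0.075 < hi)   # → True for this invented patient and model
```

Because the stand-in is monotone in each input, corner evaluation gives exact bounds; a non-monotone risk model would instead need sampling over the uncertainty box, as the study's uniform-distribution assumption suggests.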
Furthermore, the ability to customize the risk calculator inputs to better represent unique clinical circumstances specific to individual patients would be highly desirable in future versions of the risk calculator.
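As a rough illustration of how such input uncertainties propagate, the sketch below perturbs the four inputs over the stated ranges and takes the minimum and maximum of the resulting risk. The risk function here is a deliberately simplified logistic stand-in with made-up coefficients, not the actual Pooled Cohort equations (which are sex- and race-specific):

```python
import math

def risk(age, total_chol, hdl, sbp):
    """Hypothetical simplified ten-year risk model (logistic form).
    Coefficients are illustrative only -- NOT the published
    Pooled Cohort equations."""
    score = 0.08 * age + 0.01 * total_chol - 0.03 * hdl + 0.02 * sbp - 12.0
    return 1.0 / (1.0 + math.exp(-score))

def risk_bounds(age, tc, hdl, sbp):
    """Max/min risk under the stated uncertainties: age +0..1 year,
    +/-10% on total cholesterol, HDL, and systolic blood pressure."""
    risks = [risk(age + da, tc * f_tc, hdl * f_hdl, sbp * f_sbp)
             for da in (0.0, 1.0)
             for f_tc in (0.9, 1.1)
             for f_hdl in (0.9, 1.1)
             for f_sbp in (0.9, 1.1)]
    return min(risks), max(risks)

base = risk(65, 240, 40, 150)
lo, hi = risk_bounds(65, 240, 40, 150)
# The risk category can change whenever the 7.5% treatment threshold
# falls inside the [lo, hi] uncertainty interval.
crosses = lo < 0.075 < hi
```

Because the toy model is monotone in each input, evaluating the corner cases of the uncertainty box is enough to bound the risk.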
Sheibley, Rich W.; Josberger, Edward G.; Chickadel, Chris
2010-01-01
The input of freshwater and associated nutrients into Lynch Cove and lower Hood Canal (fig. 1) from sources such as groundwater seeps, small streams, and ephemeral creeks may play a major role in the nutrient loading and hydrodynamics of this low dissolved-oxygen (hypoxic) system. These dispersed sources exhibit a high degree of spatial variability. However, few in-situ measurements of groundwater seepage rates and nutrient concentrations are available, and those that exist may not adequately represent the large spatial variability of groundwater discharge in the area. As a result, our understanding of these processes and their effect on hypoxic conditions in Hood Canal is limited. To determine the spatial variability and relative intensity of these sources, the U.S. Geological Survey Washington Water Science Center collaborated with the University of Washington Applied Physics Laboratory to obtain thermal infrared (TIR) images of the nearshore and intertidal regions of Lynch Cove at or near low tide. In the summer, cool freshwater discharges from seeps and streams, flows across the exposed, sun-warmed beach, and spreads out on the warm surface of the marine water. These temperature differences are readily apparent in the aerial thermal infrared imagery that we acquired during the summers of 2008 and 2009. When combined with coincident video camera images, these temperature differences allow identification of the location, the type, and the relative intensity of the sources.
Modeling pCO2 variability in the Gulf of Mexico
NASA Astrophysics Data System (ADS)
Xue, Z.; He, R.; Fennel, K.; Cai, W.-J.; Lohrenz, S.; Huang, W.-J.; Tian, H.
2014-08-01
A three-dimensional coupled physical-biogeochemical model was used to simulate and examine temporal and spatial variability of surface pCO2 in the Gulf of Mexico (GoM). The model is driven by realistic atmospheric forcing, open boundary conditions from a data-assimilative global ocean circulation model, and observed freshwater and terrestrial nutrient and carbon input from major rivers. A seven-year model hindcast (2004-2010) was performed and validated against in situ measurements. The model revealed clear seasonality in surface pCO2. Based on the multi-year mean of the model results, the GoM is an overall CO2 sink with a flux of 1.34 × 10¹² mol C yr⁻¹, which, together with the enormous fluvial carbon input, is balanced by the carbon export through the Loop Current. A sensitivity experiment was performed in which all biological sources and sinks of carbon were disabled. In this simulation surface pCO2 was elevated by ~70 ppm, providing evidence that biological uptake is a primary driver of the observed CO2 sink. The model also provided insights about factors influencing the spatial distribution of surface pCO2 and sources of uncertainty in the carbon budget.
Modeling pCO2 Variability in the Gulf of Mexico
NASA Astrophysics Data System (ADS)
Xue, Z. G.; He, R.; Fennel, K.; Cai, W. J.; Lohrenz, S. E.; Huang, W. J.; Tian, H.
2014-12-01
A three-dimensional coupled physical-biogeochemical model was used to simulate and examine temporal and spatial variability of surface pCO2 in the Gulf of Mexico (GoM). The model is driven by realistic atmospheric forcing, open boundary conditions from a data-assimilative global ocean circulation model, and observed freshwater and terrestrial nutrient and carbon input from major rivers. A seven-year model hindcast (2004-2010) was performed and validated against in situ measurements. The model revealed clear seasonality in surface pCO2. Based on the multi-year mean of the model results, the GoM is an overall CO2 sink with a flux of 1.34 × 10¹² mol C yr⁻¹, which, together with the enormous fluvial carbon input, is balanced by the carbon export through the Loop Current. A sensitivity experiment was performed in which all biological sources and sinks of carbon were disabled. In this simulation surface pCO2 was elevated by ~70 ppm, providing evidence that biological uptake is a primary driver of the observed CO2 sink. The model also provided insights about factors influencing the spatial distribution of surface pCO2 and sources of uncertainty in the carbon budget.
NLEdit: A generic graphical user interface for Fortran programs
NASA Technical Reports Server (NTRS)
Curlett, Brian P.
1994-01-01
NLEdit is a generic graphical user interface for the preprocessing of Fortran namelist input files. The interface consists of a menu system, a message window, a help system, and data entry forms. A form is generated for each namelist. The form has an input field for each namelist variable along with a one-line description of that variable. Detailed help information, default values, and minimum and maximum allowable values can all be displayed via menu picks. Inputs are processed through a scientific calculator program that allows complex equations to be used instead of simple numeric inputs. A custom user interface is generated simply by entering information about the namelist input variables into an ASCII file. There is no need to learn a new graphics system or programming language. NLEdit can be used as a stand-alone program or as part of a larger graphical user interface. Although NLEdit is intended for files using namelist format, it can be easily modified to handle other file formats.
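A minimal sketch of the kind of namelist parsing such an interface performs, assuming a simple one-group file with scalar `var = value` pairs. Real Fortran namelist syntax is considerably richer (arrays, repeat counts, character values), and the group and variable names below are hypothetical:

```python
import re

def parse_namelist(text):
    """Minimal parser for simple Fortran namelist input: '&name ... /'
    groups containing scalar 'var = value' pairs. Returns a dict of
    dicts. An NLEdit-style tool would also attach a description,
    default, and min/max range to each variable."""
    groups = {}
    for m in re.finditer(r"&(\w+)(.*?)/", text, re.DOTALL):
        name, body = m.group(1), m.group(2)
        variables = {}
        for var, val in re.findall(r"(\w+)\s*=\s*([^\s,]+)", body):
            try:
                # crude numeric detection: '.' or exponent marks a real
                variables[var] = (float(val) if "." in val or "e" in val.lower()
                                  else int(val))
            except ValueError:
                variables[var] = val.strip("'\"")
        groups[name] = variables
    return groups

sample = """
&flow
  mach = 0.8,
  alpha = 2.5,
  niter = 200
/
"""
nml = parse_namelist(sample)
```

From a structure like `nml`, a data-entry form with one field per variable can be generated automatically, which is the core idea behind the interface described above.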
Computing Shapes Of Cascade Diffuser Blades
NASA Technical Reports Server (NTRS)
Tran, Ken; Prueger, George H.
1993-01-01
Computer program generates sizes and shapes of cascade-type blades for use in axial or radial turbomachine diffusers. Generates shapes of blades rapidly, incorporating extensive cascade data to determine optimum incidence and deviation angle for blade design based on the 65-series data base of the National Advisory Committee for Aeronautics (NACA). Allows great variability in blade profile through input variables. Also provides for design of three-dimensional blades by allowing variable blade stacking. Enables designer to obtain computed blade-geometry data in various forms: as input for blade-loading analysis; as input for quasi-three-dimensional analysis of flow; or as points for transfer to computer-aided design.
Rodolfo, Inês; Pereira, Ana Marta; de Sá, Armando Brito
2017-01-01
Background Personal health records (PHRs) are increasingly being deployed worldwide, but their rates of adoption by patients vary widely across countries and health systems. Five main categories of adopters are usually considered when evaluating the diffusion of innovations: innovators, early adopters, early majority, late majority, and laggards. Objective We aimed to evaluate adoption of the Portuguese PHR 3 months after its release, as well as characterize the individuals who registered and used the system during that period (the innovators). Methods We conducted a cross-sectional study. Users and nonusers were defined by whether or not they had entered health-related information into the PHR. Users of the PHR were compared with nonusers regarding demographic and clinical variables. Users were further characterized according to their intensity of information input: single input (a single piece of health-related information recorded) and multiple inputs. Multivariate logistic regression was used to model the probability of being in the multiple inputs group. ArcGis (ESRI, Redlands, CA, USA) was used to create maps of the proportion of PHR registrations by region and district. Results The number of registered individuals was 109,619 (66,408/109,619, 60.58% women; mean age: 44.7 years, standard deviation [SD] 18.1 years). The highest proportion of registrations was observed for those aged between 30 and 39 years (25,810/109,619, 23.55%). Furthermore, 16.88% (18,504/109,619) of registered individuals were considered users and 83.12% (91,115/109,619) nonusers. Among PHR users, 32.18% (5955/18,504) engaged in single input and 67.82% (12,549/18,504) in multiple inputs. Younger individuals and male users had higher odds of engaging in multiple inputs (odds ratio for male individuals 1.32, CI 1.19-1.48). Geographic analysis revealed higher proportions of PHR adoption in urban centers when compared with rural noncoastal districts.
Conclusions Approximately 1% of the country’s population registered during the first 3 months of the Portuguese PHR. Registered individuals were most frequently female and aged between 30 and 39 years. There is evidence of a geographic gap in the adoption of the Portuguese PHR, with higher proportions of adopters in urban centers than in rural noncoastal districts. PMID:29021125
Neural Network Machine Learning and Dimension Reduction for Data Visualization
NASA Technical Reports Server (NTRS)
Liles, Charles A.
2014-01-01
Neural network machine learning in computer science is a continuously developing field of study. Although neural network models have been developed which can accurately predict a numeric value or nominal classification, a general-purpose method for constructing neural network architecture has yet to be developed. Computer scientists are often forced to rely on a trial-and-error process of developing and improving accurate neural network models. In many cases, models are constructed from a large number of input parameters. Which input parameters have the greatest impact on the model's prediction is often difficult to determine, especially when the number of input variables is very high. This challenge is often labeled the "curse of dimensionality" in scientific fields. However, techniques exist for reducing the dimensionality of problems to just two dimensions. Once a problem's dimensions have been mapped to two dimensions, it can be easily plotted and understood by humans. The ability to visualize a multi-dimensional dataset can provide a means of identifying which input variables have the greatest effect on determining a nominal or numeric output. Identifying these variables can provide a better means of training neural network models; models can be more easily and quickly trained using only the input variables that appear to affect the outcome variable. The purpose of this project is to explore varying means of training neural networks and to utilize dimensional reduction for visualizing and understanding complex datasets.
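One standard way to map a dataset to two dimensions is principal component analysis; the sketch below implements it with a plain SVD (nonlinear methods such as t-SNE are common alternatives). The dataset here is synthetic, generated from two latent factors to mimic many correlated input variables:

```python
import numpy as np

def project_to_2d(X):
    """Project an (n_samples, n_features) dataset onto its first two
    principal components via SVD -- a simple stand-in for the
    dimension-reduction step described above."""
    Xc = X - X.mean(axis=0)            # center each input variable
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:2].T               # coordinates in the 2-D plane

rng = np.random.default_rng(0)
# 200 samples of 10 correlated input variables driven by 2 latent factors
latent = rng.normal(size=(200, 2))
X = latent @ rng.normal(size=(2, 10)) + 0.05 * rng.normal(size=(200, 10))
coords = project_to_2d(X)              # shape (200, 2), ready to plot
```

With real data, `coords` would typically be scatter-plotted and colored by the output variable to spot which regions of input space drive the outcome.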
DOE Office of Scientific and Technical Information (OSTI.GOV)
LaFarge, R.A.
1990-05-01
MCPRAM (Monte Carlo PReprocessor for AMEER), a computer program that uses Monte Carlo techniques to create an input file for the AMEER trajectory code, has been developed for the Sandia National Laboratories VAX and Cray computers. Users can select the number of trajectories to compute, which AMEER variables to investigate, and the type of probability distribution for each variable. Any legal AMEER input variable can be investigated anywhere in the input run stream with either a normal, uniform, or Rayleigh distribution. Users also have the option to use covariance matrices for the investigation of certain correlated variables such as booster pre-reentry errors and wind, axial force, and atmospheric models. In conjunction with MCPRAM, AMEER was modified to include the variables introduced by the covariance matrices and to include provisions for six types of fuze models. The new fuze models and the new AMEER variables are described in this report.
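In the same spirit, a Monte Carlo preprocessor can be sketched as a sampler that draws each selected input variable from its chosen distribution (normal, uniform, or Rayleigh) once per trajectory. The variable names below are hypothetical, not actual AMEER inputs:

```python
import math
import random

def sample_variable(dist, a, b, rng):
    """Draw one value for an input variable.
    dist: 'normal' (mean a, sd b), 'uniform' (low a, high b),
    or 'rayleigh' (scale a; b unused)."""
    if dist == "normal":
        return rng.gauss(a, b)
    if dist == "uniform":
        return rng.uniform(a, b)
    if dist == "rayleigh":
        # Rayleigh via inverse CDF: scale * sqrt(-2 ln(1 - U))
        return a * math.sqrt(-2.0 * math.log(1.0 - rng.random()))
    raise ValueError(dist)

def make_input_decks(spec, n_trajectories, seed=0):
    """Generate one dict of perturbed inputs per trajectory, in the
    spirit of a Monte Carlo preprocessor (each dict would then be
    written out as a trajectory-code input file)."""
    rng = random.Random(seed)
    return [{name: sample_variable(d, a, b, rng)
             for name, (d, a, b) in spec.items()}
            for _ in range(n_trajectories)]

spec = {"launch_az": ("normal", 90.0, 0.5),      # hypothetical variables
        "wind_speed": ("rayleigh", 6.0, None),
        "cd_axial": ("uniform", 0.28, 0.32)}
decks = make_input_decks(spec, n_trajectories=100)
```

Correlated variables (as handled via covariance matrices in MCPRAM) would instead be drawn jointly, e.g. from a multivariate normal.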
Mathematical modelling of Bit-Level Architecture using Reciprocal Quantum Logic
NASA Astrophysics Data System (ADS)
Narendran, S.; Selvakumar, J.
2018-04-01
High-performance computing is in demand for both speed and energy efficiency. Reciprocal Quantum Logic (RQL) is one technology that promises high speed with zero static power dissipation. RQL uses an AC power supply as input rather than a DC input, and it has three sets of basic gates. Series of reciprocal transmission lines are placed between the gates to avoid power loss and to achieve high speed. An analytical model of a bit-level architecture is developed using RQL. A major drawback of Reciprocal Quantum Logic is area: lacking a direct power supply, the circuits require splitters to distribute power, and these occupy a large area. Distributed arithmetic uses vector-vector multiplication in which one vector is constant and the other is a signed variable; each word performs as a binary number, and the words are rearranged and mixed to form the distributed system. Distributed arithmetic is widely used in convolution and in high-performance computational devices.
Nelson, Sarah J.; Webster, Katherine E.; Loftin, Cynthia S.; Weathers, Kathleen C.
2013-01-01
Major ion and mercury (Hg) inputs to terrestrial ecosystems include both wet and dry deposition (total deposition). Estimating total deposition to sensitive receptor sites is hampered by limited information regarding its spatial heterogeneity and seasonality. We used measurements of throughfall flux, which includes atmospheric inputs to forests and the net effects of canopy leaching or uptake, for ten major ions and Hg collected during 35 time periods in 1999–2005 at over 70 sites within Acadia National Park, Maine, to (1) quantify coherence in temporal dynamics of seasonal throughfall deposition and (2) examine controls on these patterns at multiple scales. We quantified temporal coherence as the correlation between all possible site pairs for each solute on a seasonal basis. In the summer growing season and autumn, coherence among pairs of sites with similar vegetation was stronger than for site pairs that differed in vegetation, suggesting that interaction with the canopy and leaching of solutes differed in coniferous, deciduous, mixed, and shrub or open canopy sites. The spatial pattern in throughfall hydrologic inputs across Acadia National Park was more variable during the winter snow season, suggesting that snow redistribution affects net hydrologic input, which consequently affects chemical flux. Sea-salt-corrected calcium concentrations identified a shift in air mass sources from maritime in winter to the continental industrial corridor in summer. Our results suggest that the spatial pattern of throughfall hydrologic flux, dominant seasonal air mass source, and relationship with vegetation in winter differ from the spatial pattern of throughfall flux in these solutes in summer and autumn. The coherence approach applied here made clear the strong influence of spatial heterogeneity in throughfall hydrologic inputs and a maritime air mass source on winter patterns of throughfall flux. By contrast, vegetation type was the most important influence on throughfall chemical flux in summer and autumn.
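The temporal-coherence computation described above amounts to correlating seasonal flux series between every pair of sites. A minimal sketch with synthetic data for three hypothetical sites, two of which track a shared seasonal signal:

```python
import numpy as np
from itertools import combinations

def pairwise_coherence(flux):
    """Temporal coherence in the sense used above: the Pearson
    correlation of throughfall flux between every pair of sites.
    flux: (n_sites, n_periods) array; returns {(i, j): r}."""
    r = np.corrcoef(flux)          # site-by-site correlation matrix
    return {(i, j): float(r[i, j])
            for i, j in combinations(range(flux.shape[0]), 2)}

rng = np.random.default_rng(1)
seasonal = rng.normal(size=12)     # shared seasonal signal (12 periods)
# Sites 0 and 1 share the signal (similar vegetation, say);
# site 2 is noise-dominated.
flux = np.vstack([seasonal + 0.1 * rng.normal(size=12),
                  seasonal + 0.1 * rng.normal(size=12),
                  rng.normal(size=12)])
coh = pairwise_coherence(flux)     # coh[(0, 1)] will be high
```

Grouping the resulting pairwise r values by whether the two sites share a vegetation type reproduces the kind of comparison the study makes.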
Input Variability Facilitates Unguided Subcategory Learning in Adults
ERIC Educational Resources Information Center
Eidsvåg, Sunniva Sørhus; Austad, Margit; Plante, Elena; Asbjørnsen, Arve E.
2015-01-01
Purpose: This experiment investigated whether input variability would affect initial learning of noun gender subcategories in an unfamiliar, natural language (Russian), as it is known to assist learning of other grammatical forms. Method: Forty adults (20 men, 20 women) were familiarized with examples of masculine and feminine Russian words. Half…
Do downscaled general circulation models reliably simulate historical climatic conditions?
Bock, Andrew R.; Hay, Lauren E.; McCabe, Gregory J.; Markstrom, Steven L.; Atkinson, R. Dwight
2018-01-01
The accuracy of statistically downscaled (SD) general circulation model (GCM) simulations of monthly surface climate for historical conditions (1950–2005) was assessed for the conterminous United States (CONUS). The SD monthly precipitation (PPT) and temperature (TAVE) from 95 GCMs from phases 3 and 5 of the Coupled Model Intercomparison Project (CMIP3 and CMIP5) were used as inputs to a monthly water balance model (MWBM). Distributions of MWBM input (PPT and TAVE) and output [runoff (RUN)] variables derived from gridded station data (GSD) and historical SD climate were compared using the Kolmogorov–Smirnov (KS) test. For all three variables considered, the KS test results showed that variables simulated using CMIP5 generally are more reliable than those derived from CMIP3, likely due to improvements in PPT simulations. At most locations across the CONUS, the largest differences between GSD and SD PPT and RUN occurred in the lowest part of the distributions (i.e., low-flow RUN and low-magnitude PPT). Results indicate that for the majority of the CONUS, there are downscaled GCMs that can reliably simulate historical climatic conditions. However, in some geographic locations, none of the SD GCMs replicated historical conditions for two of the three variables (PPT and RUN) based on the KS test, with a significance level of 0.05. In these locations, improved GCM simulations of PPT are needed to more reliably estimate components of the hydrologic cycle. Simple metrics and statistical tests, such as those described here, can provide an initial set of criteria to help simplify GCM selection.
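The two-sample KS statistic used for these comparisons is just the maximum vertical distance between two empirical CDFs. A minimal sketch with synthetic precipitation-like samples (in practice `scipy.stats.ks_2samp` would also supply the p-value needed for the 0.05 significance test):

```python
import numpy as np

def ks_statistic(x, y):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum distance
    between the empirical CDFs of samples x and y."""
    grid = np.concatenate([x, y])
    cdf_x = np.searchsorted(np.sort(x), grid, side="right") / len(x)
    cdf_y = np.searchsorted(np.sort(y), grid, side="right") / len(y)
    return float(np.max(np.abs(cdf_x - cdf_y)))

# Synthetic stand-ins for observed (GSD) and downscaled (SD) monthly PPT
rng = np.random.default_rng(0)
gsd_ppt = rng.gamma(shape=2.0, scale=50.0, size=600)
sd_ppt = rng.gamma(shape=2.0, scale=55.0, size=600)
d = ks_statistic(gsd_ppt, sd_ppt)   # small d means the distributions agree
```

Because the statistic compares whole distributions, it is sensitive to mismatches in the tails, which is where the study found the largest PPT and RUN differences.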
Effects of input uncertainty on cross-scale crop modeling
NASA Astrophysics Data System (ADS)
Waha, Katharina; Huth, Neil; Carberry, Peter
2014-05-01
The quality of data on climate, soils, and agricultural management in the tropics is in general low, or data are scarce, leading to uncertainty in process-based modeling of cropping systems. Process-based crop models are common tools for simulating crop yields and crop production in climate change impact studies, studies on mitigation and adaptation options, or food security studies. Crop modelers are concerned about input data accuracy as this, together with an adequate representation of plant physiology processes and choice of model parameters, is a key factor for a reliable simulation. For example, assuming an error in measurements of air temperature, radiation and precipitation of ±0.2°C, ±2% and ±3% respectively, Fodor & Kovacs (2005) estimate that this translates into an uncertainty of 5-7% in yield and biomass simulations. In our study we seek to answer the following questions: (1) are there important uncertainties in the spatial variability of simulated crop yields on the grid-cell level displayed on maps, (2) are there important uncertainties in the temporal variability of simulated crop yields on the aggregated, national level displayed in time series, and (3) how does the accuracy of different soil, climate and management information influence the simulated crop yields in two crop models designed for use at different spatial scales? The study will help to determine whether more detailed information improves the simulations and to advise model users on the uncertainty related to input data. We analyse the performance of the point-scale crop model APSIM (Keating et al., 2003) and the global-scale crop model LPJmL (Bondeau et al., 2007) with different climate information (monthly and daily) and soil conditions (global soil map and African soil map) under different agricultural management (uniform and variable sowing dates) for the low-input maize-growing areas in Burkina Faso/West Africa.
We test the models' response to different levels of input data from very little to very detailed information, and compare the models' abilities to represent the spatial and temporal variability in crop yields. We display the uncertainty in crop yield simulations from different input data and crop models in Taylor diagrams, which are a graphical summary of the similarity between simulations and observations (Taylor, 2001). The observed spatial variability can be represented well by both models (R=0.6-0.8), but APSIM predicts higher spatial variability than LPJmL due to its sensitivity to soil parameters. Simulations with the same crop model, climate and sowing dates have similar statistics and therefore similar skill in reproducing the observed spatial variability. Soil data are less important for the skill of a crop model in reproducing the observed spatial variability. However, the uncertainty in simulated spatial variability from the two crop models is larger than that from input data settings, and APSIM is more sensitive to input data than LPJmL. Even with a detailed, point-scale crop model and detailed input data it is difficult to capture the complexity and diversity in maize cropping systems.
NASA Astrophysics Data System (ADS)
Hofmann, H.; Cartwright, I.
2010-12-01
Surface water-groundwater interactions are an important part of the hydrological cycle from ecological and resource perspectives. Their dynamics have implications for ecosystems, pollutant transport, and the quality and quantity of water supply for domestic use, agriculture and recreational purposes. Chemical tracers are a valuable tool for understanding the interaction of rivers and the surrounding groundwater. The Gippsland Basin is a significant agricultural area in Southeast Australia. An increasing population has resulted in increased demand on water resources for domestic and agricultural supply. Although the Gippsland area receives substantial rainfall, irrigation is still necessary to maintain agricultural production during summer and drier years. The water resources used are mostly shallow groundwater and surface water (reservoirs and streams). The effects on the environment range from rising water levels and soil salinisation in the case of irrigation to falling water levels, with subsequent die-back of vegetation and land subsidence, in the case of communal and industrial water extraction. While the surface water components of the hydrological cycle are relatively well understood, groundwater has often been neglected. In particular, constraining the interaction between surface water and groundwater is required for sustainable water management. Gaining and losing conditions in streams are subject to high temporal and spatial variability and hence influence the amount of water accessible for agricultural purposes. On a larger scale, it is generally assumed that recharge to the aquifer occurs during the winter and spring months, whereas the river receives water from the aquifer mainly during low-flow (baseflow) conditions in summer and autumn. Spatial variations, however, are a function of the hydraulic conductivity of the riverbed and the head differences between the aquifer and the river along the river banks.
Infiltration and exfiltration rates derived from changing river water levels using hydraulic models are often underestimated, because such models do not take into account the complexity of the system and are based purely on discharge figures. Radon (222Rn), stable isotopes and major ion chemistry were used to locate groundwater inputs to the Mitchell and Avon rivers. While stable isotopes and major ion chemistry are useful tracers for determining long-term variability, radon can be used to detect very localised groundwater discharge. Using hydrogeochemistry to locate and quantify groundwater discharge to rivers allows more reliable conclusions about the dynamics of the interaction between surface water and groundwater in the Gippsland area. Radon has been used in similar applications elsewhere; the input parameters for mass balance equations, however, were often approximated and averaged. Radon concentrations in groundwater have therefore been assessed from 20 bores and 5 soil profiles, characterizing spatial variability and emanation potential, to deliver a better-constrained groundwater radon input concentration.
Propagating waves can explain irregular neural dynamics.
Keane, Adam; Gong, Pulin
2015-01-28
Cortical neurons in vivo fire quite irregularly. Previous studies about the origin of such irregular neural dynamics have given rise to two major models: a balanced excitation and inhibition model, and a model of highly synchronized synaptic inputs. To elucidate the network mechanisms underlying synchronized synaptic inputs and account for irregular neural dynamics, we investigate a spatially extended, conductance-based spiking neural network model. We show that propagating wave patterns with complex dynamics emerge from the network model. These waves sweep past neurons, to which they provide highly synchronized synaptic inputs. On the other hand, these patterns only emerge from the network with balanced excitation and inhibition; our model therefore reconciles the two major models of irregular neural dynamics. We further demonstrate that the collective dynamics of propagating wave patterns provides a mechanistic explanation for a range of irregular neural dynamics, including the variability of spike timing, slow firing rate fluctuations, and correlated membrane potential fluctuations. In addition, in our model, the distributions of synaptic conductance and membrane potential are non-Gaussian, consistent with recent experimental data obtained using whole-cell recordings. Our work therefore relates the propagating waves that have been widely observed in the brain to irregular neural dynamics. These results demonstrate that neural firing activity, although appearing highly disordered at the single-neuron level, can form dynamical coherent structures, such as propagating waves at the population level.
Statistics of optimal information flow in ensembles of regulatory motifs
NASA Astrophysics Data System (ADS)
Crisanti, Andrea; De Martino, Andrea; Fiorentino, Jonathan
2018-02-01
Genetic regulatory circuits universally cope with different sources of noise that limit their ability to coordinate input and output signals. In many cases, optimal regulatory performance can be thought to correspond to configurations of variables and parameters that maximize the mutual information between inputs and outputs. Since the mid-2000s, such optima have been well characterized in several biologically relevant cases. Here we use methods of statistical field theory to calculate the statistics of the maximal mutual information (the "capacity") achievable by tuning the input variable only in an ensemble of regulatory motifs, such that a single controller regulates N targets. Assuming (i) sufficiently large N , (ii) quenched random kinetic parameters, and (iii) small noise affecting the input-output channels, we can accurately reproduce numerical simulations both for the mean capacity and for the whole distribution. Our results provide insight into the inherent variability in effectiveness occurring in regulatory systems with heterogeneous kinetic parameters.
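Maximizing mutual information over the input distribution of a discrete channel can be done with the Blahut-Arimoto algorithm. The sketch below applies it to a binary symmetric channel as a toy stand-in for a noisy regulatory input-output channel (the abstract's continuous field-theoretic setting is far richer than this discrete example):

```python
import numpy as np

def blahut_arimoto(Q, iters=200):
    """Capacity (in bits) of a discrete memoryless channel with
    transition matrix Q[x, y] = P(y | x), found by iteratively
    re-weighting the input distribution toward informative symbols."""
    p = np.full(Q.shape[0], 1.0 / Q.shape[0])   # start from uniform input
    for _ in range(iters):
        q = p @ Q                               # induced output distribution
        with np.errstate(divide="ignore", invalid="ignore"):
            kl = np.sum(np.where(Q > 0, Q * np.log(Q / q), 0.0), axis=1)
        p = p * np.exp(kl)                      # reward informative inputs
        p /= p.sum()
    q = p @ Q
    with np.errstate(divide="ignore", invalid="ignore"):
        info = np.where(Q > 0, Q * np.log2(Q / q), 0.0).sum(axis=1)
    return float(p @ info)                      # mutual information at optimum

# Toy check: a binary symmetric channel with crossover probability 0.1
# has capacity 1 - H2(0.1), about 0.531 bits.
bsc = np.array([[0.9, 0.1],
                [0.1, 0.9]])
C = blahut_arimoto(bsc)
```

In the ensemble setting of the abstract, one would compute such a capacity for many draws of quenched random kinetic parameters and study the resulting distribution.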
Missing pulse detector for a variable frequency source
Ingram, Charles B.; Lawhorn, John H.
1979-01-01
A missing pulse detector is provided which has the capability of monitoring a varying frequency pulse source to detect the loss of a single pulse or total loss of signal from the source. A frequency-to-current converter is used to program the output pulse width of a variable period retriggerable one-shot to maintain a pulse width slightly longer than one-half the present monitored pulse period. The retriggerable one-shot is triggered at twice the input pulse rate by employing a frequency doubler circuit connected between the one-shot input and the variable frequency source being monitored. The one-shot remains in the triggered or unstable state under normal conditions even though the source period is varying. A loss of an input pulse or single period of a fluctuating signal input will cause the one-shot to revert to its stable state, changing the output signal level to indicate a missing pulse or signal.
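The detector's behavior can be sketched in software: flag a missing pulse whenever the gap since the last pulse exceeds a timeout slightly longer than the tracked period. The actual circuit does this in analog hardware with a frequency-to-current converter, a frequency doubler, and a retriggerable one-shot; the timestamps below are made up for illustration:

```python
def detect_missing_pulses(pulse_times):
    """Behavioral sketch of the missing pulse detector. With the
    frequency doubler, the one-shot is retriggered every half period
    and times out slightly after half a period, so a single lost
    pulse lets it lapse. We model that as flagging any gap longer
    than about 1.05x the previously measured period."""
    missing = []
    for i in range(2, len(pulse_times)):
        period = pulse_times[i - 1] - pulse_times[i - 2]   # tracked period
        gap = pulse_times[i] - pulse_times[i - 1]
        if gap > 2.1 * (period / 2):                        # timeout ~1.05 x period
            missing.append(pulse_times[i - 1])              # last good pulse
    return missing

# Slowly varying source with one pulse dropped near t ~ 4.1
times = [0.0, 1.0, 2.02, 3.06, 5.2, 6.3, 7.4]
flags = detect_missing_pulses(times)
```

Tracking the period adaptively is what lets the detector follow a varying-frequency source without false alarms as the period drifts.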
Input and language development in bilingually developing children.
Hoff, Erika; Core, Cynthia
2013-11-01
Language skills in young bilingual children are highly varied as a result of the variability in their language experiences, making it difficult for speech-language pathologists to differentiate language disorder from language difference in bilingual children. Understanding the sources of variability in bilingual contexts and the resulting variability in children's skills will help improve language assessment practices by speech-language pathologists. In this article, we review literature on bilingual first language development for children under 5 years of age. We describe the rate of development in single and total language growth, we describe effects of quantity of input and quality of input on growth, and we describe effects of family composition on language input and language growth in bilingual children. We provide recommendations for language assessment of young bilingual children and consider implications for optimizing children's dual language development.
NASA Astrophysics Data System (ADS)
Kondapalli, S. P.
2017-12-01
In the present work, pulsed-current microplasma arc welding is carried out on AISI 321 austenitic stainless steel of 0.3 mm thickness. Peak current, base current, pulse rate and pulse width are chosen as the input variables, whereas grain size and hardness are considered as output responses. The response surface method is adopted using a Box-Behnken design, and in total 27 experiments are performed. Empirical relations between the input variables and the output responses are developed using statistical software, with analysis of variance (ANOVA) at the 95% confidence level to check their adequacy. The main effects and interaction effects of the input variables on the output responses are also studied.
Ventricular repolarization variability for hypoglycemia detection.
Ling, Steve; Nguyen, H T
2011-01-01
Hypoglycemia is the most acute and common complication of Type 1 diabetes and is a limiting factor in the glycemic management of diabetes. In this paper, two main contributions are presented: first, ventricular repolarization variabilities are introduced for hypoglycemia detection; second, a swarm-based support vector machine (SVM) algorithm with the repolarization variabilities as inputs is developed to detect hypoglycemia. Using this algorithm with several repolarization variabilities as inputs, the best hypoglycemia detection performance is found with a sensitivity and specificity of 82.14% and 60.19%, respectively.
Watershed nitrogen and phosphorus balance: The upper Potomac River basin
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jaworski, N.A.; Groffman, P.M.; Keller, A.A.
1992-01-01
Nitrogen and phosphorus mass balances were estimated for the portion of the Potomac River basin watershed located above Washington, D.C. The total nitrogen (N) balance included seven input source terms, six sinks, and one 'change-in-storage' term, but was simplified to five input terms and three output terms. The phosphorus (P) balance had four input and three output terms. The estimated balances are based on watershed data from seven information sources. Major sources of nitrogen are animal waste and atmospheric deposition. The major sources of phosphorus are animal waste and fertilizer. The major sink for nitrogen is combined denitrification, volatilization, and change-in-storage. The major sink for phosphorus is change-in-storage. River exports of N and P were 17% and 8%, respectively, of the total N and P inputs. Over 60% of the N and P were volatilized or stored. The major input and output terms in the budget are estimated from direct measurements, but the change-in-storage term is calculated by difference. The factors regulating retention and storage processes are discussed and research needs are identified.
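The budget arithmetic follows a simple pattern: sum the inputs, sum the outputs, and compute change-in-storage by difference. The sketch below uses illustrative numbers chosen only to echo the 17% nitrogen export figure, not the paper's actual Potomac estimates:

```python
def mass_balance(inputs, outputs):
    """Watershed nutrient budget: change-in-storage is computed by
    difference between total inputs and total outputs, as in the
    study. Term names and values here are illustrative only."""
    total_in = sum(inputs.values())
    total_out = sum(outputs.values())
    storage = total_in - total_out          # the by-difference term
    river_export_pct = 100.0 * outputs["river_export"] / total_in
    return total_in, total_out, storage, river_export_pct

# Hypothetical N budget (arbitrary units) with 17% river export
n_inputs = {"animal_waste": 40.0, "atmospheric": 30.0,
            "fertilizer": 20.0, "fixation": 7.0, "point_sources": 3.0}
n_outputs = {"river_export": 17.0, "denitrif_volatilization": 45.0,
             "harvest": 10.0}
tin, tout, stor, export_pct = mass_balance(n_inputs, n_outputs)
```

Because storage is a residual, any measurement error in the directly estimated terms accumulates in it, which is why the paper flags change-in-storage as the least constrained term.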
The Role of Learner and Input Variables in Learning Inflectional Morphology
ERIC Educational Resources Information Center
Brooks, Patricia J.; Kempe, Vera; Sionov, Ariel
2006-01-01
To examine effects of input and learner characteristics on morphology acquisition, 60 adult English speakers learned to inflect masculine and feminine Russian nouns in nominative, dative, and genitive cases. By varying training vocabulary size (i.e., type variability), holding constant the number of learning trials, we tested whether learners…
Wideband low-noise variable-gain BiCMOS transimpedance amplifier
NASA Astrophysics Data System (ADS)
Meyer, Robert G.; Mack, William D.
1994-06-01
A new monolithic variable gain transimpedance amplifier is described. The circuit is realized in BiCMOS technology and has a measured gain of 98 kΩ, a bandwidth of 128 MHz, an input noise current spectral density of 1.17 pA/√Hz, and an input signal-current handling capability of 3 mA.
Integrated controls design optimization
Lou, Xinsheng; Neuschaefer, Carl H.
2015-09-01
A control system (207) for optimizing a chemical looping process of a power plant includes an optimizer (420), an income algorithm (230), a cost algorithm (225), and chemical looping process models. The process models are used to predict the process outputs from process input variables. Some of the process input and output variables are related to the income of the plant; others are related to the cost of the plant operations. The income algorithm (230) provides an income input to the optimizer (420) based on a plurality of input parameters (215) of the power plant. The cost algorithm (225) provides a cost input to the optimizer (420) based on a plurality of output parameters (220) of the power plant. The optimizer (420) determines an optimized operating parameter solution based on at least one of the income input and the cost input, and supplies the optimized operating parameter solution to the power plant.
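A minimal sketch of the optimizer's income-minus-cost trade-off, assuming simple quadratic stand-ins for the process models (the patent does not publish its model forms):

```python
from scipy.optimize import minimize_scalar

def income(u):      # assumed model: income rises with load but saturates
    return 100.0 * u - 20.0 * u ** 2

def cost(u):        # assumed model: operating cost grows with input
    return 10.0 * u + 5.0 * u ** 2

# Maximize net benefit (income - cost) over a bounded operating parameter u
res = minimize_scalar(lambda u: -(income(u) - cost(u)),
                      bounds=(0.0, 2.0), method="bounded")
print("optimized operating parameter:", res.x)
```

For these stand-in models the net benefit is 90u - 25u², so the optimum sits at u = 1.8, inside the bounds.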
NASA Astrophysics Data System (ADS)
Creaco, E.; Berardi, L.; Sun, Siao; Giustolisi, O.; Savic, D.
2016-04-01
The growing availability of field data, from information and communication technologies (ICTs) in "smart" urban infrastructures, allows data modeling to understand complex phenomena and to support management decisions. Among the analyzed phenomena, those related to storm water quality modeling have recently been gaining interest in the scientific literature. Nonetheless, the large amount of available data poses the problem of selecting relevant variables to describe a phenomenon and enable robust data modeling. This paper presents a procedure for the selection of relevant input variables using the multiobjective evolutionary polynomial regression (EPR-MOGA) paradigm. The procedure is based on scrutinizing the explanatory variables that appear inside the set of EPR-MOGA symbolic model expressions of increasing complexity and goodness of fit to target output. The strategy also enables the selection to be validated by engineering judgement. In such context, the multiple case study extension of EPR-MOGA, called MCS-EPR-MOGA, is adopted. The application of the proposed procedure to modeling storm water quality parameters in two French catchments shows that it was able to significantly reduce the number of explanatory variables for successive analyses. Finally, the EPR-MOGA models obtained after the input selection are compared with those obtained by using the same technique without benefitting from input selection and with those obtained in previous works where other data-modeling techniques were used on the same data. The comparison highlights the effectiveness of both EPR-MOGA and the input selection procedure.
Measuring science: An exploration
Adams, James; Griliches, Zvi
1996-01-01
This paper examines the available United States data on academic research and development (R&D) expenditures and the number of papers published and the number of citations to these papers as possible measures of “output” of this enterprise. We look at these numbers for science and engineering as a whole, for five selected major fields, and at the individual university field level. The published data in Science and Engineering Indicators imply sharply diminishing returns to academic R&D using published papers as an “output” measure. These data are quite problematic. Using a newer set of data on papers and citations, based on an “expanding” set of journals and the newly released Bureau of Economic Analysis R&D deflators, changes the picture drastically, eliminating the appearance of diminishing returns but raising the question of why the input prices of academic R&D are rising so much faster than either the gross domestic product deflator or the implicit R&D deflator in industry. A production function analysis of such data at the individual field level follows. It indicates significant diminishing returns to “own” R&D, with the R&D coefficients hovering around 0.5 for estimates with paper numbers as the dependent variable and around 0.6 if total citations are used as the dependent variable. When we substitute scientists and engineers in place of R&D as the right-hand side variables, the coefficient on papers rises from 0.5 to 0.8, and the coefficient on citations rises from 0.6 to 0.9, indicating systematic measurement problems with R&D as the sole input into the production of scientific output. But allowing for individual university field effects drives these numbers down significantly below unity. Because in the aggregate both paper numbers and citations are growing as fast or faster than R&D, this finding can be interpreted as leaving a major, yet unmeasured, role for the contribution of spillovers from other fields, other universities, and other countries. 
PMID:8917477
Effects of Anthropogenic Nitrogen Loading on Riverine Nitrogen Export in the Northeastern USA
NASA Astrophysics Data System (ADS)
Boyer, E. W.; Goodale, C. L.; Howarth, R. W.
2001-05-01
Human activities have greatly altered the nitrogen (N) cycle, accelerating the rate of N fixation in landscapes and delivery of N to water bodies. To examine the effects of anthropogenic N inputs on riverine N export, we quantified N inputs and riverine N loss for 16 catchments along a latitudinal profile from Maine to Virginia, which encompass a range of climatic variability and are major drainages to the coast of the North Atlantic Ocean. We quantified inputs of N to each catchment: atmospheric deposition, fertilizer application, agricultural and forest biological N fixation, and the net import of N in food and feed. We compared these inputs with N losses from the system in riverine export. The importance of the relative sources varies widely by watershed and is related to land use. Atmospheric deposition was the largest source (>60%) to the forested catchments of northern New England (e.g., Penobscot and Kennebec); import of N in food was the largest source of N to the more populated regions of southern New England (e.g., Charles and Blackstone); and agricultural inputs were the dominant N sources in the Mid-Atlantic region (e.g., Schuylkill and Potomac). Total N inputs to each catchment increased with percent cover in agriculture and urban land, and decreased with percent forest. Over the combined area of the catchments, net atmospheric deposition was the largest single source input (34%), followed by imports of N in food and feed (24%), fixation in agricultural lands (21%), fertilizer use (15%), and fixation in forests (6%). Riverine export of N is well correlated with N inputs, but it accounts for only a fraction (28%) of the total N inputs. This work provides an understanding of the sources of N in landscapes, and highlights how human activities impact N cycling in the northeast region.
NASA Astrophysics Data System (ADS)
Sutula, Martha A.; Perez, Brian C.; Reyes, Enrique; Childers, Daniel L.; Davis, Steve; Day, John W.; Rudnick, David; Sklar, Fred
2003-08-01
Physical and biological processes controlling spatial and temporal variations in material concentration and exchange between the Southern Everglades wetlands and Florida Bay were studied for 2.5 years in three of the five major creek systems draining the watershed. Daily total nitrogen (TN) and total phosphorus (TP) fluxes were measured for 2 years in Taylor River, and ten 10-day intensive studies were conducted in this creek to estimate the seasonal flux of dissolved inorganic nitrogen (N), phosphorus (P), total organic carbon (TOC), and suspended matter. Four 10-day studies were conducted simultaneously in Taylor, McCormick, and Trout Creeks to study the spatial variation in concentration and flux. The annual fluxes of TOC, TN, and TP from the Southern Everglades were estimated from regression equations. The Southern Everglades watershed, a 460-km² area that includes Taylor Slough and the area south of the C-111 canal, exported 7.1 g C m⁻², 0.46 g N m⁻², and 0.007 g P m⁻² annually. Everglades P flux is three to four orders of magnitude lower than published flux estimates from wetlands influenced by terrigenous sedimentary inputs. These low P flux values reflect both the inherently low P content of Everglades surface water and the efficiency of Everglades carbonate sediments and biota in conserving and recycling this limiting nutrient. The seasonal variation of freshwater input to the watershed was responsible for major temporal variations in N, P, and C export to Florida Bay; approximately 99% of the export occurred during the rainy season. Wind-driven forcing was most important during the later stages of the dry season, when low freshwater head coincided with southerly winds, resulting in a net import of water and materials into the wetlands. We also observed an east-to-west decrease in TN:TP ratio from 212:1 to 127:1.
Major spatial gradients in N:P ratios and nutrient concentration and flux among the creeks were consistent with the westward decrease in surface water runoff from the P-limited Everglades and increased advection of relatively P-rich Gulf of Mexico (GOM) waters into Florida Bay. Comparison of measured nutrient flux from Everglades surface water inputs from this study with published estimates of other sources of nutrients to Florida Bay (i.e., atmospheric deposition, anthropogenic inputs from the Florida Keys, advection from the GOM) shows that Everglades runoff represents only 2% of N inputs and 0.5% of P inputs to Florida Bay.
Partial Granger causality--eliminating exogenous inputs and latent variables.
Guo, Shuixia; Seth, Anil K; Kendrick, Keith M; Zhou, Cong; Feng, Jianfeng
2008-07-15
Attempts to identify causal interactions in multivariable biological time series (e.g., gene data, protein data, physiological data) can be undermined by the confounding influence of environmental (exogenous) inputs. Compounding this problem, we are commonly only able to record a subset of all related variables in a system. These recorded variables are likely to be influenced by unrecorded (latent) variables. To address this problem, we introduce a novel variant of a widely used statistical measure of causality--Granger causality--that is inspired by the definition of partial correlation. Our 'partial Granger causality' measure is extensively tested with toy models, both linear and nonlinear, and is applied to experimental data: in vivo multielectrode array (MEA) local field potentials (LFPs) recorded from the inferotemporal cortex of sheep. Our results demonstrate that partial Granger causality can reveal the underlying interactions among elements in a network in the presence of exogenous inputs and latent variables in many cases where the existing conditional Granger causality fails.
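The variance-ratio idea underlying Granger causality can be sketched as below. This shows only the standard (non-partial) measure on a synthetic driven pair; the paper's partial variant additionally corrects the residual covariances for exogenous and latent influences, which is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2000
x = rng.normal(size=n)
y = np.zeros(n)
for t in range(1, n):                       # x drives y with a one-step lag
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.normal()

# Restricted model: predict y from its own past only
A = np.column_stack([y[:-1], np.ones(n - 1)])
res_r = y[1:] - A @ np.linalg.lstsq(A, y[1:], rcond=None)[0]
# Full model: predict y from its own past and x's past
B = np.column_stack([y[:-1], x[:-1], np.ones(n - 1)])
res_f = y[1:] - B @ np.linalg.lstsq(B, y[1:], rcond=None)[0]

# Granger causality: log ratio of restricted to full residual variance
gc_x_to_y = np.log(res_r.var() / res_f.var())
print("Granger causality x -> y:", gc_x_to_y)
```

A clearly positive value indicates that x's past improves prediction of y beyond y's own history, which is the core quantity the partial variant then de-confounds.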
Optimal control of first order distributed systems. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Johnson, T. L.
1972-01-01
The problem of characterizing optimal controls for a class of distributed-parameter systems is considered. The system dynamics are characterized mathematically by a finite number of coupled partial differential equations involving first-order time and space derivatives of the state variables, which are constrained at the boundary by a finite number of algebraic relations. Multiple control inputs, extending over the entire spatial region occupied by the system ("distributed controls') are to be designed so that the response of the system is optimal. A major example involving boundary control of an unstable low-density plasma is developed from physical laws.
NASA Astrophysics Data System (ADS)
Canion, Andy; MacIntyre, Hugh L.; Phipps, Scott
2013-10-01
The inputs of primary productivity models may be highly variable on short timescales (hourly to daily) in turbid estuaries, but modeling of productivity in these environments is often implemented with data collected over longer timescales. Daily, seasonal, and spatial variability in primary productivity model parameters (chlorophyll a concentration, Chla; the downwelling light attenuation coefficient, kd; and the photosynthesis-irradiance response parameters, PmChl and αChl) was characterized in Weeks Bay, a nitrogen-impacted shallow estuary in the northern Gulf of Mexico. Variability in primary productivity model parameters in response to environmental forcing, nutrients, and microalgal taxonomic marker pigments was analysed in monthly and short-term datasets. Microalgal biomass (as Chla) was strongly related to total phosphorus concentration on seasonal scales. Hourly data support wind-driven resuspension as a major source of short-term variability in Chla and light attenuation (kd). The empirical relationship between areal primary productivity and a combined variable of biomass and light attenuation showed that variability in the photosynthesis-irradiance response contributed little to the overall variability in primary productivity, and Chla alone could account for 53-86% of the variability in primary productivity. Efforts to model productivity in similar shallow systems with highly variable microalgal biomass may benefit the most by investing resources in improving spatial and temporal resolution of chlorophyll a measurements before increasing the complexity of models used in productivity modeling.
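The depth-integrated productivity calculation implied by the parameter list (Chla, kd, and the photosynthesis-irradiance response) can be sketched as follows; all parameter values are illustrative, not the Weeks Bay estimates.

```python
import numpy as np

chl, kd = 10.0, 2.0        # Chl a (mg m^-3) and attenuation (m^-1), assumed
Pm, alpha = 5.0, 0.02      # P-E maximum and initial slope, assumed
E0 = 1500.0                # surface irradiance, assumed

z = np.linspace(0.0, 5.0, 500)                    # depth grid, m
Ez = E0 * np.exp(-kd * z)                         # exponential light attenuation
Pz = chl * Pm * (1.0 - np.exp(-alpha * Ez / Pm))  # saturating P-E response
dz = z[1] - z[0]
areal_P = (Pz * dz).sum()                         # depth-integrated productivity
print("areal productivity (illustrative units):", areal_P)
```

Because productivity scales directly with Chla while the P-E terms enter through a saturating response, uncertainty in Chla propagates more strongly to areal productivity, consistent with the abstract's conclusion.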
He, Yan-Lin; Xu, Yuan; Geng, Zhi-Qiang; Zhu, Qun-Xiong
2016-03-01
In this paper, a hybrid robust model based on an improved functional link neural network integrating with partial least square (IFLNN-PLS) is proposed. Firstly, an improved functional link neural network with small norm of expanded weights and high input-output correlation (SNEWHIOC-FLNN) was proposed for enhancing the generalization performance of FLNN. Unlike the traditional FLNN, the expanded variables of the original inputs are not directly used as the inputs in the proposed SNEWHIOC-FLNN model. The original inputs are attached to some small norm of expanded weights. As a result, the correlation coefficient between some of the expanded variables and the outputs is enhanced. The larger the correlation coefficient is, the more relevant the expanded variables tend to be. In the end, the expanded variables with larger correlation coefficients are selected as the inputs to improve the performance of the traditional FLNN. In order to test the proposed SNEWHIOC-FLNN model, three UCI (University of California, Irvine) regression datasets named Housing, Concrete Compressive Strength (CCS), and Yacht Hydro Dynamics (YHD) are selected. Then a hybrid model based on the improved FLNN integrating with partial least square (IFLNN-PLS) was built. In the IFLNN-PLS model, the connection weights are calculated using the partial least square method rather than the error back-propagation algorithm. Lastly, IFLNN-PLS was developed as an intelligent measurement model for accurately predicting the key variables in the Purified Terephthalic Acid (PTA) process and the High Density Polyethylene (HDPE) process. Simulation results illustrated that the IFLNN-PLS could significantly improve the prediction performance. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
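The expand-then-select-by-correlation idea can be sketched as below; the expansion basis and the selection threshold are illustrative stand-ins, not the paper's SNEWHIOC-FLNN specifics.

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.uniform(-1.0, 1.0, (100, 2))
y = X[:, 0] ** 2 + 0.1 * rng.normal(size=100)   # output driven by x0 squared

# Functional-link expansion: original inputs, their squares, and a cross term
expanded = np.column_stack([X, X ** 2, X[:, 0] * X[:, 1]])
# Keep only expanded terms whose correlation with the output is high
corr = np.array([abs(np.corrcoef(col, y)[0, 1]) for col in expanded.T])
selected = expanded[:, corr > 0.5]
print("kept", selected.shape[1], "of", expanded.shape[1], "expanded terms")
```

Only the x0² column correlates strongly with this synthetic output, so the correlation filter retains the relevant expanded variable and discards the rest, mirroring the selection step described in the abstract.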
Generation of Near-Inertial Currents on the Mid-Atlantic Bight by Hurricane Arthur (2014)
NASA Astrophysics Data System (ADS)
Zhang, Fan; Li, Ming; Miles, Travis
2018-04-01
Near-inertial currents (NICs) were observed on the Mid-Atlantic Bight (MAB) during the passage of Hurricane Arthur (2014). High-frequency radars showed that the surface currents were weak near the coast but increased in the offshore direction. The NICs were damped out in 3-4 days in the southern MAB but persisted for up to 10 days in the northern MAB. A Slocum glider deployed on the shelf recorded two-layer baroclinic currents oscillating at the inertial frequency. A numerical model was developed to interpret the observed spatial and temporal variabilities of the NICs and their vertical modal structure. Energy budget analysis showed that most of the differences in the NICs between the shelf and the deep ocean were determined by the spatial variations in wind energy input. In the southern MAB, energy dissipation quickly balanced the wind energy input, causing a rapid damping of the NICs. In the northern MAB, however, the dissipation lagged the wind energy input such that the NICs persisted. The model further showed that mode-1 waves dominated throughout the MAB shelf and accounted for over 70% of the current variability in the NICs. Rotary spectrum analyses revealed that the NICs were the largest component of the total kinetic energy except in the southern MAB and the inner shelf regions with strong tides. The NICs were also a major contributor to the shear spectrum over an extensive area of the MAB shelf and thus may play an important role in producing turbulent mixing and cooling of the surface mixed layer.
Drivers of the primate thalamus
Rovó, Zita; Ulbert, István; Acsády, László
2012-01-01
The activity of thalamocortical neurons is largely determined by giant excitatory terminals, called drivers. These afferents may arise from neocortex or from subcortical centers; however their exact distribution, segregation or putative absence in given thalamic nuclei are unknown. To unravel the nucleus-specific composition of drivers, we mapped the entire macaque thalamus utilizing vesicular glutamate transporters 1 and 2 to label cortical and subcortical afferents, respectively. Large thalamic territories were innervated exclusively either by giant vGLUT2- or vGLUT1-positive boutons. Co-distribution of drivers with different origin was not abundant. In several thalamic regions, no giant terminals of any type could be detected at light microscopic level. Electron microscopic observation of these territories revealed either the complete absence of large multisynaptic excitatory terminals (basal ganglia-recipient nuclei) or the presence of both vGLUT1- and vGLUT2-positive terminals, which were significantly smaller than their giant counterparts (intralaminar nuclei, medial pulvinar). In the basal ganglia-recipient thalamus, giant inhibitory terminals replaced the excitatory driver inputs. The pulvinar and the mediodorsal nucleus displayed subnuclear heterogeneity in their driver assemblies. These results show that distinct thalamic territories can be under pure subcortical or cortical control; however there is significant variability in the composition of major excitatory inputs in several thalamic regions. Since thalamic information transfer depends on the origin and complexity of the excitatory inputs, this suggests that the computations performed by individual thalamic regions display considerable variability. Finally, the map of driver distribution may help to resolve the morphological basis of human diseases involving different parts of the thalamus. PMID:23223308
Dynamic modal estimation using instrumental variables
NASA Technical Reports Server (NTRS)
Salzwedel, H.
1980-01-01
A method to determine the modes of dynamical systems is described. The inputs and outputs of a system are Fourier transformed and averaged to reduce the error level. An instrumental variable method that estimates modal parameters from multiple correlations between responses of single input, multiple output systems is applied to estimate aircraft, spacecraft, and off-shore platform modal parameters.
Urban vs. Rural CLIL: An Analysis of Input-Related Variables, Motivation and Language Attainment
ERIC Educational Resources Information Center
Alejo, Rafael; Piquer-Píriz, Ana
2016-01-01
The present article carries out an in-depth analysis of the differences in motivation, input-related variables and linguistic attainment of the students at two content and language integrated learning (CLIL) schools operating within the same institutional and educational context, the Spanish region of Extremadura, and differing only in terms of…
Variable Input and the Acquisition of Plural Morphology
ERIC Educational Resources Information Center
Miller, Karen L.; Schmitt, Cristina
2012-01-01
The present article examines the effect of variable input on the acquisition of plural morphology in two varieties of Spanish: Chilean Spanish, where the plural marker is sometimes omitted due to a phonological process of syllable final /s/ lenition, and Mexican Spanish (of Mexico City), with no such lenition process. The goal of the study is to…
Precision digital pulse phase generator
McEwan, T.E.
1996-10-08
A timing generator comprises a crystal oscillator connected to provide an output reference pulse. A resistor-capacitor combination is connected to provide a variable-delay output pulse from an input connected to the crystal oscillator. A phase monitor is connected to provide duty-cycle representations of the reference and variable-delay output pulse phase. An operational amplifier drives a control voltage to the resistor-capacitor combination according to currents integrated from the phase monitor and injected into summing junctions. A digital-to-analog converter injects a control current into the summing junctions according to an input digital control code. A servo equilibrium results that provides a phase delay of the variable-delay output pulse to the output reference pulse that linearly depends on the input digital control code.
Radio-loud AGN Variability from Propagating Relativistic Jets
NASA Astrophysics Data System (ADS)
Li, Yutong; Schuh, Terance; Wiita, Paul J.
2018-06-01
The great majority of variable emission in radio-loud AGNs is understood to arise from the relativistic flows of plasma along two oppositely directed jets. We study this process using the Athena hydrodynamics code to simulate propagating three-dimensional relativistic jets for a wide range of input jet velocities and jet-to-ambient matter density ratios. We then focus on those simulations that remain essentially stable for extended distances (60-120 times the jet radius). Adopting results for the densities, pressures and velocities from these propagating simulations we estimate emissivities from each cell. The observed emissivity from each cell is strongly dependent upon its variable Doppler boosting factor, which depends upon the changing bulk velocities in those zones with respect to our viewing angle to the jet. We then sum the approximations to the fluxes from a large number of zones upstream of the primary reconfinement shock. The light curves so produced are similar to those of blazars, although turbulence on sub-grid scales is likely to be important for the variability on the shortest timescales.
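The per-cell Doppler boosting that converts bulk velocities into observed emissivity follows the standard relation δ = 1/[Γ(1 − β cos θ)], with observed flux scaling as δ^(3+α) for a discrete moving source. A small numeric sketch, with an assumed spectral index:

```python
import numpy as np

def doppler_factor(beta, theta):
    """delta = 1 / (Gamma * (1 - beta * cos(theta)))."""
    gamma = 1.0 / np.sqrt(1.0 - beta ** 2)
    return 1.0 / (gamma * (1.0 - beta * np.cos(theta)))

alpha = 0.7                          # assumed spectral index
beta, theta = 0.95, np.radians(5.0)  # bulk speed (units of c), viewing angle
delta = doppler_factor(beta, theta)
boost = delta ** (3.0 + alpha)       # observed / intrinsic flux ratio
print(f"delta = {delta:.2f}, boost = {boost:.0f}x")
```

Because the boost is a steep power of δ, modest changes in cell velocity or direction relative to the line of sight produce large flux variations, which is why the simulated light curves vary so strongly.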
Regenerative braking device with rotationally mounted energy storage means
Hoppie, Lyle O.
1982-03-16
A regenerative braking device for an automotive vehicle includes an energy storage assembly (12) having a plurality of rubber rollers (26, 28) mounted for rotation between an input shaft (30) and an output shaft (32), clutches (50, 56) and brakes (52, 58) associated with each shaft, and a continuously variable transmission (22) connectable to a vehicle drivetrain and to the input and output shafts by the respective clutches. In a second embodiment the clutches and brakes are dispensed with and the variable ratio transmission is connected directly across the input and output shafts. In both embodiments the rubber rollers are torsionally stressed to accumulate energy from the vehicle when the input shaft rotates faster or relative to the output shaft and are torsionally relaxed to deliver energy to the vehicle when the output shaft rotates faster or relative to the input shaft.
NASA Technical Reports Server (NTRS)
Carlson, C. R.
1981-01-01
The user documentation of the SYSGEN model and its links with other simulations is described. SYSGEN is a production costing and reliability model of electric utility systems. Hydroelectric, storage, and time-dependent generating units are modeled in addition to conventional generating plants. Input variables, modeling options, output variables, and report formats are explained. SYSGEN can also be run interactively by using a program called FEPS (Front End Program for SYSGEN). A format for SYSGEN input variables which is designed for use with FEPS is presented.
Nájera, S; Gil-Martínez, M; Zambrano, J A
2015-01-01
The aim of this paper is to establish and quantify different operational goals and control strategies in autothermal thermophilic aerobic digestion (ATAD). This technology appears as an alternative to conventional sludge digestion systems. During the batch-mode reaction, high temperatures promote sludge stabilization and pasteurization. The digester temperature is usually the only online, robust, measurable variable. The average temperature can be regulated by manipulating both the air injection and the sludge retention time. An improved performance of diverse biochemical variables can be achieved through proper manipulation of these inputs. However, a better quality of treated sludge usually implies higher operating costs or a lower production rate. Thus, quality, production, and cost indices are defined to quantify the outcomes of the treatment. Based on these, tradeoff control strategies are proposed and illustrated through some examples. This paper's results are relevant to guide plant operators, to design automatic control systems, and to compare or evaluate the control performance of ATAD systems.
Park, Heesu; Dong, Suh-Yeon; Lee, Miran; Youn, Inchan
2017-07-24
Human-activity recognition (HAR) and energy-expenditure (EE) estimation are major functions in the mobile healthcare system. Both functions have been investigated for a long time; however, several challenges remain unsolved, such as the confusion between activities and the recognition of energy-consuming activities involving little or no movement. To solve these problems, we propose a novel approach using an accelerometer and electrocardiogram (ECG). First, we collected a database of six activities (sitting, standing, walking, ascending, resting and running) of 13 voluntary participants. We compared the HAR performances of three models with respect to the input data type (with none, all, or some of the heart-rate variability (HRV) parameters). The best recognition performance was 96.35%, which was obtained with some selected HRV parameters. EE was also estimated for different choices of the input data type (with or without HRV parameters) and the model type (single and activity-specific). The best estimation performance was found in the case of the activity-specific model with HRV parameters. Our findings indicate that the use of human physiological data, obtained by wearable sensors, has a significant impact on both HAR and EE estimation, which are crucial functions in the mobile healthcare system.
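The feature-set comparison (accelerometer-only versus accelerometer plus HRV) can be sketched on synthetic data; the classifier, features, and class structure below are stand-ins, not the paper's dataset or models.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n = 300
activity = rng.integers(0, 3, n)                    # 3 synthetic activity classes
accel = rng.normal(activity[:, None], 1.0, (n, 2))  # noisy movement features
hrv = rng.normal(activity[:, None], 0.5, (n, 2))    # heart-rate-variability features

acc_only = cross_val_score(RandomForestClassifier(random_state=0),
                           accel, activity, cv=3).mean()
acc_hrv = cross_val_score(RandomForestClassifier(random_state=0),
                          np.hstack([accel, hrv]), activity, cv=3).mean()
print(f"accelerometer only: {acc_only:.2f}, with HRV: {acc_hrv:.2f}")
```

Comparing cross-validated accuracy across the two input sets is the same experimental contrast the abstract reports, where adding selected HRV parameters gave the best recognition performance.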
12 CFR Appendix A to Subpart B of... - Risk-Based Capital Test Methodology and Specifications
Code of Federal Regulations, 2013 CFR
2013-01-01
…3.2, Mortgage Amortization Schedule Inputs; 3-32, Loan Group Inputs for Mortgage Amortization…; Prepayment Explanatory Variables F; 3.6.3.5.2, Multifamily Default and Prepayment Inputs; 3-38, Loan Group…; Group Inputs for Gross Loss Severity F; 3.3.4, Interest Rates Outputs; 3.6.3.3.4, Mortgage Amortization…
Homeier, Jürgen; Hertel, Dietrich; Camenzind, Tessa; Cumbicus, Nixon L.; Maraun, Mark; Martinson, Guntars O.; Poma, L. Nohemy; Rillig, Matthias C.; Sandmann, Dorothee; Scheu, Stefan; Veldkamp, Edzo; Wilcke, Wolfgang; Wullaert, Hans; Leuschner, Christoph
2012-01-01
Tropical regions are facing increasing atmospheric inputs of nutrients, which will have unknown consequences for the structure and functioning of these systems. Here, we show that Neotropical montane rainforests respond rapidly to moderate additions of N (50 kg ha−1 yr−1) and P (10 kg ha−1 yr−1). Monitoring of nutrient fluxes demonstrated that the majority of added nutrients remained in the system, in either soil or vegetation. N and P additions led to not only an increase in foliar N and P concentrations, but also altered soil microbial biomass, standing fine root biomass, stem growth, and litterfall. The different effects suggest that trees are primarily limited by P, whereas some processes—notably aboveground productivity—are limited by both N and P. Highly variable and partly contrasting responses of different tree species suggest marked changes in species composition and diversity of these forests by nutrient inputs in the long term. The unexpectedly fast response of the ecosystem to moderate nutrient additions suggests high vulnerability of tropical montane forests to the expected increase in nutrient inputs. PMID:23071734
Mall, David; Larsen, Ashley E; Martin, Emily A
2018-01-05
Transforming modern agriculture towards both higher yields and greater sustainability is critical for preserving biodiversity in an increasingly populous and variable world. However, the intensity of agricultural practices varies strongly between crop systems. Given limited research capacity, it is crucial to focus efforts to increase sustainability in the crop systems that need it most. In this study, we investigate the match (or mismatch) between the intensity of pesticide use and the availability of knowledge on the ecosystem service of natural pest control across various crop systems. Using a systematic literature search on pest control and publicly available pesticide data, we find that pest control literature is not more abundant in crops where insecticide input per hectare is highest. Instead, pest control literature is most abundant, with the highest number of studies published, in crops with comparatively low insecticide input per hectare but with high world harvested area. These results suggest that a major increase of interest in agroecological research towards crops with high insecticide input, particularly cotton and horticultural crops such as citrus and high value-added vegetables, would help meet knowledge needs for a timely ecointensification of agriculture.
Li, Chunyan; Tripathi, Pradeep K; Armstrong, William E
2007-01-01
The firing pattern of magnocellular neurosecretory neurons is intimately related to hormone release, but the relative contribution of synaptic versus intrinsic factors to the temporal dispersion of spikes is unknown. In the present study, we examined the firing patterns of vasopressin (VP) and oxytocin (OT) supraoptic neurons in coronal slices from virgin female rats, with and without blockade of inhibitory and excitatory synaptic currents. Inhibitory postsynaptic currents (IPSCs) were twice as prevalent as their excitatory counterparts (EPSCs), and both were more prevalent in OT compared with VP neurons. Oxytocin neurons fired more slowly and irregularly than VP neurons near threshold. Blockade of Cl− currents (including tonic and synaptic currents) with picrotoxin reduced interspike interval (ISI) variability of continuously firing OT and VP neurons without altering input resistance or firing rate. Blockade of EPSCs did not affect firing pattern. Phasic bursting neurons (putative VP neurons) were inconsistently affected by broad synaptic blockade, suggesting that intrinsic factors may dominate the ISI distribution during this mode in the slice. Specific blockade of synaptic IPSCs with gabazine also reduced ISI variability, but only in OT neurons. In all cases, the effect of inhibitory blockade on firing pattern was independent of any consistent change in input resistance or firing rate. Since the great majority of IPSCs are randomly distributed, miniature events (mIPSCs) in the coronal slice, these findings imply that even mIPSCs can impart irregularity to the firing pattern of OT neurons in particular, and could be important in regulating spike patterning in vivo. For example, the increased firing variability that precedes bursting in OT neurons during lactation could be related to significant changes in synaptic activity. PMID:17332000
The Effect of Visual Variability on the Learning of Academic Concepts.
Bourgoyne, Ashley; Alt, Mary
2017-06-10
The purpose of this study was to identify effects of variability of visual input on development of conceptual representations of academic concepts for college-age students with normal language (NL) and those with language-learning disabilities (LLD). Students with NL (n = 11) and LLD (n = 11) participated in a computer-based training for introductory biology course concepts. Participants were trained on half the concepts under a low-variability condition and half under a high-variability condition. Participants completed a posttest in which they were asked to identify and rate the accuracy of novel and trained visual representations of the concepts. We performed separate repeated measures analyses of variance to examine the accuracy of identification and ratings. Participants were equally accurate on trained and novel items in the high-variability condition, but were less accurate on novel items only in the low-variability condition. The LLD group showed the same pattern as the NL group; they were just less accurate. Results indicated that high-variability visual input may facilitate the acquisition of academic concepts in college students with NL and LLD. High-variability visual input may be especially beneficial for generalization to novel representations of concepts. Implicit learning methods may be harnessed by college courses to provide students with basic conceptual knowledge when they are entering courses or beginning new units.
Spahr, Norman E.; Mueller, David K.; Wolock, David M.; Hitt, Kerie J.; Gronberg, JoAnn M.
2010-01-01
Data collected for the U.S. Geological Survey National Water-Quality Assessment program from 1992-2001 were used to investigate the relations between nutrient concentrations and nutrient sources, hydrology, and basin characteristics. Regression models were developed to estimate annual flow-weighted concentrations of total nitrogen and total phosphorus using explanatory variables derived from currently available national ancillary data. Different total-nitrogen regression models were used for agricultural (25 percent or more of basin area classified as agricultural land use) and nonagricultural basins. Atmospheric, fertilizer, and manure inputs of nitrogen, percent sand in soil, subsurface drainage, overland flow, mean annual precipitation, and percent undeveloped area were significant variables in the agricultural basin total nitrogen model. Significant explanatory variables in the nonagricultural total nitrogen model were total nonpoint-source nitrogen input (sum of nitrogen from manure, fertilizer, and atmospheric deposition), population density, mean annual runoff, and percent base flow. The concentrations of nutrients derived from regression (CONDOR) models were applied to drainage basins associated with the U.S. Environmental Protection Agency (USEPA) River Reach File (RF1) to predict flow-weighted mean annual total nitrogen concentrations for the conterminous United States. The majority of stream miles in the Nation have predicted concentrations less than 5 milligrams per liter. Concentrations greater than 5 milligrams per liter were predicted for a broad area extending from Ohio to eastern Nebraska, areas spatially associated with greater application of fertilizer and manure. Probabilities that mean annual total-nitrogen concentrations exceed the USEPA regional nutrient criteria were determined by incorporating model prediction uncertainty. 
In all nutrient regions where criteria have been established, there is at least a 50 percent probability of exceeding the criteria in more than half of the stream miles. Dividing calibration sites into agricultural and nonagricultural groups did not improve the explanatory capability for total phosphorus models. The group of explanatory variables that yielded the lowest model error for mean annual total phosphorus concentrations includes phosphorus input from manure, population density, amounts of range land and forest land, percent sand in soil, and percent base flow. However, the large unexplained variability and associated model error precluded the use of the total phosphorus model for nationwide extrapolations.
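The two quantities at the heart of this study, flow-weighted mean concentrations and the probability of exceeding a nutrient criterion under model uncertainty, can be sketched in a few lines. This is an illustrative reconstruction, not the CONDOR code: the function names are invented, and the lognormal prediction-error assumption is a common modeling choice rather than a detail taken from the report.

```python
import math

def flow_weighted_mean(concs, flows):
    """Flow-weighted mean concentration: sum(C_i * Q_i) / sum(Q_i)."""
    return sum(c * q for c, q in zip(concs, flows)) / sum(flows)

def exceedance_probability(pred_log_mean, pred_log_sd, criterion):
    """P(concentration > criterion), assuming lognormal prediction error."""
    z = (math.log(criterion) - pred_log_mean) / pred_log_sd
    # standard normal survival function via the complementary error function
    return 0.5 * math.erfc(z / math.sqrt(2))

# e.g. two samples at 2 and 4 mg/L with flows 1 and 3 -> weighted mean 3.5
fwm = flow_weighted_mean([2.0, 4.0], [1.0, 3.0])
```

When the predicted median equals the criterion, the exceedance probability is 0.5 by construction, which is a convenient sanity check on the implementation.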
Not All Children Agree: Acquisition of Agreement when the Input Is Variable
ERIC Educational Resources Information Center
Miller, Karen
2012-01-01
In this paper we investigate the effect of variable input on the acquisition of grammar. More specifically, we examine the acquisition of the third person singular marker -s on the auxiliary "do" in comprehension and production in two groups of children who are exposed to similar varieties of English but that differ with respect to adult…
Inthachot, Montri; Boonjing, Veera; Intakosum, Sarun
2016-01-01
This study investigated the use of Artificial Neural Network (ANN) and Genetic Algorithm (GA) for prediction of Thailand's SET50 index trend. ANN is a widely accepted machine learning method that uses past data to predict future trends, while GA is an algorithm that can find better subsets of input variables for importing into ANN, hence enabling more accurate prediction through efficient feature selection. The imported data were technical indicators highly regarded by stock analysts, each represented by 4 input variables based on past time spans of 4 different lengths: 3-, 5-, 10-, and 15-day spans before the day of prediction. This generated a large set of diverse input variables with an exponentially higher number of possible subsets, which GA culled down to a manageable number of more effective ones. SET50 index data from the past 6 years, 2009 to 2014, were used to evaluate the accuracy of this hybrid intelligence prediction, and the hybrid's prediction results were found to be more accurate than those made by a method using only one input variable for one fixed length of past time span. PMID:27974883
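A GA searching over subsets of input variables can be sketched as below. This is a generic illustration under stated assumptions: the fitness function is a toy stand-in for ANN validation accuracy (training a network per candidate subset, as the study does, is omitted), and all names are invented.

```python
import random

def ga_select(n_vars, fitness, pop_size=20, generations=30, p_mut=0.05, seed=1):
    """Tiny genetic algorithm over binary masks of input variables."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_vars)] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        parents = scored[: pop_size // 2]            # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_vars)           # one-point crossover
            child = a[:cut] + b[cut:]
            # per-bit mutation: flip with probability p_mut
            child = [bit ^ (rng.random() < p_mut) for bit in child]
            children.append(child)
        pop = parents + children                     # elitism: parents survive
    return max(pop, key=fitness)

# Toy fitness standing in for ANN accuracy: variables 0-3 are informative,
# the rest are noise, and every selected variable carries a small cost.
def toy_fitness(mask):
    return sum(mask[:4]) - 0.1 * sum(mask)

best = ga_select(16, toy_fitness)
```

With elitism the best mask never degrades between generations, so the search reliably drifts toward the informative subset on this toy problem.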
Improving permafrost distribution modelling using feature selection algorithms
NASA Astrophysics Data System (ADS)
Deluigi, Nicola; Lambiel, Christophe; Kanevski, Mikhail
2016-04-01
The availability of an increasing number of spatial data on the occurrence of mountain permafrost allows the employment of machine learning (ML) classification algorithms for modelling the distribution of the phenomenon. One of the major problems when dealing with high-dimensional datasets is the number of input features (variables) involved. Applying ML classification algorithms to this large number of variables leads to the risk of overfitting, with the consequence of poor generalization/prediction. For this reason, applying feature selection (FS) techniques helps simplify the set of factors required and improves understanding of the adopted features and their relation to the studied phenomenon. Moreover, removing irrelevant or redundant variables from the dataset effectively improves the quality of the ML prediction. This research deals with a comparative analysis of permafrost distribution models supported by FS variable importance assessment. The input dataset (dimension = 20-25, 10 m spatial resolution) was constructed using landcover maps, climate data and DEM-derived variables (altitude, aspect, slope, terrain curvature, solar radiation, etc.). It was completed with permafrost evidence (geophysical and thermal data and rock glacier inventories) that serves as training permafrost data. The FS algorithms used identified variables that appeared less statistically important for permafrost presence/absence. Three different algorithms were compared: Information Gain (IG), Correlation-based Feature Selection (CFS) and Random Forest (RF). IG is a filter technique that evaluates the worth of a predictor by measuring the information gain with respect to permafrost presence/absence. CFS, in contrast, evaluates the worth of a subset of predictors by considering the individual predictive ability of each variable along with the degree of redundancy between them.
Finally, RF is a ML algorithm that performs FS as part of its overall operation. It operates by constructing a large collection of decorrelated classification trees and then predicts permafrost occurrence through a majority vote. With the so-called out-of-bag (OOB) error estimate, the classification of permafrost data can be validated and the contribution of each predictor assessed. The performance of the compared permafrost distribution models (computed on independent testing sets) increased when FS algorithms were applied to the original dataset and irrelevant or redundant variables were removed. As a consequence, the process provided faster and more cost-effective predictors and a better understanding of the underlying structures residing in the permafrost data. Our work demonstrates the usefulness of a feature selection step prior to applying a machine learning algorithm. Permafrost predictors could thus be ranked not only by their heuristic and subjective importance (expert knowledge), but also by their statistical relevance in relation to the permafrost distribution.
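As an illustration of the filter-style scoring described above, an information gain computation for discrete predictors might look like the following. This is a minimal sketch with invented toy data, not the study's dataset or code.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a label sequence, in bits."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(feature, labels):
    """IG of a discrete feature w.r.t. presence/absence labels."""
    n = len(labels)
    by_value = {}
    for x, y in zip(feature, labels):
        by_value.setdefault(x, []).append(y)
    remainder = sum(len(ys) / n * entropy(ys) for ys in by_value.values())
    return entropy(labels) - remainder

# Toy data: 'aspect' perfectly predicts permafrost here; 'noise' barely does.
labels = [1, 1, 0, 0, 1, 0]                  # permafrost presence/absence
aspect = ['N', 'N', 'S', 'S', 'N', 'S']
noise  = ['a', 'b', 'a', 'b', 'a', 'b']
```

Ranking predictors by `information_gain` then lets the less informative ones (like `noise` here) be dropped before training the classifier.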
NASA Astrophysics Data System (ADS)
Kettner, A. J.; Syvitski, J. P.; Restrepo, J. D.
2008-12-01
This study explores the application of an empirical sediment flux model, BQART, to simulate long-term sediment fluxes of major tributaries of a river system from a limited number of input parameters. We validate model results against data for the 1612 km long Magdalena River, Colombia, South America, which is well monitored. The Magdalena River, draining a hinterland area of 257,438 km2, the majority of which lies in the Andes before the river reaches the Atlantic coast, is known for its high sediment yield of 560 t km-2 yr-1, higher than that of nearby South American rivers such as the Amazon or the Orinoco. Sediment fluxes of 32 tributary basins of the Magdalena River were simulated based on the following controlling factors: geomorphic influences (tributary-basin area and relief) derived from high-resolution Shuttle Radar Topography Mission data, tributary basin-integrated lithology based on GIS analysis of lithology data, 30-year temperature data, and observed monthly mean discharge records (varying in record length from 15 to 60 years). Preliminary results indicate that the simulated sediment flux of all 32 tributaries matches the observational record, given the observational error and the annual variability. These simulations did not yet take human influences into account, which often increase sediment fluxes by accelerating erosion, especially in steep mountainous areas like the Magdalena's. Simulations indicate that, with relatively few input parameters, mostly derived from remotely sensed data or existing compiled GIS datasets, it is possible to predict which tributaries in an arbitrary river drainage produce relatively high contributions to sediment yield, and where in the drainage basin conveyance loss might be expected.
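The BQART relation applied here is, as commonly written, Qs = ω·B·Q^0.31·A^0.5·R·T for basins with mean temperature T ≥ 2 °C (Qs in Mt/yr, Q in km³/yr, A in km², R in km, ω ≈ 0.0006); readers should check the published Syvitski-Milliman formulation before relying on the coefficient or units. A sketch of the warm-basin branch:

```python
def bqart_sediment_flux(B, Q, A, R, T, omega=0.0006):
    """Long-term suspended sediment flux (Mt/yr), warm-basin (T >= 2 degC) branch.

    B: lithology / human-disturbance factor (dimensionless)
    Q: water discharge (km^3/yr);  A: basin area (km^2)
    R: basin relief (km);          T: basin-averaged temperature (degC)
    """
    if T < 2:
        raise ValueError("cold-basin branch of BQART not implemented here")
    return omega * B * Q ** 0.31 * A ** 0.5 * R * T

# Hypothetical tributary: B=1, Q=100 km^3/yr, A=10,000 km^2, R=1 km, T=10 degC
flux = bqart_sediment_flux(1.0, 100.0, 10000.0, 1.0, 10.0)
```

Because the geomorphic terms enter multiplicatively, doubling relief doubles the predicted flux, which is why the steep Andean tributaries dominate the basin-wide yield.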
Testing of Two-Speed Transmission Configurations for Use in Rotorcraft
NASA Technical Reports Server (NTRS)
Lewicki, David G.; Stevens, Mark A.
2015-01-01
Large civil tiltrotors have been identified to replace regional airliners over medium ranges to alleviate next-generation air traffic. Variable rotor speed for these vehicles is required for efficient high-speed operation. Two-speed drive system research has been performed to support these advanced rotorcraft applications. Experimental tests were performed on two promising two-speed transmission configurations. The offset compound gear (OCG) transmission and the dual star/idler (DSI) planetary transmission were tested in the NASA Glenn Research Center variable-speed transmission test facility. Both configurations were inline devices with concentric input and output shafts, designed to provide 1:1 and 2:1 output speed reduction ratios. Both were designed for 200 hp and 15,000 rpm input speed and had a dry shift clutch configuration. Shift tests were performed on the transmissions at input speeds of 5,000, 8,000, 10,000, 12,500, and 15,000 rpm. Both the OCG and DSI configurations successfully performed speed shifts at the full rated 15,000 rpm input speed. The transient shifting behavior of the OCG and DSI configurations was very similar. The shift clutch had more of an effect on shifting dynamics than the reduction gearing configuration itself, since the same shift clutch was used in both configurations. For both OCG and DSI configurations, low-to-high speed shifts were limited in applied torque levels in order to prevent overloads on the transmission due to transient torque spikes. It is believed that the relative lack of appreciable slippage of the dry shifting clutch at the operating conditions and pressure profiles tested was a major cause of the transient torque spikes. For the low-to-high speed shifts, the output speed ramp-up time slightly decreased and the peak output torque slightly increased as the clutch pressure ramp-down rate increased. This was caused by slightly less clutch slippage as the clutch pressure ramp-down rate increased.
NASA Astrophysics Data System (ADS)
Boyer, E. W.; Goodale, C. L.; Howarth, R. W.; VanBreemen, N.
2001-12-01
Inputs of nitrogen (N) to aquatic and terrestrial ecosystems have increased during recent decades, primarily from the production and use of fertilizers, the planting of N-fixing crops, and the combustion of fossil fuels. We present mass-balanced budgets of N for 16 catchments along a latitudinal profile from Maine to Virginia, which encompass a range of climatic variability and are major drainages to the coast of the North Atlantic Ocean. We quantify inputs of N to each catchment from atmospheric deposition, application of nitrogenous fertilizers, biological nitrogen fixation by crops and trees, and import of N in agricultural products (food and feed). We relate these input terms to losses of N (total, organic, and nitrate) in streamflow. The importance of the relative N sources to N exports varies widely by watershed and is related to land use. Atmospheric deposition was the largest source of N to the forested catchments of northern New England (e.g., Penobscot and Kennebec); import of N in food was the largest source of N to the more populated regions of southern New England (e.g., Charles and Blackstone); and agricultural inputs were the dominant N sources in the Mid-Atlantic region (e.g., Schuylkill and Potomac). In all catchments, N inputs greatly exceed outputs, implying additional loss terms (e.g., denitrification or volatilization and transport of animal wastes) or changes in internal N stores (e.g., accumulation of N in vegetation, soil, or groundwater). We use our N budgets and several modeling approaches to constrain estimates of the fate of this excess N, including estimates of N storage in accumulating woody biomass, N losses due to in-stream denitrification, and more. This work is an effort of the SCOPE Nitrogen Project.
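The budgeting arithmetic behind such mass balances is simple enough to sketch. The numbers below are hypothetical placeholders, not values from the SCOPE budgets, and the residual term lumps together storage and unmeasured losses exactly as the text describes.

```python
def nitrogen_budget(inputs, outputs):
    """Catchment N mass balance (kg N/ha/yr).

    residual = inputs - outputs, i.e. internal storage plus unmeasured
    losses such as denitrification or animal-waste transport.
    """
    total_in = sum(inputs.values())
    total_out = sum(outputs.values())
    return {"inputs": total_in,
            "outputs": total_out,
            "residual": total_in - total_out,
            "export_fraction": total_out / total_in}

# Hypothetical catchment, for illustration only
inputs = {"atmospheric_deposition": 8.0, "fertilizer": 12.0,
          "biological_fixation": 5.0, "net_food_feed_import": 6.0}
outputs = {"riverine_export": 7.0}
budget = nitrogen_budget(inputs, outputs)
```

A large positive residual, as in this toy case, is the signature the authors describe: most N entering the catchment never reaches the stream gauge.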
INFANT HEALTH PRODUCTION FUNCTIONS: WHAT A DIFFERENCE THE DATA MAKE
Reichman, Nancy E.; Corman, Hope; Noonan, Kelly; Dave, Dhaval
2008-01-01
We examine the extent to which infant health production functions are sensitive to model specification and measurement error. We focus on the importance of typically unobserved but theoretically important variables (TUVs), other non-standard covariates (NSCs), input reporting, and characterization of infant health. The TUVs represent wantedness, taste for risky behavior, and maternal health endowment. The NSCs include father characteristics. We estimate the effects of prenatal drug use, prenatal cigarette smoking, and first-trimester prenatal care on birth weight, low birth weight, and a measure of abnormal infant health conditions. We compare estimates using self-reported inputs versus input measures that combine information from medical records and self-reports. We find that TUVs and NSCs are significantly associated with both inputs and outcomes, but that excluding them from infant health production functions does not appreciably affect the input estimates. However, using self-reported inputs leads to overestimated effects of inputs, particularly prenatal care, on outcomes, and using a direct measure of infant health does not always yield input estimates similar to those obtained when using birth weight outcomes. The findings have implications for research, data collection, and public health policy. PMID:18792077
Rain forest nutrient cycling and productivity in response to large-scale litter manipulation.
Wood, Tana E; Lawrence, Deborah; Clark, Deborah A; Chazdon, Robin L
2009-01-01
Litter-induced pulses of nutrient availability could play an important role in the productivity and nutrient cycling of forested ecosystems, especially tropical forests. Tropical forests experience such pulses as a result of wet-dry seasonality and during major climatic events, such as strong El Niños. We hypothesized that (1) an increase in the quantity and quality of litter inputs would stimulate leaf litter production, woody growth, and leaf litter nutrient cycling, and (2) the timing and magnitude of this response would be influenced by soil fertility and forest age. To test these hypotheses in a Costa Rican wet tropical forest, we established a large-scale litter manipulation experiment in two secondary forest sites and four old-growth forest sites of differing soil fertility. In replicated plots at each site, leaves and twigs (< 2 cm diameter) were removed from a 400-m2 area and added to an adjacent 100-m2 area. This transfer was the equivalent of adding 5-25 kg/ha of organic P to the forest floor. We analyzed leaf litter mass, [N] and [P], and N and P inputs for addition, removal, and control plots over a two-year period. We also evaluated basal area increment of trees in removal and addition plots. There was no response of forest productivity or nutrient cycling to litter removal; however, litter addition significantly increased leaf litter production and N and P inputs 4-5 months following litter application. Litter production increased as much as 92%, and P and N inputs as much as 85% and 156%, respectively. In contrast, litter manipulation had no significant effect on woody growth. The increase in leaf litter production and N and P inputs were significantly positively related to the total P that was applied in litter form. Neither litter treatment nor forest type influenced the temporal pattern of any of the variables measured. 
Thus, environmental factors such as rainfall drive temporal variability in litter and nutrient inputs, while nutrient release from decomposing litter influences the magnitude. Seasonal or annual variation in leaf litter mass, such as occurs in strong El Niño events, could positively affect leaf litter nutrient cycling and forest productivity, indicating an ability of tropical trees to rapidly respond to increased nutrient availability.
Thomas, R.E.
1959-01-20
An electronic circuit is presented for automatically computing the product of two selected variables by multiplying voltage pulses proportional to the variables. The multiplier circuit has a plurality of parallel resistors of predetermined values connected through separate gate circuits between a first input and the output terminal. One voltage pulse is applied to the first input while the second voltage pulse is applied to control circuitry for the respective gate circuits. The magnitude of the second voltage pulse selects the resistors upon which the first voltage pulse is impressed, whereby the resultant output voltage is proportional to the product of the input voltage pulses.
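The gating scheme can be mimicked digitally: the second pulse's magnitude is quantized into a gate word, each closed gate adds a binary-weighted conductance, and the summed output is proportional to the product. This is an illustrative model with invented names and scaling, not the patented circuit's actual component values.

```python
def gated_multiplier(v1, v2, n_bits=8, full_scale=10.0):
    """Digital sketch of the gated-resistor analog multiplier.

    The second pulse's magnitude closes a subset of gates; each closed
    gate contributes a binary-weighted conductance, and the first pulse
    drives the selected network, so the summed output ~ v1 * v2.
    """
    max_code = 2 ** n_bits - 1
    # Quantize v2 into an n-bit gate word (which parallel branches conduct)
    code = round(v2 / full_scale * max_code)
    # Sum the binary-weighted conductance of every closed gate
    conductance = sum((code >> k & 1) * 2 ** k for k in range(n_bits))
    return v1 * conductance * full_scale / max_code

out = gated_multiplier(3.0, 4.0)  # ~12.0, within quantization error
```

The quantization step is the digital analogue of the gate-selection thresholds; finer resistor ladders (larger `n_bits`) shrink the product error.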
Rispoli, Matthew; Holt, Janet K.
2017-01-01
Purpose: This follow-up study examined whether a parent intervention that increased the diversity of lexical noun phrase subjects in parent input and accelerated children's sentence diversity (Hadley et al., 2017) had indirect benefits on tense/agreement (T/A) morphemes in parent input and children's spontaneous speech. Method: Differences in input variables related to T/A marking were compared for parents who received toy talk instruction and a quasi-control group: input informativeness and full "is" declaratives. Language growth in tense/agreement productivity (TAP) was modeled for 38 children from language samples obtained at 21, 24, 27, and 30 months. Parent input properties following instruction and children's growth in lexical diversity and sentence diversity were examined as predictors of TAP growth. Results: Instruction increased parent use of full "is" declaratives (ηp² ≥ .25) but not input informativeness. Children's sentence diversity was also a significant time-varying predictor of TAP growth. Two input variables, lexical noun phrase subject diversity and full "is" declaratives, were also significant predictors, even after controlling for children's sentence diversity. Conclusions: These findings establish a link between children's sentence diversity and the development of T/A morphemes and provide evidence about characteristics of input that facilitate growth in this grammatical system. PMID:28892819
Global agricultural intensification during climate change: a role for genomics.
Abberton, Michael; Batley, Jacqueline; Bentley, Alison; Bryant, John; Cai, Hongwei; Cockram, James; de Oliveira, Antonio Costa; Cseke, Leland J; Dempewolf, Hannes; De Pace, Ciro; Edwards, David; Gepts, Paul; Greenland, Andy; Hall, Anthony E; Henry, Robert; Hori, Kiyosumi; Howe, Glenn Thomas; Hughes, Stephen; Humphreys, Mike; Lightfoot, David; Marshall, Athole; Mayes, Sean; Nguyen, Henry T; Ogbonnaya, Francis C; Ortiz, Rodomiro; Paterson, Andrew H; Tuberosa, Roberto; Valliyodan, Babu; Varshney, Rajeev K; Yano, Masahiro
2016-04-01
Agriculture is now facing the 'perfect storm' of climate change, increasing costs of fertilizer and rising food demands from a larger and wealthier human population. These factors point to a global food deficit unless the efficiency and resilience of crop production is increased. The intensification of agriculture has focused on improving production under optimized conditions, with significant agronomic inputs. Furthermore, the intensive cultivation of a limited number of crops has drastically narrowed the number of plant species humans rely on. A new agricultural paradigm is required, reducing dependence on high inputs and increasing crop diversity, yield stability and environmental resilience. Genomics offers unprecedented opportunities to increase crop yield, quality and stability of production through advanced breeding strategies, enhancing the resilience of major crops to climate variability, and increasing the productivity and range of minor crops to diversify the food supply. Here we review the state of the art of genomic-assisted breeding for the most important staples that feed the world, and how to use and adapt such genomic tools to accelerate development of both major and minor crops with desired traits that enhance adaptation to, or mitigate the effects of climate change. © 2015 The Authors. Plant Biotechnology Journal published by Society for Experimental Biology and The Association of Applied Biologists and John Wiley & Sons Ltd.
Wesolowski, Edwin A.
1996-01-01
Two separate studies to simulate the effects of discharging treated wastewater to the Red River of the North at Fargo, North Dakota, and Moorhead, Minnesota, have been completed. In the first study, the Red River at Fargo Water-Quality Model was calibrated and verified for ice-free conditions. In the second study, the Red River at Fargo Ice-Cover Water-Quality Model was verified for ice-cover conditions. To better understand and apply the Red River at Fargo Water-Quality Model and the Red River at Fargo Ice-Cover Water-Quality Model, the uncertainty associated with simulated constituent concentrations and property values was analyzed and quantified using the Enhanced Stream Water Quality Model-Uncertainty Analysis. The Monte Carlo simulation and first-order error analysis methods were used to analyze the uncertainty in simulated values for six constituents and properties at sites 5, 10, and 14 (upstream to downstream order). The constituents and properties analyzed for uncertainty are specific conductance, total organic nitrogen (reported as nitrogen), total ammonia (reported as nitrogen), total nitrite plus nitrate (reported as nitrogen), 5-day carbonaceous biochemical oxygen demand for ice-cover conditions and ultimate carbonaceous biochemical oxygen demand for ice-free conditions, and dissolved oxygen. Results are given in detail for both the ice-cover and ice-free conditions for specific conductance, total ammonia, and dissolved oxygen. The sensitivity and uncertainty of the simulated constituent concentrations and property values to input variables differ substantially between ice-cover and ice-free conditions. During ice-cover conditions, simulated specific-conductance values are most sensitive to the headwater-source specific-conductance values upstream of site 10 and the point-source specific-conductance values downstream of site 10. These headwater-source and point-source specific-conductance values also are the key sources of uncertainty.
Simulated total ammonia concentrations are most sensitive to the point-source total ammonia concentrations at all three sites. Other input variables that contribute substantially to the variability of simulated total ammonia concentrations are the headwater-source total ammonia and the instream reaction coefficient for biological decay of total ammonia to total nitrite. Simulated dissolved-oxygen concentrations at all three sites are most sensitive to the headwater-source dissolved-oxygen concentration. This input variable is the key source of variability for simulated dissolved-oxygen concentrations at sites 5 and 10. Headwater-source and point-source dissolved-oxygen concentrations are the key sources of variability for simulated dissolved-oxygen concentrations at site 14. During ice-free conditions, simulated specific-conductance values at all three sites are most sensitive to the headwater-source specific-conductance values. Headwater-source specific-conductance values also are the key source of uncertainty. The input variables to which total ammonia and dissolved oxygen are most sensitive vary from site to site and may or may not correspond to the input variables that contribute the most to the variability. The input variables that contribute the most to the variability of simulated total ammonia concentrations are point-source total ammonia, the instream reaction coefficient for biological decay of total ammonia to total nitrite, and Manning's roughness coefficient. The input variables that contribute the most to the variability of simulated dissolved-oxygen concentrations are the reaeration rate, the sediment oxygen demand rate, and headwater-source algae as chlorophyll a.
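A Monte Carlo uncertainty analysis of this kind can be sketched with a toy reach model. The mixing/decay function and all parameter values below are invented stand-ins for the actual model inputs; the point is the propagation of input distributions into an output mean and standard deviation.

```python
import random
import statistics

def downstream_do(hw_do, ps_do, hw_q=8.0, ps_q=2.0, decay=0.5):
    """Toy flow-weighted mixing of headwater and point-source dissolved
    oxygen, minus a fixed in-reach decay (mg/L)."""
    mixed = (hw_do * hw_q + ps_do * ps_q) / (hw_q + ps_q)
    return mixed - decay

def monte_carlo(n=5000, seed=42):
    """Propagate normal input uncertainty through the reach model."""
    rng = random.Random(seed)
    sims = [downstream_do(rng.gauss(9.0, 0.5),   # headwater DO: 9.0 +/- 0.5
                          rng.gauss(6.0, 1.0))   # point-source DO: 6.0 +/- 1.0
            for _ in range(n)]
    return statistics.mean(sims), statistics.stdev(sims)

mean_do, sd_do = monte_carlo()
```

For this linear model, first-order error analysis gives the same answer in closed form (the output standard deviation is the flow-weight-scaled root-sum-square of the input standard deviations), which is why the two methods are used side by side in the report.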
Kernel-PCA data integration with enhanced interpretability
2014-01-01
Background: Nowadays, combining the different sources of information to improve the biological knowledge available is a challenge in bioinformatics. One of the most powerful methods for integrating heterogeneous data types is kernel-based methods. Kernel-based data integration approaches consist of two basic steps: first, the right kernel is chosen for each data set; second, the kernels from the different data sources are combined to give a complete representation of the available data for a given statistical task. Results: We analyze the integration of data from several sources of information using kernel PCA, from the point of view of reducing dimensionality. Moreover, we improve the interpretability of kernel PCA by adding to the plot the representation of the input variables that belong to any dataset. In particular, for each input variable or linear combination of input variables, we can represent the direction of maximum growth locally, which allows us to identify those samples with higher/lower values of the variables analyzed. Conclusions: The integration of different datasets and the simultaneous representation of samples and variables together give us a better understanding of biological knowledge. PMID:25032747
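A minimal kernel PCA in the spirit described, with an RBF kernel, double-centering, and power iteration for the leading component, can be sketched as follows. This is a hypothetical from-scratch illustration stripped of the variable-representation machinery the paper adds.

```python
import math

def rbf_kernel(xs, gamma=0.5):
    """Gram matrix of the RBF kernel k(x, y) = exp(-gamma * ||x - y||^2)."""
    return [[math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))
             for y in xs] for x in xs]

def center_kernel(K):
    """Double-center the Gram matrix (kernel PCA needs centered features)."""
    n = len(K)
    row = [sum(r) / n for r in K]
    tot = sum(row) / n
    return [[K[i][j] - row[i] - row[j] + tot for j in range(n)]
            for i in range(n)]

def top_component(K, iters=200):
    """Leading eigenvector of a symmetric PSD matrix via power iteration."""
    n = len(K)
    v = [float(i + 1) for i in range(n)]  # not all-ones: that vector lies in
    for _ in range(iters):                # the null space of a centered K
        w = [sum(K[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    return v

# Two well-separated sample groups: their scores on the first kernel
# principal component should have opposite signs.
xs = [(0, 0), (0.1, 0), (0, 0.1), (5, 5), (5.1, 5), (5, 5.1)]
scores = top_component(center_kernel(rbf_kernel(xs)))
```

Combining kernels from several data sources then amounts to summing (or weighting) their centered Gram matrices before the eigendecomposition, which is the integration step the abstract refers to.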
NASA Astrophysics Data System (ADS)
Forsythe, N.; Blenkinsop, S.; Fowler, H. J.
2015-05-01
A three-step climate classification was applied to a spatial domain covering the Himalayan arc and adjacent plains regions using input data from four global meteorological reanalyses. Input variables were selected based on an understanding of the climatic drivers of regional water resource variability and crop yields. Principal component analysis (PCA) of those variables and k-means clustering on the PCA outputs revealed a reanalysis ensemble consensus for eight macro-climate zones. Spatial statistics of input variables for each zone revealed consistent, distinct climatologies. This climate classification approach has potential for enhancing assessment of climatic influences on water resources and food security as well as for characterising the skill and bias of gridded data sets, both meteorological reanalyses and climate models, in reproducing subregional climatologies. Through their spatial descriptors (area, geographic centroid, elevation mean and range), climate classifications also provide metrics, beyond simple changes in individual variables, with which to assess the magnitude of projected climate change. Such sophisticated metrics are of particular interest for regions, including mountainous areas, where natural and anthropogenic systems are expected to be sensitive to incremental climate shifts.
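The classification pipeline (standardize the input variables, PCA, then k-means with eight clusters) can be sketched as follows; the gridded data here are random stand-ins for the reanalysis fields:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)
# Hypothetical stand-in for reanalysis fields: 500 grid cells with 6
# climate variables (e.g. seasonal precipitation and temperature stats)
X = rng.normal(size=(500, 6))

# Standardize, reduce with PCA, then k-means cluster the PCA scores
scores = PCA(n_components=3).fit_transform(StandardScaler().fit_transform(X))
zones = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(scores)
print(np.bincount(zones))  # grid-cell count per macro-climate zone
```

Per-zone spatial statistics of the original variables would then be computed by grouping `X` on `zones`.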
NASA Astrophysics Data System (ADS)
Hao, Wenrui; Lu, Zhenzhou; Li, Luyi
2013-05-01
To explore the contributions of correlated input variables to the variance of the output, a novel interpretation framework of importance measure indices is proposed for models with correlated inputs, comprising indices of the total correlated contribution and the total uncorrelated contribution. The proposed indices accurately describe the contributions of the correlated inputs to the variance of the output, and they can be viewed as a complement and correction of the interpretation of correlated-input contributions presented in "Estimation of global sensitivity indices for models with dependent variables, Computer Physics Communications, 183 (2012) 937-946". Both indices contain the independent contribution of an individual input. Taking the general quadratic polynomial as an illustration, the total correlated contribution and the independent contribution of an individual input are derived analytically, from which the components, and the origins, of both contributions of a correlated input can be clarified without ambiguity. In the special case where no square term is included in the quadratic polynomial model, the total correlated contribution of an input can be further decomposed into the variance contribution related to the correlation of that input with the other inputs and the independent contribution of the input itself, and the total uncorrelated contribution can be further decomposed into an independent part arising from interaction between the input and the others and an independent part from the input itself. Numerical examples demonstrate that the derived analytical expressions of the variance-based importance measure are correct, and that the analytical clarification of the correlated-input contribution to the model output is important for extending the theory and solutions for uncorrelated inputs to the correlated case.
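For intuition, the total correlated contribution of an input can be estimated numerically as Var(E[Y|Xi]). A minimal Monte Carlo sketch, using a simple linear model with bivariate normal inputs (an assumption for illustration, not the paper's quadratic example):

```python
import numpy as np

rng = np.random.default_rng(1)
rho = 0.5
n = 200_000
# Bivariate standard-normal inputs with correlation rho
cov = [[1.0, rho], [rho, 1.0]]
x1, x2 = rng.multivariate_normal([0.0, 0.0], cov, size=n).T
y = x1 + x2  # simple linear model

# Total correlated contribution of x1 is Var(E[Y | x1]); here
# E[Y | x1] = (1 + rho) * x1, so the analytic value is (1 + rho)**2.
# Estimate it by conditional binning on x1:
edges = np.quantile(x1, np.linspace(0.0, 1.0, 201))
idx = np.clip(np.digitize(x1, edges) - 1, 0, 199)
cond_mean = np.array([y[idx == k].mean() for k in range(200)])
weight = np.array([(idx == k).mean() for k in range(200)])
v_corr = np.sum(weight * (cond_mean - y.mean()) ** 2)

print(round(v_corr, 2))   # ~ (1 + rho)**2 = 2.25
print(round(y.var(), 2))  # ~ total variance 2 * (1 + rho) = 3.0
```

The gap between Var(E[Y|X1]) and the total output variance is what the uncorrelated-contribution index accounts for.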
Learning place cells, grid cells and invariances with excitatory and inhibitory plasticity
2018-01-01
Neurons in the hippocampus and adjacent brain areas show a large diversity in their tuning to location and head direction, and the underlying circuit mechanisms are not yet resolved. In particular, it is unclear why certain cell types are selective to one spatial variable, but invariant to another. For example, place cells are typically invariant to head direction. We propose that all observed spatial tuning patterns – in both their selectivity and their invariance – arise from the same mechanism: Excitatory and inhibitory synaptic plasticity driven by the spatial tuning statistics of synaptic inputs. Using simulations and a mathematical analysis, we show that combined excitatory and inhibitory plasticity can lead to localized, grid-like or invariant activity. Combinations of different input statistics along different spatial dimensions reproduce all major spatial tuning patterns observed in rodents. Our proposed model is robust to changes in parameters, develops patterns on behavioral timescales and makes distinctive experimental predictions. PMID:29465399
Multi-muscle FES force control of the human arm for arbitrary goals.
Schearer, Eric M; Liao, Yu-Wei; Perreault, Eric J; Tresch, Matthew C; Memberg, William D; Kirsch, Robert F; Lynch, Kevin M
2014-05-01
We present a method for controlling a neuroprosthesis for a paralyzed human arm using functional electrical stimulation (FES) and characterize the errors of the controller. The subject has surgically implanted electrodes for stimulating muscles in her shoulder and arm. Using input/output data, a model mapping muscle stimulations to isometric endpoint forces measured at the subject's hand was identified. We inverted the model of this redundant and coupled multiple-input multiple-output system by minimizing muscle activations and used this inverse for feedforward control. The magnitude of the total root mean square error over a grid in the volume of achievable isometric endpoint force targets was 11% of the total range of achievable forces. Major sources of error were random error due to trial-to-trial variability and model bias due to nonstationary system properties. Because the muscles working collectively are the actuators of the skeletal system, the quantification of errors in force control guides designs of motion controllers for multi-joint, multi-muscle FES systems that can achieve arbitrary goals.
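The feedforward inversion of a redundant stimulation-to-force map can be sketched with nonnegative least squares; the map A, the antagonist-pair construction, and the target force below are hypothetical, and NNLS stands in for the paper's activation-minimizing inversion:

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
# Hypothetical linear map from 6 muscle activations to a 3-D isometric
# endpoint force at the hand: f = A @ u. Building the columns as
# antagonist pairs (B and -B) guarantees every force is reachable.
B = rng.normal(size=(3, 3))
A = np.hstack([B, -B])

f_target = np.array([0.5, -0.2, 0.3])

# Feedforward inverse of the redundant system: nonnegative least
# squares finds activations u >= 0 that reproduce the target force
u, resid = nnls(A, f_target)
print(resid)  # near zero: the target force is reproduced
```

For a true minimum-activation inverse, a quadratic program penalizing ||u|| subject to A u = f would replace the NNLS call.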
Development and weighting of a life cycle assessment screening model
NASA Astrophysics Data System (ADS)
Bates, Wayne E.; O'Shaughnessy, James; Johnson, Sharon A.; Sisson, Richard
2004-02-01
Nearly all life cycle assessment tools available today are high-priced, comprehensive, quantitative models requiring a significant amount of data collection and data input. In addition, most of the available software packages require a great deal of training time to learn how to operate the model software. Even after this time investment, results are not guaranteed because of the number of estimations and assumptions often necessary to run the model. As a result, product development and design teams and environmental specialists need a simplified tool that allows for the qualitative evaluation and "screening" of various design options. This paper presents the development and design of a generic, qualitative life cycle screening model and demonstrates its applicability and ease of use. The model uses qualitative environmental, health, and safety factors, based on site- or product-specific issues, to sensitize the overall results for a given set of conditions. The paper also evaluates the impact of different population input ranking values on model output. The final analysis is based on site- or product-specific variables. The user can then evaluate various design changes and their apparent impact on, or improvement of, the environment, health and safety, compliance cost, and overall corporate liability. Major input parameters can be varied, and factors such as materials use, pollution prevention, waste minimization, worker safety, product life, environmental impacts, return on investment, and recycling are evaluated. The flexibility of the model format is discussed in order to demonstrate its applicability and usefulness within nearly any industry sector. Finally, an example using audience input value scores is compared to other population input results.
Plio-Pleistocene evolution of water mass exchange and erosional input at the Atlantic-Arctic gateway
NASA Astrophysics Data System (ADS)
Teschner, Claudia; Frank, Martin; Haley, Brian A.; Knies, Jochen
2016-05-01
Water mass exchange between the Arctic Ocean and the Norwegian-Greenland Seas has played an important role for the Atlantic thermohaline circulation and Northern Hemisphere climate. We reconstruct past water mass mixing and erosional inputs from the radiogenic isotope compositions of neodymium (Nd), lead (Pb), and strontium (Sr) at Ocean Drilling Program site 911 (leg 151) from 906 m water depth on Yermak Plateau in the Fram Strait over the past 5.2 Myr. The isotopic compositions of past bottom waters were extracted from authigenic oxyhydroxide coatings of the bulk sediments. Neodymium isotope signatures obtained from surface sediments agree well with the present-day deepwater ɛNd signature of -11.0 ± 0.2. Prior to 2.7 Ma the Nd and Pb isotope compositions of the bottom waters show only small variations indicative of a consistent influence of Atlantic waters. Since the major intensification of the Northern Hemisphere Glaciation at 2.7 Ma the seawater Nd isotope composition has varied more markedly due to changes in weathering inputs related to the waxing and waning of the ice sheets on Svalbard, the Barents Sea, and the Eurasian shelf, due to changes in water mass exchange and due to the increasing supply of ice-rafted debris (IRD) originating from the Arctic Ocean. The seawater Pb isotope record also exhibits a higher short-term variability after 2.7 Ma, but there is also a trend toward more radiogenic values, which reflects a combination of changes in input sources and enhanced incongruent weathering inputs of Pb released from freshly eroded old continental rocks.
Logarithmic and power law input-output relations in sensory systems with fold-change detection.
Adler, Miri; Mayo, Avi; Alon, Uri
2014-08-01
Two central biophysical laws describe sensory responses to input signals. One is a logarithmic relationship between input and output, and the other is a power-law relationship. These laws are sometimes called the Weber-Fechner law and the Stevens power law, respectively. The two laws are found in a wide variety of human sensory systems including hearing, vision, taste, and weight perception; they also occur in the responses of cells to stimuli. However, the mechanistic origin of these laws is not fully understood. To address this, we consider a class of biological circuits exhibiting a property called fold-change detection (FCD). In these circuits the response dynamics depend only on the relative change in input signal and not its absolute level, a property which applies to many physiological and cellular sensory systems. We show analytically that by changing a single parameter in the FCD circuits, both logarithmic and power-law relationships emerge; these laws are modified versions of the Weber-Fechner and Stevens laws. The parameter that determines which law is found is the steepness (effective Hill coefficient) of the effect of the internal variable on the output. This finding applies to major circuit architectures found in biological systems, including the incoherent feed-forward loop and nonlinear integral feedback loops. Therefore, if one measures the response to different fold changes in input signal and observes a logarithmic or power law, the present theory can be used to rule out certain FCD mechanisms, and to predict their cooperativity parameter. We demonstrate this approach using data from eukaryotic chemotaxis signaling.
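The FCD property itself is easy to demonstrate numerically. In the sketch below, an internal variable adapts toward the input and the output depends only on their ratio; this is a generic FCD circuit for illustration, not the paper's specific models:

```python
import numpy as np
from scipy.integrate import solve_ivp

def peak_response(u0, fold, n=1.0, tau=1.0):
    # Internal variable z adapts toward the input with timescale tau.
    # Before the step the circuit is adapted to u0 (z = u0), then the
    # input steps to u1 = fold * u0.
    u1 = fold * u0
    sol = solve_ivp(lambda t, s: [(u1 - s[0]) / tau], [0.0, 10.0], [u0],
                    dense_output=True)
    t = np.linspace(0.0, 10.0, 1000)
    z = sol.sol(t)[0]
    y = (u1 / z) ** n  # output depends only on the ratio input / z
    return y.max()

# An identical fold change at very different absolute input levels
# produces an identical peak response: fold-change detection.
p_low = peak_response(u0=0.1, fold=3.0)
p_high = peak_response(u0=100.0, fold=3.0)
print(p_low, p_high)
```

Raising the effective Hill coefficient `n` steepens the output nonlinearity, the parameter the paper identifies as selecting between logarithmic and power-law behavior.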
Blade loss transient dynamics analysis. Volume 3: User's manual for TETRA program
NASA Technical Reports Server (NTRS)
Black, G. R.; Gallardo, V. C.; Storace, A. S.; Sagendorph, F.
1981-01-01
The user's manual for TETRA contains program logic, flow charts, error messages, input sheets, modeling instructions, option descriptions, input variable descriptions, and demonstration problems. The process of obtaining a NASTRAN 17.5-generated modal input file for TETRA is also described with a worked sample.
The appropriate spatial scale for a distributed energy balance model was investigated by: (a) determining the scale of variability associated with the remotely sensed and GIS-generated model input data; and (b) examining the effects of input data spatial aggregation on model resp...
Multiplexer and time duration measuring circuit
Gray, Jr., James
1980-01-01
A multiplexer device is provided for multiplexing data in the form of randomly developed, variable width pulses from a plurality of pulse sources to a master storage. The device includes a first multiplexer unit which includes a plurality of input circuits each coupled to one of the pulse sources, with all input circuits being disabled when one input circuit receives an input pulse so that only one input pulse is multiplexed by the multiplexer unit at any one time.
The impact of 14-nm photomask uncertainties on computational lithography solutions
NASA Astrophysics Data System (ADS)
Sturtevant, John; Tejnil, Edita; Lin, Tim; Schultze, Steffen; Buck, Peter; Kalk, Franklin; Nakagawa, Kent; Ning, Guoxiang; Ackmann, Paul; Gans, Fritz; Buergel, Christian
2013-04-01
Computational lithography solutions rely upon accurate process models to faithfully represent the imaging system output for a defined set of process and design inputs. These models, which must balance accuracy demands against simulation runtime boundary conditions, rely upon the accurate representation of multiple parameters associated with the scanner and the photomask. While certain system input variables, such as scanner numerical aperture, can be empirically tuned to wafer CD data over a small range around the presumed set point, it can be dangerous to do so since CD errors can alias across multiple input variables. Therefore, many input variables for simulation are based upon designed or recipe-requested values or independent measurements. It is known, however, that certain measurement methodologies, while precise, can have significant inaccuracies. Additionally, there are known errors associated with the representation of certain system parameters. With shrinking total CD control budgets, appropriate accounting for all sources of error becomes more important, and the cumulative consequence of input errors to the computational lithography model can become significant. In this work, we examine, with a simulation sensitivity study, the impact of errors in the representation of photomask properties including CD bias, corner rounding, refractive index, thickness, and sidewall angle. The factors that are most critical to represent accurately in the model are cataloged. CD bias values are based on state-of-the-art mask manufacturing data, while the changes in the other variables are speculative, highlighting the need for improved metrology and awareness.
Prediction of problematic wine fermentations using artificial neural networks.
Román, R César; Hernández, O Gonzalo; Urtubia, U Alejandra
2011-11-01
Artificial neural networks (ANNs) have been used for the recognition of non-linear patterns, a characteristic of bioprocesses like wine production. In this work, ANNs were tested for their ability to predict problems in wine fermentation. A database of about 20,000 data points from industrial fermentations of Cabernet Sauvignon with 33 variables was used. Two different ways of inputting data into the model were studied: by points and by fermentation. Additionally, different sub-cases were studied by varying the predictor variables (total sugar, alcohol, glycerol, density, organic acids and nitrogen compounds) and the time of fermentation (72, 96 and 256 h). Inputting data by fermentation gave better results than inputting data by points. In fact, it was possible to predict 100% of normal and problematic fermentations using three predictor variables: sugars, density and alcohol at 72 h (3 days). Overall, the ANNs achieved 80% prediction accuracy using only one predictor variable at 72 h; however, it is recommended to add more fermentations to confirm this promising result.
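A sketch of the classification idea, using a synthetic stand-in for the database (three hypothetical predictors at 72 h and an invented labeling rule, not the industrial data):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(7)
# Synthetic stand-in: three predictors at 72 h (sugar, density,
# alcohol) and a normal/problematic label generated by a hypothetical
# rule (sluggish runs keep sugar high while alcohol stays low)
n = 600
X = rng.normal(size=(n, 3))
y = (X[:, 0] - X[:, 2] + 0.3 * rng.normal(size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000,
                    random_state=0).fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
print(acc)  # held-out classification accuracy
```

Inputting "by fermentation" would correspond to one feature vector per run (e.g. summary statistics over time) rather than one per time point.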
Amiryousefi, Mohammad Reza; Mohebbi, Mohebbat; Khodaiyan, Faramarz
2014-01-01
The objectives of this study were to use image analysis and an artificial neural network (ANN) to predict mass transfer kinetics as well as color changes and shrinkage of deep-fat fried ostrich meat cubes. Two generalized feedforward networks were developed separately, using the operating conditions as inputs. Results based on the high correlation coefficients between experimental and predicted values showed proper fitting. Sensitivity analysis of the selected ANNs showed that, among the input variables, moisture content (MC) and fat content (FC) were most sensitive to frying temperature, compared with the other input variables. Similarly, for the second ANN architecture, microwave power density was the most influential variable, having the maximum influence on both shrinkage percentage and color changes. Copyright © 2013 Elsevier Ltd. All rights reserved.
Two-Stage Variable Sample-Rate Conversion System
NASA Technical Reports Server (NTRS)
Tkacenko, Andre
2009-01-01
A two-stage variable sample-rate conversion (SRC) system has been proposed as part of a digital signal-processing system in a digital communication radio receiver that utilizes a variety of data rates. The proposed system would be used as an interface between (1) an analog-to-digital converter used in the front end of the receiver to sample an intermediate-frequency signal at a fixed input rate and (2) digitally implemented tracking loops in subsequent stages that operate at various sample rates that are generally lower than the input sample rate. This two-stage system would be capable of converting from an input sample rate to a desired lower output sample rate that could be variable and not necessarily a rational fraction of the input rate.
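The two-stage idea can be approximated with rational-ratio polyphase stages (the proposed system additionally supports variable, non-rational output rates, which this sketch does not attempt); the rates below are hypothetical:

```python
import numpy as np
from scipy.signal import resample_poly

fs_in = 96_000   # hypothetical fixed front-end (ADC) sample rate
fs_out = 17_000  # hypothetical lower tracking-loop rate

t = np.arange(4800) / fs_in        # 50 ms of samples
x = np.sin(2 * np.pi * 1_000 * t)  # 1 kHz test tone

# Stage 1: coarse conversion most of the way down (96 kHz -> 24 kHz)
stage1 = resample_poly(x, up=1, down=4)

# Stage 2: fine rational conversion to the final rate (24 kHz -> 17 kHz)
stage2 = resample_poly(stage1, up=17, down=24)

expected = len(x) * fs_out // fs_in  # 850 output samples for 4800 in
print(len(stage2), expected)
```

Splitting the conversion into a cheap integer-factor stage followed by a fine-ratio stage keeps the anti-aliasing filters short, which is the usual motivation for a two-stage design.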
Luo, Zhongkui; Feng, Wenting; Luo, Yiqi; Baldock, Jeff; Wang, Enli
2017-10-01
Soil organic carbon (SOC) dynamics are regulated by the complex interplay of climatic, edaphic and biotic conditions. However, the interrelation of SOC and these drivers and their potential connection networks are rarely assessed quantitatively. Using observations of SOC dynamics with detailed soil properties from 90 field trials at 28 sites under different agroecosystems across the Australian cropping regions, we investigated the direct and indirect effects of climate, soil properties, carbon (C) inputs and soil C pools (a total of 17 variables) on the SOC change rate (rC, Mg C ha⁻¹ yr⁻¹). Among these variables, we found that the most influential on rC were the average C input amount, annual precipitation, and the total SOC stock at the beginning of the trials. Overall, C inputs (including C input amount and pasture frequency in the crop rotation system) accounted for 27% of the relative influence on rC, followed by climate at 25% (including precipitation and temperature), soil C pools at 24% (including pool size and composition) and soil properties (such as cation exchange capacity, clay content and bulk density) at 24%. Path analysis identified a network of intercorrelations of climate, soil properties, C inputs and soil C pools in determining rC. The direct correlation of rC with climate was significantly weakened after removing the effects of soil properties and C pools, and vice versa. These results reveal the relative importance of climate, soil properties, C inputs and C pools and their complex interconnections in regulating SOC dynamics. Ignoring the impact of changes in soil properties, C pool composition and C input (quantity and quality) on SOC dynamics is likely one of the main sources of uncertainty in SOC predictions from process-based SOC models. © 2017 John Wiley & Sons Ltd.
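Relative influence of the kind reported above can be estimated with boosted regression trees; the sketch below uses synthetic drivers with invented effect sizes, not the Australian trial data:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(3)
n = 1000
# Hypothetical drivers of the SOC change rate rC: C input amount,
# precipitation, initial SOC stock, and a weakly related soil property
c_input = rng.normal(size=n)
precip = rng.normal(size=n)
soc0 = rng.normal(size=n)
clay = rng.normal(size=n)
r_c = 0.8 * c_input + 0.5 * precip - 0.4 * soc0 + 0.1 * rng.normal(size=n)

X = np.column_stack([c_input, precip, soc0, clay])
model = GradientBoostingRegressor(random_state=0).fit(X, r_c)

# Normalized relative influence of each driver (sums to 1)
for name, imp in zip(["C input", "precipitation", "SOC0", "clay"],
                     model.feature_importances_):
    print(f"{name}: {imp:.2f}")
```

Untangling direct from indirect effects, as in the paper's path analysis, requires a structural model on top of these influence scores.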
NASA Technical Reports Server (NTRS)
Jordan, T. M.
1970-01-01
A description of the FASTER-III program for Monte Carlo calculation of photon and neutron transport in complex geometries is presented. Major revisions include the capability of calculating minimum-weight shield configurations for primary and secondary radiation and optimal importance sampling parameters. The program description includes a user's manual describing the preparation of input data cards, the printout from a sample problem including the data card images, definitions of Fortran variables, the program logic, and the control cards required to run on the IBM 7094, IBM 360, UNIVAC 1108 and CDC 6600 computers.
NASA Technical Reports Server (NTRS)
Vanderploeg, J. M.; Stewart, D. F.; Davis, J. R.
1986-01-01
Space motion sickness clinical characteristics, time course, prediction of susceptibility, and effectiveness of countermeasures were evaluated. Although there is wide individual variability, there appear to be typical patterns of symptom development. The duration of symptoms ranges from several hours to four days, with the majority of individuals being symptom free by the end of the third day. The etiology of this malady remains uncertain, but evidence points to reinterpretation of otolith inputs as a key factor in the response of the neurovestibular system. Prediction of susceptibility and severity remains unsatisfactory. Countermeasures tried include medications, preflight adaptation, and autogenic feedback training. No countermeasure is entirely successful in eliminating or alleviating symptoms.
Novel approach for streamflow forecasting using a hybrid ANFIS-FFA model
NASA Astrophysics Data System (ADS)
Yaseen, Zaher Mundher; Ebtehaj, Isa; Bonakdari, Hossein; Deo, Ravinesh C.; Danandeh Mehr, Ali; Mohtar, Wan Hanna Melini Wan; Diop, Lamine; El-shafie, Ahmed; Singh, Vijay P.
2017-11-01
The present study proposes a new hybrid evolutionary Adaptive Neuro-Fuzzy Inference System (ANFIS) approach for monthly streamflow forecasting. The proposed method is a novel combination of the ANFIS model with the firefly algorithm as an optimizer tool to construct a hybrid ANFIS-FFA model. The results of the ANFIS-FFA model are compared with those of the classical ANFIS model, which utilizes the fuzzy c-means (FCM) clustering method in the Fuzzy Inference System (FIS) generation. Historical monthly streamflow data for the Pahang River, a major river system in Malaysia characterized by highly stochastic hydrological patterns, are used in the study. Sixteen different input combinations with one to five time-lagged input variables are incorporated into the ANFIS-FFA and ANFIS models to consider the antecedent seasonal variations in the historical streamflow data. The mean absolute error (MAE), root mean square error (RMSE) and correlation coefficient (r) are used to evaluate the forecasting performance of the ANFIS-FFA model. In conjunction with these metrics, the refined Willmott's Index (Drefined), Nash-Sutcliffe coefficient (ENS) and Legates-McCabe Index (ELM) are also utilized as normalized goodness-of-fit metrics. Comparison of the results reveals that the FFA is able to improve the forecasting accuracy of the hybrid ANFIS-FFA model (r = 1; RMSE = 0.984; MAE = 0.364; ENS = 1; ELM = 0.988; Drefined = 0.994) applied to monthly streamflow forecasting, in comparison with the traditional ANFIS model (r = 0.998; RMSE = 3.276; MAE = 1.553; ENS = 0.995; ELM = 0.950; Drefined = 0.975). The results also show that the ANFIS-FFA is not only superior to the ANFIS model but also exhibits a parsimonious modelling framework for streamflow forecasting, requiring a smaller number of input variables to yield comparatively better performance.
The FFA optimizer can thus surpass the accuracy of the traditional ANFIS model in general and is able to correct the inaccurate forecasts produced by the ANFIS model for extremely low flows. The present results have wider implications not only for streamflow forecasting but also for other hydro-meteorological forecasting variables requiring only historical input data, attaining a greater level of predictive accuracy through the incorporation of the FFA algorithm as an optimization tool in an ANFIS model.
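The goodness-of-fit metrics used in this comparison can be computed directly; a sketch of RMSE, MAE, the Nash-Sutcliffe coefficient (ENS) and the Legates-McCabe index (ELM) on toy data (the refined Willmott index is omitted):

```python
import numpy as np

def rmse(obs, sim):
    return float(np.sqrt(np.mean((sim - obs) ** 2)))

def mae(obs, sim):
    return float(np.mean(np.abs(sim - obs)))

def nash_sutcliffe(obs, sim):
    # ENS = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2)
    return float(1 - np.sum((obs - sim) ** 2)
                 / np.sum((obs - obs.mean()) ** 2))

def legates_mccabe(obs, sim):
    # ELM = 1 - sum|obs - sim| / sum|obs - mean(obs)|
    return float(1 - np.sum(np.abs(obs - sim))
                 / np.sum(np.abs(obs - obs.mean())))

# Toy observed/forecast series (hypothetical streamflow values)
obs = np.array([10.0, 12.0, 9.0, 14.0, 11.0])
sim = np.array([10.5, 11.5, 9.5, 13.0, 11.0])
print(rmse(obs, sim), mae(obs, sim),
      nash_sutcliffe(obs, sim), legates_mccabe(obs, sim))
```

ELM penalizes large errors less heavily than ENS because it uses absolute rather than squared deviations, which is why the two can rank models differently.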
NASA Astrophysics Data System (ADS)
Bertin, Daniel
2017-02-01
An innovative 3-D numerical model for the dynamics of volcanic ballistic projectiles is presented here. The model focuses on ellipsoidal particles and improves previous approaches by considering a horizontal wind field, virtual mass forces, and drag forces subject to variable shape-dependent drag coefficients. Modeling suggests that the projectile's launch velocity and ejection angle are first-order parameters influencing ballistic trajectories. The projectile's density and minor radius are second-order factors, whereas both the intermediate and major radii of the projectile are of third order. Comparing output parameters under different input data highlights the importance of considering a horizontal wind field and variable shape-dependent drag coefficients in ballistic modeling, which suggests that they should be included in every ballistic model. On the other hand, virtual mass forces should be discarded since they contribute almost nothing to ballistic trajectories. Simulation results were used to constrain some crucial input parameters (launch velocity, ejection angle, wind speed, and wind azimuth) of the block that formed the biggest and most distal ballistic impact crater during the 1984-1993 eruptive cycle of Lascar volcano, Northern Chile. Subsequently, up to 10⁶ simulations were performed, with nine ejection parameters defined by a Latin-hypercube sampling approach. Simulation results were summarized as a quantitative probabilistic hazard map for ballistic projectiles. Transects were also made to depict aerial hazard zones based on the same probabilistic procedure. Both maps combined can be used as a hazard prevention tool for ground and aerial transit near unresting volcanoes.
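Latin-hypercube sampling of ejection parameters can be sketched with scipy's quasi-Monte Carlo module; the four parameters and their ranges below are hypothetical stand-ins for the nine used in the study:

```python
import numpy as np
from scipy.stats import qmc

# Hypothetical ranges for four ejection parameters:
# launch speed (m/s), ejection angle (deg), wind speed (m/s),
# wind azimuth (deg)
lower = [50.0, 10.0, 0.0, 0.0]
upper = [300.0, 80.0, 20.0, 360.0]

sampler = qmc.LatinHypercube(d=4, seed=0)
unit = sampler.random(n=1000)            # 1000 points in [0, 1)^4
samples = qmc.scale(unit, lower, upper)  # rescale to physical ranges

# Each marginal is stratified: one sample per 1/1000 probability slice
print(samples.shape)
```

Each row would seed one trajectory simulation, and impact locations across all runs are then binned into exceedance probabilities for the hazard map.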
NASA Astrophysics Data System (ADS)
Shin, K. H.; Kim, K. H.; Ki, S. J.; Lee, H. G.
2017-12-01
The vulnerability assessment tool at a Tier 1 level, although not often used for regulatory purposes, helps establish pollution prevention and management strategies in areas of potential environmental concern such as soil and ground water. In this study, the Neural Network Pattern Recognition Tool embedded in MATLAB was used to allow initial screening of soil and groundwater pollution based on data compiled across about 1000 previously contaminated sites in Korea. The input variables included a series of parameters closely related to the downward movement of water and contaminants through soil and ground water, whereas multiple classes were assigned to the sum of concentrations of the major pollutants detected. Results showed that, in accordance with diverse pollution indices for soil and ground water, pollution levels in both media were strongly modulated by site-specific characteristics such as intrinsic soil and other geologic properties, in addition to pollution sources and rainfall. However, classification accuracy was very sensitive to the number of classes defined as well as the types of variables incorporated, requiring careful selection of input variables and output categories. Therefore, we believe that the proposed methodology can be used not only to modify existing pollution indices so that they are more suitable for addressing local vulnerability, but also to develop a unique assessment tool to support decision making based on locally or nationally available data. This study was funded by a grant from the GAIA project (2016000560002), Korea Environmental Industry & Technology Institute, Republic of Korea.
Advances in Estimating Methane Emissions from Enteric Fermentation
NASA Astrophysics Data System (ADS)
Kebreab, E.; Appuhamy, R.
2016-12-01
Methane from enteric fermentation of livestock is the largest contributor to agricultural GHG emissions. The quantification of methane emissions from livestock on a global scale relies on prediction models, because measurements require specialized equipment and may be expensive. Most countries use a fixed number (kg methane/year) or calculate a proportion of energy intake to estimate enteric methane emissions in national inventories. However, diet composition significantly regulates enteric methane production in addition to total feed intake and is thus the main target in formulating mitigation options. The two current methodologies are not able to assess mitigation options; therefore, new estimation methods are required that can take feed composition into account. The availability of information on livestock production systems has increased substantially, enabling the development of more detailed methane prediction models. A limited number of process-based models have been developed that represent the biological relationships in methane production; however, these require extensive inputs and specialized software that may not be easily available. Empirical models may provide a better alternative in practical situations due to their smaller input requirements. Several models have been developed in the last 10 years, but none of them works equally well across all regions of the world. The more successful models, particularly in North America, require three major inputs: feed (or energy) intake, and the fiber and fat concentrations of the diet. Given the significant variability of emissions within regions, models that are able to capture regional variability in feed intake and diet composition perform best in model evaluation with independent data. The utilization of such models may reduce uncertainties associated with the prediction of methane emissions and allow a better examination and representation of policies regulating emissions from cattle.
Jones, B.H.; Noble, M.A.; Dickey, T.D.
2002-01-01
Moorings and towyo mapping were used to study the temporal and spatial variability of physical processes and suspended particulate material over the continental shelf of the Palos Verdes Peninsula in southwestern Los Angeles, California, during the late summer of 1992 and winter of 1992-93. Seasonal evolution of the hydrographic structure is related to seasonal atmospheric forcing. During summer, stratification results from heating of the upper layer. Summer insolation coupled with the stratification results in a slight near-surface salinity increase due to evaporation. Winter cooling removes much of the upper-layer stratification, but winter storms can introduce sufficient quantities of freshwater into the shelf water column to again add stratification through the buoyancy input. Vertical mixing of the low-salinity surface water deeper into the water column decreases the sharp near-surface stratification and reduces the overall salinity of the upper water column. Moored conductivity measurements indicate that the decreased salinity persisted for at least 2 months after a major storm, with additional freshwater inputs through the period. Four particulate groups contributed to the suspended particulate load in the water column: phytoplankton, resuspended sediments, and particles in treated sewage effluent were observed in every towyo mapping cruise; terrigenous particles are introduced through runoff from winter rainstorms. Terrigenous suspended particulate material sinks from the water column in <9 days, and phytoplankton respond to the stormwater input of buoyancy and nutrients within the same period. The suspended particles near the bottom have spatially patchy distributions but are always present in hydrographic surveys of the shelf. Temporal variations in these particles do not show a significant tidal response, but they may be maintained in suspension by internal wave and tide processes impinging on the shelf. © 2002 Elsevier Science Ltd. All rights reserved.
Innovations in Basic Flight Training for the Indonesian Air Force
1990-12-01
microeconomic theory that could approximate the optimum mix of training hours between an aircraft and simulator, and therefore improve cost effectiveness... The microeconomic theory being used is normally employed when showing production with two variable inputs. An example of variable inputs would be labor... NAS Corpus Christi, Texas, Aerodynamics of the T-34C, 1989. 26. Naval Air Training Command, NAS Corpus Christi, Texas, Meteorological Theory Workbook
A Bayesian approach to model structural error and input variability in groundwater modeling
NASA Astrophysics Data System (ADS)
Xu, T.; Valocchi, A. J.; Lin, Y. F. F.; Liang, F.
2015-12-01
Effective water resource management typically relies on numerical models to analyze groundwater flow and solute transport processes. Model structural error (due to simplification and/or misrepresentation of the "true" environmental system) and input forcing variability (which commonly arises since some inputs are uncontrolled or estimated with high uncertainty) are ubiquitous in groundwater models. Calibration that overlooks errors in model structure and input data can lead to biased parameter estimates and compromised predictions. We present a fully Bayesian approach for a complete assessment of uncertainty for spatially distributed groundwater models. The approach explicitly recognizes stochastic input and uses data-driven error models based on nonparametric kernel methods to account for model structural error. We employ exploratory data analysis to assist in specifying informative priors for the error models to improve identifiability. The inference is facilitated by an efficient sampling algorithm based on DREAM-ZS and a parameter subspace multiple-try strategy to reduce the required number of forward simulations of the groundwater model. We demonstrate the Bayesian approach through a synthetic case study of surface water-groundwater interaction under changing pumping conditions. It is found that explicit treatment of errors in model structure and input data (groundwater pumping rate) has a substantial impact on the posterior distribution of groundwater model parameters. Using error models reduces predictive bias caused by parameter compensation. In addition, input variability increases parametric and predictive uncertainty. The Bayesian approach allows for a comparison among the contributions from various error sources, which could inform future model improvement and data collection efforts on how to best direct resources towards reducing predictive uncertainty.
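The effect described above (treating an uncertain input as known versus marginalising over it) can be illustrated with a deliberately tiny stand-in for the groundwater model. This is a minimal sketch, not the paper's DREAM-ZS machinery: the "model" is a one-parameter drawdown relation, the uncertain input is a pumping rate, and the posterior is evaluated on a grid rather than sampled. All numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "groundwater" model (illustrative, not the paper's): drawdown s = k * q,
# with aquifer parameter k and uncertain pumping rate q.
k_true, q_true, sigma = 2.0, 5.0, 0.5
obs = k_true * q_true + rng.normal(0.0, sigma, size=20)
q_reported = 4.5                        # metered rate: biased low and uncertain

def log_like(k, q):
    return -0.5 * np.sum((obs - k * q) ** 2) / sigma**2

ks = np.linspace(1.0, 3.5, 200)
qs = np.linspace(3.0, 6.0, 200)

# Case 1: input treated as perfectly known (q fixed at the reported value)
ll = np.array([log_like(k, q_reported) for k in ks])
post_fixed = np.exp(ll - ll.max())
post_fixed /= post_fixed.sum()
sd_fixed = float(np.sqrt(np.sum(post_fixed * (ks - np.sum(post_fixed * ks)) ** 2)))

# Case 2: stochastic input with prior q ~ N(q_reported, 0.5), marginalised out
log_post = np.array([[log_like(k, q) - 0.5 * (q - q_reported) ** 2 / 0.5**2
                      for q in qs] for k in ks])
post = np.exp(log_post - log_post.max())
marg_k = post.sum(axis=1)
marg_k /= marg_k.sum()
k_mean = float(np.sum(marg_k * ks))
sd_joint = float(np.sqrt(np.sum(marg_k * (ks - k_mean) ** 2)))
```

Consistent with the abstract's finding, acknowledging input variability widens the parameter posterior (`sd_joint` is many times `sd_fixed`), whereas fixing the biased input yields an overconfident, biased estimate.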
NASA Astrophysics Data System (ADS)
Zounemat-Kermani, Mohammad
2012-08-01
In this study, the ability of two models, multiple linear regression (MLR) and a Levenberg-Marquardt (LM) feed-forward neural network, to estimate the hourly dew point temperature was examined. Dew point temperature is the temperature at which water vapor in the air condenses into liquid. This temperature can be useful in estimating meteorological variables such as fog, rain, snow, dew, and evapotranspiration and in investigating agronomical issues such as stomatal closure in plants. The availability of hourly records of climatic data (air temperature, relative humidity and pressure) which could be used to predict dew point temperature initiated the practice of modeling. Additionally, the wind vector (wind speed magnitude and direction) and a conceptual input of weather condition were employed as other input variables. Three quantitative standard statistical performance evaluation measures, i.e. the root mean squared error, mean absolute error, and absolute logarithmic Nash-Sutcliffe efficiency coefficient (|Log(NS)|), were employed to evaluate the performances of the developed models. The results showed that applying the wind vector and weather condition as input vectors along with meteorological variables could slightly increase the ANN and MLR predictive accuracy. The results also revealed that LM-NN was superior to the MLR model, and the best performance was obtained by considering all potential input variables in terms of the different evaluation criteria.
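The MLR part of this comparison is easy to sketch. The following toy example, with synthetic hourly records generated from a Magnus-type dew point approximation (not the study's data), shows how adding wind-vector inputs alongside the basic meteorological variables can reduce the regression error, mirroring the abstract's finding.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500

# Synthetic hourly records (illustrative, not the study's data set)
t_air = rng.uniform(5, 35, n)           # air temperature, deg C
rh = rng.uniform(20, 100, n)            # relative humidity, %
pressure = rng.uniform(990, 1030, n)    # hPa
wind_u = rng.uniform(-10, 10, n)        # wind vector components, m/s
wind_v = rng.uniform(-10, 10, n)

# Magnus-type approximation as "ground truth", with a small assumed wind effect
gamma = np.log(rh / 100.0) + 17.62 * t_air / (243.12 + t_air)
t_dew = 243.12 * gamma / (17.62 - gamma) + 0.05 * wind_u

def mlr_rmse(X, y):
    """Fit ordinary least squares with an intercept and return training RMSE."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return float(np.sqrt(np.mean((X1 @ beta - y) ** 2)))

rmse_met = mlr_rmse(np.column_stack([t_air, rh, pressure]), t_dew)
rmse_all = mlr_rmse(np.column_stack([t_air, rh, pressure, wind_u, wind_v]), t_dew)
```

Because the wind component genuinely enters the synthetic target, the extended input set fits strictly better; the residual error that remains reflects the dew point's nonlinearity in relative humidity, which is the gap a neural network such as LM-NN can close.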
NASA Technical Reports Server (NTRS)
Meyn, Larry A.
2018-01-01
One of the goals of NASA's Revolutionary Vertical Lift Technology Project (RVLT) is to provide validated tools for multidisciplinary design, analysis and optimization (MDAO) of vertical lift vehicles. As part of this effort, the software package RotorCraft Optimization Tools (RCOTOOLS) is being developed to facilitate incorporating key rotorcraft conceptual design codes into optimizations using the OpenMDAO multidisciplinary optimization framework written in Python. RCOTOOLS, also written in Python, currently supports the incorporation of the NASA Design and Analysis of RotorCraft (NDARC) vehicle sizing tool and the Comprehensive Analytical Model of Rotorcraft Aerodynamics and Dynamics II (CAMRAD II) analysis tool into OpenMDAO-driven optimizations. Both of these tools use detailed, file-based inputs and outputs, so RCOTOOLS provides software wrappers to update input files with new design variable values, execute these codes and then extract specific response variable values from the file outputs. These wrappers are designed to be flexible and easy to use. RCOTOOLS also provides several utilities to aid in optimization model development, including Graphical User Interface (GUI) tools for browsing input and output files in order to locate the text strings that mark specific variables as optimization inputs and responses. This paper provides an overview of RCOTOOLS and its use.
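The wrapper pattern described here (rewrite a templated input deck, run the external code, regex-extract a response from its output file) can be sketched generically. This is not the RCOTOOLS API; the file format, the `run_sizing` name, and the gross-weight formula standing in for the external analysis are all hypothetical.

```python
import pathlib
import re
import tempfile

# Hypothetical input deck template; design variable values are substituted in
TEMPLATE = "ROTOR_RADIUS = {radius}\nTIP_SPEED = {tip_speed}\n"

def run_sizing(workdir: pathlib.Path, radius: float, tip_speed: float) -> float:
    """Write the input deck, run the (stand-in) analysis, parse the response."""
    inp = workdir / "case.inp"
    out = workdir / "case.out"
    inp.write_text(TEMPLATE.format(radius=radius, tip_speed=tip_speed))
    # A real wrapper would invoke the external code here, e.g. something like
    # subprocess.run([sizing_tool, str(inp)], check=True); we fake its output:
    vals = dict(re.findall(r"(\w+) = ([\d.]+)", inp.read_text()))
    gross_weight = 900.0 + 12.0 * float(vals["ROTOR_RADIUS"]) ** 2
    out.write_text(f"GROSS_WEIGHT = {gross_weight:.1f}\n")
    # Extract the response variable by matching its identifying text string
    m = re.search(r"GROSS_WEIGHT = ([\d.]+)", out.read_text())
    return float(m.group(1))

with tempfile.TemporaryDirectory() as d:
    gw = run_sizing(pathlib.Path(d), radius=7.5, tip_speed=220.0)
```

An optimizer driver then only ever calls `run_sizing` with new design variable values, which is exactly the decoupling a file-based wrapper buys.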
Soft Mixer Assignment in a Hierarchical Generative Model of Natural Scene Statistics
Schwartz, Odelia; Sejnowski, Terrence J.; Dayan, Peter
2010-01-01
Gaussian scale mixture models offer a top-down description of signal generation that captures key bottom-up statistical characteristics of filter responses to images. However, the pattern of dependence among the filters for this class of models is prespecified. We propose a novel extension to the gaussian scale mixture model that learns the pattern of dependence from observed inputs and thereby induces a hierarchical representation of these inputs. Specifically, we propose that inputs are generated by gaussian variables (modeling local filter structure), multiplied by a mixer variable that is assigned probabilistically to each input from a set of possible mixers. We demonstrate inference of both components of the generative model, for synthesized data and for different classes of natural images, such as a generic ensemble and faces. For natural images, the mixer variable assignments show invariances resembling those of complex cells in visual cortex; the statistics of the gaussian components of the model are in accord with the outputs of divisive normalization models. We also show how our model helps interrelate a wide range of models of image statistics and cortical processing. PMID:16999575
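The defining statistics of a Gaussian scale mixture are easy to verify numerically: multiplying Gaussian variables by a shared mixer produces marginals that are heavy-tailed and linearly uncorrelated, yet dependent through their amplitudes. The sketch below uses an arbitrary lognormal mixer for illustration; it is not the paper's learned mixer-assignment model.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000

# Gaussian scale mixture: x_i = sqrt(v) * g_i, with mixer v shared across filters
v = rng.lognormal(mean=0.0, sigma=0.7, size=n)      # mixer variable (illustrative)
g1 = rng.normal(size=n)
g2 = rng.normal(size=n)
x1, x2 = np.sqrt(v) * g1, np.sqrt(v) * g2

def kurtosis(x):
    x = x - x.mean()
    return float(np.mean(x**4) / np.mean(x**2) ** 2)

k = kurtosis(x1)                                    # heavy tails: > 3 (Gaussian value)
r_raw = float(np.corrcoef(x1, x2)[0, 1])            # ~0: linearly uncorrelated
r_amp = float(np.corrcoef(np.abs(x1), np.abs(x2))[0, 1])  # > 0: shared-variance dependence
```

These are precisely the filter-response statistics (sparse marginals, "bowtie" amplitude dependence) that motivate inferring the mixer, and that divisive normalization removes.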
Rowe, Meredith L; Levine, Susan C; Fisher, Joan A; Goldin-Meadow, Susan
2009-01-01
Children with unilateral pre- or perinatal brain injury (BI) show remarkable plasticity for language learning. Previous work highlights the important role that lesion characteristics play in explaining individual variation in plasticity in the language development of children with BI. The current study examines whether the linguistic input that children with BI receive from their caregivers also contributes to this early plasticity, and whether linguistic input plays a similar role in children with BI as it does in typically developing (TD) children. Growth in vocabulary and syntactic production is modeled for 80 children (53 TD, 27 BI) between 14 and 46 months. Findings indicate that caregiver input is an equally potent predictor of vocabulary growth in children with BI and in TD children. In contrast, input is a more potent predictor of syntactic growth for children with BI than for TD children. Controlling for input, lesion characteristics (lesion size, type, seizure history) also affect the language trajectories of children with BI. Thus, findings illustrate how both variability in the environment (linguistic input) and variability in the organism (lesion characteristics) work together to contribute to plasticity in language learning.
Dummer, Benjamin; Wieland, Stefan; Lindner, Benjamin
2014-01-01
A major source of random variability in cortical networks is the quasi-random arrival of presynaptic action potentials from many other cells. In network studies as well as in the study of the response properties of single cells embedded in a network, synaptic background input is often approximated by Poissonian spike trains. However, the output statistics of the cells are in most cases far from Poisson. This is inconsistent with the assumption of similar spike-train statistics for pre- and postsynaptic cells in a recurrent network. Here we tackle this problem for the popular class of integrate-and-fire neurons and study self-consistent statistics of input and output spectra of neural spike trains. Instead of actually using a large network, we use an iterative scheme, in which we simulate a single neuron over several generations. In each of these generations, the neuron is stimulated with surrogate stochastic input that has statistics similar to the output of the previous generation. For the surrogate input, we employ two distinct approximations: (i) a superposition of renewal spike trains with the same interspike interval density as observed in the previous generation and (ii) a Gaussian current with a power spectrum proportional to that observed in the previous generation. For input parameters that correspond to balanced input in the network, both the renewal and the Gaussian iteration procedure converge quickly and yield comparable results for the self-consistent spike-train power spectrum. We compare our results to large-scale simulations of a random sparsely connected network of leaky integrate-and-fire neurons (Brunel, 2000) and show that in the asynchronous regime close to a state of balanced synaptic input from the network, our iterative schemes provide an excellent approximation to the autocorrelation of spike trains in the recurrent network.
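The generational idea can be shown in a radically reduced form. The sketch below iterates only the firing *rate* to self-consistency (the paper matches full interval densities and power spectra); each generation, a leaky integrate-and-fire neuron receives balanced Poisson input whose per-train rate equals the previous generation's output rate. All parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

def lif_rate(rate_in, n_pre=400, w=0.05, dt=1e-4, t_sim=10.0,
             tau=0.02, v_th=1.0, v_reset=0.0, drive=1.2):
    """Output rate (Hz) of a leaky integrate-and-fire neuron driven by n_pre
    balanced Poisson inputs (half excitatory, half inhibitory) at rate_in each."""
    steps = int(t_sim / dt)
    exc = rng.binomial(n_pre // 2, rate_in * dt, size=steps)
    inh = rng.binomial(n_pre // 2, rate_in * dt, size=steps)
    v, spikes = 0.0, 0
    for e, i in zip(exc, inh):
        v += dt / tau * (drive - v) + w * (e - i)   # balanced: mean input cancels
        if v >= v_th:
            v, spikes = v_reset, spikes + 1
    return spikes / t_sim

# Self-consistency iteration: each generation is stimulated with surrogate
# Poisson input whose rate equals the previous generation's output rate.
r, history = 5.0, [5.0]
for _ in range(6):
    r = lif_rate(r)
    history.append(r)
```

In the balanced regime the map from input rate to output rate is weakly sensitive, so the iteration settles quickly; the full scheme applies the same fixed-point logic to the entire spike-train power spectrum rather than to a single scalar.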
Variable input observer for state estimation of high-rate dynamics
NASA Astrophysics Data System (ADS)
Hong, Jonathan; Cao, Liang; Laflamme, Simon; Dodson, Jacob
2017-04-01
High-rate systems operating in the 10 μs to 10 ms timescale are likely to experience damaging effects due to rapid environmental changes (e.g., turbulence, ballistic impact). Some of these systems could benefit from real-time state estimation to enable their full potential. Examples of such systems include blast mitigation strategies, automotive airbag technologies, and hypersonic vehicles. Particular challenges in high-rate state estimation include: 1) complex time-varying nonlinearities of the system (e.g. noise, uncertainty, and disturbance); 2) rapid environmental changes; and 3) the requirement of a high convergence rate. Here, we propose using a Variable Input Observer (VIO) concept to vary the input space as the event unfolds. When systems experience high-rate dynamics, rapid changes in the system occur. To investigate the VIO's potential, a VIO-based neuro-observer is constructed and studied using experimental data collected from a laboratory impact test. Results demonstrate that the input space is unique to different impact conditions, and that adjusting the input space throughout the dynamic event produces better estimations than using a traditional fixed input space strategy.
NASA Astrophysics Data System (ADS)
Herbeck, Lucia; Kwiatkowski, Cornelia; Mohtadi, Mahyar; Jennerjahn, Tim
2014-05-01
Beginning a few thousand years ago, global climate and environmental change have become more and more affected by human activities. Hence, quantifying the 'human component' becomes increasingly important in order to predict future developments. Indonesia and the surrounding oceans are key in this respect, because it is the region that (i) receives the highest inputs of water, sediment and associated dissolved and particulate substances and (ii) suffers from anthropogenically modified landscapes and coastal zones. Opposing the global trend, land-based human activities have increased the sediment input into the ocean from Indonesia since pre-human times. Nevertheless, there are strong gradients in land use/cover and resulting river fluxes within Indonesia, as, for example, between Java and Kalimantan. The major goal of this study is to identify the contribution of human activities in river catchments (i.e. land use/cover change, hydrological alterations) to gradients in carbon and nitrogen deposition in sediments of the Java Sea between densely populated Java and sparsely populated Kalimantan during the Late Holocene. We hypothesized that the riverine input of C and N increased during the late Holocene and increased more off Java than off Kalimantan. Sediment cores (80 to 130 cm long) off major river mouths from Java (2 cores off the Bengawan Solo) and Kalimantan (1 core off the Pembuang, 1 core off the Jelai) were dated and analysed for Corg, Ntot, carbonate and stable isotope composition (δ13Corg, δ15N) in 3 cm intervals. Sedimentation rates off the Kalimantan rivers, at 0.05-0.11 cm yr-1, were higher than off the Bengawan Solo, the largest river catchment on Java (<0.04 cm yr-1). Ntot contents in all sediment cores were low, at ~0.07%, and varied little over time.
A higher Corg content, molar C/N ratio and variability over the past 5000 years in all parameters in the core closer to the river mouth off the Bengawan Solo than in the one further offshore indicate that terrestrial input into the Java Sea was limited to approx. 15 km off the river mouth. Both cores off Kalimantan and the core off Java close to the Bengawan Solo had similar Corg contents (~0.8%) and molar C/N ratios (11-19). δ13Corg of -24‰ and low carbonate contents (~7%) indicate an even higher contribution of terrigenous organic matter off the Kalimantan rivers than off the Bengawan Solo, where δ13Corg of -22‰ and CaCO3 contents of ~17% rather point to marine phytoplankton as the major organic matter source. Our preliminary results indicate a higher input of terrigenous organic matter from Kalimantan than from Java and show little evidence for anthropogenic impact on organic matter inputs into the Java Sea during the late Holocene.
Hu, Qinglei
2007-10-01
This paper presents a dual-stage control system design method for flexible spacecraft attitude maneuvering control by use of on-off thrusters and active vibration control by an input shaper. In this design approach, the attitude control system and vibration suppression were designed separately using lower-order models. As a stepping stone, an integral variable structure controller, with the assumption of knowing the upper bounds of the mismatched lumped perturbation, has been designed which ensures exponential convergence of attitude angle and angular velocity in the presence of bounded uncertainty/disturbances. To reconstruct estimates of the system states for use in a full-information variable structure control law, an asymptotic variable structure observer is also employed. In addition, the thruster output is modulated in pulse-width pulse-frequency (PWPF) form so that the output profile is similar to continuous control histories. For actively suppressing the induced vibration, the input shaping technique is used to modify the existing command so that less vibration will be caused by the command itself; this only requires information about the vibration frequency and damping of the closed-loop system. The rationale behind this hybrid control scheme is that the integral variable structure controller can achieve good pointing precision, even in the presence of uncertainties/disturbances, whereas the shaped input attenuator is applied to actively suppress the undesirable vibrations excited by the rapid maneuvers. Simulation results for the spacecraft model show precise attitude control and vibration suppression.
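As the abstract notes, the input shaper needs only the flexible mode's frequency and damping. The classic two-impulse Zero-Vibration (ZV) shaper, shown below, is one standard way to build such a shaper from exactly those two quantities (the abstract does not specify which shaper the paper uses, so treat this as a representative example; the 0.8 Hz / 2% mode is illustrative).

```python
import math

def zv_shaper(freq_hz, zeta):
    """Two-impulse Zero-Vibration (ZV) shaper for a mode with natural
    frequency freq_hz (Hz) and damping ratio zeta (0 <= zeta < 1)."""
    wn = 2 * math.pi * freq_hz
    wd = wn * math.sqrt(1 - zeta**2)          # damped natural frequency
    K = math.exp(-zeta * math.pi / math.sqrt(1 - zeta**2))
    amps = [1 / (1 + K), K / (1 + K)]         # impulse amplitudes (sum to 1)
    times = [0.0, math.pi / wd]               # second impulse at half damped period
    return amps, times

# e.g. a 0.8 Hz solar-array mode with 2% damping (illustrative numbers)
amps, times = zv_shaper(0.8, 0.02)
```

Convolving any command with these two impulses leaves the rigid-body motion intact (the amplitudes sum to one) while the second impulse, arriving half a damped period later, cancels the residual oscillation excited by the first.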
A waste characterisation procedure for ADM1 implementation based on degradation kinetics.
Girault, R; Bridoux, G; Nauleau, F; Poullain, C; Buffet, J; Steyer, J-P; Sadowski, A G; Béline, F
2012-09-01
In this study, a procedure accounting for degradation kinetics was developed to split the total COD of a substrate into each input state variable required by Anaerobic Digestion Model No. 1 (ADM1). The procedure is based on the combination of batch experimental degradation tests ("anaerobic respirometry") and numerical interpretation of the results obtained (optimisation of the ADM1 input state variable set). The effects of the main operating parameters, such as the substrate-to-inoculum ratio in the batch experiments and the origin of the inoculum, were investigated. Combined with biochemical fractionation of the total COD of substrates, this method enabled determination of an ADM1-consistent input state variable set for each substrate with affordable identifiability. The substrate-to-inoculum ratio in the batch experiments and the origin of the inoculum influenced the input state variables. However, based on results modelled for a CSTR fed with the substrate concerned, these effects were not significant. Indeed, if the optimal ranges of these operational parameters are respected, uncertainty in COD fractionation is mainly limited to the temporal variability of the properties of the substrates. As the method is based on kinetics and is easy to implement for a wide range of substrates, it is a very promising way to numerically predict the effect of design parameters on the efficiency of an anaerobic CSTR. This method thus promotes the use of modelling for the design and optimisation of anaerobic processes. Copyright © 2012 Elsevier Ltd. All rights reserved.
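The core numerical step (fitting COD fractions to a batch degradation curve) can be sketched with a heavily simplified two-fraction first-order model. This is not the full ADM1 state-variable set; the rate constants, the 60/25 split, and the grid-search optimiser are illustrative stand-ins for the paper's calibration.

```python
import numpy as np

t = np.linspace(0, 40, 81)                 # batch test duration, days
k_fast, k_slow = 0.5, 0.05                 # 1/day, assumed first-order constants

def degraded(x_fast, x_slow):
    """Cumulative COD degraded for a readily + slowly degradable split (g COD)."""
    return (x_fast * (1 - np.exp(-k_fast * t))
            + x_slow * (1 - np.exp(-k_slow * t)))

# Synthetic "anaerobic respirometry" curve: 12 g COD total, 7.2/3.0 degradable split
rng = np.random.default_rng(4)
data = degraded(7.2, 3.0) + rng.normal(0, 0.05, t.size)

# Least-squares grid search over the two degradable-fraction state variables
grid = np.linspace(0, 12, 121)
best, best_err = (0.0, 0.0), float("inf")
for xf in grid:
    for xs in grid:
        err = float(np.sum((degraded(xf, xs) - data) ** 2))
        if err < best_err:
            best, best_err = (xf, xs), err
xf_hat, xs_hat = best
```

Because the two rate constants differ by an order of magnitude, the fast and slow fractions leave distinguishable signatures in the curve, which is what makes the split identifiable from a single batch test.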
Viswanathan, Sivaram; Jayakumar, Jaikishan; Vidyasagar, Trichur R
2015-09-01
Responses of most neurons in the primary visual cortex of mammals are markedly selective for stimulus orientation and their orientation tuning does not vary with changes in stimulus contrast. The basis of such contrast invariance of orientation tuning has been shown to be the higher variability in the response for low-contrast stimuli. Neurons in the lateral geniculate nucleus (LGN), which provides the major visual input to the cortex, have also been shown to have higher variability in their response to low-contrast stimuli. Parallel studies have also long established mild degrees of orientation selectivity in LGN and retinal cells. In our study, we show that contrast invariance of orientation tuning is already present in the LGN. In addition, we show that the variability of spike responses of LGN neurons increases at lower stimulus contrasts, especially for non-preferred orientations. We suggest that such contrast- and orientation-sensitive variability not only explains the contrast invariance observed in the LGN but can also underlie the contrast-invariant orientation tuning seen at the level of the primary visual cortex. © 2015 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
Probabilistic dose-response modeling: case study using dichloromethane PBPK model results.
Marino, Dale J; Starr, Thomas B
2007-12-01
A revised assessment of dichloromethane (DCM) has recently been reported that examines the influence of human genetic polymorphisms on cancer risks using deterministic PBPK and dose-response modeling in the mouse combined with probabilistic PBPK modeling in humans. This assessment utilized Bayesian techniques to optimize kinetic variables in mice and humans, with mean values from posterior distributions used in the deterministic modeling in the mouse. To supplement this research, a case study was undertaken to examine the potential impact of probabilistic rather than deterministic PBPK and dose-response modeling in mice on subsequent unit risk factor (URF) determinations. Four separate PBPK cases were examined based on the exposure regimen of the NTP DCM bioassay. These were (a) Same Mouse (single draw of all PBPK inputs for both treatment groups); (b) Correlated BW-Same Inputs (single draw of all PBPK inputs for both treatment groups except for bodyweights (BWs), which were entered as correlated variables); (c) Correlated BW-Different Inputs (separate draws of all PBPK inputs for both treatment groups except that BWs were entered as correlated variables); and (d) Different Mouse (separate draws of all PBPK inputs for both treatment groups). Monte Carlo PBPK inputs reflect posterior distributions from Bayesian calibration in the mouse that had been previously reported. A minimum of 12,500 PBPK iterations were undertaken, in which dose metrics, i.e., mg DCM metabolized by the GST pathway/L tissue/day for lung and liver, were determined. For dose-response modeling, these metrics were combined with NTP tumor incidence data that were randomly selected from binomial distributions. Resultant potency factors (0.1/ED(10)) were coupled with probabilistic PBPK modeling in humans that incorporated genetic polymorphisms to derive URFs.
Results show that there was relatively little difference, i.e., <10%, in central tendency and upper percentile URFs, regardless of the case evaluated. Independent draws of PBPK inputs resulted in slightly higher URFs. Results were also comparable to corresponding values from the previously reported deterministic mouse PBPK and dose-response modeling approach that used LED(10)s to derive potency factors. This finding indicated that the adjustment from ED(10) to LED(10) in the deterministic approach for DCM compensated for the variability resulting from probabilistic PBPK and dose-response modeling in the mouse. Finally, results show a similar degree of variability in DCM risk estimates from a number of different sources, including the current effort, even though these estimates were developed using very different techniques. Given the variety of different approaches involved, 95th percentile-to-mean risk estimate ratios of 2.1-4.1 represent reasonable bounds on variability estimates regarding probabilistic assessments of DCM.
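The distinction between the correlated-BW and independent-draw cases reduces to how the Monte Carlo inputs are sampled. The sketch below, with purely illustrative body-weight statistics and a toy allometric dose metric (not the actual PBPK model), shows why correlated draws damp the between-group spread while independent draws inflate it.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 20_000

# Mouse body weights (kg) for the two treatment groups; all numbers illustrative
mu, sd, rho = 0.035, 0.004, 0.95
cov = [[sd**2, rho * sd**2], [rho * sd**2, sd**2]]
bw_corr = rng.multivariate_normal([mu, mu], cov, size=n)        # correlated draws
bw_ind = np.column_stack([rng.normal(mu, sd, n),                # independent draws
                          rng.normal(mu, sd, n)])

def dose_metric(bw, dose_mg=50.0):
    """Toy allometric dose metric (mg metabolized / kg tissue / day)."""
    return dose_mg / bw**0.75

def spread(bw):
    """Monte Carlo spread of the between-group difference in the dose metric."""
    diff = dose_metric(bw[:, 0]) - dose_metric(bw[:, 1])
    return float(diff.std())

sd_corr, sd_ind = spread(bw_corr), spread(bw_ind)
```

For two equally variable groups, the variance of the difference scales as 2·sigma²·(1 − rho), so strongly correlated draws shrink the between-group noise, consistent with the independent-draw cases producing slightly higher URFs.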
Textual Enhancement of Input: Issues and Possibilities
ERIC Educational Resources Information Center
Han, ZhaoHong; Park, Eun Sung; Combs, Charles
2008-01-01
The input enhancement hypothesis proposed by Sharwood Smith (1991, 1993) has stimulated considerable research over the last 15 years. This article reviews the research on textual enhancement of input (TE), an area where the majority of input enhancement studies have aggregated. Methodological idiosyncrasies are the norm of this body of research.…
Variable Delay Element For Jitter Control In High Speed Data Links
Livolsi, Robert R.
2002-06-11
A circuit and method for decreasing the amount of jitter present at the receiver input of high speed data links, which uses a driver circuit for input from a high speed data link comprising a logic circuit having: a first section (1) which provides data latches; a second section (2) which provides a circuit that generates a pre-distorted output to compensate for level-dependent jitter, having an OR function element and a NOR function element, each of which is coupled to two inputs and to a variable delay element as an input which provides a bi-modal delay for pulse-width pre-distortion; a third section (3) which provides a muxing circuit; and a fourth section (4) for clock distribution in the driver circuit. A fifth section is used for logic testing of the driver circuit.
Origin of information-limiting noise correlations
Kanitscheider, Ingmar; Coen-Cagli, Ruben; Pouget, Alexandre
2015-01-01
The ability to discriminate between similar sensory stimuli relies on the amount of information encoded in sensory neuronal populations. Such information can be substantially reduced by correlated trial-to-trial variability. Noise correlations have been measured across a wide range of areas in the brain, but their origin is still far from clear. Here we show analytically and with simulations that optimal computation on inputs with limited information creates patterns of noise correlations that account for a broad range of experimental observations while at the same time causing information to saturate in large neural populations. With the example of a network of V1 neurons extracting orientation from a noisy image, we illustrate, to our knowledge, the first generative model of noise correlations that is consistent both with neurophysiology and with behavioral thresholds, without invoking suboptimal encoding or decoding or internal sources of variability such as stochastic network dynamics or cortical state fluctuations. We further show that when information is limited at the input, both suboptimal connectivity and internal fluctuations could similarly reduce the asymptotic information, but they have qualitatively different effects on correlations, leading to specific experimental predictions. Our study indicates that noise at the sensory periphery could have a major effect on cortical representations in widely studied discrimination tasks. It also provides an analytical framework to understand the functional relevance of different sources of experimentally measured correlations. PMID:26621747
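The information saturation described here follows from the standard linear Fisher information with so-called differential correlations, where the covariance contains a component along the tuning-curve derivative f′. The sketch below uses unit tuning slopes for simplicity (illustrative, not the paper's V1 network) and shows information growing with population size but saturating at 1/epsilon.

```python
import numpy as np

epsilon = 0.01          # strength of information-limiting (differential) correlations
sigma2 = 1.0            # private noise variance per neuron

def fisher_info(n):
    """Linear Fisher information of n neurons with tuning slope 1 and
    covariance C = sigma2 * I + epsilon * f' f'^T (differential correlations)."""
    fp = np.ones(n)                                  # tuning-curve derivative f'
    C = sigma2 * np.eye(n) + epsilon * np.outer(fp, fp)
    return float(fp @ np.linalg.solve(C, fp))

info = [fisher_info(n) for n in (10, 100, 1000)]
```

Analytically this evaluates to n/(sigma2 + epsilon·n), so no matter how many neurons are added, the population can never exceed the 1/epsilon bound set by the limited input, which is the saturation behavior the abstract describes.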
NASA Astrophysics Data System (ADS)
Ruiz-Bellet, Josep Lluís; Castelltort, Xavier; Balasch, J. Carles; Tuset, Jordi
2017-02-01
There is no clear, unified and accepted method to estimate the uncertainty of hydraulic modelling results. In historical flood reconstruction, due to the lower precision of input data, the magnitude of this uncertainty can reach high values. With the objectives of estimating the peak flow error of a typical historical flood reconstruction with the model HEC-RAS and of providing a quick, simple uncertainty assessment that an end user could easily apply, the uncertainty of the reconstructed peak flow of a major flood in the Ebro River (NE Iberian Peninsula) was calculated with a set of local sensitivity analyses on six input variables. The peak flow total error was estimated at ±31%, and water height was found to be the most influential variable on peak flow, followed by Manning's n. However, the latter, due to its large uncertainty, was the greatest contributor to peak flow total error. Besides, the HEC-RAS resulting peak flow was compared to the ones obtained with the 2D model Iber and with Manning's equation; all three methods gave similar peak flows. Manning's equation gave almost the same result as HEC-RAS. The main conclusion is that, to ensure the lowest peak flow error, the reliability and precision of the flood mark should be thoroughly assessed.
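Since the abstract compares against Manning's equation, a local sensitivity analysis of that equation makes a compact stand-alone illustration of the approach: perturb each input by its assumed relative error, record the relative change in peak flow, and combine the contributions in quadrature. The channel geometry and error magnitudes below are illustrative, not the Ebro case-study values.

```python
import math

# Manning's equation: Q = (1/n) * A * R**(2/3) * S**(1/2)
inputs = {"A": 950.0, "R": 4.2, "S": 0.0009, "n": 0.035}   # illustrative values
rel_err = {"A": 0.12, "R": 0.08, "S": 0.10, "n": 0.25}     # assumed input errors

def q_manning(v):
    return v["A"] * v["R"] ** (2 / 3) * math.sqrt(v["S"]) / v["n"]

q0 = q_manning(inputs)   # m^3/s

# One-at-a-time (local) sensitivity: perturb each input by its relative error
contrib = {}
for key in inputs:
    v = dict(inputs)
    v[key] *= 1 + rel_err[key]
    contrib[key] = (q_manning(v) - q0) / q0    # relative change in peak flow

total = math.sqrt(sum(c ** 2 for c in contrib.values()))   # quadrature sum
```

With these (assumed) error levels, roughness dominates the total error budget even though the flow is more sensitive per unit change to the geometric inputs, mirroring the abstract's finding that Manning's n, because of its large uncertainty, contributes most to the peak flow error.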
NASA Astrophysics Data System (ADS)
Kingsford, M. J.; Suthers, I. M.
1994-05-01
In 1990, low density estuarine plumes in the vicinity of Botany Bay, Australia, extended up to 11 km across a narrow continental shelf (ca. 25 km) on ebb tides. The shape and seaward extent of plumes varied according to a combination of the state of the tide, freshwater input and the direction and intensity of coastal currents. Offshore plumes dissipated on the flood tide and fronts reformed at the entrance of Botany Bay. Major differences in the abundance and composition of ichthyoplankton and other zooplankton were found over a 400-800 m stretch of water encompassing waters of the plume, front and ocean on seven occasions. For example, the highest abundances of the fishes Gobiidae, Sillaginidae, Gerreidae and Sparidae as well as barnacle larvae and fish eggs were found in plumes. Cross-shelf distribution patterns of zooplankton, therefore, are influenced by plumes. Distinct assemblages of plankters accumulated in fronts, e.g. fishes of the Mugilidae and Gonorynchidae and other zooplankters (e.g. Jaxea sp.). Accumulation in fronts was variable and may relate to variable convergence according to the tide. We argue that plumes provide a significant cue to larvae in coastal waters that an estuary is nearby. Moreover, although many larvae may be retained in the turbid waters of plumes associated with riverine input, larvae are potentially exported in surface waters on ebb tides.
Application of receptor models on water quality data in source apportionment in Kuantan River Basin
2012-01-01
Recent techniques in the management of surface river water have been expanding the demand for methods that can provide a more representative treatment of multivariate data sets. Properly designed artificial neural network (ANN) and multiple linear regression (MLR) models provide advanced tools for surface water modeling and forecasting. Receptor models were developed in order to determine the major sources of pollutants in the Kuantan River Basin, Malaysia. Thirteen water quality parameters were used in principal component analysis (PCA), and new variables of fertilizer waste, surface runoff, anthropogenic input, chemical and mineral changes and erosion were successfully developed for modeling purposes. Two models were compared in terms of efficiency and goodness-of-fit for water quality index (WQI) prediction. The results show that the APCS-ANN model gives better performance, with a high R2 value (0.9680) and a small root mean square error (RMSE) value (2.6409), compared to the APCS-MLR model. Meanwhile, from the sensitivity analysis, fertilizer waste acts as the dominant pollutant contributor (59.82%) to the basin studied, followed by anthropogenic input (22.48%), surface runoff (13.42%), erosion (2.33%) and lastly chemical and mineral changes (1.95%). Thus, this study concluded that APCS-ANN receptor modeling can be used to resolve the various relationships that exist between water distribution variables, supporting appropriate water quality management. PMID:23369363
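The receptor-modeling pipeline (PCA to extract source-related components, then a regression of WQI on the component scores) can be sketched end-to-end on synthetic data. The two latent "sources" and their loadings below are invented for illustration, and an ordinary least-squares step stands in for the ANN.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 300

# Two latent pollution sources driving 6 water quality parameters (synthetic)
fertilizer = rng.normal(size=n)
runoff = rng.normal(size=n)
loadings = rng.normal(size=(2, 6))
X = np.column_stack([fertilizer, runoff]) @ loadings + 0.1 * rng.normal(size=(n, 6))
wqi = 70 + 8 * fertilizer + 3 * runoff + rng.normal(0, 1, n)

# PCA on standardized parameters (APCS-style step): leading component scores
Z = (X - X.mean(0)) / X.std(0)
_, _, Vt = np.linalg.svd(Z, full_matrices=False)
scores = Z @ Vt[:2].T                       # first two principal component scores

# Regression of WQI on the source scores (MLR stand-in for the ANN step)
A = np.column_stack([np.ones(n), scores])
beta, *_ = np.linalg.lstsq(A, wqi, rcond=None)
pred = A @ beta
r2 = float(1 - np.sum((wqi - pred) ** 2) / np.sum((wqi - wqi.mean()) ** 2))
```

Because the leading components recover the two-source subspace, the regression on their scores explains most of the WQI variance; replacing the linear step with an ANN is what lets the full APCS-ANN model capture any remaining nonlinearity.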
Burns, A.W.
1988-01-01
This report describes an interactive-accounting model used to simulate streamflow, chemical-constituent concentrations and loads, and water-supply operations in a river basin. The model uses regression equations to compute flow from incremental (internode) drainage areas. Conservative chemical constituents (typically dissolved solids) also are computed from regression equations. Both flow and water quality loads are accumulated downstream. Optionally, the model simulates the water use and the simplified groundwater systems of a basin. Water users include agricultural, municipal, industrial, and in-stream users, and reservoir operators. Water users list their potential water sources, including direct diversions, groundwater pumpage, interbasin imports, or reservoir releases, in the order in which they will be used. Direct diversions conform to basinwide water law priorities. The model is interactive, and although the input data exist in files, the user can modify them interactively. A major feature of the model is its color-graphic-output options. This report includes a description of the model, organizational charts of subroutines, and examples of the graphics. Detailed format instructions for the input data, example files of input data, definitions of program variables, and a listing of the FORTRAN source code are attachments to the report. (USGS)
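The core accounting scheme (regression-derived incremental inflow per internode area, with flow and conservative loads accumulated downstream) can be sketched on a three-node network. The network, regression coefficients, and concentrations below are hypothetical, and the sketch is in Python rather than the report's FORTRAN.

```python
# Minimal node-based streamflow and dissolved-solids load accounting (illustrative)
nodes = ["headwater", "mid", "outlet"]            # listed upstream-to-downstream
downstream = {"headwater": "mid", "mid": "outlet"}

area = {"headwater": 120.0, "mid": 80.0, "outlet": 60.0}    # internode areas, mi^2
conc = {"headwater": 180.0, "mid": 240.0, "outlet": 300.0}  # incremental conc, mg/L
a, b = 5.0, 0.9      # hypothetical regression: incremental flow = a + b * area

flow, load = {}, {}
for node in nodes:
    q_inc = a + b * area[node]                    # regression-based incremental flow
    flow[node] = flow.get(node, 0.0) + q_inc
    load[node] = load.get(node, 0.0) + q_inc * conc[node]
    if node in downstream:                        # accumulate downstream
        nxt = downstream[node]
        flow[nxt] = flow.get(nxt, 0.0) + flow[node]
        load[nxt] = load.get(nxt, 0.0) + load[node]
```

Dividing the accumulated load by the accumulated flow at any node gives the mixed concentration there, which is how a conservative constituent is tracked without any in-stream reaction terms.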
A liquid lens switching-based motionless variable fiber-optic delay line
NASA Astrophysics Data System (ADS)
Khwaja, Tariq Shamim; Reza, Syed Azer; Sheikh, Mumtaz
2018-05-01
We present a Variable Fiber-Optic Delay Line (VFODL) module capable of imparting long variable delays by switching an input optical/RF signal between Single Mode Fiber (SMF) patch cords of different lengths through a pair of Electronically Controlled Tunable Lenses (ECTLs) resulting in a polarization-independent operation. Depending on intended application, the lengths of the SMFs can be chosen accordingly to achieve the desired VFODL operation dynamic range. If so desired, the state of the input signal polarization can be preserved with the use of commercially available polarization-independent ECTLs along with polarization-maintaining SMFs (PM-SMFs), resulting in an output polarization that is identical to the input. An ECTL-based design also improves power consumption and repeatability. The delay switching mechanism is electronically-controlled, involves no bulk moving parts, and can be fully-automated. The VFODL module is compact due to the use of small optical components and SMFs that can be packaged compactly.
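The selectable delays in such a module are set simply by the group delay of each patch cord, delay = n_g · L / c, which works out to roughly 4.9 µs per kilometre of standard single-mode fiber. A quick calculation (the group index value is a typical SMF figure near 1550 nm, not a number from the paper):

```python
C = 299_792_458.0          # speed of light in vacuum, m/s
N_GROUP = 1.468            # typical group index of standard SMF near 1550 nm

def fiber_delay(length_m: float) -> float:
    """Propagation delay (s) through a patch cord of the given length."""
    return N_GROUP * length_m / C

# Delays selectable by switching among three patch cords (illustrative lengths)
delays = {L: fiber_delay(L) for L in (1.0, 100.0, 1000.0)}
```

Choosing patch cord lengths spanning several decades, as here, is what sets the module's delay dynamic range, since the switching optics contribute essentially no delay of their own.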
Sensitivity analysis and nonlinearity assessment of steam cracking furnace process
NASA Astrophysics Data System (ADS)
Rosli, M. N.; Sudibyo, Aziz, N.
2017-11-01
In this paper, sensitivity analysis and nonlinearity assessment of a steam cracking furnace process are presented. For the sensitivity analysis, the fractional factorial design method is employed to analyze the effect of the input parameters, which consist of four manipulated variables and two disturbance variables, on the output variables and to identify the interactions between the parameters. The result of the factorial design is used as a screening step to reduce the number of parameters and, subsequently, the complexity of the model. It shows that, of the six input parameters, four are significant. After the screening is completed, step tests are performed on the significant input parameters to assess the degree of nonlinearity of the system. The result shows that the system is highly nonlinear with respect to changes in the air-to-fuel ratio (AFR) and the feed composition.
NASA Astrophysics Data System (ADS)
Beekman, F.; Hardebol, N.; Cloetingh, S.; Tesauro, M.
2006-12-01
A better understanding of the 3D rheological heterogeneity of the European lithosphere provides the key to tying the recorded intraplate deformation pattern to stress fields transmitted into the plate interior from plate boundary forces. The first-order strain patterns result from stresses transmitted through the European lithosphere, which is marked by a patchwork of high strength variability arising from inherited structural and compositional heterogeneities and upper mantle thermal perturbations. As the lithospheric rheology depends primarily on its spatial structure, composition, and thermal state, the 3D strength model for the European lithosphere relies on a 3D compositional model that captures the compositional heterogeneities and an iteratively calculated thermal cube using Fourier's law for heat conduction. An accurate appraisal of spatial strength variability results from proper mapping and integration of the geophysical, compositional, and thermal input parameters. Therefore, much attention has been paid to a proper description of the first-order structural and tectonic features that facilitate compilation of the compositional and thermal input models. As such, the 3D strength model reflects the thermo-mechanical structure inherited from Europe's polyphase deformation history. Major 3D spatial mechanical strength variability has been revealed. The East European and Fennoscandian Cratons to the NE exhibit high strength (30-50 × 10^12 N/m), reflecting low mantle temperatures and surface heat flow of 35-60 mW/m^2, while central and western Europe reflect a polyphase Phanerozoic thermo-tectonic history. Here, regions of high rigidity are formed primarily by patches of thermally stabilized Variscan massifs (e.g. the Rhenish, Armorican, Bohemian, and Iberian Massifs) with low heat flow and lithospheric thickness values (50-65 mW/m^2; 110-150 km), yielding strengths of ~15-25 × 10^12 N/m. In contrast, the major axes of weakened lithosphere coincide with the Cenozoic Rift System (e.g. the Upper and Lower Rhine Grabens, the Pannonian Basin, and the Massif Central), attributed to the presence of tomographically imaged plumes. This study has elucidated the memory of the present-day European lithosphere induced by compositional and thermal heterogeneities. The resulting lateral strength variations have a clear signature of the lithosphere's past polyphase deformation and also bear on active tectonics, tectonically induced topography, and surface processes.
Suberin-derived aliphatic monomers as biomarkers for SOM affected by root litter contribution
NASA Astrophysics Data System (ADS)
Kogel-Knabner, I.; Spielvogel, S.; Prietzel, J.
2012-12-01
The patchy distribution of trees and ground vegetation may have a major impact on SOC variability and stability at the small scale. Knowledge about correlations between the pattern of tree and ground vegetation, SOC stocks at different soil depths, and the contribution of root- vs. shoot-derived carbon to different SOC fractions is scarce. We tested the analysis of hydrolysable aliphatic monomers derived from the biopolyesters cutin and suberin to investigate whether their composition can be traced back after decay and transformation into soil organic matter (SOM), in order to study SOM source, degradation, and stand history. The main objective of this study was to elucidate the relative abundance of cutin and suberin in different particle-size and density fractions of a Norway spruce and a European beech site with increasing distance from the stems. Soil samples and root, bark, and needle/leaf samples were analyzed for their cutin and/or suberin signature. Prior to the isolation of bound lipids, sequential solvent extraction was used to remove free lipids and other solvent-extractable compounds. Cutin- and suberin-derived monomers were extracted from the samples using base hydrolysis. Before analysis by gas chromatography/mass spectrometry (GC/MS), extracts were derivatized to convert the compounds to trimethylsilyl derivatives. Statistical analysis identified four variables which, as combined factors, discriminated significantly between cutin and suberin based on their structural units. We found a relative enrichment of cutin and suberin contents in the occluded fraction at both sites that decreased with increasing distance from the trees.
We conclude from our results that (i) patchy above- and belowground carbon input caused by heterogeneous distribution of trees and ground vegetation has major impact on SOC variability and stability at the small scale, (ii) tree species is an important factor influencing SOC heterogeneity at the stand scale due to pronounced differences in above- and belowground carbon input among the tree species and that (iii) forest conversion may substantially alter SOC stocks and spatial distribution. Suberin biomarkers can thus be used as indicators for the presence of root influence on SOM composition and for identifying root-affected soil compartments.
Bueno, C; Brugnoli, E; Bergamino, L; Muniz, P; García-Rodríguez, F; Figueira, R
2018-01-01
This study aimed to identify the different sources of sedimentary organic matter (SOM) within the Montevideo coastal zone (MCZ). To this end, δ13C, δ15N and the C/N ratio were analysed in surface sediments and a sediment core. Sediment core analysis showed that until ~1950 CE SOM was mainly marine, with a shift towards lower δ13C in recent sediments evidencing an estuarine composition. This trend was associated with climatic variability, which exerted a major influence on the SOM composition, leading to an increased input of terrigenous material and associated anthropogenic contaminants. Surface sediments collected during different El Niño Southern Oscillation (ENSO) phases did not show inter-annual variability in SOM composition, which was mainly marine in both the eastern and western regions of the MCZ and estuarine in Montevideo Bay. This spatial pattern provides new insights into the dynamics and factors affecting organic matter sources available to primary consumers along the study region. Copyright © 2017 Elsevier Ltd. All rights reserved.
Can unforced radiative variability explain the "hiatus"?
NASA Astrophysics Data System (ADS)
Donohoe, A.
2016-02-01
The paradox of the "hiatus" is characterized as a decade-long period over which global mean surface temperature remained relatively constant even though greenhouse forcing is believed to have been positive and increasing. Explanations of the hiatus have focused on two primary lines of thought: 1. there was a net radiative imbalance at the top of the atmosphere (TOA), but this energy input was stored in the ocean without increasing surface temperature; or 2. there was no radiative imbalance at the TOA because the greenhouse forcing was offset by other climate forcings. Here, we explore a third hypothesis: that there was no TOA radiative imbalance over the decade due to unforced, natural modes of radiative variability that are unrelated to global mean temperature. Is it possible that the Earth could emit enough radiation to offset greenhouse forcing without increasing its temperature, due to internal modes of climate variability? The global mean TOA energy imbalance is estimated to be 0.65 W m^-2, as determined from the long-term change in ocean heat content, where the majority of the energy imbalance is stored. Therefore, in order to offset this TOA energy imbalance, natural modes of radiative variability with amplitudes of order 0.5 W m^-2 at the decadal timescale are required. We demonstrate that unforced coupled climate models have global mean radiative variability of the required magnitude (2 standard deviations of 0.57 W m^-2 in the inter-model mean) and that the vast majority (>90%) of this variability is unrelated to surface temperature radiative feedbacks. However, much of this variability is at shorter (monthly and annual) timescales and does not persist from year to year, making a decade-long natural interruption of the energy accumulation in the climate system unlikely from natural radiative variability alone, given the magnitude of the greenhouse forcing on Earth.
Comparison to observed satellite data suggests that the models capture the magnitude (2 sigma = 0.61 W m^-2) and mechanisms of internal radiative variability, but we cannot exclude the possibility of low-frequency modes of variability with significant magnitude given the limited length of the satellite record.
Method and system to estimate variables in an integrated gasification combined cycle (IGCC) plant
Kumar, Aditya; Shi, Ruijie; Dokucu, Mustafa
2013-09-17
System and method to estimate variables in an integrated gasification combined cycle (IGCC) plant are provided. The system includes a sensor suite to measure respective plant input and output variables. An extended Kalman filter (EKF) receives sensed plant input variables and includes a dynamic model to generate a plurality of plant state estimates and a covariance matrix for the state estimates. A preemptive-constraining processor is configured to preemptively constrain the state estimates and covariance matrix to be free of constraint violations. A measurement-correction processor may be configured to correct constrained state estimates and a constrained covariance matrix based on processing of sensed plant output variables. The measurement-correction processor is coupled to update the dynamic model with corrected state estimates and a corrected covariance matrix. The updated dynamic model may be configured to estimate values for at least one plant variable not originally sensed by the sensor suite.
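The constrained-estimation idea in this abstract can be illustrated with a toy scalar Kalman filter. This is a minimal sketch, not the patented IGCC model: the dynamic model, noise values, and the nonnegativity constraint below are all invented. The filter predicts with the dynamic model, preemptively projects the estimate onto the feasible region, then applies the measurement correction.

```python
def constrained_kf(measurements, a=0.95, q=0.01, r=0.25, lo=0.0):
    """Scalar Kalman filter whose state estimate is preemptively
    projected onto the constraint x >= lo before (and again after)
    the measurement-correction step."""
    x, p = 1.0, 1.0                    # initial state estimate and variance
    estimates = []
    for z in measurements:
        x, p = a * x, a * a * p + q    # predict with the dynamic model
        x = max(x, lo)                 # preemptive constraining
        k = p / (p + r)                # Kalman gain
        x = x + k * (z - x)            # measurement correction
        p = (1.0 - k) * p
        x = max(x, lo)                 # keep the corrected estimate feasible
        estimates.append(x)
    return estimates
```

Even when a measurement would drive the estimate negative, the projection keeps every reported state in the feasible region, which is the point of the preemptive-constraining processor described above.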
Fichez, R; Chifflet, S; Douillet, P; Gérard, P; Gutierrez, F; Jouon, A; Ouillon, S; Grenz, C
2010-01-01
Considering the growing concern about the impact of anthropogenic inputs on coral reefs and coral reef lagoons, surprisingly little attention has been given to the relationship between those inputs and the trophic status of lagoon waters. The present paper describes the distribution of biogeochemical parameters in the coral reef lagoon of New Caledonia, where environmental conditions allegedly range from pristine oligotrophic to anthropogenically influenced. The study objectives were to: (i) identify terrigenous and anthropogenic inputs and propose a typology of lagoon waters; (ii) determine the temporal variability of water biogeochemical parameters at time scales ranging from hours to seasons. Combined PCA-cluster analyses revealed that over the 2000 km(2) lagoon area around the city of Nouméa, "natural" terrigenous versus oceanic influences affecting all stations accounted for less than 20% of the spatial variability, whereas 60% of that spatial variability could be attributed to significant eutrophication of a limited number of inshore stations. The PCA allowed unambiguous discrimination between the natural trophic enrichment along the offshore-inshore gradient and anthropogenically induced eutrophication. High temporal variability in dissolved inorganic nutrient concentrations strongly hindered their use as indicators of environmental status. Due to its longer turnover time, particulate organic material, and more specifically chlorophyll a, appeared to be a more reliable nonconservative tracer of trophic status. Results further provided evidence that ENSO occurrences might temporarily lower the trophic status of the New Caledonia lagoon. It is concluded that, due to such high-frequency temporal variability, the use of biogeochemical parameters in environmental surveys requires adapted sampling strategies, data management and environmental alert methods. Copyright (c) 2010 Elsevier Ltd. All rights reserved.
Neural network simulation of soil NO3 dynamic under potato crop system
NASA Astrophysics Data System (ADS)
Goulet-Fortin, Jérôme; Morais, Anne; Anctil, François; Parent, Léon-Étienne; Bolinder, Martin
2013-04-01
Nitrate leaching is a major issue in sandy soils intensively cropped to potato. Modelling could test and improve management practices, particularly with regard to optimal N application rates. Lack of input data is an important barrier to the application of classical process-based models for predicting soil NO3 content (SNOC) and NO3 leaching (NOL). Alternatively, data-driven models such as neural networks (NN) can better take into account indicators of spatial soil heterogeneity and plant growth pattern, such as the leaf area index (LAI), hence reducing the amount of soil information required. The first objective of this study was to evaluate NN and hybrid models to simulate SNOC in the 0-40 cm soil layer, considering inter-annual variations, spatial soil heterogeneity and differential N application rates. The second objective was to evaluate the same methodology to simulate seasonal NOL dynamics at 1 m depth. To this aim, multilayer perceptrons with different combinations of driving meteorological variables, functions of the LAI and state variables of external deterministic models were trained and evaluated. The state variables from external models were drainage, estimated by the CLASS model, and soil temperature, estimated by an ICBM subroutine. Results of the SNOC simulations were compared to field data collected between 2004 and 2011 at several experimental plots under potato cropping systems in Québec, Eastern Canada. Results of the NOL simulation were compared to data obtained in 2012 from 11 suction lysimeters installed in 2 experimental plots under potato cropping systems in the same region. The best-performing model for SNOC simulation was a 4-input hybrid model composed of 1) cumulative LAI, 2) cumulative drainage, 3) soil temperature and 4) day of year.
The best-performing model for NOL simulation was a 5-input NN model composed of 1) N fertilization rate in spring, 2) LAI, 3) cumulative rainfall, 4) day of year and 5) percentage clay content. The MAE was 22% for the SNOC simulation and 23% for the NOL simulation. High sensitivity to LAI suggests that the model may take into account field and sub-field spatial variability and support N management. Further studies are needed to fully validate the method, particularly in the case of NOL simulation.
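As a rough illustration of the 4-input hybrid architecture described above, a one-hidden-layer perceptron forward pass can be sketched in a few lines. The weights, layer size, and feature values here are invented for the sketch; the study's trained model is not reproduced.

```python
import math

def mlp_forward(inputs, w_hidden, b_hidden, w_out, b_out):
    """One-hidden-layer perceptron: tanh hidden units, linear output."""
    hidden = [math.tanh(sum(w * x for w, x in zip(ws, inputs)) + b)
              for ws, b in zip(w_hidden, b_hidden)]
    return sum(w * h for w, h in zip(w_out, hidden)) + b_out

# Illustrative 4-input model: cumulative LAI, cumulative drainage (mm),
# soil temperature (deg C), day of year. All weights are invented.
features = [2.1, 35.0, 14.5, 180.0]
w_hidden = [[0.4, 0.01, 0.05, 0.002], [-0.3, 0.02, 0.1, -0.001]]
b_hidden = [0.1, -0.2]
w_out, b_out = [1.5, -0.8], 20.0
snoc_estimate = mlp_forward(features, w_hidden, b_hidden, w_out, b_out)
```

In the actual study such a network would be trained against the 2004-2011 field measurements; the sketch only shows how the four inputs map to a single SNOC output.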
Binary full adder, made of fusion gates, in a subexcitable Belousov-Zhabotinsky system
NASA Astrophysics Data System (ADS)
Adamatzky, Andrew
2015-09-01
In an excitable thin-layer Belousov-Zhabotinsky (BZ) medium, a localized perturbation leads to the formation of omnidirectional target or spiral waves of excitation. A subexcitable BZ medium responds to asymmetric local perturbation by producing traveling localized excitation wave-fragments, distant relatives of dissipative solitons. The size and life span of an excitation wave-fragment depend on the illumination level of the medium. Under the right conditions the wave-fragments conserve their shape and velocity vectors for extended periods of time. I interpret the wave-fragments as values of Boolean variables. When two or more wave-fragments collide, they annihilate or merge into a new wave-fragment. The states of the logic variables, represented by the wave-fragments, are changed as a result of collisions between the wave-fragments; thus, a logical gate is implemented. Several theoretical designs and experimental laboratory implementations of Boolean logic gates have been proposed in the past, but little has been done on cascading the gates into binary arithmetic circuits. I propose a unique design of a binary one-bit full adder based on a fusion gate. A fusion gate is a two-input, three-output logical device which calculates the conjunction of the input variables and the conjunctions of each input variable with the negation of the other. The gate is made of three channels: two channels cross each other at an angle, and a third channel starts at the junction. The channels contain a BZ medium. When two excitation wave-fragments traveling towards each other along the input channels collide at the junction, they merge into a single wave-front traveling along the third channel. If there is just one wave-fragment in an input channel, it continues its propagation undisturbed. I make a one-bit full adder by cascading two fusion gates. I show how to cascade the adder blocks into a many-bit full adder.
I evaluate the feasibility of my designs by simulating the evolution of excitation in the gates and adders using the numerical integration of Oregonator equations.
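The collision-based logic can be abstracted into plain Boolean terms. This is a sketch of the cascade described above, not a simulation of the BZ chemistry: a fusion gate merges colliding fragments into x AND y at the junction while a lone fragment passes through as x AND NOT y or NOT x AND y, and two such gates (with the pass-through outputs merged) yield a one-bit full adder.

```python
def fusion_gate(x, y):
    """Boolean abstraction of the two-input, three-output fusion gate:
    colliding fragments merge (x AND y); a lone fragment passes through
    its own channel undisturbed (x AND NOT y, NOT x AND y)."""
    return x and y, x and not y, (not x) and y

def full_adder(a, b, cin):
    # First gate: partial sum a XOR b (the two pass-through outputs
    # merged) and partial carry a AND b (the junction output).
    ab, a_nb, na_b = fusion_gate(a, b)
    p = a_nb or na_b
    # Second gate combines the partial sum with the carry-in.
    pc, p_nc, np_c = fusion_gate(p, cin)
    s = p_nc or np_c
    return s, ab or pc          # sum, carry-out

# Exhaustive check against integer addition.
for a in (0, 1):
    for b in (0, 1):
        for c in (0, 1):
            s, cout = full_adder(a, b, c)
            assert int(bool(s)) + 2 * int(bool(cout)) == a + b + c
```

The final OR of the two carry terms stands in for the merging of wave-fronts in a shared output channel; how that merge is routed physically is exactly what the laboratory design has to solve.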
UWB delay and multiply receiver
Dallum, Gregory E.; Pratt, Garth C.; Haugen, Peter C.; Romero, Carlos E.
2013-09-10
An ultra-wideband (UWB) delay and multiply receiver is formed of a receive antenna; a variable gain attenuator connected to the receive antenna; a signal splitter connected to the variable gain attenuator; a multiplier having one input connected to an undelayed signal from the signal splitter and another input connected to a delayed signal from the signal splitter, the delay between the splitter signals being equal to the spacing between pulses from a transmitter whose pulses are being received by the receive antenna; a peak detection circuit connected to the output of the multiplier and connected to the variable gain attenuator to control the variable gain attenuator to maintain a constant amplitude output from the multiplier; and a digital output circuit connected to the output of the multiplier.
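The delay-and-multiply principle can be shown with a discrete-time sketch (the sample values are illustrative; the variable-gain attenuator control loop and the analog peak-detection circuitry are omitted): multiplying the incoming signal by a copy of itself delayed by the inter-pulse spacing makes correctly spaced pulse pairs reinforce, while everything else multiplies against zero.

```python
def delay_and_multiply(signal, delay):
    """Multiply the incoming signal by a copy of itself delayed by
    `delay` samples; pulse pairs with exactly that spacing reinforce."""
    return [s * (signal[i - delay] if i >= delay else 0.0)
            for i, s in enumerate(signal)]

# Two unit pulses spaced 5 samples apart (an illustrative pulse pair
# matching the transmitter's pulse spacing).
sig = [0.0] * 20
sig[3] = 1.0
sig[8] = 1.0
out = delay_and_multiply(sig, 5)
# The product is nonzero only where the second pulse of a pair
# coincides with the delayed first pulse.
```

In the receiver, the peak detector watches this product and adjusts the attenuator so the multiplier output amplitude stays constant.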
Mbuthia, Jackson M; Rewe, Thomas O; Kahi, Alexander K
2015-02-01
A deterministic bio-economic model was developed and applied to evaluate the biological and economic variables that characterize smallholder pig production systems in Kenya. Two pig production systems were considered, namely semi-intensive (SI) and extensive (EX). The input variables were categorized into biological variables (including production and functional traits), nutritional variables, management variables and economic variables. The model accounted for the various sow physiological phases, including gestation, farrowing, lactation, growth and development. The model was developed to evaluate a farrow-to-finish operation, but the results were customized to account for a farrow-to-weaner operation for comparative analysis. The operations were defined as semi-intensive farrow to finish (SIFF), semi-intensive farrow to weaner (SIFW), extensive farrow to finish (EXFF) and extensive farrow to weaner (EXFW). In SI, profits were highest at KES 74,268.20 per sow per year for SIFF against KES 4026.12 for SIFW. The corresponding profits for EX were KES 925.25 and KES 626.73. Feed costs contributed the major part of the total costs, accounting for 67.0, 50.7, 60.5 and 44.5 % in the SIFF, SIFW, EXFF and EXFW operations, respectively. The bio-economic model developed could be extended, with modifications, for use in deriving economic values for breeding goal traits for pigs under smallholder production systems in other parts of the tropics.
Alpha1 LASSO data bundles Lamont, OK
Gustafson, William Jr; Vogelmann, Andrew; Endo, Satoshi; Toto, Tami; Xiao, Heng; Li, Zhijin; Cheng, Xiaoping; Krishna, Bhargavi (ORCID:000000018828528X)
2016-08-03
A data bundle is a unified package consisting of LASSO LES input and output, observations, evaluation diagnostics, and model skill scores. LES input includes model configuration information and forcing data. LES output includes profile statistics and full domain fields of cloud and environmental variables. Model evaluation data consists of LES output and ARM observations co-registered on the same grid and sampling frequency. Model performance is quantified by skill scores and diagnostics in terms of cloud and environmental variables.
A Multivariate Analysis of the Early Dropout Process
ERIC Educational Resources Information Center
Fiester, Alan R.; Rudestam, Kjell E.
1975-01-01
Principal-component factor analyses were performed on patient input (demographic and pretherapy expectations), therapist input (demographic), and patient perspective therapy process variables that significantly differentiated early dropout from nondropout outpatients at two community mental health centers. (Author)
Approach for Input Uncertainty Propagation and Robust Design in CFD Using Sensitivity Derivatives
NASA Technical Reports Server (NTRS)
Putko, Michele M.; Taylor, Arthur C., III; Newman, Perry A.; Green, Lawrence L.
2002-01-01
An implementation of the approximate statistical moment method for uncertainty propagation and robust optimization for a quasi 3-D Euler CFD code is presented. Given uncertainties in statistically independent, random, normally distributed input variables, first- and second-order statistical moment procedures are performed to approximate the uncertainty in the CFD output. Efficient calculation of both first- and second-order sensitivity derivatives is required. In order to assess the validity of the approximations, these moments are compared with statistical moments generated through Monte Carlo simulations. The uncertainties in the CFD input variables are also incorporated into a robust optimization procedure. For this optimization, statistical moments involving first-order sensitivity derivatives appear in the objective function and system constraints. Second-order sensitivity derivatives are used in a gradient-based search to successfully execute a robust optimization. The approximate methods used throughout the analyses are found to be valid when considering robustness about input parameter mean values.
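The first-order moment approximation described above can be sketched for a toy two-variable function, with a Monte Carlo check. The function, means, and standard deviations are invented for the sketch; the quasi 3-D Euler code is of course not reproduced.

```python
import math
import random

def f(x1, x2):
    """Toy stand-in for a CFD output as a function of two inputs."""
    return x1 ** 2 + 3.0 * x1 * x2

def first_order_sigma(mu, sigma, h=1e-5):
    """sigma_f^2 ~ sum_i (df/dx_i)^2 * sigma_i^2, with central-difference
    sensitivity derivatives evaluated at the input means."""
    d1 = (f(mu[0] + h, mu[1]) - f(mu[0] - h, mu[1])) / (2 * h)
    d2 = (f(mu[0], mu[1] + h) - f(mu[0], mu[1] - h)) / (2 * h)
    return math.sqrt((d1 * sigma[0]) ** 2 + (d2 * sigma[1]) ** 2)

mu, sigma = (2.0, 1.0), (0.05, 0.04)
approx = first_order_sigma(mu, sigma)

# Monte Carlo check with independent normal inputs.
random.seed(0)
samples = [f(random.gauss(mu[0], sigma[0]), random.gauss(mu[1], sigma[1]))
           for _ in range(20000)]
mean = sum(samples) / len(samples)
mc = math.sqrt(sum((s - mean) ** 2 for s in samples) / (len(samples) - 1))
```

For these mildly nonlinear inputs the two standard-deviation estimates agree closely, which mirrors the paper's finding that the moment approximations are valid near the input means.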
Variable ratio regenerative braking device
Hoppie, Lyle O.
1981-12-15
Disclosed is a regenerative braking device (10) for an automotive vehicle. The device includes an energy storage assembly (12) having a plurality of rubber rollers (26, 28) mounted for rotation between an input shaft (36) and an output shaft (42), clutches (38, 46) and brakes (40, 48) associated with each shaft, and a continuously variable transmission (22) connectable to a vehicle drivetrain and to the input and output shafts by the respective clutches. The rubber rollers are torsionally stressed to accumulate energy from the vehicle when the input shaft is clutched to the transmission while the brake on the output shaft is applied, and are torsionally relaxed to deliver energy to the vehicle when the output shaft is clutched to the transmission while the brake on the input shaft is applied. The transmission ratio is varied to control the rate of energy accumulation and delivery for a given rotational speed of the vehicle drivetrain.
NASA Technical Reports Server (NTRS)
Chen, B. M.; Saber, A.
1993-01-01
A simple and noniterative procedure for the computation of the exact value of the infimum in the singular H(infinity)-optimization problem is presented, as a continuation of our earlier work. Our problem formulation is general: we place no restrictions on the finite and infinite zero structures of the system, or on the direct feedthrough terms between the control input and the controlled output variables and between the disturbance input and the measurement output variables. Our method is applicable to a class of singular H(infinity)-optimization problems for which the transfer functions from the control input to the controlled output and from the disturbance input to the measurement output satisfy certain geometric conditions. In particular, the paper extends the results of earlier work by allowing these two transfer functions to have invariant zeros on the j(omega) axis.
NASA Astrophysics Data System (ADS)
Yadav, D.; Upadhyay, H. C.
1992-07-01
Vehicles receive track-induced input through the wheels, which commonly number more than one. The available analysis of vehicle response in a variable-velocity run on a non-homogeneously profiled flexible track supported by a compliant inertial foundation is for a linear heave model with a single ground input. This analysis is extended here to two-point input models with heave-pitch and heave-roll degrees of freedom. Closed-form expressions have been developed for the system response statistics. Results are presented for a railway coach and track/foundation problem, and the performances of the heave, heave-pitch and heave-roll models have been compared. The three models agree in describing the track response. However, the vehicle sprung-mass behaviour is predicted differently by the three models, indicating the strong effect of coupling on the vehicle vibration.
Electrical Advantages of Dendritic Spines
Gulledge, Allan T.; Carnevale, Nicholas T.; Stuart, Greg J.
2012-01-01
Many neurons receive excitatory glutamatergic input almost exclusively onto dendritic spines. In the absence of spines, the amplitudes and kinetics of excitatory postsynaptic potentials (EPSPs) at the site of synaptic input are highly variable and depend on dendritic location. We hypothesized that dendritic spines standardize the local geometry at the site of synaptic input, thereby reducing location-dependent variability of local EPSP properties. We tested this hypothesis using computational models of simplified and morphologically realistic spiny neurons that allow direct comparison of EPSPs generated on spine heads with EPSPs generated on dendritic shafts at the same dendritic locations. In all morphologies tested, spines greatly reduced location-dependent variability of local EPSP amplitude and kinetics, while having minimal impact on EPSPs measured at the soma. Spine-dependent standardization of local EPSP properties persisted across a range of physiologically relevant spine neck resistances, and in models with variable neck resistances. By reducing the variability of local EPSPs, spines standardized synaptic activation of NMDA receptors and voltage-gated calcium channels. Furthermore, spines enhanced activation of NMDA receptors and facilitated the generation of NMDA spikes and axonal action potentials in response to synaptic input. Finally, we show that dynamic regulation of spine neck geometry can preserve local EPSP properties following plasticity-driven changes in synaptic strength, but is inefficient in modifying the amplitude of EPSPs in other cellular compartments. These observations suggest that one function of dendritic spines is to standardize local EPSP properties throughout the dendritic tree, thereby allowing neurons to use similar voltage-sensitive postsynaptic mechanisms at all dendritic locations. PMID:22532875
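The standardization argument above can be caricatured with a steady-state resistive sketch. All conductances and resistances below are illustrative and this is not the paper's compartmental model: the local depolarization depends on the input resistance at the synaptic site, and a fixed spine-neck resistance in series dominates the location-dependent shaft resistance, compressing the spread of local EPSP amplitudes.

```python
def local_epsp(g_syn, r_input, e_syn=60.0):
    """Steady-state depolarization (mV) at the synaptic site for a
    synaptic conductance g_syn (uS) into input resistance r_input (Mohm),
    with driving force e_syn relative to rest."""
    gr = g_syn * r_input
    return e_syn * gr / (1.0 + gr)

g = 0.005                        # 5 nS synaptic conductance (illustrative)
shaft_r = [50.0, 150.0, 400.0]   # Mohm at three dendritic locations
neck_r = 500.0                   # Mohm spine neck resistance (illustrative)

# On the shaft, local input resistance varies widely with location;
# on a spine head, the neck resistance adds in series and dominates.
shaft_epsps = [local_epsp(g, r) for r in shaft_r]
spine_epsps = [local_epsp(g, neck_r + r) for r in shaft_r]

def spread(v):
    """Relative range of amplitudes across locations."""
    return (max(v) - min(v)) / max(v)
```

With these numbers the relative spread of spine-head EPSPs is several-fold smaller than the spread of shaft EPSPs, which is the location-independence effect the modeling study reports.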
Predicting language outcomes for children learning AAC: Child and environmental factors
Brady, Nancy C.; Thiemann-Bourque, Kathy; Fleming, Kandace; Matthews, Kris
2014-01-01
Purpose To investigate a model of language development for nonverbal preschool age children learning to communicate with AAC. Method Ninety-three preschool children with intellectual disabilities were assessed at Time 1, and 82 of these children were assessed one year later at Time 2. The outcome variable was the number of different words the children produced (with speech, sign or SGD). Children’s intrinsic predictor for language was modeled as a latent variable consisting of cognitive development, comprehension, play, and nonverbal communication complexity. Adult input at school and home, and amount of AAC instruction were proposed mediators of vocabulary acquisition. Results A confirmatory factor analysis revealed that measures converged as a coherent construct and an SEM model indicated that the intrinsic child predictor construct predicted different words children produced. The amount of input received at home but not at school was a significant mediator. Conclusions Our hypothesized model accurately reflected a latent construct of Intrinsic Symbolic Factor (ISF). Children who evidenced higher initial levels of ISF and more adult input at home produced more words one year later. Findings support the need to assess multiple child variables, and suggest interventions directed to the indicators of ISF and input. PMID:23785187
Zuhtuogullari, Kursat; Allahverdi, Novruz; Arikan, Nihat
2013-01-01
Systems with high-dimensional input spaces require long processing times and large memory. Most attribute selection algorithms suffer from input-dimension limits and information storage problems. These problems are eliminated by the developed feature reduction software, which uses a new modified selection mechanism that adds middle-region solution candidates. The hybrid system software is constructed for reducing the input attributes of systems with a large number of input variables. The software also supports the roulette wheel selection mechanism, and linear order crossover is used as the recombination operator. In genetic algorithm based soft computing methods, locking into local solutions is also a problem, which is eliminated by the developed software. Faster and more effective results are obtained in the test procedures. Twelve input variables of the urological system were reduced to reducts (reduced input attribute sets) with seven, six, and five elements. The obtained results show that the developed software with modified selection has advantages in memory allocation, execution time, classification accuracy, sensitivity, and specificity when compared with other reduction algorithms on the urological test data.
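The roulette wheel selection mechanism mentioned above is standard in genetic algorithms and can be sketched as follows (the population and fitness values are invented for illustration): each individual is chosen with probability proportional to its fitness.

```python
import random

def roulette_wheel_select(population, fitness, rng=random):
    """Pick one individual with probability proportional to its fitness."""
    total = sum(fitness)
    pick = rng.uniform(0.0, total)
    running = 0.0
    for individual, fit in zip(population, fitness):
        running += fit
        if running >= pick:
            return individual
    return population[-1]   # guard against floating-point round-off

random.seed(1)
pop = ["reduct_a", "reduct_b", "reduct_c"]   # hypothetical candidates
fit = [1.0, 5.0, 4.0]
winners = [roulette_wheel_select(pop, fit) for _ in range(2000)]
# "reduct_b" should win roughly half the draws (5 of 10 total fitness).
```

In the described software this selection step would feed the linear order crossover operator; the modified middle-region candidate mechanism is specific to that software and is not reproduced here.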
To twist, roll, stroke or poke? A study of input devices for menu navigation in the cockpit.
Stanton, Neville A; Harvey, Catherine; Plant, Katherine L; Bolton, Luke
2013-01-01
Modern interfaces within the aircraft cockpit integrate many flight management system (FMS) functions into a single system. The success of a user's interaction with an interface depends upon the optimisation between the input device, tasks and environment within which the system is used. In this study, four input devices were evaluated using a range of Human Factors methods, in order to assess aspects of usability including task interaction times, error rates, workload, subjective usability and physical discomfort. The performance of the four input devices was compared using a holistic approach and the findings showed that no single input device produced consistently high performance scores across all of the variables evaluated. The touch screen produced the highest number of 'best' scores; however, discomfort ratings for this device were high, suggesting that it is not an ideal solution as both physical and cognitive aspects of performance must be accounted for in design. This study evaluated four input devices for control of a screen-based flight management system. A holistic approach was used to evaluate both cognitive and physical performance. Performance varied across the dependent variables and between the devices; however, the touch screen produced the largest number of 'best' scores.
Interacting with notebook input devices: an analysis of motor performance and users' expertise.
Sutter, Christine; Ziefle, Martina
2005-01-01
In the present study the usability of two different types of notebook input devices was examined. The independent variables were input device (touchpad vs. mini-joystick) and user expertise (expert vs. novice state). There were 30 participants, of whom 15 were touchpad experts and the other 15 were mini-joystick experts. The experimental tasks were a point-click task (Experiment 1) and a point-drag-drop task (Experiment 2). Dependent variables were the time and accuracy of cursor control. To assess carryover effects, we had the participants complete both experiments, using not only the input device for which they were experts but also the device for which they were novices. Results showed the touchpad performance to be clearly superior to mini-joystick performance. Overall, experts showed better performance than did novices. The significant interaction of input device and expertise showed that the use of an unknown device is difficult, but only for touchpad experts, who were remarkably slower and less accurate when using a mini-joystick. Actual and potential applications of this research include an evaluation of current notebook input devices. The outcomes allow ergonomic guidelines to be derived for optimized usage and design of the mini-joystick and touchpad devices.
NASA Technical Reports Server (NTRS)
1976-01-01
This methodology calculates the busbar cost of electric energy from a utility-owned solar electric system. The approach is applicable to both publicly and privately owned utilities. Busbar cost represents the minimum price per unit of energy consistent with producing system-resultant revenues equal to the sum of system-resultant costs. This equality is expressed in present value terms, where the discount rate used reflects the rate of return required on invested capital. Major input variables describe the output capabilities and capital cost of the energy system, the cash flows required for system operation and maintenance, and the financial structure and tax environment of the utility.
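A minimal sketch of the present-value balance behind busbar cost, under simplifying assumptions (constant annual O&M and energy output, no taxes or financing-structure effects, all of which the actual methodology does model):

```python
def busbar_cost(capital_cost, annual_om, annual_energy_kwh,
                discount_rate, lifetime_years):
    """Minimum price per kWh such that PV(revenues) == PV(costs)."""
    pv_costs = capital_cost          # capital outlay at year 0
    pv_energy = 0.0
    for year in range(1, lifetime_years + 1):
        df = 1.0 / (1.0 + discount_rate) ** year   # discount factor
        pv_costs += annual_om * df                  # O&M cash flows
        pv_energy += annual_energy_kwh * df         # discounted energy
    return pv_costs / pv_energy

# Hypothetical plant: $1M capital, $50k/yr O&M, 2 GWh/yr, 8% discount, 30 yr
price = busbar_cost(1_000_000.0, 50_000.0, 2_000_000.0, 0.08, 30)
```

Setting price times discounted energy equal to discounted costs is exactly the revenue-cost equality stated in the abstract; a higher required rate of return raises the busbar cost because the capital outlay is not discounted while the energy stream is.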
Chen, Dingjiang; Guo, Yi; Hu, Minpeng; Dahlgren, Randy A
2015-08-01
Legacy nitrogen (N) sources originating from anthropogenic N inputs (NANI) may be a major cause of increasing riverine N exports in many regions, despite a significant decline in NANI. However, little quantitative knowledge exists concerning the lag effect of NANI on riverine N export. As a result, the N leaching lag effect is not well represented in most current watershed models. This study developed a lagged variable model (LVM) to address temporally dynamic export of watershed NANI to rivers. Employing a Koyck transformation approach used in economic analyses, the LVM expresses the indefinite number of lag terms from previous years' NANI with a lag term that incorporates the previous year's riverine N flux, enabling us to inversely calibrate model parameters from measurable variables using Bayesian statistics. Applying the LVM to the upper Jiaojiang watershed in eastern China for 1980-2010 indicated that ~97% of riverine export of annual NANI occurred in the current year and succeeding 10 years (~11 years lag time) and ~72% of annual riverine N flux was derived from previous years' NANI. Existing NANI over the 1993-2010 period would have required a 22% reduction to attain the target TN level (1.0 mg N L(-1)), guiding watershed N source controls considering the lag effect. The LVM was developed with parsimony of model structure and parameters (only four parameters in this study); thus, it is easy to develop and apply in other watersheds. The LVM provides a simple and effective tool for quantifying the lag effect of anthropogenic N input on riverine export in support of efficient development and evaluation of watershed N control strategies.
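The Koyck-transformed lagged variable model described above can be sketched as follows. The lag parameter value is hypothetical, chosen only to reproduce the abstract's ~97%-within-11-years figure; the actual model calibrates its parameters with Bayesian statistics:

```python
def fraction_exported_within(lam, n_years):
    """Cumulative share of one year's NANI exported within the current year
    plus `n_years` succeeding years, under the geometric lag weights
    w_k = (1 - lam) * lam**k implied by the Koyck transformation."""
    return 1.0 - lam ** (n_years + 1)

def simulate_riverine_flux(nani, alpha, beta, lam, f0=0.0):
    """Lagged variable model: F_t = alpha + beta*NANI_t + lam*F_(t-1).
    The single lagged-flux term stands in for the infinite sum of
    previous years' NANI contributions."""
    flux, prev = [], f0
    for n in nani:
        prev = alpha + beta * n + lam * prev
        flux.append(prev)
    return flux

# With lam ~ 0.727 (hypothetical), ~97% of a year's input is exported
# within the current year and the 10 succeeding years
share = fraction_exported_within(0.727, 10)
```

The recursion makes the parsimony point from the abstract concrete: only the previous year's flux must be carried forward, yet the model still encodes an indefinite number of lag terms.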
Stochastic Modeling of the Environmental Impacts of the Mingtang Tunneling Project
NASA Astrophysics Data System (ADS)
Li, Xiaojun; Li, Yandong; Chang, Ching-Fu; Chen, Ziyang; Tan, Benjamin Zhi Wen; Sege, Jon; Wang, Changhong; Rubin, Yoram
2017-04-01
This paper investigates the environmental impacts of a major tunneling project in China. Of particular interest is the drawdown of the water table, due to its potential impacts on ecosystem health and on agricultural activity. Because data are scarce, the study pursues a Bayesian stochastic approach built around a numerical model, with the goal of deriving the posterior distributions of the dependent variables conditional on local data. The choice of the Bayesian approach is somewhat non-trivial given the scarcity of in-situ measurements; the thought guiding this selection is that prior distributions for the model input variables are valuable tools even when not all inputs are well constrained, and the Bayesian approach provides a good starting point for further updates as and when additional data become available. To construct effective priors, a systematic approach was developed and implemented for building informative priors from other, well-documented sites that bear geological and hydrological similarity to the target site (the Mingtang tunneling project). The approach is built around two classes of similarity criteria: a physically-based set and an additional set covering epistemic criteria. The prior construction strategy was implemented for the hydraulic conductivity of the various rock types at the site (granite and gneiss) and for modeling the geometry and conductivity of the fault zones. Additional elements of our strategy include (1) modeling the water table through bounding surfaces representing upper and lower limits, and (2) modeling the effective conductivity as a random variable (varying between realizations, not in space). The approach was tested successfully against its ability to predict the tunnel infiltration fluxes and against observations of drying soils.
NASA Astrophysics Data System (ADS)
Milovančević, Miloš; Nikolić, Vlastimir; Anđelković, Boban
2017-01-01
Vibration-based structural health monitoring is widely recognized as an attractive strategy for early damage detection in civil structures. Vibration monitoring and prediction are important for any system, since they can avert many unpredictable behaviors; properly managed vibration monitoring can ensure economic and safe operation. Potential for further improvement of vibration monitoring lies in improving current control strategies, one option being the introduction of model predictive control. Multistep-ahead predictive models of vibration are the starting point for a successful model predictive strategy. For the purpose of this article, predictive models were created for vibration monitoring of planetary power transmissions in pellet mills. The models were developed using a novel method based on ANFIS (adaptive neuro-fuzzy inference system). The aim of this study is to investigate the potential of ANFIS for selecting the most relevant variables for predictive models of vibration monitoring of pellet mill power transmissions. The vibration data are collected by PIC (Programmable Interface Controller) microcontrollers. The goal of predictive vibration monitoring of planetary power transmissions in pellet mills is to indicate deterioration in the vibration of the power transmissions before actual failure occurs. The ANFIS variable selection process was implemented to detect the predominant variables affecting the prediction of vibration monitoring, and to select the minimal input subset from the initial set of input variables: current and lagged variables (up to 11 steps) of vibration. The obtained results could be used to simplify predictive methods so as to avoid multiple input variables; models with fewer inputs were preferred because of overfitting between training and testing data.
While the obtained results are promising, further work is required in order to get results that could be directly applied in practice.
Using a Bayesian network to predict barrier island geomorphologic characteristics
Gutierrez, Ben; Plant, Nathaniel G.; Thieler, E. Robert; Turecek, Aaron
2015-01-01
Quantifying geomorphic variability of coastal environments is important for understanding and describing the vulnerability of coastal topography, infrastructure, and ecosystems to future storms and sea level rise. Here we use a Bayesian network (BN) to test the importance of multiple interactions between barrier island geomorphic variables. This approach models complex interactions and handles uncertainty, which is intrinsic to future sea level rise, storminess, or anthropogenic processes (e.g., beach nourishment and other forms of coastal management). The BN was developed and tested at Assateague Island, Maryland/Virginia, USA, a barrier island with sufficient geomorphic and temporal variability to evaluate our approach. We tested the ability to predict dune height, beach width, and beach height variables using inputs that included longer-term, larger-scale, or external variables (historical shoreline change rates, distances to inlets, barrier width, mean barrier elevation, and anthropogenic modification). Data sets from three different years spanning nearly a decade sampled substantial temporal variability and serve as a proxy for analysis of future conditions. We show that distinct geomorphic conditions are associated with different long-term shoreline change rates and that the most skillful predictions of dune height, beach width, and beach height depend on including multiple input variables simultaneously. The predictive relationships are robust to variations in the amount of input data and to variations in model complexity. The resulting model can be used to evaluate scenarios related to coastal management plans and/or future scenarios where shoreline change rates may differ from those observed historically.
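The Bayesian network idea above can be illustrated with a toy discrete network. The variables, categories, and probability values below are hypothetical placeholders; the actual BN at Assateague Island was learned from survey data and uses many more interacting variables:

```python
# Prior over long-term shoreline change regime (hypothetical values)
p_change = {"erosion": 0.5, "stable": 0.3, "accretion": 0.2}

# Conditional probability table P(dune_height | regime), also hypothetical
p_dune_given_change = {
    "erosion":   {"low": 0.7, "medium": 0.2, "high": 0.1},
    "stable":    {"low": 0.3, "medium": 0.5, "high": 0.2},
    "accretion": {"low": 0.1, "medium": 0.3, "high": 0.6},
}

def predict_dune(evidence_change=None):
    """Posterior over dune-height class, optionally conditioned on the
    observed shoreline-change regime."""
    if evidence_change is not None:
        return dict(p_dune_given_change[evidence_change])
    # No evidence: marginalize over the regime
    post = {"low": 0.0, "medium": 0.0, "high": 0.0}
    for regime, pr in p_change.items():
        for dune, pd in p_dune_given_change[regime].items():
            post[dune] += pr * pd
    return post
```

Conditioning on evidence (here, the change regime) sharpens the prediction relative to the marginal, which is the mechanism the study exploits when multiple inputs are supplied simultaneously.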
Symbolic PathFinder: Symbolic Execution of Java Bytecode
NASA Technical Reports Server (NTRS)
Pasareanu, Corina S.; Rungta, Neha
2010-01-01
Symbolic Pathfinder (SPF) combines symbolic execution with model checking and constraint solving for automated test case generation and error detection in Java programs with unspecified inputs. In this tool, programs are executed on symbolic inputs representing multiple concrete inputs. Values of variables are represented as constraints generated from the analysis of Java bytecode. The constraints are solved using off-the-shelf solvers to generate test inputs guaranteed to achieve complex coverage criteria. SPF has been used successfully at NASA, in academia, and in industry.
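The core idea, path conditions collected per branch and then solved to produce covering test inputs, can be illustrated with a toy example. SPF itself works on Java bytecode with real constraint solvers; here a brute-force search over a small integer domain stands in for the off-the-shelf solver, and the path conditions are written by hand rather than extracted automatically:

```python
from itertools import product

# Program under test: branches on unspecified integer inputs x, y
def program(x, y):
    if x > y:
        if x + y == 10:
            return "A"
        return "B"
    return "C"

# Path conditions a symbolic executor would gather, one per feasible path
path_conditions = [
    lambda x, y: x > y and x + y == 10,   # path -> "A"
    lambda x, y: x > y and x + y != 10,   # path -> "B"
    lambda x, y: not (x > y),             # path -> "C"
]

def solve(condition, domain=range(-10, 11)):
    """Stand-in for a constraint solver: brute-force a satisfying input."""
    for x, y in product(domain, domain):
        if condition(x, y):
            return x, y
    return None

# One concrete test input per path -> full branch coverage
test_inputs = [solve(pc) for pc in path_conditions]
```

Solving one constraint system per path is what yields the coverage guarantee mentioned in the abstract: each generated input drives execution down exactly the path whose condition it satisfies.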
Variability of African Farming Systems from Phenological Analysis of NDVI Time Series
NASA Technical Reports Server (NTRS)
Vrieling, Anton; deBeurs, K. M.; Brown, Molly E.
2011-01-01
Food security exists when people have access to sufficient, safe and nutritious food at all times to meet their dietary needs. The natural resource base is one of the many factors affecting food security; its variability and decline create problems for local food production. In this study we characterize vegetation phenology for sub-Saharan Africa and assess variability and trends of phenological indicators based on NDVI time series from 1982 to 2006. We focus on NDVI cumulated over the season (cumNDVI), which is a proxy for net primary productivity. Results are aggregated at the level of major farming systems, while also determining spatial variability within farming systems. High temporal variability of cumNDVI occurs in semiarid and subhumid regions. The results show a large area of positive cumNDVI trends between Senegal and South Sudan. These correspond to positive CRU rainfall trends and relate to recovery after the 1980s droughts. We find significant negative cumNDVI trends near the south coast of West Africa (Guinea coast) and in Tanzania. For each farming system, causes of change and variability are discussed based on available literature (Appendix A). Although food security comprises more than the local natural resource base, our results can provide an input for food security analysis by identifying zones of high variability or downward trends. Farming systems are found to be a useful level of analysis: the diversity and trends found within farming system boundaries underline that farming systems are dynamic.
NASA Astrophysics Data System (ADS)
Wang, Tingting; Sun, Fubao; Xia, Jun; Liu, Wenbin; Sang, Yanfang
2017-04-01
In predicting how droughts and hydrological cycles will change in a warming climate, the change in atmospheric evaporative demand measured by pan evaporation (Epan) is one crucial element to be understood. Over the last decade, the derived partial differential (PD) form of the PenPan equation has been the prevailing approach for attributing changes in Epan worldwide. However, the independence among climatic variables required by the PD approach cannot be met using long-term observations. Here we designed a series of numerical experiments to attribute changes in Epan over China by detrending each climatic variable, i.e., an experimental detrending approach, to address the inter-correlation among climate variables, and made comparisons with the traditional PD method. The results show that the detrending approach is superior to the traditional PD method not only for a complex quantity with multiple variables and a mixed algorithm, like the aerodynamic component (Ep,A) and Epan, but also for a simple case like the radiative component (Ep,R). The major reason for this is the strong and significant inter-correlation of the input meteorological forcing. Very similar attribution results were achieved by the detrending approach and the PD method after eliminating the inter-correlation of the inputs through a randomization procedure. The contributions of Rh and Ta to net radiation, and thus to Ep,R, which are overlooked by the PD method but successfully detected by the detrending approach, provide some explanation for these comparison results. We adopted the control run from the detrending approach to adjust the PD method; the adjustment brought much improvement, proving it an effective way of attributing changes to Epan. Hence, the detrending approach and the adjusted PD method are recommended for attributing changes in hydrological models, to better understand and predict the water and energy cycles.
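The detrending step at the heart of the experimental approach can be sketched as follows. The forcing series and the linear "model" standing in for the aerodynamic component are invented for illustration; the study uses the full PenPan formulation and observed meteorology:

```python
import numpy as np

def detrend(series):
    """Remove the least-squares linear trend, keeping the series mean."""
    t = np.arange(len(series))
    slope, _ = np.polyfit(t, series, 1)
    return series - slope * (t - t.mean())

def trend(series):
    """Least-squares linear trend (slope per time step)."""
    t = np.arange(len(series))
    return np.polyfit(t, series, 1)[0]

# Hypothetical forcing: wind speed with an imposed declining trend plus noise
rng = np.random.default_rng(0)
t = np.arange(30)
u2 = 2.5 - 0.01 * t + 0.05 * rng.standard_normal(30)

def aero(u):
    """Stand-in for the aerodynamic component's dependence on wind speed."""
    return 1.2 * u

# Attribution by experiment: rerun the "model" with the variable detrended;
# the difference in output trends is that variable's contribution
contribution = trend(aero(u2)) - trend(aero(detrend(u2)))
```

Because the contribution is computed by rerunning the model rather than by a partial derivative, correlated trends in other inputs cannot leak into the attribution, which is the advantage the abstract claims over the PD method.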
Data envelopment analysis for estimating efficiency of intensive care units: a case study in Iran.
Bahrami, Mohammad Amin; Rafiei, Sima; Abedi, Mahdieh; Askari, Roohollah
2018-05-14
Purpose As hospitals are the most costly service providers in every healthcare system, special attention should be given to their performance in terms of resource allocation and consumption. The purpose of this paper is to evaluate technical, allocative and economic efficiency in intensive care units (ICUs) of hospitals affiliated with Yazd University of Medical Sciences (YUMS) in 2015. Design/methodology/approach This was a descriptive, analytical study conducted in ICUs of seven training hospitals affiliated with YUMS, using data envelopment analysis (DEA) in 2015. The numbers of physicians, nurses, active beds and equipment were regarded as input variables; bed occupancy rate, the number of discharged patients, and economic information such as bed price and physicians' fees were taken as output variables. Available data on the study variables were retrospectively gathered and analyzed with the Deap 2.1 software using the variable returns to scale methodology. Findings The study findings revealed average scores of allocative, economic, technical, managerial and scale efficiency of 0.956, 0.866, 0.883, 0.89 and 0.913, respectively. Regarding the latter three types of efficiency, five hospitals had desirable performance. Practical implications Given that additional costs due to an extra number of manpower or unnecessary capital resources impose economic pressure on hospitals, and that reduction of surplus production plays a major role in reducing such expenditures, it is suggested that departments with low efficiency reduce their input surpluses to achieve the optimal level of performance. Originality/value The authors applied a DEA approach to measure allocative, economic, technical, managerial and scale efficiency of the hospitals under study.
This is a helpful linear programming method which acts as a powerful and understandable approach for comparative performance assessment in healthcare settings and a guidance for healthcare managers to improve their departments' performance.
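As a sketch of the linear programming behind DEA, the input-oriented CCR envelopment model can be written with `scipy.optimize.linprog`. Note the study actually used Deap 2.1 with variable returns to scale; the constant-returns CCR form and the tiny ICU-style dataset below are simplifications for illustration:

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    """Input-oriented CCR efficiency of decision-making unit `o`.
    X: (n_dmus, n_inputs), Y: (n_dmus, n_outputs). Returns theta in (0, 1]."""
    n = X.shape[0]
    # Decision variables: [theta, lambda_1..lambda_n]; objective: min theta
    c = np.r_[1.0, np.zeros(n)]
    # Input constraints: sum_j lambda_j * x_ij - theta * x_io <= 0
    A_in = np.hstack([-X[[o]].T, X.T])
    b_in = np.zeros(X.shape[1])
    # Output constraints: -sum_j lambda_j * y_rj <= -y_ro
    A_out = np.hstack([np.zeros((Y.shape[1], 1)), -Y.T])
    b_out = -Y[o]
    res = linprog(c, A_ub=np.vstack([A_in, A_out]), b_ub=np.r_[b_in, b_out],
                  bounds=[(0, None)] * (n + 1), method="highs")
    return res.fun

# Hypothetical ICU data: inputs = [physicians, beds], output = [discharges]
X = np.array([[2.0, 10.0], [4.0, 20.0], [3.0, 12.0]])
Y = np.array([[100.0], [100.0], [120.0]])
scores = [ccr_efficiency(X, Y, o) for o in range(3)]
```

A score of 1 means no convex combination of peer units can produce the unit's outputs with proportionally fewer inputs; a score of 0.5 means the same outputs are attainable with half the inputs, which is exactly the "input surplus" the practical-implications paragraph refers to.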
Role of Updraft Velocity in Temporal Variability of Global Cloud Hydrometeor Number
NASA Technical Reports Server (NTRS)
Sullivan, Sylvia C.; Lee, Dong Min; Oreopoulos, Lazaros; Nenes, Athanasios
2016-01-01
Understanding how dynamical and aerosol inputs affect the temporal variability of hydrometeor formation in climate models will help to explain sources of model diversity in cloud forcing, to provide robust comparisons with data, and, ultimately, to reduce the uncertainty in estimates of the aerosol indirect effect. This variability attribution can be done at various spatial and temporal resolutions with metrics derived from online adjoint sensitivities of droplet and crystal number to relevant inputs. Such metrics are defined and calculated from simulations using the NASA Goddard Earth Observing System Model, Version 5 (GEOS-5) and the National Center for Atmospheric Research Community Atmosphere Model Version 5.1 (CAM5.1). Input updraft velocity fluctuations can explain as much as 48% of temporal variability in output ice crystal number and 61% in droplet number in GEOS-5 and up to 89% of temporal variability in output ice crystal number in CAM5.1. In both models, this vertical velocity attribution depends strongly on altitude. Despite its importance for hydrometeor formation, simulated vertical velocity distributions are rarely evaluated against observations due to the sparsity of relevant data. Coordinated effort by the atmospheric community to develop more consistent, observationally based updraft treatments will help to close this knowledge gap.
Role of updraft velocity in temporal variability of global cloud hydrometeor number
Sullivan, Sylvia C.; Lee, Dongmin; Oreopoulos, Lazaros; ...
2016-05-16
Understanding how dynamical and aerosol inputs affect the temporal variability of hydrometeor formation in climate models will help to explain sources of model diversity in cloud forcing, to provide robust comparisons with data, and, ultimately, to reduce the uncertainty in estimates of the aerosol indirect effect. This variability attribution can be done at various spatial and temporal resolutions with metrics derived from online adjoint sensitivities of droplet and crystal number to relevant inputs. Such metrics are defined and calculated from simulations using the NASA Goddard Earth Observing System Model, Version 5 (GEOS-5) and the National Center for Atmospheric Research Community Atmosphere Model Version 5.1 (CAM5.1). Input updraft velocity fluctuations can explain as much as 48% of temporal variability in output ice crystal number and 61% in droplet number in GEOS-5 and up to 89% of temporal variability in output ice crystal number in CAM5.1. In both models, this vertical velocity attribution depends strongly on altitude. Despite its importance for hydrometeor formation, simulated vertical velocity distributions are rarely evaluated against observations due to the sparsity of relevant data. Finally, coordinated effort by the atmospheric community to develop more consistent, observationally based updraft treatments will help to close this knowledge gap.
Role of updraft velocity in temporal variability of global cloud hydrometeor number
NASA Astrophysics Data System (ADS)
Sullivan, Sylvia C.; Lee, Dongmin; Oreopoulos, Lazaros; Nenes, Athanasios
2016-05-01
Understanding how dynamical and aerosol inputs affect the temporal variability of hydrometeor formation in climate models will help to explain sources of model diversity in cloud forcing, to provide robust comparisons with data, and, ultimately, to reduce the uncertainty in estimates of the aerosol indirect effect. This variability attribution can be done at various spatial and temporal resolutions with metrics derived from online adjoint sensitivities of droplet and crystal number to relevant inputs. Such metrics are defined and calculated from simulations using the NASA Goddard Earth Observing System Model, Version 5 (GEOS-5) and the National Center for Atmospheric Research Community Atmosphere Model Version 5.1 (CAM5.1). Input updraft velocity fluctuations can explain as much as 48% of temporal variability in output ice crystal number and 61% in droplet number in GEOS-5 and up to 89% of temporal variability in output ice crystal number in CAM5.1. In both models, this vertical velocity attribution depends strongly on altitude. Despite its importance for hydrometeor formation, simulated vertical velocity distributions are rarely evaluated against observations due to the sparsity of relevant data. Coordinated effort by the atmospheric community to develop more consistent, observationally based updraft treatments will help to close this knowledge gap.
Mattsson, Jonathan; Hedström, Annelie; Ashley, Richard M; Viklander, Maria
2015-09-15
Ever since the advent of major sewer construction in the 1850s, the issue of increased solids deposition in sewers due to changes in domestic wastewater inputs has been frequently debated. Three recent changes considered here are the introduction of kitchen sink food waste disposers (FWDs); rising levels of inputs of fat, oil and grease (FOG); and the installation of low-flush toilets (LFTs). In this review these changes have been examined with regard to potential solids depositional impacts on sewer systems and the managerial implications. The review indicates that each of the changes has the potential to cause an increase in solids deposition in sewers, and that this is likely to be more pronounced for the upstream reaches of networks that serve fewer households than the downstream parts, and for specific sewer features such as sags. The review has highlighted the importance of educational campaigns directed to the public to mitigate deposition, as many of the observed problems have been linked to domestic behaviour in regard to FOGs, FWDs and toilet flushing. A standardized monitoring procedure for repeat sewer blockage locations can also be a means to identify depositional hot-spots. Interactions between the various changes in inputs in the studies reviewed here indicated an increased potential for blockage formation, but this would need to be further substantiated. As the precise nature of these changes in inputs has been found to be variable, depending on lifestyles and type of installation, the additional problems that may arise pose particular challenges to sewer operators and managers because of the difficulty in generalizing the nature of the changes, particularly where retrofitting projects in households are being considered. The three types of changes to inputs reviewed here highlight the need to consider whether more or less solid waste from households should be diverted into sewers.
Sensor fusion methods for reducing false alarms in heart rate monitoring.
Borges, Gabriel; Brusamarello, Valner
2016-12-01
Automatic patient monitoring is an essential resource in hospitals for good health care management. While alarms caused by abnormal physiological conditions are important for the delivery of fast treatment, they can also be a source of unnecessary noise because of false alarms caused by electromagnetic interference or motion artifacts. One significant source of false alarms is related to heart rate: an alarm is triggered when the heart rhythm of the patient is too fast or too slow. In this work, the fusion of different physiological sensors is explored in order to create a robust heart rate estimate. A set of algorithms using a heart rate variability index, Bayesian inference, neural networks, fuzzy logic and majority voting is proposed to fuse the information from the electrocardiogram, arterial blood pressure and photoplethysmogram. Three kinds of information are extracted from each source, namely, heart rate variability, the heart rate difference between sensors, and the spectral analysis of low and high noise of each sensor; this information is used as input to the algorithms. Twenty recordings selected from the MIMIC database were used to validate the system. The results showed that neural network fusion had the best false alarm reduction, 92.5%, while the Bayesian technique had a reduction of 84.3%, fuzzy logic 80.6%, majority voting 72.5% and the heart rate variability index 67.5%. Therefore, the proposed algorithms showed good performance and could be useful in bedside monitors.
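The simplest of the fusion schemes listed, majority voting across channels, can be sketched as follows. The alarm thresholds and channel names are illustrative assumptions, not the paper's configuration:

```python
def majority_vote_alarm(hr_estimates, low=40.0, high=140.0):
    """Raise a bradycardia/tachycardia alarm only if most usable sensors agree.
    hr_estimates: dict of channel -> heart-rate estimate in bpm
    (None marks a channel deemed unusable, e.g. due to motion artifact)."""
    votes = [not (low <= hr <= high)
             for hr in hr_estimates.values() if hr is not None]
    return sum(votes) > len(votes) / 2 if votes else False

# ECG corrupted by a motion artifact reads 190 bpm; ABP and PPG disagree,
# so the alarm is suppressed (only 1 of 3 channels votes to alarm)
readings = {"ecg": 190.0, "abp": 82.0, "ppg": 80.0}
alarm = majority_vote_alarm(readings)
```

A single corrupted channel can no longer trigger the alarm on its own, which is precisely how fusion suppresses artifact-driven false alarms while a genuine arrhythmia, visible in most channels, still fires.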
Theofilatos, Athanasios; Yannis, George
2017-04-03
Understanding the various factors that affect accident risk is of particular concern to decision makers and researchers. The incorporation of real-time traffic and weather data constitutes a fruitful approach when analyzing accident risk. However, the vast majority of relevant research has no specific focus on vulnerable road users such as powered 2-wheelers (PTWs). Moreover, studies using data from urban roads and arterials are scarce. This study aims to add to the current knowledge by considering real-time traffic and weather data from 2 major urban arterials in the city of Athens, Greece, in order to estimate the effect of traffic, weather, and other characteristics on PTW accident involvement. Because of the high number of candidate variables, a random forest model was applied to reveal the most important variables. Then, the potentially significant variables were used as input to a Bayesian logistic regression model in order to reveal the magnitude of their effect on PTW accident involvement. The results of the analysis suggest that PTWs are more likely to be involved in multivehicle accidents than in single-vehicle accidents. It was also indicated that increased traffic flow and variations in speed have a significant influence on PTW accident involvement. On the other hand, weather characteristics were found to have no effect. The findings of this study can contribute to the understanding of accident mechanisms of PTWs and reduce PTW accident risk in urban arterials.
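The two-stage pipeline, importance screening followed by a logistic model on the surviving variables, can be sketched on synthetic data. Two simplifications are assumed and flagged here: a correlation-based importance proxy replaces the random forest, and plain gradient-descent logistic regression replaces the Bayesian logistic model; the data are invented to mirror the finding that traffic variation matters and weather does not:

```python
import numpy as np

def rank_by_importance(X, y):
    """Screening proxy: |correlation| of each candidate variable with the
    binary outcome (the study used a random forest for this step)."""
    scores = [abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])]
    return np.argsort(scores)[::-1]

def fit_logistic(X, y, lr=0.1, n_iter=2000):
    """Gradient-descent logistic regression, a stand-in for the paper's
    Bayesian logistic model."""
    Xb = np.c_[np.ones(len(X)), X]          # prepend intercept column
    w = np.zeros(Xb.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-np.clip(Xb @ w, -30, 30)))
        w += lr * Xb.T @ (y - p) / len(y)   # log-likelihood gradient step
    return w

# Synthetic illustration: flow/speed variation drives the accident label,
# weather is pure noise
rng = np.random.default_rng(1)
flow_var = rng.standard_normal(200)
weather = rng.standard_normal(200)
y = (flow_var + 0.3 * rng.standard_normal(200) > 0).astype(float)
X = np.c_[weather, flow_var]
top = rank_by_importance(X, y)      # flow_var (column 1) should rank first
w = fit_logistic(X[:, top[:1]], y)  # fit only on the top-ranked variable
```

Screening first keeps the regression small and interpretable when the candidate variable list is long, which is the motivation the abstract gives for the random forest stage.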
Capturing temporal and spatial variability in the chemistry of shallow permafrost ponds
NASA Astrophysics Data System (ADS)
Morison, Matthew Q.; Macrae, Merrin L.; Petrone, Richard M.; Fishback, LeeAnn
2017-12-01
Across the circumpolar north, the fate of small freshwater ponds and lakes (< 1 km2) has been the subject of scientific interest due to their ubiquity in the landscape, capacity to exchange carbon and energy with the atmosphere, and their potential to inform researchers about past climates through sediment records. A changing climate has implications for the capacity of ponds and lakes to support organisms and store carbon, which in turn has important feedbacks to climate change. Thus, an improved understanding of pond biogeochemistry is needed. To characterize spatial and temporal patterns in water column chemistry, a suite of tundra ponds were examined to answer the following research questions: (1) does temporal variability exceed spatial variability? (2) If temporal variability exists, do all ponds (or groups of ponds) behave in a similar temporal pattern, linked to seasonal hydrologic drivers or precipitation events? Six shallow ponds located in the Hudson Bay Lowlands region were monitored between May and October 2015 (inclusive, spanning the entire open-water period). The ponds span a range of biophysical conditions including pond area, perimeter, depth, and shoreline development. Water samples were collected regularly, both bimonthly over the ice-free season and intensively during and following a large summer storm event. Samples were analysed for nitrogen speciation (NO3-, NH4+, dissolved organic nitrogen) and major ions (Cl-, SO42-, K+, Ca2+, Mg2+, Na+). Across all ponds, temporal variability (across the season and within a single rain event) exceeded spatial variability (variation among ponds) in concentrations of several major species (Cl-, SO42-, K+, Ca2+, Na+). 
Evapoconcentration and dilution of pond water with precipitation and runoff inputs were the dominant processes influencing a set of chemical species which are hydrologically driven (Cl-, Na+, K+, Mg2+, dissolved organic nitrogen), whereas the dissolved inorganic nitrogen species were likely mediated by processes within ponds. This work demonstrates the importance of understanding hydrologically driven chemodynamics in permafrost ponds on multiple scales (seasonal and event scale).
Characterizing regional soil mineral composition using spectroscopy and geostatistics
Mulder, V.L.; de Bruin, S.; Weyermann, J.; Kokaly, Raymond F.; Schaepman, M.E.
2013-01-01
This work aims at improving the mapping of major mineral variability at regional scale using scale-dependent spatial variability observed in remote sensing data. Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) data and statistical methods were combined with laboratory-based mineral characterization of field samples to create maps of the distributions of clay, mica and carbonate minerals and their abundances. The Material Identification and Characterization Algorithm (MICA) was used to identify the spectrally-dominant minerals in field samples; these results were combined with ASTER data using multinomial logistic regression to map mineral distributions. X-ray diffraction (XRD) was used to quantify mineral composition in field samples. XRD results were combined with ASTER data using multiple linear regression to map mineral abundances. We tested whether smoothing of the ASTER data to match the scale of variability of the target sample would improve model correlations. Smoothing was done with Fixed Rank Kriging (FRK) to represent the medium and long-range spatial variability in the ASTER data. Stronger correlations resulted using the smoothed data compared to results obtained with the original data. Highest model accuracies came from using both medium and long-range scaled ASTER data as input to the statistical models. High correlation coefficients were obtained for the abundances of calcite and mica (R2 = 0.71 and 0.70, respectively). Moderately-high correlation coefficients were found for smectite and kaolinite (R2 = 0.57 and 0.45, respectively). Maps of mineral distributions, obtained by relating ASTER data to MICA analysis of field samples, were found to characterize major soil mineral variability (overall accuracies for mica, smectite and kaolinite were 76%, 89% and 86% respectively).
The results of this study suggest that the distributions of minerals and their abundances derived using FRK-smoothed ASTER data more closely match the spatial variability of soil and environmental properties at regional scale.
ERIC Educational Resources Information Center
Herr-Israel, Ellen; McCune, Lorraine
2011-01-01
In the period between sole use of single words and majority use of multiword utterances, children draw from their existing productive capability and conversational input to facilitate the eventual outcome of majority use of multiword utterances. During this period, children use word combinations that are not yet mature multiword utterances, termed…
ERIC Educational Resources Information Center
Coperthwaite, Corby A.; Knight, William E.
This study investigated the ability of student inputs, student involvements, and college environments to predict seven groups of academic majors. The research was conducted using a sample of college sophomores extracted from the High School and Beyond 1982 follow-up cohort, N=43,614 (weighted). Among the findings of the hierarchical discriminant…
Soft sensor modeling based on variable partition ensemble method for nonlinear batch processes
NASA Astrophysics Data System (ADS)
Wang, Li; Chen, Xiangguang; Yang, Kai; Jin, Huaiping
2017-01-01
Batch processes are typically characterized by nonlinearity and system uncertainty, so a conventional single model may be ill-suited. A local-learning soft sensor based on a variable partition ensemble method is developed for the quality prediction of nonlinear and non-Gaussian batch processes. A set of input variable subsets is obtained by bootstrapping and the PMI criterion. Multiple local Gaussian process regression (GPR) models are then developed, one for each local input variable subset. When a new test sample arrives, the posterior probability of each best-performing local model is estimated via Bayesian inference and used to combine the local GPR models into the final prediction. The proposed soft sensor is demonstrated on an industrial fed-batch chlortetracycline fermentation process.
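A minimal sketch of the local-model combination, assuming scikit-learn's GaussianProcessRegressor and using predictive certainty as a simplified proxy for the Bayesian posterior weights described in the abstract (the data and variable subsets are made up for illustration):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

# Hypothetical batch data: the quality variable depends mainly on input 0.
X = rng.uniform(-3, 3, size=(60, 2))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=60)

subsets = [[0], [0, 1]]          # stand-ins for bootstrapped variable subsets
models = []
for s in subsets:
    gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(),
                                   normalize_y=True)
    gpr.fit(X[:, s], y)           # one local GPR per input-variable subset
    models.append(gpr)

def ensemble_predict(x_new):
    """Combine the local GPR predictions. As a simplified proxy for the
    Bayesian posterior weighting in the abstract, each local model is
    weighted by its predictive certainty (inverse predictive std)."""
    mus, inv_sd = [], []
    for s, m in zip(subsets, models):
        mu, sd = m.predict(x_new[s].reshape(1, -1), return_std=True)
        mus.append(mu[0])
        inv_sd.append(1.0 / max(sd[0], 1e-6))
    w = np.array(inv_sd) / np.sum(inv_sd)   # weights sum to 1
    return float(np.dot(w, mus)), w
```

The inverse-variance weighting is an assumption; the paper's actual scheme derives weights from Bayesian inference over the local models' posterior probabilities.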
Kantún-Manzano, C A; Herrera-Silveira, J A; Arcega-Cabrera, F
2018-01-01
The influence of coastal submarine groundwater discharges (SGD) on the distribution and abundance of seagrass meadows was investigated. In 2012, hydrological variability, nutrient variability in sediments, and the biotic characteristics of two seagrass beds, one with SGD present and one without, were studied. Findings showed that SGD inputs were related to one dominant seagrass species. To further understand this, a generalized additive model (GAM) was used to explore the relationship between seagrass biomass and environmental conditions (water and sediment variables). Salinity range (21-35.5 PSU) was the most influential variable (85%), explaining why H. wrightii was the sole plant species present at the SGD site. At the site without SGD, a GAM could not be fitted, since the environmental variables could not explain more than 60% of the total variance. This research shows the relevance of monitoring SGD inputs in coastal karstic areas, since they significantly affect the biotic characteristics of seagrass beds.
Multiple-input multiple-output causal strategies for gene selection.
Bontempi, Gianluca; Haibe-Kains, Benjamin; Desmedt, Christine; Sotiriou, Christos; Quackenbush, John
2011-11-25
Traditional strategies for selecting variables in high-dimensional classification problems aim to find sets of maximally relevant variables able to explain the target variations. Although these techniques may be effective in terms of generalization accuracy, they often do not reveal direct causes, essentially because high correlation (or relevance) does not imply causation. In this study, we show how to efficiently incorporate causal information into gene selection by moving from a single-input single-output to a multiple-input multiple-output setting. We show in a synthetic case study that a better prioritization of causal variables can be obtained by considering a relevance score which incorporates a causal term. In addition, we show, in a meta-analysis of six publicly available breast cancer microarray datasets, that the improvement also occurs in terms of accuracy. The biological interpretation of the results confirms the potential of a causal approach to gene selection. Integrating causal information into gene selection algorithms is effective both in terms of prediction accuracy and biological interpretation.
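One hedged way to read "a relevance score which incorporates a causal term" is to penalize variables whose association with the target vanishes after conditioning on another variable; the scoring rule and the equal 0.5/0.5 weighting below are illustrative assumptions, not the authors' exact criterion:

```python
import numpy as np

def _residual(a, b):
    """Residual of a after regressing out b (with intercept)."""
    B = np.column_stack([np.ones(len(b)), b])
    coef, *_ = np.linalg.lstsq(B, a, rcond=None)
    return a - B @ coef

def _corr(a, b):
    return float(np.corrcoef(a, b)[0, 1])

def causal_relevance(X, y):
    """For each variable, combine plain relevance |corr(x_i, y)| with a
    causal term: the minimum |partial corr(x_i, y | x_j)| over the other
    variables. Indirect (confounded) variables are suppressed because
    conditioning on their common cause drives the partial correlation
    toward zero."""
    n_vars = X.shape[1]
    scores = []
    for i in range(n_vars):
        rel = abs(_corr(X[:, i], y))
        partials = []
        for j in range(n_vars):
            if j == i:
                continue
            ri = _residual(X[:, i], X[:, j])
            ry = _residual(y, X[:, j])
            partials.append(abs(_corr(ri, ry)))
        causal = min(partials) if partials else rel
        scores.append(0.5 * rel + 0.5 * causal)  # weighting is an assumption
    return scores
```

On synthetic data where x2 correlates with y only through a common cause x1, the score ranks the direct cause x1 above the confounded x2 even though both have high plain relevance.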
Hammerstrom, Donald J.
2013-10-15
A method for managing the charging and discharging of batteries wherein at least one battery is connected to a battery charger, and the battery charger is connected to a power supply. A plurality of controllers in communication with one another are provided, each of the controllers monitoring a subset of input variables. A set of charging constraints may then be generated for each controller as a function of the subset of input variables. A set of objectives for each controller may also be generated. A preferred charge rate for each controller is generated as a function of the set of objectives, the charging constraints, or both, using an algorithm that accounts for each of the preferred charge rates for each of the controllers and/or that does not violate any of the charging constraints. A current flow between the battery and the battery charger is then provided at the actual charge rate.
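A toy sketch of the coordination step, assuming (hypothetically) that each controller reports a constraint cap and a preferred rate, and that the coordinator takes the most conservative of both; the patent does not specify its algorithm at this level of detail:

```python
from dataclasses import dataclass

@dataclass
class Controller:
    """One controller monitoring a subset of input variables (the names
    'thermal'/'price' are hypothetical examples of such subsets)."""
    name: str
    max_rate_a: float      # charging constraint derived from its inputs (A)
    preferred_a: float     # charge rate meeting its own objectives (A)

def coordinate(controllers):
    """Pick an actual charge rate that honors every controller's
    constraint while tracking the preferred rates: here, the smallest
    preferred rate, clipped to the tightest constraint, so no
    charging constraint is ever violated."""
    cap = min(c.max_rate_a for c in controllers)
    preferred = min(c.preferred_a for c in controllers)
    return min(preferred, cap)
```

For example, a thermal controller capping the rate at 6 A combined with a price controller preferring 4 A yields an actual rate of 4 A.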
Ambrose, Sophie E; Walker, Elizabeth A; Unflat-Berry, Lauren M; Oleson, Jacob J; Moeller, Mary Pat
2015-01-01
The primary objective of this study was to examine the quantity and quality of caregiver talk directed to children who are hard of hearing (CHH) compared with children with normal hearing (CNH). For the CHH only, the study explored how caregiver input changed as a function of child age (18 months versus 3 years), which child and family factors contributed to variance in caregiver linguistic input at 18 months and 3 years, and how caregiver talk at 18 months related to child language outcomes at 3 years. Participants were 59 CNH and 156 children with bilateral, mild-to-severe hearing loss. When children were approximately 18 months and/or 3 years of age, caregivers and children participated in a 5-min semistructured, conversational interaction. Interactions were transcribed and coded for two features of caregiver input representing quantity (number of total utterances and number of total words) and four features representing quality (number of different words, mean length of utterance in morphemes, proportion of utterances that were high level, and proportion of utterances that were directing). In addition, at the 18-month visit, parents completed a standardized questionnaire regarding their child's communication development. At the 3-year visit, a clinician administered a standardized language measure. At the 18-month visit, the CHH were exposed to a greater proportion of directing utterances than the CNH. At the 3-year visit, there were significant differences between the CNH and CHH for number of total words and all four of the quality variables, with the CHH being exposed to fewer words and lower quality input. Caregivers generally provided higher quality input to CHH at the 3-year visit compared with the 18-month visit. At the 18-month visit, quantity variables, but not quality variables, were related to several child and family factors. At the 3-year visit, the variable most strongly related to caregiver input was child language. 
Longitudinal analyses indicated that quality, but not quantity, of caregiver linguistic input at 18 months was related to child language abilities at 3 years, with directing utterances accounting for significant unique variance in child language outcomes. Although caregivers of CHH increased their use of quality features of linguistic input over time, the differences when compared with CNH suggest that some caregivers may need additional support to provide their children with optimal language learning environments. This is particularly important given the relationships that were identified between quality features of caregivers' linguistic input and children's language abilities. Family supports should include a focus on developing a style that is conversational-eliciting as opposed to directive.
Changes in Chesapeake Bay Hypoxia over the Past Century
NASA Astrophysics Data System (ADS)
Friedrichs, M. A.; Kaufman, D. E.; Najjar, R.; Tian, H.; Zhang, B.; Yao, Y.
2016-02-01
The Chesapeake Bay, one of the world's largest estuaries, is among the many coastal systems where hypoxia is a major concern and where dissolved oxygen thus represents a critical factor in determining the health of the Bay's ecosystem. Over the past century, the population of the Chesapeake Bay region has almost quadrupled, greatly modifying land cover and management practices within the watershed. Simultaneously, the Chesapeake Bay has been experiencing a high degree of climate change, including increases in temperature, precipitation, and precipitation intensity. Together, these changes have resulted in significantly increased riverine nutrient inputs to the Bay. In order to examine how interdecadal changes in riverine nitrogen input affect biogeochemical cycling and dissolved oxygen concentrations in Chesapeake Bay, a land-estuarine-ocean biogeochemical modeling system has been developed for this region. Riverine inputs of nitrogen to the Bay are computed from a terrestrial ecosystem model (the Dynamic Land Ecosystem Model; DLEM) that resolves riverine discharge variability on scales of days to years. This temporally varying discharge is then used as input to the estuarine-carbon-biogeochemical model embedded in the Regional Ocean Modeling System (ROMS), which provides estimates of the oxygen concentrations and nitrogen fluxes within the Bay as well as advective exports from the Bay to the adjacent Mid-Atlantic Bight shelf. Simulation results from this linked modeling system for the present (early 2000s) have been extensively evaluated with in situ and remotely sensed data. Longer-term simulations are used to isolate the effect of increased riverine nitrogen loading on dissolved oxygen concentrations and biogeochemical cycling within the Chesapeake Bay.
Insights on the Optical Properties of Estuarine DOM - Hydrological and Biological Influences.
Santos, Luísa; Pinto, António; Filipe, Olga; Cunha, Ângela; Santos, Eduarda B H; Almeida, Adelaide
2016-01-01
Dissolved organic matter (DOM) in estuaries derives from a diverse array of both allochthonous and autochthonous sources. In the estuarine system Ria de Aveiro (Portugal), the seasonality and the sources of the fraction of DOM that absorbs light (CDOM) were inferred using its optical and fluorescence properties. CDOM parameters known to be affected by aromaticity and molecular weight were correlated with physical, chemical and meteorological parameters. Two sites, representative of the marine and brackish water zones of the estuary and with different hydrological characteristics, were surveyed regularly over two years in order to determine the major influences on CDOM properties. Terrestrially derived compounds are the predominant source of CDOM in the estuary during most of the year, and the two estuarine zones presented distinct amounts as well as distinct absorbance and fluorescence characteristics. Freshwater inputs have a major influence on the dynamics of CDOM in the estuary, in particular in the brackish water zone, where they accounted for approximately 60% of CDOM variability. With a lower magnitude, biological productivity also affected the optical properties of CDOM, explaining about 15% of its variability. Therefore, climate changes related to seasonal and inter-annual variations in precipitation amounts might significantly impact the dynamics of CDOM, influencing its photochemistry and microbiological activities in estuarine systems.
Global sensitivity analysis in wind energy assessment
NASA Astrophysics Data System (ADS)
Tsvetkova, O.; Ouarda, T. B.
2012-12-01
Wind energy is one of the most promising renewable energy sources. Nevertheless, it is not yet a common source of energy, although there is enough wind potential to supply the world's energy demand. One of the most prominent obstacles to employing wind energy is the uncertainty associated with wind energy assessment. Global sensitivity analysis (SA) studies how the variation of input parameters in an abstract model affects the variation of the variable of interest, the output variable. It also provides ways to calculate explicit measures of the importance of input variables (first-order and total-effect sensitivity indices) with regard to their influence on the variation of the output variable. Two methods of determining the above-mentioned indices were applied and compared: the brute force method and the best practice estimation procedure. In this study, a methodology for conducting global SA of wind energy assessment at the planning stage is proposed. Three sampling strategies which are part of the SA procedure were compared: sampling based on Sobol' sequences (SBSS), Latin hypercube sampling (LHS) and pseudo-random sampling (PRS). A case study of Masdar City, a showcase of sustainable living in the UAE, is used to exemplify the application of the proposed methodology. Sources of uncertainty in wind energy assessment are very diverse. In the case study, the following were identified as uncertain input parameters: the Weibull shape parameter, the Weibull scale parameter, availability of a wind turbine, lifetime of a turbine, air density, electrical losses, blade losses, and ineffective time losses. Ineffective time losses are defined as losses during the time when the actual wind speed is lower than the cut-in speed or higher than the cut-out speed. The output variable in the case study is the lifetime energy production. The most influential factors for lifetime energy production are identified with the ranking of the total-effect sensitivity indices.
The results of the present research show that the brute force method is best suited to wind assessment purposes and that SBSS outperforms the other sampling strategies in the majority of cases. The results indicate that the Weibull scale parameter, turbine lifetime and Weibull shape parameter are the three most influential variables in the case study setting. The following conclusions can be drawn from these results: 1) SBSS should be recommended for use in Monte Carlo experiments, 2) the brute force method should be recommended for conducting sensitivity analysis in wind resource assessment, and 3) little variation in the Weibull scale parameter causes significant variation in energy production. The presence of the two distribution parameters (the Weibull shape and scale) among the top three influential variables emphasizes the importance of (a) choosing an appropriate distribution to model the wind regime at a site and (b) accurately estimating its parameters. This can be labeled the most important conclusion of this research because it opens a field for further research which the authors believe could change the wind energy field tremendously.
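The brute-force Monte Carlo estimation of first-order indices can be sketched with Saltelli-style pick-freeze sampling; the two-factor linear "energy" model below is a stand-in for the wind assessment model (an assumption), chosen so the exact indices are known: S1 = 16/17 and S2 = 1/17.

```python
import numpy as np

def sobol_first_order(model, n_vars, n_samples=16384, seed=0):
    """Brute-force Monte Carlo estimate of first-order Sobol' indices
    using the pick-freeze scheme with plain pseudo-random sampling
    (PRS); SBSS or LHS would only change how A and B are drawn."""
    rng = np.random.default_rng(seed)
    A = rng.random((n_samples, n_vars))
    B = rng.random((n_samples, n_vars))
    fA, fB = model(A), model(B)
    var = np.var(np.concatenate([fA, fB]))
    indices = []
    for i in range(n_vars):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                      # freeze column i from B
        fABi = model(ABi)
        indices.append(np.mean(fB * (fABi - fA)) / var)
    return indices

# Toy stand-in for lifetime energy production, dominated by factor 0.
model = lambda X: 4.0 * X[:, 0] + 1.0 * X[:, 1]
S = sobol_first_order(model, n_vars=2)
```

For this linear model with uniform inputs, the analytic indices are S1 = 16/17 ≈ 0.94 and S2 = 1/17 ≈ 0.06, so the ranking of influential variables falls out directly from the estimates.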
A Framework to Guide the Assessment of Human-Machine Systems.
Stowers, Kimberly; Oglesby, James; Sonesh, Shirley; Leyva, Kevin; Iwig, Chelsea; Salas, Eduardo
2017-03-01
We have developed a framework for guiding measurement in human-machine systems. The assessment of safety and performance in human-machine systems often relies on direct measurement, such as tracking reaction time and accidents. However, safety and performance emerge from the combination of several variables. The assessment of precursors to safety and performance is thus an important part of predicting and improving outcomes in human-machine systems. As part of an in-depth literature analysis involving peer-reviewed, empirical articles, we located and classified variables important to human-machine systems, giving a snapshot of the state of the science on human-machine system safety and performance. Using this information, we created a framework of safety and performance in human-machine systems. This framework details several inputs and processes that collectively influence safety and performance. Inputs are divided into human, machine, and environmental inputs. Processes are divided into attitudes, behaviors, and cognitive variables. Each class of inputs influences the processes and, subsequently, the outcomes that emerge in human-machine systems. This framework offers a useful starting point for understanding the current state of the science and for measuring many of the complex variables relating to safety and performance in human-machine systems. It can be applied to the design, development, and implementation of automated machines in spaceflight, military, and health care settings. We present a hypothetical example in our write-up of how it can be used to aid in project success.
Creating a non-linear total sediment load formula using polynomial best subset regression model
NASA Astrophysics Data System (ADS)
Okcu, Davut; Pektas, Ali Osman; Uyumaz, Ali
2016-08-01
The aim of this study is to derive a new total sediment load formula which is more accurate and has fewer application constraints than the well-known formulae of the literature. The five best-known stream power concept sediment formulae, which are approved by ASCE, are used for benchmarking on a wide range of datasets that includes both field and flume (lab) observations. The dimensionless parameters of these widely used formulae are used as inputs in a new regression approach, called Polynomial Best Subset Regression (PBSR) analysis. The aim of the PBSR analysis is to fit and test all possible combinations of the input variables and select the best subset. All input variables, together with their second and third powers, are included in the regression to test the possible relations between the explanatory variables and the dependent variable. While selecting the best subset, a multistep approach is used that depends on significance values and also on the multicollinearity degrees of the inputs. The new formula is compared to the others on a holdout dataset, and detailed performance investigations are conducted for the field and lab subsets within this holdout data. Different goodness-of-fit statistics are used, as they represent different perspectives on model accuracy. After the detailed comparisons were carried out, we identified the most accurate equation, which is also applicable to both flume and river data. On the field dataset in particular, the prediction performance of the proposed formula outperformed the benchmark formulations.
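A compact sketch of the best-subset idea: build a candidate pool of each input and its second and third powers, exhaustively fit all small subsets, and keep the best. The adjusted-R² selection criterion below replaces the paper's multistep significance/multicollinearity screening and is an assumption:

```python
import itertools
import numpy as np

def best_subset_poly(X, y, max_terms=3):
    """Exhaustively fit OLS on every subset (up to max_terms) of the
    candidate pool {x_i, x_i^2, x_i^3} and keep the subset with the
    highest adjusted R^2. Returns (adjusted R^2, term names, coefs)."""
    pool, names = [], []
    for i in range(X.shape[1]):
        for p in (1, 2, 3):
            pool.append(X[:, i] ** p)
            names.append(f"x{i}^{p}")
    pool = np.column_stack(pool)
    n = len(y)
    best = (-np.inf, None, None)
    for k in range(1, max_terms + 1):
        for combo in itertools.combinations(range(pool.shape[1]), k):
            Z = np.column_stack([np.ones(n), pool[:, combo]])
            coef, *_ = np.linalg.lstsq(Z, y, rcond=None)
            resid = y - Z @ coef
            r2 = 1.0 - resid.var() / y.var()
            adj = 1.0 - (1.0 - r2) * (n - 1) / (n - k - 1)  # penalize size
            if adj > best[0]:
                best = (adj, [names[c] for c in combo], coef)
    return best
```

On data generated as y ≈ 2·x0², the search recovers the x0² term, illustrating how the subset search identifies the relevant nonlinear predictors.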
NASA Astrophysics Data System (ADS)
Flores, S. C.; Hill, T. M.; Russell, A. D.; Brooks, G.
2010-12-01
We are conducting investigations of calcareous benthic foraminifera acquired from Tomales Bay, California to reconstruct geochemical conditions of the bay for the past ~400 years, a time period of both natural and anthropogenic environmental change. Tomales Bay, located ~50 km northwest of San Francisco, is a long (20.4 km), narrow (0.7-1.7 km) and shallow (2.0-6.0 m) bay that exhibits long residence times and is stratified in the summer due to seasonal hypersalinity. Tomales Bay is a unique environment for climate and environmental change research because of the wide documented variability in carbonate parameters (pH, alkalinity, DIC) due to freshwater input from terrestrial sources that decreases aragonite and calcite saturation states. The historical record provided by benthic foraminiferal species and geochemistry, sedimentary carbon (TOC and TIC) analyses, and investigations of recent (Rose-Bengal stained) foraminifera are being utilized to constrain 3 major processes: 1) the range of temperature and salinity shifts over the past 400 years, 2) the relative dominance of marine- vs. fresh-water sources to the bay, and 3) the extent to which freshwater input and runoff may influence water chemistry (saturation state, Ω) with impacts on foraminiferal calcification. Four sediment cores were acquired in 2009 and 2010, and subsequently age-dated utilizing radiocarbon analyses (seven samples). Results indicate an increase in preservation of agglutinated versus calcareous foraminiferal tests (shells) since the mid-1900s, and greater abundances of agglutinated tests found near freshwater sources. The major calcareous foraminifera present in the record include Elphidium hannai, Elphidium excavatum, Ammonia tepida, and Buccella frigida. Results from oxygen and carbon stable isotope analyses as well as total organic carbon (by weight) for all the cores will also be presented.
These results will be compared to modern observations and instrumental records of temperature, salinity and pH variability to understand the context of historical changes compared to modern shifts due to human influence.
Sobol' sensitivity analysis for stressor impacts on honeybee ...
We employ Monte Carlo simulation and nonlinear sensitivity analysis techniques to describe the dynamics of a bee exposure model, VarroaPop. Daily simulations are performed of hive population trajectories, taking into account queen strength, foraging success, mite impacts, weather, colony resources, population structure, and other important variables. This allows us to test the effects of defined pesticide exposure scenarios versus controlled simulations that lack pesticide exposure. The daily resolution of the model also allows us to conditionally identify sensitivity metrics. We use the variance-based global decomposition sensitivity analysis method, Sobol', to assess first- and second-order parameter sensitivities within VarroaPop, allowing us to determine how variance in the output is attributed to each of the input variables across different exposure scenarios. Simulations with VarroaPop indicate queen strength, forager life span and pesticide toxicity parameters are consistent, critical inputs for colony dynamics. Further analysis also reveals that the relative importance of these parameters fluctuates throughout the simulation period according to the status of other inputs. Our preliminary results show that model variability is conditional and can be attributed to different parameters depending on different timescales. By using sensitivity analysis to assess model output and variability, calibrations of simulation models can be better informed to yield more
Evaluation of soil C-14 data for estimating inert organic matter in the RothC model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rethemeyer, J.; Grootes, P.M.; Brodowski, S.
2007-07-01
Changes in soil organic carbon stocks were simulated with the Rothamsted carbon (RothC) model. We evaluated the calculation of a major input variable, the amount of inert organic matter (IOM), using measurable data. Three different approaches for quantifying IOM were applied to soils with mainly recent organic matter and with carbon contribution from fossil fuels: 1) IOM estimation via total soil organic carbon (SOC); 2) through bulk soil radiocarbon and a mass balance; and 3) by quantifying the portion of black carbon via a specific marker. The results were highly variable in the soil containing lignite-derived carbon and ranged from 8% to 52% inert carbon of total SOC, while nearly similar amounts of 5% to 8% were determined in the soil with mainly recent organic matter. We simulated carbon dynamics in both soils using the 3 approaches for quantifying IOM in combination with carbon inputs derived from measured crop yields. In the soil with recent organic matter, all approaches gave a nearly similar good agreement between measured and modeled data, while in the soil with a fossil carbon admixture, only the C-14 approach was successful in matching the measured data. Although C-14 was useful for initializing RothC, care should be taken when interpreting SOC dynamics in soils containing carbon from fossil fuels, since these reflect the contribution from both natural and anthropogenic carbon sources.
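Approaches 1 and 2 can be sketched numerically. The SOC-based estimate below uses the Falloon et al. (1998) regression commonly used to initialize RothC IOM, and the mass balance assumes fossil carbon carries 0 pMC; both are simplifications of the paper's procedures:

```python
def iom_falloon(soc_t_ha):
    """Approach 1: estimate RothC inert organic matter from total SOC
    using the Falloon et al. (1998) regression IOM = 0.049 * SOC**1.139
    (both in t C / ha)."""
    return 0.049 * soc_t_ha ** 1.139

def fossil_fraction(pmc_sample, pmc_recent=100.0):
    """Approach 2 (radiocarbon mass balance): fraction of 14C-free
    (fossil) carbon in a bulk sample, assuming the non-fossil pool has
    pmc_recent percent modern carbon and fossil carbon contributes
    0 pMC."""
    return 1.0 - pmc_sample / pmc_recent
```

For example, 20 t C/ha of SOC gives an IOM estimate of about 1.5 t C/ha, and a bulk sample at 80 pMC against a 100 pMC recent pool implies a 20% fossil carbon contribution.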
Parameter and state estimation in a Neisseria meningitidis model: A study case of Niger
NASA Astrophysics Data System (ADS)
Bowong, S.; Mountaga, L.; Bah, A.; Tewa, J. J.; Kurths, J.
2016-12-01
Neisseria meningitidis (Nm) is a major cause of bacterial meningitis outbreaks in Africa and the Middle East. The availability of yearly reported meningitis cases in the African meningitis belt offers the opportunity to analyze the transmission dynamics and the impact of control strategies. In this paper, we propose a method for the estimation of state variables that are not accessible to measurements and of an unknown parameter in a Nm model. We suppose that the yearly number of Nm-induced deaths and the total population are known inputs, which can be obtained from data, and that the yearly number of new Nm cases is the model output. We also suppose that the Nm transmission rate is an unknown parameter. We first show how the recruitment rate into the population can be estimated using real data on the total population and Nm-induced mortality. Then, we use an auxiliary system called an observer, whose solutions converge exponentially to those of the original model. This observer does not use the unknown infection transmission rate but only the known inputs and the model output. This allows us to estimate unmeasured state variables, such as the number of carriers that play an important role in the transmission of the infection and the total number of infected individuals within a human community. Finally, we also provide a simple method to estimate the unknown Nm transmission rate. In order to validate the estimation results, numerical simulations are conducted using real data for Niger.
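The observer idea can be illustrated on a generic discrete-time linear system: the observer copies the model dynamics and corrects itself with the measured output, so its state converges to the true (unmeasured) state. The matrices below are illustrative stand-ins, not the Nm model:

```python
import numpy as np

# True dynamics x_{k+1} = A x_k with only y_k = C x_k measured.
# The observer gain L is chosen so that A - L C is stable (its
# eigenvalues here are ~0.74 and ~0.46, both inside the unit circle),
# which makes the estimation error e_{k+1} = (A - L C) e_k decay.
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
C = np.array([[1.0, 0.0]])
L = np.array([[0.5],
              [0.2]])

def run_observer(x0, xhat0, steps=50):
    """Propagate the true state and the observer in parallel; the
    observer sees only the output y, never the full state."""
    x, xhat = x0.copy(), xhat0.copy()
    for _ in range(steps):
        y = C @ x                               # measured model output
        xhat = A @ xhat + L @ (y - C @ xhat)    # output-error correction
        x = A @ x
    return x, xhat
```

Starting the observer from a wrong initial guess, the estimate converges to the true state after a few dozen steps, mirroring the exponential convergence claimed for the Nm observer.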
Detection, attribution, and sensitivity of trends toward earlier streamflow in the Sierra Nevada
Maurer, E.P.; Stewart, I.T.; Bonfils, Celine; Duffy, P.B.; Cayan, D.
2007-01-01
Observed changes in the timing of snowmelt-dominated streamflow in the western United States are often linked to anthropogenic or other external causes. We assess whether observed streamflow timing changes can be statistically attributed to external forcing, or whether they still lie within the bounds of natural (internal) variability, for four large Sierra Nevada (CA) basins at inflow points to major reservoirs. Streamflow timing is measured by "center timing" (CT), the day when half the annual flow has passed a given point. We use a physically based hydrology model driven by meteorological input from a global climate model to quantify the natural variability in CT trends. Estimated 50-year trends in CT due to natural climate variability often exceed estimated actual CT trends from 1950 to 1999. Thus, although observed trends in CT to date may be statistically significant, they cannot yet be statistically attributed to external influences on climate. We estimate that projected CT changes at the four major reservoir inflows will, with 90% confidence, exceed those from natural variability within 1-4 decades or 4-8 decades, depending on rates of future greenhouse gas emissions. To identify areas most likely to exhibit CT changes in response to rising temperatures, we calculate changes in CT under temperature increases from 1 to 5 °C. We find that areas with average winter temperatures between −2 °C and −4 °C are most likely to respond with significant CT shifts. Correspondingly, elevations from 2000 to 2800 m are most sensitive to temperature increases, with CT changes exceeding 45 days (earlier) relative to 1961-1990. Copyright 2007 by the American Geophysical Union.
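Center timing itself is straightforward to compute from a daily flow record; this sketch assumes a 1-based day-of-water-year convention:

```python
import numpy as np

def center_timing(daily_flow):
    """Day of the water year on which half of the annual flow volume
    has passed a given point: the 'center timing' (CT) metric."""
    cum = np.cumsum(daily_flow)
    # first day whose cumulative volume reaches half the annual total
    return int(np.searchsorted(cum, 0.5 * cum[-1])) + 1  # 1-based day
```

With uniform daily flow, CT lands mid-year (day 183 of 365); concentrating the same volume earlier in the year pulls CT earlier, which is the shift the study tracks.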
Chen, Xiaodong Phoenix; Sullivan, Amy M; Alseidi, Adnan; Kwakye, Gifty; Smink, Douglas S
Providing resident autonomy in the operating room (OR) is one of the major challenges for surgical educators today. The purpose of this study was to explore what approaches expert surgical teachers use to assess residents' readiness for autonomy in the OR. We particularly focused on the assessments that experts make prior to conducting the surgical time-out. We conducted semistructured in-depth interviews with expert surgical teachers from March 2016 to September 2016. Purposeful sampling and snowball sampling were applied to identify and recruit expert surgical teachers from general surgery residency programs across the United States to represent a range of clinical subspecialties. All interviews were audio-recorded, deidentified, and transcribed. We applied the Framework Method of content analysis, discussed and reached final consensus on the themes. We interviewed 15 expert teachers from 9 institutions. The majority (13/15) were Program or Associate Program Directors; 47% (7/15) primarily performed complex surgical operations (e.g., endocrine surgery). Five themes regarding how expert surgical teachers determine residents' readiness for OR autonomy before the surgical time-out emerged. These included 3 domains of evidence elicited about the resident (resident characteristics, medical knowledge, and beyond the current OR case), 1 variable relating to attending characteristics, and 1 variable composed of contextual factors. Experts obtained one or more examples of evidence, and adjusted residents' initial autonomy using factors from the attending variable and the context variable. Expert surgical teachers' assessments of residents' readiness for OR autonomy included 5 key components. Better understanding these inputs can contribute to both faculty and resident development, enabling increased resident autonomy and preparation for independent practice. Copyright © 2017 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Rahmati, Mehdi
2017-08-01
Developing accurate and reliable pedo-transfer functions (PTFs) to predict soil non-readily available characteristics is one of the topics of greatest concern in soil science, and selecting the most appropriate predictors is a crucial factor in PTF development. The group method of data handling (GMDH), which finds an approximate relationship between a set of input and output variables, not only provides an explicit procedure to select the most essential PTF input variables, but also results in more accurate and reliable estimates than other commonly applied methodologies. Therefore, the current research aimed to apply GMDH, in comparison with multivariate linear regression (MLR) and artificial neural networks (ANN), to develop several PTFs to predict soil cumulative infiltration point-wise at specific time intervals (0.5-45 min) using soil readily available characteristics (RACs). In this regard, soil infiltration curves as well as several soil RACs including soil primary particles (clay (CC), silt (Si), and sand (Sa)), saturated hydraulic conductivity (Ks), bulk (Db) and particle (Dp) densities, organic carbon (OC), wet-aggregate stability (WAS), electrical conductivity (EC), and soil antecedent (θi) and field-saturated (θfs) water contents were measured at 134 different points in the Lighvan watershed, northwest of Iran. Then, applying the GMDH, MLR, and ANN methodologies, several PTFs were developed to predict cumulative infiltration using two sets of selected soil RACs, including and excluding Ks. According to the test data, the PTFs developed by the GMDH and MLR procedures using all soil RACs including Ks gave more accurate (with E values of 0.673-0.963) and reliable (with CV values lower than 11 percent) predictions of cumulative infiltration at the different specific time steps. In contrast, the ANN procedure had lower accuracy (with E values of 0.356-0.890) and reliability (with CV values up to 50 percent) compared to GMDH and MLR.
The results also revealed that excluding Ks from the input variable list caused around a 30 percent decrease in PTF accuracy for all applied procedures. However, excluding Ks appears to yield more practical PTFs, especially in the case of the GMDH network, which then uses input variables that are less time-consuming to measure than Ks. In general, it is concluded that GMDH provides more accurate and reliable estimates of cumulative infiltration (a non-readily available soil characteristic) with a minimum set of input variables (2-4) and can be a promising strategy for modeling soil infiltration, combining the advantages of the ANN and MLR methodologies.
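The core GMDH idea described above, fitting short polynomials of input pairs and keeping only the pair that generalizes best, can be sketched as follows. This is a minimal one-layer illustration with synthetic data, not the paper's model; the predictor names and data are hypothetical stand-ins for the soil RACs.

```python
import numpy as np

def fit_pair(x, y, t):
    # Ivakhnenko polynomial: z = a0 + a1*x + a2*y + a3*x^2 + a4*y^2 + a5*x*y
    A = np.column_stack([np.ones_like(x), x, y, x**2, y**2, x*y])
    coef, *_ = np.linalg.lstsq(A, t, rcond=None)
    return coef

def predict_pair(coef, x, y):
    A = np.column_stack([np.ones_like(x), x, y, x**2, y**2, x*y])
    return A @ coef

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 4))            # four candidate predictors (stand-ins for RACs)
t = 2 * X[:, 0] + X[:, 1]**2 + 0.05 * rng.standard_normal(200)  # target, e.g. infiltration

train, val = slice(0, 150), slice(150, 200)
best = None
for i in range(4):
    for j in range(i + 1, 4):
        coef = fit_pair(X[train, i], X[train, j], t[train])
        err = np.mean((predict_pair(coef, X[val, i], X[val, j]) - t[val])**2)
        if best is None or err < best[0]:
            best = (err, i, j, coef)

err, i, j, coef = best
print(f"selected inputs: ({i}, {j}), validation MSE: {err:.4f}")
```

The validation split does the "explicit input selection" the abstract credits to GMDH: pairs lacking an informative predictor show much larger validation error and are discarded. A full GMDH network would stack further layers on the surviving outputs.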
Chang, Heejun; Jung, Il-Won; Strecker, Angela L.; Wise, Daniel; Lafrenz, Martin; Shandas, Vivek; Yeakley, Alan; Pan, Yangdong; Johnson, Gunnar; Psaris, Mike
2013-01-01
We investigated water resource vulnerability in the US portion of the Columbia River basin (CRB) using multiple indicators representing water supply, water demand, and water quality. At the US county scale, spatial analysis was conducted using various biophysical and socio-economic indicators that control water vulnerability. Water supply vulnerability and water demand vulnerability exhibited a similar spatial clustering of hotspots in areas where agricultural lands and variability of precipitation were high but dam storage capacity was low. The hotspots of water quality vulnerability were clustered around the main stem of the Columbia River where major population and agricultural centres are located. This equal-weight multi-indicator approach confirmed that different drivers were associated with different vulnerability maps in the sub-basins of the CRB. Water quality variables are more important than water supply and water demand variables in the Willamette River basin, whereas water supply and demand variables are more important than water quality variables in the Upper Snake and Upper Columbia River basins. This result suggests that current water resources management and practices drive much of the vulnerability within the study area. The analysis suggests the need for increased coordination of water management across multiple levels of water governance to reduce water resource vulnerability in the CRB, and for a potentially different weighting scheme that explicitly takes into account the input of various water stakeholders.
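An equal-weight multi-indicator index of the kind used above amounts to normalizing each indicator and averaging. A minimal sketch with hypothetical county values (the indicator columns and numbers are invented for illustration, not the CRB data):

```python
import numpy as np

# Hypothetical county-level indicators (rows: counties); higher = more vulnerable.
# Columns might represent, e.g., irrigated-land fraction, precipitation CV,
# and (1 - dam storage index).
indicators = np.array([
    [0.8, 0.6, 0.9],
    [0.2, 0.3, 0.1],
    [0.5, 0.9, 0.4],
])

# Min-max normalize each indicator to [0, 1] so differing units cannot dominate.
lo, hi = indicators.min(axis=0), indicators.max(axis=0)
norm = (indicators - lo) / (hi - lo)

# Equal-weight composite vulnerability score per county.
vulnerability = norm.mean(axis=1)
print(vulnerability)
```

Swapping the final `mean` for a weighted average is where a stakeholder-informed weighting scheme, as the abstract suggests, would enter.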
Alaverdashvili, Mariam; Paterson, Phyllis G.; Bradley, Michael P.
2015-01-01
Background: The rat photothrombotic stroke model can induce brain infarcts with reasonable biological variability. Nevertheless, we observed unexplained high inter-individual variability despite using a rigorous protocol. Of the three major determinants of infarct volume, photosensitive dye concentration and illumination period were strictly controlled, whereas undetected fluctuation in laser power output was suspected to account for the variability. New method: The frequently utilized Diode Pumped Solid State (DPSS) lasers emitting 532 nm (green) light can exhibit fluctuations in output power due to temperature and input power alterations. The polarization properties of the Nd:YAG and Nd:YVO4 crystals commonly used in these lasers are another potential source of fluctuation, since one means of controlling output power uses a polarizer with a variable transmission axis. Thus, the properties of DPSS lasers and the relationship between power output and infarct size were explored. Results: DPSS laser beam intensity showed considerable variation. Either a polarizer or a variable neutral density filter allowed adjustment of a polarized laser beam to the desired intensity. When the beam was unpolarized, the experimenter was restricted to using a variable neutral density filter. Comparison with existing method(s): Our refined approach includes continuous monitoring of DPSS laser intensity via beam sampling using a pellicle beamsplitter and photodiode sensor. This guarantees the desired beam intensity at the targeted brain area during stroke induction, with the intensity controlled either through a polarizer or variable neutral density filter. Conclusions: Continuous monitoring and control of laser beam intensity is critical for ensuring consistent infarct size. PMID:25840363
How model and input uncertainty impact maize yield simulations in West Africa
NASA Astrophysics Data System (ADS)
Waha, Katharina; Huth, Neil; Carberry, Peter; Wang, Enli
2015-02-01
Crop models are common tools for simulating crop yields and crop production in studies on food security and global change. Various uncertainties exist, however, not only in the model design and model parameters, but also, and perhaps even more importantly, in the soil, climate, and management input data. We analyze the performance of the point-scale crop model APSIM and the global-scale crop model LPJmL with different climate and soil conditions under different agricultural management in the low-input maize-growing areas of Burkina Faso, West Africa. We test the models' response to different levels of input information, from little to detailed information on soil, climate (1961-2000), and agricultural management, and compare the models' ability to represent the observed spatial (between locations) and temporal (between years) variability in crop yields. We found that the resolution of soil, climate, and management information influences the simulated crop yields in both models. However, the difference between models is larger than between input data sets, and larger between simulations with different climate and management information than between simulations with different soil information. The observed spatial variability can be represented well by both models even with little information on soils and management, but APSIM simulates a higher variation between single locations than LPJmL. The agreement of simulated and observed temporal variability is lower due to non-climatic factors, e.g., investment in agricultural research and development between 1987 and 1991 in Burkina Faso, which resulted in a doubling of maize yields. The findings of our study highlight the importance of scale and model choice and show that the most detailed input data do not necessarily improve model performance.
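The spatial (between-location) versus temporal (between-year) variability contrast above can be quantified with coefficients of variation along the two axes of a yield matrix. A minimal sketch with invented yield numbers (not the Burkina Faso data):

```python
import numpy as np

# Hypothetical maize yields (t/ha); rows = locations, columns = years.
yields = np.array([
    [1.0, 1.2, 0.9, 1.1],
    [2.0, 2.4, 1.8, 2.2],
    [1.5, 1.8, 1.35, 1.65],
])

def cv(a, axis):
    # Coefficient of variation along one axis of the yield matrix.
    return a.std(axis=axis) / a.mean(axis=axis)

spatial_cv = cv(yields, axis=0)   # variability between locations, per year
temporal_cv = cv(yields, axis=1)  # variability between years, per location

print(f"mean spatial CV: {spatial_cv.mean():.3f}, mean temporal CV: {temporal_cv.mean():.3f}")
```

Comparing these two summaries for observed and simulated yields is one simple way to check whether a model reproduces spatial variation better than temporal variation, as reported in the abstract.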
Konstantinidis, Spyridon; Titchener-Hooker, Nigel; Velayudhan, Ajoy
2017-08-01
Bioprocess development studies often involve the investigation of numerical and categorical inputs via the adoption of Design of Experiments (DoE) techniques. An attractive alternative is the deployment of a grid compatible Simplex variant which has been shown to yield optima rapidly and consistently. In this work, the method is combined with dummy variables and it is deployed in three case studies wherein spaces are comprised of both categorical and numerical inputs, a situation intractable by traditional Simplex methods. The first study employs in silico data and lays out the dummy variable methodology. The latter two employ experimental data from chromatography based studies performed with the filter-plate and miniature column High Throughput (HT) techniques. The solute of interest in the former case study was a monoclonal antibody whereas the latter dealt with the separation of a binary system of model proteins. The implemented approach prevented the stranding of the Simplex method at local optima, due to the arbitrary handling of the categorical inputs, and allowed for the concurrent optimization of numerical and categorical, multilevel and/or dichotomous, inputs. The deployment of the Simplex method, combined with dummy variables, was therefore entirely successful in identifying and characterizing global optima in all three case studies. The Simplex-based method was further shown to be of equivalent efficiency to a DoE-based approach, represented here by D-Optimal designs. Such an approach failed, however, to both capture trends and identify optima, and led to poor operating conditions. It is suggested that the Simplex-variant is suited to development activities involving numerical and categorical inputs in early bioprocess development. © 2017 The Authors. Biotechnology Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
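The dummy-variable device at the heart of the study above lets a numerical search method handle a categorical input by representing each level as a one-hot vector and snapping continuous proposals back to a valid level. The following sketch uses invented names (resin types, a toy pH objective); it illustrates the encoding only, not the paper's grid-compatible Simplex variant.

```python
import numpy as np

# Hypothetical mixed space: one numeric input (pH) and one categorical input
# (resin type) with three levels, encoded as dummy (one-hot) variables.
resins = ["A", "B", "C"]

def encode(resin):
    d = np.zeros(len(resins))
    d[resins.index(resin)] = 1.0
    return d

def decode(dummies):
    # A numerical optimizer proposes continuous dummy values; snap to the
    # nearest valid level by taking the largest component.
    return resins[int(np.argmax(dummies))]

# Toy objective: yield peaks at pH 7 with resin "B" (purely illustrative).
def objective(ph, resin):
    return -((ph - 7.0) ** 2) + (2.0 if resin == "B" else 0.0)

point = np.concatenate([[6.5], encode("B")])
print(decode(point[1:]), objective(point[0], decode(point[1:])))
```

Because decoding always lands on a valid level, the optimizer can move through the dummy coordinates without ever evaluating an undefined categorical setting, which is what prevents the arbitrary handling of categorical inputs the abstract warns about.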
Eckley, Chris S; Branfireun, Brian
2009-08-01
This research focuses on mercury (Hg) mobilization in stormwater runoff from an urban roadway. The objectives were to determine: how the transport of surface-derived Hg changes during an event hydrograph; the influence of antecedent dry days on the runoff Hg load; the relationship between total suspended sediments (TSS) and Hg transport; and the fate of new Hg input in rain and its relative importance to the runoff Hg load. Simulated rain events were used to control variables to elucidate transport processes, and a Hg stable isotope was used to trace the fate of Hg inputs in rain. The results showed that Hg concentrations were highest at the beginning of the hydrograph and were predominantly particulate bound (HgP). On average, almost 50% of the total Hg load was transported during the first minutes of runoff, underscoring the importance of the initial runoff to load calculations. Hg accumulated on the road surface during dry periods, so the runoff Hg load increased with the number of antecedent dry days. The Hg concentrations in runoff were significantly correlated with TSS concentrations (mean r(2)=0.94+/-0.09). The results from the isotope experiments showed that new Hg inputs quickly become associated with surface particles and that the majority of Hg in runoff is derived from non-event, surface-derived sources.
A review of surrogate models and their application to groundwater modeling
NASA Astrophysics Data System (ADS)
Asher, M. J.; Croke, B. F. W.; Jakeman, A. J.; Peeters, L. J. M.
2015-08-01
The spatially and temporally variable parameters and inputs to complex groundwater models typically result in long runtimes which hinder comprehensive calibration, sensitivity, and uncertainty analysis. Surrogate modeling aims to provide a simpler, and hence faster, model which emulates the specified output of a more complex model as a function of its inputs and parameters. In this review paper, we summarize surrogate modeling techniques in three categories: data-driven, projection, and hierarchical-based approaches. Data-driven surrogates approximate a groundwater model through an empirical model that captures the input-output mapping of the original model. Projection-based models reduce the dimensionality of the parameter space by projecting the governing equations onto a basis of orthonormal vectors. In hierarchical or multifidelity methods the surrogate is created by simplifying the representation of the physical system, such as by ignoring certain processes, or reducing the numerical resolution. In discussing the application of these methods to groundwater modeling, we note several imbalances in the existing literature: a large body of work on data-driven approaches seemingly ignores major drawbacks to the methods; only a fraction of the literature focuses on creating surrogates to reproduce outputs of fully distributed groundwater models, despite these being ubiquitous in practice; and a number of the more advanced surrogate modeling methods are yet to be fully applied in a groundwater modeling context.
Yu, Haitao; Dhingra, Rishi R; Dick, Thomas E; Galán, Roberto F
2017-01-01
Neural activity generally displays irregular firing patterns even in circuits with apparently regular outputs, such as motor pattern generators, in which the output frequency fluctuates randomly around a mean value. This "circuit noise" is inherited from the random firing of single neurons, which emerges from stochastic ion channel gating (channel noise), spontaneous neurotransmitter release, and its diffusion and binding to synaptic receptors. Here we demonstrate how to expand conductance-based network models that are originally deterministic to include realistic, physiological noise, focusing on stochastic ion channel gating. We illustrate this procedure with a well-established conductance-based model of the respiratory pattern generator, which allows us to investigate how channel noise affects neural dynamics at the circuit level and, in particular, to understand the relationship between the respiratory pattern and its breath-to-breath variability. We show that as the channel number increases, the duration of inspiration and expiration varies, and so does the coefficient of variation of the breath-to-breath interval, which attains a minimum when the mean duration of expiration slightly exceeds that of inspiration. For small channel numbers, the variability of the expiratory phase dominates over that of the inspiratory phase, and vice versa for large channel numbers. Among the four different cell types in the respiratory pattern generator, pacemaker cells exhibit the highest sensitivity to channel noise. The model shows that suppressing input from the pons leads to longer inspiratory phases, a reduction in breathing frequency, and larger breath-to-breath variability, whereas enhanced input from the raphe nucleus increases breathing frequency without changing its pattern. A major source of noise in neuronal circuits is the "flickering" of ion currents passing through the neurons' membranes (channel noise), which cannot be suppressed experimentally. 
Computational simulations are therefore the best way to investigate the effects of this physiological noise by manipulating its level at will. We investigate the role of noise in the respiratory pattern generator and show that endogenous, breath-to-breath variability is tightly linked to the respiratory pattern. Copyright © 2017 the American Physiological Society.
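The dependence of circuit variability on channel number described above has a simple statistical core: at steady state the number of open channels is approximately binomial, so the coefficient of variation of the open fraction falls as 1/sqrt(N). A minimal sketch (generic two-state gating, not the respiratory-circuit model itself):

```python
import numpy as np

rng = np.random.default_rng(42)
p_open, trials = 0.3, 20000   # illustrative single-channel open probability

# At steady state, open-channel count ~ Binomial(N, p), so the open-fraction
# fluctuation (channel noise) shrinks as the channel number N grows.
cvs = {}
for N in (10, 100, 1000):
    open_frac = rng.binomial(N, p_open, size=trials) / N
    cvs[N] = open_frac.std() / open_frac.mean()
    predicted = np.sqrt((1 - p_open) / (N * p_open))
    print(f"N={N:5d}  empirical CV={cvs[N]:.3f}  predicted={predicted:.3f}")
```

This is why expanding a deterministic conductance-based model with stochastic gating lets the channel number act as a noise dial, exactly the manipulation the simulations exploit.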
Sympathovagal imbalance in hyperthyroidism.
Burggraaf, J; Tulen, J H; Lalezari, S; Schoemaker, R C; De Meyer, P H; Meinders, A E; Cohen, A F; Pijl, H
2001-07-01
We assessed sympathovagal balance in thyrotoxicosis. Fourteen patients with Graves' hyperthyroidism were studied before and after 7 days of treatment with propranolol (40 mg 3 times a day) and in the euthyroid state. Data were compared with those obtained in a group of age-, sex-, and weight-matched controls. Autonomic inputs to the heart were assessed by power spectral analysis of heart rate variability. Systemic exposure to sympathetic neurohormones was estimated on the basis of 24-h urinary catecholamine excretion. The spectral power in the high-frequency domain was considerably reduced in hyperthyroid patients, indicating diminished vagal inputs to the heart. Increased heart rate and mid-frequency/high-frequency power ratio in the presence of reduced total spectral power and increased urinary catecholamine excretion strongly suggest enhanced sympathetic inputs in thyrotoxicosis. All abnormal features of autonomic balance were completely restored to normal in the euthyroid state. beta-Adrenoceptor antagonism reduced heart rate in hyperthyroid patients but did not significantly affect heart rate variability or catecholamine excretion. This is in keeping with the concept of a joint disruption of sympathetic and vagal inputs to the heart underlying changes in heart rate variability. Thus thyrotoxicosis is characterized by profound sympathovagal imbalance, brought about by increased sympathetic activity in the presence of diminished vagal tone.
Patterns of innervation of neurones in the inferior mesenteric ganglion of the cat.
Julé, Y; Krier, J; Szurszewski, J H
1983-01-01
The patterns of peripheral and central synaptic input to non-spontaneous, irregular discharging and regular discharging neurones in the inferior mesenteric ganglion of the cat were studied in vitro using intracellular recording techniques. All three types of neurones in rostral and caudal lobes received central synaptic input primarily from L3 and L4 spinal cord segments. Since irregular discharging neurones received synaptic input from intraganglionic regular discharging neurones, some of the central input to irregular discharging neurones may have been relayed through the regular discharging neurones. In the rostral lobes of the ganglion, more than 70% of the non-spontaneous and irregular discharging neurones tested received peripheral synaptic input from the lumbar colonic, intermesenteric and left and right hypogastric nerves. Most of the regular discharging neurones tested received synaptic input from the intermesenteric and lumbar colonic nerves; none of the regular discharging neurones received synaptic input from the hypogastric nerves. Some of the peripheral synaptic input from the lumbar colonic and intermesenteric nerves to irregular discharging neurones may have been relayed through the regular discharging neurones. Axons of non-spontaneous and irregular discharging neurones located in the rostral lobes travelled to the periphery exclusively in the lumbar colonic nerves. Antidromic responses were not observed in regular discharging neurones during stimulation of any of the major peripheral nerve trunks. This suggests these neurones were intraganglionic. In the caudal lobes, irregular discharging neurones received a similar pattern of peripheral synaptic input as did irregular discharging neurones located in the rostral lobes. The majority of irregular discharging neurones in the caudal lobes projected their axons to the periphery through the lumbar colonic nerves. 
Non-spontaneous neurones in the caudal lobes, in contrast to those located in the rostral lobes, received peripheral synaptic input primarily from the hypogastric nerves. Axons of the majority of non-spontaneous neurones located in the caudal lobes travelled to the periphery through hypogastric nerves. The results suggest that non-spontaneous neurones and irregular discharging neurones in the rostral lobes and the majority of irregular discharging neurones in the caudal lobes transact and integrate neural commands destined for abdominal viscera supplied by the lumbar colonic nerves. Non-spontaneous neurones in the caudal lobes transact and integrate neural commands destined for pelvic viscera supplied by the hypogastric nerves. PMID:6655582
Sensitivity and uncertainty of input sensor accuracy for grass-based reference evapotranspiration
USDA-ARS?s Scientific Manuscript database
Quantification of evapotranspiration (ET) in agricultural environments is becoming of increasing importance throughout the world, thus understanding input variability of relevant sensors is of paramount importance as well. The Colorado Agricultural and Meteorological Network (CoAgMet) and the Florid...
Assessment of input uncertainty by seasonally categorized latent variables using SWAT
USDA-ARS?s Scientific Manuscript database
Watershed processes have been explored with sophisticated simulation models for the past few decades. It has been stated that uncertainty attributed to alternative sources such as model parameters, forcing inputs, and measured data should be incorporated during the simulation process. Among varyin...
Speaker Invariance for Phonetic Information: an fMRI Investigation
Salvata, Caden; Blumstein, Sheila E.; Myers, Emily B.
2012-01-01
The current study explored how listeners map the variable acoustic input onto a common sound structure representation while being able to retain phonetic detail to distinguish among the identity of talkers. An adaptation paradigm was utilized to examine areas which showed an equal neural response (equal release from adaptation) to phonetic change when spoken by the same speaker and when spoken by two different speakers, and insensitivity (failure to show release from adaptation) when the same phonetic input was spoken by a different speaker. Neural areas which showed speaker invariance were located in the anterior portion of the middle superior temporal gyrus bilaterally. These findings provide support for the view that speaker normalization processes allow for the translation of a variable speech input to a common abstract sound structure. That this process appears to occur early in the processing stream, recruiting temporal structures, suggests that this mapping takes place prelexically, before sound structure input is mapped on to lexical representations. PMID:23264714
The input and output management of solid waste using DEA models: A case study at Jengka, Pahang
NASA Astrophysics Data System (ADS)
Mohamed, Siti Rosiah; Ghazali, Nur Fadzrina Mohd; Mohd, Ainun Hafizah
2017-08-01
Data Envelopment Analysis (DEA), as a tool for obtaining performance indices, has been used extensively across many organizational sectors. Improving the efficiency of Decision Making Units (DMUs) can be impractical because some inputs and outputs are uncontrollable, and in certain situations this produces weak efficiency scores that reflect the impact of the operating environment. Based on data from Alam Flora Sdn. Bhd. Jengka, we determine the efficiency of solid waste management (SWM) in the town of Jengka, Pahang, using the input-oriented (CCR-I) and output-oriented (CCR-O) CCR models of DEA and the duality formulation with average input and output vectors. Three input variables (collection length in meters, collection frequency per week in hours, and number of garbage trucks) and two output variables (collection frequency and total solid waste collected in kilograms) are analyzed. In conclusion, only three of the 23 roads are efficient, achieving an efficiency score of 1, while the remaining 20 roads are managed inefficiently.
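The input-oriented CCR efficiency scores behind results like the one above are obtained by solving one small linear program per DMU. A minimal envelopment-form sketch with a toy single-input, single-output data set (not the Alam Flora data):

```python
import numpy as np
from scipy.optimize import linprog

# Input-oriented CCR envelopment model for DMU o:
#   minimize theta  s.t.  sum_j lambda_j * x_j <= theta * x_o  (inputs)
#                         sum_j lambda_j * y_j >= y_o          (outputs)
#                         lambda_j >= 0
def ccr_efficiency(X, Y, o):
    n, m = X.shape          # DMUs, inputs
    s = Y.shape[1]          # outputs
    c = np.zeros(1 + n); c[0] = 1.0          # decision vars: [theta, lambda_1..n]
    A_ub, b_ub = [], []
    for i in range(m):                        # input constraints
        A_ub.append(np.concatenate([[-X[o, i]], X[:, i]])); b_ub.append(0.0)
    for r in range(s):                        # output constraints (flipped to <=)
        A_ub.append(np.concatenate([[0.0], -Y[:, r]])); b_ub.append(-Y[o, r])
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.fun

X = np.array([[2.0], [4.0], [8.0]])   # e.g. collection length per road
Y = np.array([[2.0], [4.0], [4.0]])   # e.g. waste collected per road
effs = [ccr_efficiency(X, Y, o) for o in range(3)]
print(effs)   # DMUs 0 and 1 score 1 (efficient); DMU 2 is inefficient
```

Roads scoring exactly 1, like the three efficient roads in the study, lie on the efficient frontier; a score of 0.5 means the same outputs could in principle be produced with half the inputs.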
Data-driven process decomposition and robust online distributed modelling for large-scale processes
NASA Astrophysics Data System (ADS)
Shu, Zhang; Lijuan, Li; Lijuan, Yao; Shipin, Yang; Tao, Zou
2018-02-01
With the increasing attention paid to networked control, system decomposition and distributed models are of significant importance in the implementation of model-based control strategies. In this paper, a data-driven system decomposition and online distributed subsystem modelling algorithm is proposed for large-scale chemical processes. The key controlled variables are first partitioned into several clusters by the affinity propagation clustering algorithm, and each cluster can be regarded as a subsystem. The inputs of each subsystem are then selected by offline canonical correlation analysis between all process variables and its controlled variables. Process decomposition is thus realised after the screening of input and output variables. Once the decomposition is complete, online subsystem modelling is carried out by recursively renewing the samples block-wise. The proposed algorithm was applied to the Tennessee Eastman process and its validity was verified.
NASA Technical Reports Server (NTRS)
Fortenbaugh, R. L.
1980-01-01
Equations incorporated in a VATOL six degree of freedom off-line digital simulation program and data for the Vought SF-121 VATOL aircraft concept which served as the baseline for the development of this program are presented. The equations and data are intended to facilitate the development of a piloted VATOL simulation. The equation presentation format is to state the equations which define a particular model segment. Listings of constants required to quantify the model segment, input variables required to exercise the model segment, and output variables required by other model segments are included. In several instances a series of input or output variables are followed by a section number in parentheses which identifies the model segment of origination or termination of those variables.
Mathematical models of the simplest fuzzy PI/PD controllers with skewed input and output fuzzy sets.
Mohan, B M; Sinha, Arpita
2008-07-01
This paper unveils mathematical models for fuzzy PI/PD controllers which employ two skewed fuzzy sets for each of the two input variables and three skewed fuzzy sets for the output variable. The basic constituents of these models are Gamma-type and L-type membership functions for each input, trapezoidal/triangular membership functions for the output, the intersection/algebraic product triangular norms, the maximum/drastic sum triangular conorms, the Mamdani minimum/Larsen product/drastic product inference methods, and the center of sums defuzzification method. The existing simplest fuzzy PI/PD controller structures derived via symmetrical fuzzy sets become special cases of the mathematical models revealed in this paper. Finally, a numerical example along with its simulation results is included to demonstrate the effectiveness of the simplest fuzzy PI controllers.
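The ingredients named above can be assembled into a small runnable sketch: L-type and Gamma-type input sets with skewed (asymmetric) breakpoints, Mamdani minimum inference over a four-rule base, and a simplified weighted-centroid defuzzification standing in for the paper's full center-of-sums computation. The breakpoints and output centroids are illustrative choices, not values from the paper.

```python
import numpy as np

def gamma_mf(x, a, b):
    # Gamma-type membership: 0 below a, ramping linearly to 1 at b.
    return float(np.clip((x - a) / (b - a), 0.0, 1.0))

def l_mf(x, a, b):
    # L-type membership: 1 below a, ramping linearly to 0 at b.
    return 1.0 - gamma_mf(x, a, b)

def fuzzy_pi_increment(e, de):
    # Skewed breakpoints (-1, 0.5), asymmetric about zero, for both inputs.
    eN, eP = l_mf(e, -1.0, 0.5), gamma_mf(e, -1.0, 0.5)
    dN, dP = l_mf(de, -1.0, 0.5), gamma_mf(de, -1.0, 0.5)
    # Rule base with Mamdani minimum inference; output centroids NB=-1, Z=0, PB=1.
    rules = [(min(eN, dN), -1.0), (min(eN, dP), 0.0),
             (min(eP, dN), 0.0), (min(eP, dP), 1.0)]
    w = sum(f for f, _ in rules)
    return sum(f * c for f, c in rules) / w   # simplified centroid average

print(fuzzy_pi_increment(1.0, 1.0), fuzzy_pi_increment(-2.0, -2.0))
```

With symmetric breakpoints this structure reduces to the "simplest" controllers the paper treats as special cases; skewing the breakpoints shifts the controller's response surface, which is precisely the generalization the derived models capture.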
DOE Office of Scientific and Technical Information (OSTI.GOV)
Behrang, M.A.; Assareh, E.; Ghanbarzadeh, A.
2010-08-15
The main objective of the present study is to predict daily global solar radiation (GSR) on a horizontal surface, based on meteorological variables, using different artificial neural network (ANN) techniques. Daily mean air temperature, relative humidity, sunshine hours, evaporation, and wind speed values between 2002 and 2006 for Dezful city in Iran (32°16'N, 48°25'E) are used in this study. In order to consider the effect of each meteorological variable on daily GSR prediction, the following six combinations of input variables are considered, each with daily GSR as the output: (I) day of the year, daily mean air temperature, and relative humidity; (II) day of the year, daily mean air temperature, and sunshine hours; (III) day of the year, daily mean air temperature, relative humidity, and sunshine hours; (IV) day of the year, daily mean air temperature, relative humidity, sunshine hours, and evaporation; (V) day of the year, daily mean air temperature, relative humidity, sunshine hours, and wind speed; (VI) day of the year, daily mean air temperature, relative humidity, sunshine hours, evaporation, and wind speed. Multi-layer perceptron (MLP) and radial basis function (RBF) neural networks are applied for daily GSR modeling based on the six proposed combinations. The measured data between 2002 and 2005 are used to train the neural networks, while the data for 214 days from 2006 are used as testing data. The comparison of the results obtained from the ANNs and different conventional GSR prediction (CGSRP) models shows very good improvements (i.e., the predicted values of the best ANN model (MLP-V) have a mean absolute percentage error (MAPE) of about 5.21% versus 10.02% for the best CGSRP model (CGSRP 5)).
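The MAPE metric used above to rank the models is straightforward to compute. A minimal sketch with invented GSR values and two hypothetical model outputs (not the study's data or models):

```python
import numpy as np

def mape(actual, predicted):
    # Mean absolute percentage error, the metric used to compare GSR models.
    actual, predicted = np.asarray(actual), np.asarray(predicted)
    return 100.0 * np.mean(np.abs((actual - predicted) / actual))

# Hypothetical daily GSR values (MJ/m^2) and two competing model predictions.
measured = np.array([20.0, 25.0, 18.0, 30.0])
model_a  = np.array([21.0, 24.0, 19.0, 28.5])   # stand-in for an MLP-type model
model_b  = np.array([24.0, 20.0, 15.0, 27.0])   # stand-in for a conventional model

print(f"model A MAPE: {mape(measured, model_a):.2f}%")
print(f"model B MAPE: {mape(measured, model_b):.2f}%")
```

Note that MAPE weights errors relative to the measured value, so low-radiation days contribute proportionally larger percentage errors for the same absolute miss.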
Fuzzy Neuron: Method and Hardware Realization
NASA Technical Reports Server (NTRS)
Krasowski, Michael J.; Prokop, Norman F.
2014-01-01
This innovation represents a method by which single-to-multi-input, single-to-many-output system transfer functions can be estimated from input/output data sets. This innovation can be run in the background while a system is operating under other means (e.g., through human operator effort), or may be utilized offline using data sets created from observations of the estimated system. It utilizes a set of fuzzy membership functions spanning the input space for each input variable. Linear combiners associated with combinations of input membership functions are used to create the output(s) of the estimator. Coefficients are adjusted online through the use of learning algorithms.
Group interaction and flight crew performance
NASA Technical Reports Server (NTRS)
Foushee, H. Clayton; Helmreich, Robert L.
1988-01-01
The application of human-factors analysis to the performance of aircraft-operation tasks by the crew as a group is discussed in an introductory review and illustrated with anecdotal material. Topics addressed include the function of a group in the operational environment, the classification of group performance factors (input, process, and output parameters), input variables and the flight crew process, and the effect of process variables on performance. Consideration is given to aviation safety issues, techniques for altering group norms, ways of increasing crew effort and coordination, and the optimization of group composition.
Armstrong, Alacia; Valverde, Angel; Ramond, Jean-Baptiste; Makhalanyane, Thulani P.; Jansson, Janet K.; Hopkins, David W.; Aspray, Thomas J.; Seely, Mary; Trindade, Marla I.; Cowan, Don A.
2016-01-01
The temporal dynamics of desert soil microbial communities are poorly understood. Given the implications for ecosystem functioning under a global change scenario, a better understanding of desert microbial community stability is crucial. Here, we sampled soils in the central Namib Desert on sixteen different occasions over a one-year period. Using Illumina-based amplicon sequencing of the 16S rRNA gene, we found that α-diversity (richness) was more variable at a given sampling date (spatial variability) than over the course of one year (temporal variability). Community composition remained essentially unchanged across the first 10 months, indicating that spatial sampling might be more important than temporal sampling when assessing β-diversity patterns in desert soils. However, a major shift in microbial community composition was found following a single precipitation event. This shift in composition was associated with a rapid increase in CO2 respiration and productivity, supporting the view that desert soil microbial communities respond rapidly to re-wetting and that this response may be the result of both taxon-specific selection and changes in the availability or accessibility of organic substrates. Recovery to quasi pre-disturbance community composition was achieved within one month after rainfall. PMID:27680878
Stochastic analysis of multiphase flow in porous media: II. Numerical simulations
NASA Astrophysics Data System (ADS)
Abin, A.; Kalurachchi, J. J.; Kemblowski, M. W.; Chang, C.-M.
1996-08-01
The first paper (Chang et al., 1995b) of this two-part series described a stochastic analysis, using a spectral/perturbation approach, of steady-state two-phase (water and oil) flow in a liquid-unsaturated, three-fluid-phase porous medium. In this paper, the results of numerical simulations are compared with the closed-form expressions obtained using the perturbation approach. We present the solution to the one-dimensional, steady-state oil and water flow equations. The stochastic input processes are the spatially correlated log k, where k is the intrinsic permeability, and the soil retention parameter α. These solutions are subsequently used in the numerical simulations to estimate the statistical properties of the key output processes. The perturbation analysis and the numerical simulations agreed well over a wide range of log k variability, for three different combinations of the input stochastic processes log k and α. The results clearly demonstrated the importance of considering the spatial variability of key subsurface properties under a variety of physical scenarios. The variability of both capillary pressure and saturation is affected by the type of input stochastic process used to represent the spatial variability. The results also demonstrated the applicability of perturbation theory in predicting the system variability and defining effective fluid properties through the ergodic assumption.
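The stochastic input named above, a spatially correlated log k field, can be sketched with a Cholesky factorization of an exponential covariance. This is a generic construction, not the paper's spectral/perturbation representation, and all numerical values are illustrative:

```python
import numpy as np

def correlated_logk(n=100, dx=1.0, mean=-27.0, var=1.0, corr_len=5.0, seed=0):
    """Draw a 1-D stationary Gaussian log-permeability field with
    exponential covariance C(h) = var * exp(-|h| / corr_len)."""
    x = np.arange(n) * dx
    cov = var * np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)
    # Cholesky factor colors independent normals; jitter keeps it stable
    L = np.linalg.cholesky(cov + 1e-10 * np.eye(n))
    rng = np.random.default_rng(seed)
    return mean + L @ rng.standard_normal(n)

logk = correlated_logk()
```

Sampling k = exp(log k) from such a field at each grid cell is what then feeds a flow solver in a Monte Carlo comparison of the kind the abstract describes.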
Simulating maize yield and biomass with spatial variability of soil field capacity
USDA-ARS?s Scientific Manuscript database
Spatial variability in field soil water and other properties is a challenge for system modelers who use only representative values for model inputs, rather than their distributions. In this study, we compared simulation results from a calibrated model with spatial variability of soil field capacity ...
Simulated lumped-parameter system reduced-order adaptive control studies
NASA Technical Reports Server (NTRS)
Johnson, C. R., Jr.; Lawrence, D. A.; Taylor, T.; Malakooti, M. V.
1981-01-01
Two methods of interpreting the misbehavior of reduced-order adaptive controllers are discussed. The first method is based on a system input-output description and the second on a state-variable description. The implementation of a single-input, single-output, autoregressive moving-average system is considered.
NASA Technical Reports Server (NTRS)
Jian, B. J.; Shintani, T.; Emanuel, B. A.; Yates, B. J.
2002-01-01
The major goal of this study was to determine the patterns of convergence of non-labyrinthine inputs from the limbs and viscera onto vestibular nucleus neurons receiving signals from vertical semicircular canals or otolith organs. A secondary aim was to ascertain whether the effects of non-labyrinthine inputs on the activity of vestibular nucleus neurons is affected by bilateral peripheral vestibular lesions. The majority (72%) of vestibular nucleus neurons in labyrinth-intact animals whose firing was modulated by vertical rotations responded to electrical stimulation of limb and/or visceral nerves. The activity of even more vestibular nucleus neurons (93%) was affected by limb or visceral nerve stimulation in chronically labyrinthectomized preparations. Some neurons received non-labyrinthine inputs from a variety of peripheral sources, including antagonist muscles acting at the same joint, whereas others received inputs from more limited sources. There was no apparent relationship between the spatial and dynamic properties of a neuron's responses to tilts in vertical planes and the non-labyrinthine inputs that it received. These data suggest that non-labyrinthine inputs elicited during movement will modulate the processing of information by the central vestibular system, and may contribute to the recovery of spontaneous activity of vestibular nucleus neurons following peripheral vestibular lesions. Furthermore, some vestibular nucleus neurons with non-labyrinthine inputs may be activated only during particular behaviors that elicit a specific combination of limb and visceral inputs.
A new polytopic approach for the unknown input functional observer design
NASA Astrophysics Data System (ADS)
Bezzaoucha, Souad; Voos, Holger; Darouach, Mohamed
2018-03-01
In this paper, a constructive procedure to design functional unknown input observers for nonlinear continuous-time systems is proposed under the polytopic Takagi-Sugeno framework. An equivalent representation for the nonlinear model is achieved using the sector nonlinearity transformation. Applying Lyapunov theory and an attenuation criterion, linear matrix inequality conditions are deduced and solved for feasibility to obtain the observer design matrices. To cope with the effect of unknown inputs, the classical approach of decoupling the unknown input, as in the linear case, is used. Both algebraic and solver-based solutions are proposed (relaxed conditions). Necessary and sufficient conditions for the existence of the functional polytopic observer are given. For both approaches, the general and particular cases (measurable premise variables; full state estimation with full- and reduced-order cases) are considered, and it is shown that the proposed conditions correspond to those presented for the standard linear case. To illustrate the proposed theoretical results, detailed numerical simulations are presented for a quadrotor aerial robot landing and a wastewater treatment plant. Both systems are highly nonlinear and represented in a T-S polytopic form with unmeasurable premise variables and unknown inputs.
NASA Astrophysics Data System (ADS)
Zhuo, L.; Mekonnen, M. M.; Hoekstra, A. Y.
2014-06-01
Water Footprint Assessment is a fast-growing field of research, but as yet little attention has been paid to the uncertainties involved. This study investigates the sensitivity of and uncertainty in crop water footprint (in m3 t-1) estimates related to uncertainties in important input variables. The study focuses on the green (from rainfall) and blue (from irrigation) water footprint of producing maize, soybean, rice, and wheat at the scale of the Yellow River basin in the period 1996-2005. A grid-based daily water balance model at a 5 by 5 arcmin resolution was applied to compute green and blue water footprints of the four crops in the Yellow River basin in the period considered. The one-at-a-time method was carried out to analyse the sensitivity of the crop water footprint to fractional changes of seven individual input variables and parameters: precipitation (PR), reference evapotranspiration (ET0), crop coefficient (Kc), crop calendar (planting date with constant growing degree days), soil water content at field capacity (Smax), yield response factor (Ky) and maximum yield (Ym). Uncertainties in crop water footprint estimates related to uncertainties in four key input variables: PR, ET0, Kc, and crop calendar were quantified through Monte Carlo simulations. The results show that the sensitivities and uncertainties differ across crop types. In general, the water footprint of crops is most sensitive to ET0 and Kc, followed by the crop calendar. Blue water footprints were more sensitive to input variability than green water footprints. The smaller the annual blue water footprint is, the higher its sensitivity to changes in PR, ET0, and Kc. The uncertainties in the total water footprint of a crop due to combined uncertainties in climatic inputs (PR and ET0) were about ±20% (at 95% confidence interval). The effect of uncertainties in ET0 was dominant compared to that of PR.
The uncertainties in the total water footprint of a crop as a result of combined key input uncertainties were on average ±30% (at 95% confidence level).
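The one-at-a-time (OAT) perturbation described above can be sketched as follows. Here `crop_water_footprint` is a hypothetical stand-in for the study's gridded daily water-balance model, kept only to show the OAT bookkeeping; all numbers are illustrative:

```python
def crop_water_footprint(PR, ET0, Kc, Ym, Ky=1.0):
    """Toy water-footprint proxy (m3/t): crop water use over yield.
    A stand-in for the real gridded daily water-balance model."""
    cwu = Kc * ET0                        # crop water use (mm)
    deficit = max(0.0, 1.0 - PR / cwu)    # relative water shortfall
    yield_t = Ym * (1.0 - Ky * deficit)   # yield response to water stress
    return 10.0 * cwu / yield_t           # mm * ha -> m3, per tonne

def oat_sensitivity(base, frac=0.1):
    """Relative change in output for a +frac change in each input alone."""
    wf0 = crop_water_footprint(**base)
    out = {}
    for name in base:
        pert = dict(base, **{name: base[name] * (1.0 + frac)})
        out[name] = (crop_water_footprint(**pert) - wf0) / wf0
    return out

sens = oat_sensitivity({"PR": 450.0, "ET0": 900.0, "Kc": 1.05, "Ym": 8.0})
```

Even in this toy model, ET0 and Kc (which multiply each other) move the footprint in the same direction and by the same amount, while extra PR or Ym lowers it, consistent with the ranking reported above.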
Divide and control: split design of multi-input DNA logic gates.
Gerasimova, Yulia V; Kolpashchikov, Dmitry M
2015-01-18
Logic gates made of DNA have received significant attention as biocompatible building blocks for molecular circuits. The majority of DNA logic gates, however, are controlled by the minimum number of inputs: one, two or three. Here we report a strategy to design a multi-input logic gate by splitting a DNA construct.
Does Input Enhancement Work for Learning Politeness Strategies?
ERIC Educational Resources Information Center
Khatib, Mohammad; Safari, Mahmood
2013-01-01
The present study investigated the effect of input enhancement on the acquisition of English politeness strategies by intermediate EFL learners. Two groups of freshman English majors were randomly assigned to the experimental (enhanced input) group and the control (mere exposure) group. Initially, a TOEFL test and a discourse completion test (DCT)…
Gravity dependence of the effect of optokinetic stimulation on the subjective visual vertical.
Ward, Bryan K; Bockisch, Christopher J; Caramia, Nicoletta; Bertolini, Giovanni; Tarnutzer, Alexander Andrea
2017-05-01
Accurate and precise estimates of direction of gravity are essential for spatial orientation. According to Bayesian theory, multisensory vestibular, visual, and proprioceptive input is centrally integrated in a weighted fashion based on the reliability of the component sensory signals. For otolithic input, a decreasing signal-to-noise ratio was demonstrated with increasing roll angle. We hypothesized that the weights of vestibular (otolithic) and extravestibular (visual/proprioceptive) sensors are roll-angle dependent and predicted an increased weight of extravestibular cues with increasing roll angle, potentially following the Bayesian hypothesis. To probe this concept, the subjective visual vertical (SVV) was assessed in different roll positions (≤ ± 120°, steps = 30°, n = 10) with/without presenting an optokinetic stimulus (velocity = ± 60°/s). The optokinetic stimulus biased the SVV toward the direction of stimulus rotation for roll angles ≥ ± 30° (P < 0.005). Offsets grew from 3.9 ± 1.8° (upright) to 22.1 ± 11.8° (±120° roll tilt, P < 0.001). Trial-to-trial variability increased with roll angle, demonstrating a nonsignificant increase when providing optokinetic stimulation. Variability and optokinetic bias were correlated (R2 = 0.71, slope = 0.71, 95% confidence interval = 0.57-0.86). An optimal-observer model combining an optokinetic bias with vestibular input reproduced measured errors closely. These findings support the hypothesis of a weighted multisensory integration when estimating direction of gravity with optokinetic stimulation. Visual input was weighted more when vestibular input became less reliable, i.e., at larger roll-tilt angles. However, according to Bayesian theory, the variability of combined cues is always lower than the variability of each source cue.
If the observed increase in variability, although nonsignificant, is true, either it must depend on an additional source of variability, added after SVV computation, or it would conflict with the Bayesian hypothesis. NEW & NOTEWORTHY Applying a rotating optokinetic stimulus while recording the subjective visual vertical in different whole body roll angles, we noted the optokinetic-induced bias to correlate with the roll angle. These findings allow the hypothesis that the established optimal weighting of single-sensory cues depending on their reliability to estimate direction of gravity could be extended to a bias caused by visual self-motion stimuli. Copyright © 2017 the American Physiological Society.
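The inverse-variance weighting implied by the Bayesian account can be written down directly; the numbers below are illustrative, not fitted to the SVV data:

```python
def fuse_cues(est_a, var_a, est_b, var_b):
    """Bayesian (inverse-variance) fusion of two noisy cues of the same
    quantity: weights are proportional to each cue's reliability."""
    w_a = (1.0 / var_a) / (1.0 / var_a + 1.0 / var_b)
    est = w_a * est_a + (1.0 - w_a) * est_b
    var = 1.0 / (1.0 / var_a + 1.0 / var_b)
    return est, var

# As otolith noise grows with roll angle, the visual cue gains weight:
# here a noisy vestibular cue (var_a = 4) versus a reliable visual one.
est, var = fuse_cues(est_a=0.0, var_a=4.0, est_b=10.0, var_b=1.0)
```

Note that the fused variance is always below both input variances, which is exactly the property the abstract says an observed variability increase would conflict with.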
Stochastic empirical loading and dilution model (SELDM) version 1.0.0
Granato, Gregory E.
2013-01-01
The Stochastic Empirical Loading and Dilution Model (SELDM) is designed to transform complex scientific data into meaningful information about the risk of adverse effects of runoff on receiving waters, the potential need for mitigation measures, and the potential effectiveness of such management measures for reducing these risks. The U.S. Geological Survey developed SELDM in cooperation with the Federal Highway Administration to help develop planning-level estimates of event mean concentrations, flows, and loads in stormwater from a site of interest and from an upstream basin. Planning-level estimates are defined as the results of analyses used to evaluate alternative management measures; planning-level estimates are recognized to include substantial uncertainties (commonly orders of magnitude). SELDM uses information about a highway site, the associated receiving-water basin, precipitation events, stormflow, water quality, and the performance of mitigation measures to produce a stochastic population of runoff-quality variables. SELDM provides input statistics for precipitation, prestorm flow, runoff coefficients, and concentrations of selected water-quality constituents from national datasets. Input statistics may be selected on the basis of the latitude, longitude, and physical characteristics of the site of interest and the upstream basin. The user also may derive and input statistics for each variable that are specific to a given site of interest or a given area. SELDM is a stochastic model because it uses Monte Carlo methods to produce the random combinations of input variable values needed to generate the stochastic population of values for each component variable. SELDM calculates the dilution of runoff in the receiving waters and the resulting downstream event mean concentrations and annual average lake concentrations.
Results are ranked, and plotting positions are calculated, to indicate the level of risk of adverse effects caused by runoff concentrations, flows, and loads on receiving waters by storm and by year. Unlike deterministic hydrologic models, SELDM is not calibrated by changing values of input variables to match a historical record of values. Instead, input values for SELDM are based on site characteristics and representative statistics for each hydrologic variable. Thus, SELDM is an empirical model based on data and statistics rather than theoretical physiochemical equations. SELDM is a lumped parameter model because the highway site, the upstream basin, and the lake basin each are represented as a single homogeneous unit. Each of these source areas is represented by average basin properties, and results from SELDM are calculated as point estimates for the site of interest. Use of the lumped parameter approach facilitates rapid specification of model parameters to develop planning-level estimates with available data. The approach allows for parsimony in the required inputs to and outputs from the model and flexibility in the use of the model. For example, SELDM can be used to model runoff from various land covers or land uses by using the highway-site definition as long as representative water quality and impervious-fraction data are available.
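The stochastic mass-balance dilution at the core of this kind of model can be sketched with Monte Carlo draws; the lognormal parameters below are invented placeholders, not SELDM's national statistics:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000  # stochastic population of storm events

# Hypothetical lognormal inputs (SELDM supplies representative statistics)
runoff_flow = rng.lognormal(mean=0.0, sigma=0.8, size=n)  # highway runoff
runoff_conc = rng.lognormal(mean=3.0, sigma=1.0, size=n)  # runoff EMC
stream_flow = rng.lognormal(mean=2.0, sigma=0.7, size=n)  # upstream flow
stream_conc = rng.lognormal(mean=1.0, sigma=0.5, size=n)  # upstream EMC

# Mass-balance dilution: downstream EMC is the flow-weighted mixture
down_conc = (runoff_flow * runoff_conc + stream_flow * stream_conc) \
            / (runoff_flow + stream_flow)

# Rank the population and assign plotting positions to read off risk levels
exceed_prob = 1.0 - (np.argsort(np.argsort(down_conc)) + 0.5) / n
```

Each downstream concentration necessarily lies between the two source concentrations, and the ranked population directly yields the exceedance probabilities described above.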
NASA Astrophysics Data System (ADS)
Yoon, J.; Zeng, N.; Mariotti, A.; Swenson, S.
2007-12-01
In an approach termed the P-E-R (or simply PER) method, we apply the basin water budget equation to diagnose the long-term variability of total terrestrial water storage (TWS). The key input variables are observed precipitation (P) and runoff (R), and estimated evaporation (E). Unlike typical offline land-surface model estimates, where only atmospheric variables are used as input, the direct use of observed runoff in the PER method imposes an important constraint on the diagnosed TWS. Although basin-scale observations of evaporation are lacking, the tendency of E to have significantly less variability than the difference between precipitation and runoff (P-R) minimizes the uncertainties originating from the estimated evaporation. Compared to the more traditional method using atmospheric moisture convergence (MC) minus R (the MCR method), the use of observed precipitation in the PER method is expected to lead to general improvement, especially in regions where atmospheric radiosonde data are too sparse to constrain the model-analyzed MC, such as in the remote tropics. TWS was diagnosed using the PER method for the Amazon (1970-2006) and the Mississippi Basin (1928-2006), and compared with the MCR method, land-surface models and reanalyses, and NASA's GRACE satellite gravity data. The seasonal cycle of diagnosed TWS over the Amazon is about 300 mm. The interannual TWS variability in these two basins is 100-200 mm, but multi-decadal changes can be as large as 600-800 mm. Major droughts such as the Dust Bowl period had a large impact, with water storage depleted by 500 mm over a decade. Within the short period 2003-2006 when GRACE data were available, PER and GRACE show good agreement both for the seasonal cycle and interannual variability, providing the potential to cross-validate each other. In contrast, land-surface model results are significantly smaller than PER and GRACE, especially towards longer timescales.
While we currently lack independent means to verify these long-term changes, simple error analysis using 3 precipitation datasets and 3 evaporation estimates suggest that the multi-decadal amplitude can be uncertain up to a factor of 2, while the agreement is high on interannual timescales. The large TWS variability implies the remarkable capacity of land-surface in storing and taking up water that may be under-represented in models. The results also suggest the existence of water storage memories on multi-year time scales, significantly longer than typically assumed seasonal timescales associated with surface soil moisture.
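The PER bookkeeping itself is just the cumulative basin water budget dS/dt = P - E - R; a sketch with invented monthly fluxes:

```python
import numpy as np

def diagnose_tws(P, E, R, S0=0.0):
    """Diagnose terrestrial water storage anomaly from the basin water
    budget dS/dt = P - E - R, accumulating fluxes (all in mm/month)."""
    return S0 + np.cumsum(np.asarray(P) - np.asarray(E) - np.asarray(R))

# Hypothetical monthly fluxes for a toy basin (mm/month)
P = [120, 100, 80, 60, 40, 30, 20, 30, 50, 80, 100, 110]
E = [60, 60, 70, 70, 60, 50, 40, 40, 50, 60, 60, 60]
R = [30, 25, 20, 15, 10, 5, 5, 5, 10, 15, 20, 25]
tws = diagnose_tws(P, E, R)
```

Because the sum accumulates every month's residual, errors in the flux estimates also accumulate, which is why the abstract's multi-decadal amplitudes carry larger uncertainty than its interannual signals.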
Valiela, Ivan; Elmstrom, Elizabeth; Lloret, Javier; Stone, Thomas; Camilli, Luis
2018-07-15
We review data from coastal Pacific Panama and other tropical coasts with two aims. First, we defined inputs and losses of nitrogen (N) mediating connectivity of watersheds, mangrove estuaries, and the coastal sea. N entering watersheds, mainly via N fixation (79-86%), was largely intercepted; N discharges to mangrove estuaries (3-6%), small compared to N inputs to watersheds, nonetheless significantly supplied N to mangrove estuaries. Inputs to mangrove estuaries (including watershed discharges and marine inputs during flood tides) were matched by losses (mainly denitrification and export during ebb tides). Mangrove estuary subsidies of coastal marine food webs take place by export of forms of N [DON (62.5%), PN (9.1%), and litter N (12.9%)] that provide dissimilative and assimilative subsidies. N fixation, denitrification, and tidal exchanges were the major processes, and DON was the major form of N involved in connecting fluxes in and out of mangrove estuaries. Second, we assessed effects of watershed forest cover on connectivity. Decreased watershed forest cover lowered N inputs, interception, and discharge into receiving mangrove estuaries. These imprints of forest cover were erased during transit of N through the estuaries, owing to internal N-cycle transformations and differences in the relative areas of watersheds and estuaries. The largest losses of N consisted of water transport of energy-rich compounds, particularly DON. N losses were similar in magnitude to N inputs from the sea, calculated without considering the contribution of intermittent coastal upwelling, and hence likely under-estimated. Pacific Panama mangrove estuaries are exposed to major inputs of N from land and sea, which emphasizes the high degree of bi-directional connectivity in these coupled ecosystems. Pacific Panama is still lightly affected by human or global changes. Increased deforestation can be expected, as well as changes in ENSO, which will surely raise watershed-derived loads of N and significantly change the marine N inputs affecting coastal coupled ecosystems. Copyright © 2018 Elsevier B.V. All rights reserved.
Kovač, Marko; Bauer, Arthur; Ståhl, Göran
2014-01-01
Background, Materials and Methods: To meet the demands of sustainable forest management and international commitments, European nations have designed a variety of forest-monitoring systems for specific needs. While the majority of countries are committed to independent, single-purpose inventorying, a minority of countries have merged their single-purpose forest inventory systems into integrated forest resource inventories. The statistical efficiencies of the Bavarian, Slovene and Swedish integrated forest resource inventory designs are investigated with the various statistical parameters of the variables of growing stock volume, shares of damaged trees, and deadwood volume. The parameters are derived by using the estimators for the given inventory designs. The required sample sizes are derived via the general formula for non-stratified independent samples and via statistical power analyses. The cost effectiveness of the designs is compared via two simple cost-effectiveness ratios. Results: In terms of precision, the most illustrative parameters of the variables are relative standard errors; their values range between 1% and 3% if the variables' variations are low (s% < 80%) and are higher in the case of higher variations. A comparison of the actual and required sample sizes shows that the actual sample sizes were deliberately set high to provide precise estimates for the majority of variables and strata. In turn, the successive inventories are statistically efficient, because they allow detecting the mean changes of variables with powers higher than 90%; the highest precision is attained for the changes of growing stock volume and the lowest for the changes of the shares of damaged trees. Two indicators of cost effectiveness also show that the time input spent for measuring one variable decreases with the complexity of inventories. Conclusion: There is an increasing need for credible information on forest resources to be used for decision making and national and international policy making. Such information can be cost-efficiently provided through integrated forest resource inventories. PMID:24941120
Evaluating variable rate fungicide applications for control of Sclerotinia
USDA-ARS?s Scientific Manuscript database
Oklahoma peanut growers continue to try to increase yields and reduce input costs. Perhaps the largest input in a peanut crop is fungicide applications. This is especially true for areas in the state that have high disease pressure from Sclerotinia. On average, a single fungicide application cost...
Human encroachment on the coastal zone has led to a rise in the delivery of nitrogen (N) to estuarine and near-shore waters. Potential routes of anthropogenic N inputs include export from estuaries, atmospheric deposition, and dissolved N inputs from groundwater outflow. Stable...
Learning a Novel Pattern through Balanced and Skewed Input
ERIC Educational Resources Information Center
McDonough, Kim; Trofimovich, Pavel
2013-01-01
This study compared the effectiveness of balanced and skewed input at facilitating the acquisition of the transitive construction in Esperanto, characterized by the accusative suffix "-n" and variable word order (SVO, OVS). Thai university students (N = 98) listened to 24 sentences under skewed (one noun with high token frequency) or…
Code of Federal Regulations, 2014 CFR
2014-10-01
... must be considered as essential variables: Number of passes; thickness of plate; heat input per pass... not be used. The number of passes, thickness of plate, and heat input per pass may not vary more than... machine heat processes, provided such surfaces are remelted in the subsequent welding process. Where there...
Code of Federal Regulations, 2013 CFR
2013-10-01
... must be considered as essential variables: Number of passes; thickness of plate; heat input per pass... not be used. The number of passes, thickness of plate, and heat input per pass may not vary more than... machine heat processes, provided such surfaces are remelted in the subsequent welding process. Where there...
Code of Federal Regulations, 2012 CFR
2012-10-01
... must be considered as essential variables: Number of passes; thickness of plate; heat input per pass... not be used. The number of passes, thickness of plate, and heat input per pass may not vary more than... machine heat processes, provided such surfaces are remelted in the subsequent welding process. Where there...
Code of Federal Regulations, 2011 CFR
2011-10-01
... must be considered as essential variables: Number of passes; thickness of plate; heat input per pass... not be used. The number of passes, thickness of plate, and heat input per pass may not vary more than... machine heat processes, provided such surfaces are remelted in the subsequent welding process. Where there...
Mu, Zhijian; Huang, Aiying; Ni, Jiupai; Xie, Deti
2014-01-01
Organic soils are an important source of N2O, but global estimates of these fluxes remain uncertain because measurements are sparse. We tested the hypothesis that N2O fluxes can be predicted from estimates of mineral nitrogen input, calculated from readily-available measurements of CO2 flux and soil C/N ratio. From studies of organic soils throughout the world, we compiled a data set of annual CO2 and N2O fluxes which were measured concurrently. The input of soil mineral nitrogen in these studies was estimated from applied fertilizer nitrogen and organic nitrogen mineralization. The latter was calculated by dividing the rate of soil heterotrophic respiration by soil C/N ratio. This index of mineral nitrogen input explained up to 69% of the overall variability of N2O fluxes, whereas CO2 flux or soil C/N ratio alone explained only 49% and 36% of the variability, respectively. Including water table level in the model, along with mineral nitrogen input, further improved the model with the explanatory proportion of variability in N2O flux increasing to 75%. Unlike grassland or cropland soils, forest soils were evidently nitrogen-limited, so water table level had no significant effect on N2O flux. Our proposed approach, which uses the product of soil-derived CO2 flux and the inverse of soil C/N ratio as a proxy for nitrogen mineralization, shows promise for estimating regional or global N2O fluxes from organic soils, although some further enhancements may be warranted.
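The proxy for mineral-N input can be sketched directly from the definition above; `het_frac`, the heterotrophic share of the measured CO2 flux, is an assumed parameter not given in the abstract:

```python
def n2o_n_input_index(co2_flux, cn_ratio, fert_n=0.0, het_frac=0.5):
    """Index of mineral-N input: fertilizer N plus organic-N
    mineralization, the latter estimated as heterotrophic respiration
    divided by the soil C/N ratio. het_frac (assumed here) partitions
    the total soil CO2 flux into its heterotrophic component; units
    follow the inputs (e.g. g C m-2 yr-1 and g N m-2 yr-1)."""
    mineralized_n = het_frac * co2_flux / cn_ratio
    return fert_n + mineralized_n

idx = n2o_n_input_index(co2_flux=800.0, cn_ratio=20.0, fert_n=50.0)
```

The index rises with CO2 flux and falls with C/N ratio, which is why combining the two explains more of the N2O variability than either alone.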
Sensitivity analysis of a sound absorption model with correlated inputs
NASA Astrophysics Data System (ADS)
Chai, W.; Christen, J.-L.; Zine, A.-M.; Ichchou, M.
2017-04-01
Sound absorption in porous media is a complex phenomenon, which is usually addressed with homogenized models depending on macroscopic parameters. Since these parameters emerge from the structure at the microscopic scale, they may be correlated. This paper deals with sensitivity analysis methods for a sound absorption model with correlated inputs. Specifically, the Johnson-Champoux-Allard (JCA) model is chosen as the objective model, with correlation effects generated by a secondary micro-macro semi-empirical model. To deal with this case, a relatively new sensitivity analysis method, the Fourier Amplitude Sensitivity Test with Correlation design (FASTC), based on Iman's transform, is applied. This method requires a priori information such as the variables' marginal distribution functions and their correlation matrix. The results are compared to the Correlation Ratio Method (CRM) for reference and validation. The distribution of the macroscopic variables arising from the microstructure, as well as their correlation matrix, are studied. Finally, the test results show that correlation has a very important impact on the outcome of sensitivity analysis. The effect of the correlation strength among input variables on the sensitivity analysis is also assessed.
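Correlated input samples of the kind such an analysis requires can be generated with a Gaussian copula. This is a simplified stand-in for Iman's rank-correlation transform, with invented marginals for two JCA-like parameters:

```python
import math

import numpy as np

def correlated_samples(corr, marg_ppfs, n=1000, seed=0):
    """Gaussian-copula sampler: draw correlated standard normals, map
    them to uniforms via the normal CDF, then push each column through
    its marginal's inverse CDF (a simplified analogue of Iman's
    rank-correlation transform)."""
    rng = np.random.default_rng(seed)
    L = np.linalg.cholesky(np.asarray(corr))
    z = rng.standard_normal((n, len(marg_ppfs))) @ L.T
    u = 0.5 * (1.0 + np.vectorize(math.erf)(z / math.sqrt(2.0)))
    return np.column_stack([ppf(u[:, j]) for j, ppf in enumerate(marg_ppfs)])

corr = [[1.0, 0.7], [0.7, 1.0]]
# Hypothetical marginals: porosity uniform on (0.9, 0.99);
# a tortuosity-like parameter with a lognormal shape
samples = correlated_samples(corr, [lambda u: 0.9 + 0.09 * u,
                                    lambda u: np.exp(0.5 * u)])
```

The copula preserves the rank correlation between columns regardless of the marginal shapes, which is the property a correlation-aware sensitivity design relies on.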
Code of Federal Regulations, 2010 CFR
2010-07-01
..., Demand Side Variability, and Network Variability studies, including input data, processing programs, and... should include the product or product groups carried under each listed contract; (k) Spreadsheets and...
Investigation of energy management strategies for photovoltaic systems - An analysis technique
NASA Technical Reports Server (NTRS)
Cull, R. C.; Eltimsahy, A. H.
1982-01-01
Progress is reported in formulating energy management strategies for stand-alone PV systems, developing an analytical tool that can be used to investigate these strategies, applying this tool to determine the proper control algorithms and control variables (controller inputs and outputs) for a range of applications, and quantifying the relative performance and economics when compared to systems that do not apply energy management. The analysis technique developed may be broadly applied to a variety of systems to determine the most appropriate energy management strategies, control variables and algorithms. The only inputs required are statistical distributions for stochastic energy inputs and outputs of the system and the system's device characteristics (efficiency and ratings). Although the formulation was originally driven by stand-alone PV system needs, the techniques are also applicable to hybrid and grid connected systems.
Troyer, T W; Miller, K D
1997-07-01
To understand the interspike interval (ISI) variability displayed by visual cortical neurons (Softky & Koch, 1993), it is critical to examine the dynamics of their neuronal integration, as well as the variability in their synaptic input current. Most previous models have focused on the latter factor. We match a simple integrate-and-fire model to the experimentally measured integrative properties of cortical regular spiking cells (McCormick, Connors, Lighthall, & Prince, 1985). After setting RC parameters, the post-spike voltage reset is set to match experimental measurements of neuronal gain (obtained from in vitro plots of firing frequency versus injected current). Examination of the resulting model leads to an intuitive picture of neuronal integration that unifies the seemingly contradictory 1/√N and random-walk pictures that have previously been proposed. When ISIs are dominated by post-spike recovery, 1/√N arguments hold and spiking is regular; after the "memory" of the last spike becomes negligible, spike threshold crossing is caused by input variance around a steady state and spiking is Poisson. In integrate-and-fire neurons matched to cortical cell physiology, steady-state behavior is predominant, and ISIs are highly variable at all physiological firing rates and for a wide range of inhibitory and excitatory inputs.
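The regular-versus-Poisson regimes described above can be reproduced with a bare-bones integrate-and-fire simulation. Parameter values here are illustrative, not those fitted by the authors:

```python
import numpy as np

def lif_isi_cv(mu, sigma, tau=20.0, v_th=1.0, v_reset=0.0,
               dt=0.1, t_max=20000.0, seed=0):
    """Coefficient of variation (CV) of interspike intervals for a
    leaky integrate-and-fire neuron driven by noisy input current.
    Times in ms; threshold normalized to 1; mu and sigma set the mean
    and fluctuation of the input (illustrative units)."""
    rng = np.random.default_rng(seed)
    n_steps = int(t_max / dt)
    noise = sigma * np.sqrt(dt) * rng.standard_normal(n_steps)
    v, last_spike, isis = v_reset, 0.0, []
    for i in range(n_steps):
        v += dt * (-v / tau + mu) + noise[i]
        if v >= v_th:
            t = (i + 1) * dt
            isis.append(t - last_spike)
            last_spike, v = t, v_reset
    isis = np.asarray(isis)
    return isis.std() / isis.mean()
```

When the mean drive alone crosses threshold, ISIs are nearly deterministic and the CV is small; when the steady-state voltage sits below threshold and crossings are caused by input variance, the CV rises toward Poisson-like values, as the abstract describes.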
Analysis on electronic control unit of continuously variable transmission
NASA Astrophysics Data System (ADS)
Cao, Shuanggui
A continuously variable transmission (CVT) system can keep the engine operating along the line of best fuel economy, improving fuel economy, saving fuel and reducing harmful gas emissions. At the same time, a continuously variable transmission makes changes in vehicle speed smoother and improves ride comfort. Although CVT technology has developed greatly, it still has many shortcomings. The CVT systems of ordinary vehicles still suffer from low efficiency, poor starting performance, low transmitted power, unsatisfactory control, high cost and other issues. Therefore, many scholars have begun to study new types of continuously variable transmission. A transmission under electronic control can achieve automatic control of power transmission and give full play to the characteristics of the engine, achieving optimal control of the powertrain so that the vehicle always travels close to its best operating condition. The electronic control unit is composed of a core processor, input and output circuit modules and other auxiliary circuit modules. The input module collects and processes the many signals sent by the sensors, such as throttle angle, brake signal, engine speed signal, speed signals of the transmission input and output shafts, manual shift signals, mode selection signals, gear position signal and speed ratio signal, so as to provide the corresponding inputs to the controller core.
Simple Sensitivity Analysis for Orion GNC
NASA Technical Reports Server (NTRS)
Pressburger, Tom; Hoelscher, Brian; Martin, Rodney; Sricharan, Kumar
2013-01-01
The performance of Orion flight software, especially its GNC software, is being analyzed by running Monte Carlo simulations of Orion spacecraft flights. The simulated performance is analyzed for conformance with flight requirements, expressed as performance constraints. Flight requirements include guidance (e.g., touchdown distance from target) and control (e.g., control saturation) as well as performance (e.g., heat load constraints). The Monte Carlo simulations disperse hundreds of simulation input variables, for everything from mass properties to date of launch. We describe in this paper a sensitivity analysis tool (Critical Factors Tool or CFT) developed to find the input variables, or pairs of variables, which by themselves significantly influence satisfaction of requirements or significantly affect key performance metrics (e.g., touchdown distance from target). Knowing these factors can inform robustness analysis, can inform where engineering resources are most needed, and could even affect operations. The contributions of this paper include the introduction of novel sensitivity measures, such as estimating success probability, and a technique for determining whether pairs of factors are interacting dependently or independently. Input variables such as moments, mass, thrust dispersions, and date of launch were found to be significant factors for the success of various requirements. Examples are shown in this paper, as well as a summary and physics discussion of EFT-1 driving factors that the tool found.
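One of the sensitivity measures mentioned, success probability as a function of a dispersed input, can be sketched as follows. This is a generic reconstruction, not the CFT implementation, and all names are ours:

```python
import numpy as np

def success_probability_by_bin(x, success, n_bins=10):
    """Given Monte Carlo samples of one dispersed input `x` and a
    boolean `success` flag per run, estimate P(success) within each
    quantile bin of x.  A strong trend across bins marks x as a
    driving factor for the requirement."""
    edges = np.quantile(x, np.linspace(0.0, 1.0, n_bins + 1))
    idx = np.clip(np.searchsorted(edges, x, side="right") - 1,
                  0, n_bins - 1)
    probs = np.array([success[idx == b].mean() for b in range(n_bins)])
    # Sensitivity score: spread of the binned success probabilities.
    return probs, probs.max() - probs.min()
```

An input whose binned success probabilities vary widely is a candidate driving factor; an input that is irrelevant to the requirement shows a nearly flat profile.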
Spatial patterns of throughfall isotopic composition at the event and seasonal timescales
Scott T. Allen; Richard F. Keim; Jeffrey J. McDonnell
2015-01-01
Spatial variability of throughfall isotopic composition in forests is indicative of complex processes occurring in the canopy and remains insufficiently understood to properly characterize precipitation inputs to the catchment water balance. Here we investigate variability of throughfall isotopic composition with the objectives: (1) to quantify the spatial variability...
NASA Astrophysics Data System (ADS)
Garousi Nejad, I.; He, S.; Tang, Q.; Ogden, F. L.; Steinke, R. C.; Frazier, N.; Tarboton, D. G.; Ohara, N.; Lin, H.
2017-12-01
Spatial scale is one of the main considerations in hydrological modeling of snowmelt in mountainous areas. The size of model elements controls the degree to which variability can be explicitly represented versus what needs to be parameterized using effective properties such as averages or other subgrid variability parameterizations that may degrade the quality of model simulations. For snowmelt modeling terrain parameters such as slope, aspect, vegetation and elevation play an important role in the timing and quantity of snowmelt that serves as an input to hydrologic runoff generation processes. In general, higher resolution enhances the accuracy of the simulation since fine meshes represent and preserve the spatial variability of atmospheric and surface characteristics better than coarse resolution. However, this increases computational cost and there may be a scale beyond which the model response does not improve due to diminishing sensitivity to variability and irreducible uncertainty associated with the spatial interpolation of inputs. This paper examines the influence of spatial resolution on the snowmelt process using simulations of and data from the Animas River watershed, an alpine mountainous area in Colorado, USA, using an unstructured distributed physically based hydrological model developed for a parallel computing environment, ADHydro. Five spatial resolutions (30 m, 100 m, 250 m, 500 m, and 1 km) were used to investigate the variations in hydrologic response. This study demonstrated the importance of choosing the appropriate spatial scale in the implementation of ADHydro to obtain a balance between representing spatial variability and the computational cost. According to the results, variation in the input variables and parameters due to using different spatial resolution resulted in changes in the obtained hydrological variables, especially snowmelt, both at the basin-scale and distributed across the model mesh.
NASA Astrophysics Data System (ADS)
Srinivas, Kadivendi; Vundavilli, Pandu R.; Manzoor Hussain, M.; Saiteja, M.
2016-09-01
Welding input parameters such as current, gas flow rate and torch angle play a significant role in determining the qualitative mechanical properties of a weld joint. Traditionally, the weld input parameters must be determined anew for every welded product to obtain a quality weld joint, which is time consuming. In the present work, the effect of plasma arc welding parameters on mild steel was studied using a neural network approach. To obtain a response equation that governs the input-output relationships, conventional regression analysis was also performed. The experimental data was constructed based on a Taguchi design, and the training data required for the neural networks were randomly generated by varying the input variables within their respective ranges. The responses were calculated for each combination of input variables by using the response equations obtained through the conventional regression analysis. The performance of a Levenberg-Marquardt back-propagation neural network and a radial basis neural network (RBNN) was compared on various randomly generated test cases, different from the training cases. From the results, it is interesting to note that for these test cases RBNN analysis gave improved results compared with feed-forward back-propagation neural network analysis. Also, RBNN analysis showed a pattern of increasing performance as the data points moved away from the initial input values.
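A radial basis network of the kind compared above can be sketched with fixed Gaussian centers and output weights fitted by linear least squares. The two-input example below merely stands in for the weld parameters; it is not the authors' dataset:

```python
import numpy as np

def rbf_fit(X, y, centers, width):
    """Fit the output weights of a radial basis network with fixed
    Gaussian centers by linear least squares."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    Phi = np.exp(-d2 / (2.0 * width ** 2))     # design matrix
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return w

def rbf_predict(X, centers, width, w):
    """Evaluate the fitted radial basis network at new inputs."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * width ** 2)) @ w
```

Because only the output layer is trained, fitting reduces to one least-squares solve, which is one practical reason RBNNs are attractive for input-output response modelling of this kind.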
Method Accelerates Training Of Some Neural Networks
NASA Technical Reports Server (NTRS)
Shelton, Robert O.
1992-01-01
Three-layer networks train faster provided two conditions are satisfied: the numbers of neurons in the layers are such that the majority of the work is done in the synaptic connections between the input and hidden layers, and the number of neurons in the input layer is at least as great as the number of training pairs of input and output vectors. Based on a modified version of the back-propagation method.
End-member modelling as a tool for climate reconstruction-An Eastern Mediterranean case study.
Beuscher, Sarah; Krüger, Stefan; Ehrmann, Werner; Schmiedl, Gerhard; Milker, Yvonne; Arz, Helge; Schulz, Hartmut
2017-01-01
The Eastern Mediterranean Sea is a sink for terrigenous sediments from North Africa, Europe and Asia Minor. Its sediments therefore provide valuable information on the climate dynamics in the source areas and the associated transport processes. We present a high-resolution dataset of sediment core M40/4_SL71, which was collected SW of Crete and spans the last ca. 180 kyr. We analysed the clay mineral composition, the grain size distribution within the silt fraction, and the abundance of major and trace elements. We tested the potential of end-member modelling on these sedimentological datasets as a tool for reconstructing the climate variability in the source regions and the associated detrital input. For each dataset, we modelled three end members. All end members were assigned to a specific provenance and sedimentary process. In total, three end members were related to the Saharan dust input, and five were related to the fluvial sediment input. One end member was strongly associated with the sapropel layers. The Saharan dust end members of the grain size and clay mineral datasets generally suggest enhanced dust export into the Eastern Mediterranean Sea during the dry phases with short-term increases during Heinrich events. During the African Humid Periods, dust export was reduced but may not have completely ceased. The loading patterns of two fluvial end members show a strong relationship with the Northern Hemisphere insolation, and all fluvial end members document enhanced input during the African Humid Periods. The sapropel end member most likely reflects the fixation of redox-sensitive elements within the anoxic sapropel layers. Our results exemplify that end-member modelling is a valuable tool for interpreting extensive and multidisciplinary datasets.
NASA Astrophysics Data System (ADS)
Roselli, Leonilde; Fabbrocini, Adele; Manzo, Cristina; D'Adamo, Raffaele
2009-10-01
The dynamics of the Lesina coastal lagoon (Italy) in terms of nutrients, phytoplankton and chemical-physical parameters were evaluated, together with their functional relationships with freshwater inputs, in order to identify ecosystem responses to changes in driving forces in a Mediterranean non-tidal lentic environment. Lesina Lagoon is a shallow coastal environment characterised by limited exchange with coastal waters, which favours enrichment of nutrients and organic matter and benthic fluxes within the system. Lagoon-sea exchanges are influenced by human management. There is a steep salinity gradient from East to West. High nitrogen and silica values were found close to freshwater inputs, indicating wastewater discharges and agricultural runoff, especially in winter. Dissolved oxygen was well below saturation (65%) near sewage and runoff inputs in the western part of the lagoon during summer. Classification in accordance with EEA (2001) guidelines suggests the system is of "poor" or "bad" quality in terms of nitrogen concentrations in the eastern zone during the winter rainy period. In terms of phosphate concentrations, the majority of the stations fall into the "good" category, with only two stations (close to the sewage and runoff inputs) classed as "bad". In both cases, the high nitrogen levels make the lagoon a P-limited system, especially in the eastern part. There was wide space-time variability in chlorophyll a concentrations, which ranged from 0.25 to 56 μg l⁻¹. No relationships between chlorophyll a and nutrients were found, suggesting that autotrophic biomass may be controlled by a large number of internal and external forcing factors driving eutrophication processes. Water quality for this type of environment depends heavily on pressure from human activities but also on the management of sewage treatment plants, agricultural practices and the channels connecting the lagoon with the sea.
Moreno-González, R; Rodríguez-Mozaz, S; Gros, M; Pérez-Cánovas, E; Barceló, D; León, V M
2014-08-15
The seasonal occurrence and distribution of 69 pharmaceuticals along coastal watercourses during 6 sampling campaigns, and their input through El Albujón watercourse to the Mar Menor lagoon, were determined by UPLC-MS-MS, considering a total of 115 water samples. The major source of pharmaceuticals running into this watercourse was an effluent from the Los Alcazares WWTP, although other sources were also present (runoffs, excess water from irrigation, etc.). In this urban and agriculturally influenced watercourse different pharmaceutical distribution profiles were detected according to their attenuation, which depended on physicochemical water conditions, pollutant input variation, biodegradation and photodegradation rates of pollutants, etc. The less recalcitrant compounds in this study (macrolides, β-blockers, etc.) showed a relevant seasonal variability as a consequence of dissipation processes (degradation, sorption, etc.). Attenuation was lower, however, for diclofenac, carbamazepine, lorazepam, valsartan and sulfamethoxazole, among others, due to their known lower degradability and sorption onto particulate matter, according to previous studies. The maximum concentrations detected were higher than 1000 ng L⁻¹ for azithromycin, clarithromycin, valsartan, acetaminophen and ibuprofen. These high concentration levels were favored by the limited dilution in this low-flow system, and consequently some of them could pose an acute risk to the biota of this watercourse. Considering data from 2009 to 2010, it has been estimated that a total of 11.3 kg of pharmaceuticals enter the Mar Menor lagoon annually through the El Albujón watercourse. The highest proportion of this input corresponded to antibiotics (46%), followed by antihypertensives (20%) and diuretics (18%). Copyright © 2014 Elsevier B.V. All rights reserved.
MODELING OF HUMAN EXPOSURE TO IN-VEHICLE PM2.5 FROM ENVIRONMENTAL TOBACCO SMOKE
Cao, Ye; Frey, H. Christopher
2012-01-01
Environmental tobacco smoke (ETS) is estimated to be a significant contributor to in-vehicle human exposure to fine particulate matter of 2.5 µm or smaller (PM2.5). A critical assessment was conducted of a mass balance model for estimating PM2.5 concentration with smoking in a motor vehicle. Recommendations for the range of inputs to the mass-balance model are given based on literature review. Sensitivity analysis was used to determine which inputs should be prioritized for data collection. Air exchange rate (ACH) and the deposition rate have wider relative ranges of variation than other inputs, representing inter-individual variability in operations, and inter-vehicle variability in performance, respectively. Cigarette smoking and emission rates, and vehicle interior volume, are also key inputs. The in-vehicle ETS mass balance model was incorporated into the Stochastic Human Exposure and Dose Simulation for Particulate Matter (SHEDS-PM) model to quantify the potential magnitude and variability of in-vehicle exposures to ETS. The in-vehicle exposure also takes into account near-road incremental PM2.5 concentration from on-road emissions. Results of probabilistic study indicate that ETS is a key contributor to the in-vehicle average and high-end exposure. Factors that mitigate in-vehicle ambient PM2.5 exposure lead to higher in-vehicle ETS exposure, and vice versa. PMID:23060732
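The mass balance underlying such estimates can be sketched as a standard single-compartment (well-mixed box) model. The equation form is generic and the parameter names are ours rather than those of SHEDS-PM:

```python
def in_vehicle_pm25(emission_mg_per_h, volume_m3, ach_per_h,
                    deposition_per_h, c_ambient=0.0, p=1.0):
    """Steady-state in-cabin PM2.5 concentration (µg/m³) from a
    single-compartment mass balance:
        V dC/dt = E + p*ACH*V*C_amb - (ACH + k)*V*C = 0
    where E is the ETS emission rate (mg/h, converted to µg/h),
    ACH the air exchange rate, k the deposition rate, and p the
    penetration factor for ambient particles."""
    e_ug = emission_mg_per_h * 1000.0
    return ((e_ug / volume_m3 + p * ach_per_h * c_ambient)
            / (ach_per_h + deposition_per_h))
```

The steady-state concentration scales with emission rate divided by interior volume and falls as air exchange and deposition remove particles, consistent with ACH, deposition rate, emission rate and cabin volume being the key inputs identified by the sensitivity analysis.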
Computing the structural influence matrix for biological systems.
Giordano, Giulia; Cuba Samaniego, Christian; Franco, Elisa; Blanchini, Franco
2016-06-01
We consider the problem of identifying structural influences of external inputs on steady-state outputs in a biological network model. We speak of a structural influence if, upon a perturbation due to a constant input, the ensuing variation of the steady-state output value has the same sign as the input (positive influence), the opposite sign (negative influence), or is zero (perfect adaptation), for any feasible choice of the model parameters. All these signs and zeros can constitute a structural influence matrix, whose (i, j) entry indicates the sign of steady-state influence of the jth system variable on the ith variable (the output caused by an external persistent input applied to the jth variable). Each entry is structurally determinate if the sign does not depend on the choice of the parameters, but is indeterminate otherwise. In principle, determining the influence matrix requires exhaustive testing of the system steady-state behaviour in the widest range of parameter values. Here we show that, in a broad class of biological networks, the influence matrix can be evaluated with an algorithm that tests the system steady-state behaviour only at a finite number of points. This algorithm also allows us to assess the structural effect of any perturbation, such as variations of relevant parameters. Our method is applied to nontrivial models of biochemical reaction networks and population dynamics drawn from the literature, providing a parameter-free insight into the system dynamics.
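For a linearized system x' = Ax + bu, the steady-state response to a persistent input entering at variable j is given by column j of -A⁻¹, so a sampling-based (hence only suggestive, unlike the paper's exact algorithm) version of the influence matrix can be sketched as:

```python
import numpy as np

def influence_signs(build_jacobian, n_samples=200, seed=0):
    """Empirical sketch of a structural influence matrix: for many
    random parameter draws, compute sign((-A^{-1})_ij), the sign of
    the steady-state response of variable i to a persistent input on
    variable j.  An entry keeps its value (+1, -1, or 0) if it agrees
    across all draws, and becomes NaN (indeterminate) otherwise.
    `build_jacobian(rng)` must return a stable (Hurwitz) Jacobian."""
    rng = np.random.default_rng(seed)
    signs = None
    for _ in range(n_samples):
        A = build_jacobian(rng)
        S = np.sign(np.round(-np.linalg.inv(A), 12))
        signs = S if signs is None else np.where(signs == S, signs, np.nan)
    return signs
```

For a two-variable negative-feedback loop (activator x1 inhibited by x2, x2 activated by x1), every draw yields the same sign pattern, so all entries are structurally determinate.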
Propagation of variability in railway dynamic simulations: application to virtual homologation
NASA Astrophysics Data System (ADS)
Funfschilling, Christine; Perrin, Guillaume; Kraft, Sönke
2012-01-01
Railway dynamic simulations are increasingly used to predict and analyse the behaviour of the vehicle and of the track during their whole life cycle. Up to now, however, no simulation has been used in the certification procedure, even though the expected benefits are important: cheaper and shorter procedures, more objectivity, and better knowledge of the behaviour around critical situations. Deterministic simulations are nevertheless too poor to represent the whole physics of the track/vehicle system, which contains several sources of variability: variability of the mechanical parameters of a train among a class of vehicles (mass, stiffness and damping of the different suspensions), variability of the contact parameters (friction coefficient, wheel and rail profiles) and variability of the track design and quality. This variability plays an important role in safety and ride quality, and thus in the certification criteria. When using simulation for certification purposes, it therefore seems crucial to take into account the variability of the different inputs. The main goal of this article is thus to propose a method to introduce variability into railway dynamics. A four-step method is described, namely the definition of the stochastic problem, the modelling of the input variability, the propagation, and the analysis of the output. Each step is illustrated with railway examples.
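The four steps can be sketched as a generic Monte Carlo propagation. The "vehicle model" below is a deliberately trivial stand-in, not a railway dynamics code, and the parameter names are illustrative:

```python
import numpy as np

def propagate_variability(model, input_dists, n_runs=1000, seed=0):
    """Generic sketch of the four-step method:
    (1) stochastic problem: `model` maps inputs -> certification criterion,
    (2) input variability: `input_dists` maps name -> sampler,
    (3) propagation: Monte Carlo evaluation of the model,
    (4) output analysis: summary statistics of the criterion."""
    rng = np.random.default_rng(seed)
    outputs = np.array([
        model({name: draw(rng) for name, draw in input_dists.items()})
        for _ in range(n_runs)
    ])
    return {"mean": outputs.mean(), "std": outputs.std(),
            "p99": np.quantile(outputs, 0.99)}

# Illustrative stand-in criterion: some lateral force that grows with
# vehicle mass and wheel-rail friction (not a real railway model).
dists = {
    "mass":     lambda rng: rng.normal(40000.0, 2000.0),   # kg
    "friction": lambda rng: rng.uniform(0.2, 0.4),
}
stats = propagate_variability(lambda p: p["mass"] * p["friction"], dists)
```

For certification purposes the tail statistics (here the 99th percentile) matter more than the mean, since safety criteria concern rare, critical situations.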
Roles for Coincidence Detection in Coding Amplitude-Modulated Sounds
Ashida, Go; Kretzberg, Jutta; Tollin, Daniel J.
2016-01-01
Many sensory neurons encode temporal information by detecting coincident arrivals of synaptic inputs. In the mammalian auditory brainstem, binaural neurons of the medial superior olive (MSO) are known to act as coincidence detectors, whereas in the lateral superior olive (LSO) roles of coincidence detection have remained unclear. LSO neurons receive excitatory and inhibitory inputs driven by ipsilateral and contralateral acoustic stimuli, respectively, and vary their output spike rates according to interaural level differences. In addition, LSO neurons are also sensitive to binaural phase differences of low-frequency tones and envelopes of amplitude-modulated (AM) sounds. Previous physiological recordings in vivo found considerable variations in monaural AM-tuning across neurons. To investigate the underlying mechanisms of the observed temporal tuning properties of LSO and their sources of variability, we used a simple coincidence counting model and examined how specific parameters of coincidence detection affect monaural and binaural AM coding. Spike rates and phase-locking of evoked excitatory and spontaneous inhibitory inputs had only minor effects on LSO output to monaural AM inputs. In contrast, the coincidence threshold of the model neuron affected both the overall spike rates and the half-peak positions of the AM-tuning curve, whereas the width of the coincidence window merely influenced the output spike rates. The duration of the refractory period affected only the low-frequency portion of the monaural AM-tuning curve. Unlike monaural AM coding, temporal factors, such as the coincidence window and the effective duration of inhibition, played a major role in determining the trough positions of simulated binaural phase-response curves. In addition, empirically-observed level-dependence of binaural phase-coding was reproduced in the framework of our minimalistic coincidence counting model. 
These modeling results suggest that coincidence detection of excitatory and inhibitory synaptic inputs is essential for LSO neurons to encode both monaural and binaural AM sounds. PMID:27322612
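A minimal coincidence-counting model in the spirit of the one described can be sketched as follows; the window, threshold, inhibition and refractory values are illustrative, not those fitted to LSO physiology:

```python
import numpy as np

def coincidence_counter(exc_times, inh_times, window=1.0,
                        threshold=3, inh_dur=2.0, refractory=1.5):
    """Output spike times (ms) of a coincidence-counting neuron: a
    spike fires when at least `threshold` excitatory events fall
    within the preceding `window` ms, no inhibitory event arrived
    within the preceding `inh_dur` ms, and the refractory period
    since the last output spike has elapsed."""
    exc = np.sort(np.asarray(exc_times, dtype=float))
    inh = np.sort(np.asarray(inh_times, dtype=float))
    out, last = [], -np.inf
    for t in exc:
        n_coinc = np.sum((exc > t - window) & (exc <= t))
        inhibited = np.any((inh > t - inh_dur) & (inh <= t))
        if n_coinc >= threshold and not inhibited and t - last >= refractory:
            out.append(t)
            last = t
    return np.array(out)
```

A burst of excitatory events crossing the coincidence threshold fires the cell, while a preceding inhibitory event vetoes the output, which is the basic interaction the abstract's tuning results rest on.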
Proposal for nanoscale cascaded plasmonic majority gates for non-Boolean computation.
Dutta, Sourav; Zografos, Odysseas; Gurunarayanan, Surya; Radu, Iuliana; Soree, Bart; Catthoor, Francky; Naeemi, Azad
2017-12-19
Surface-plasmon-polariton waves propagating at the interface between a metal and a dielectric hold the key to future high-bandwidth, dense on-chip integrated logic circuits overcoming the diffraction limitation of photonics. While recent advances in plasmonic logic have witnessed the demonstration of basic and universal logic gates, these CMOS-oriented digital logic gates cannot fully utilize the expressive power of this novel technology. Here, we aim at unraveling the true potential of plasmonics by exploiting an enhanced native functionality - the majority voter. Contrary to the state-of-the-art plasmonic logic devices, we use the phase of the wave instead of the intensity as the state or computational variable. We propose and demonstrate, via numerical simulations, a comprehensive scheme for building a nanoscale cascadable plasmonic majority logic gate along with a novel referencing scheme that can directly translate the information encoded in the amplitude and phase of the wave into electric field intensity at the output. Our MIM-based 3-input majority gate displays a highly improved overall area of only 0.636 μm² for a single stage compared with previous works on plasmonic logic. The proposed device demonstrates non-Boolean computational capability and can find direct utility in highly parallel real-time signal processing applications like pattern recognition.
African crop yield reductions due to increasingly unbalanced Nitrogen and Phosphorus consumption
NASA Astrophysics Data System (ADS)
van der Velde, Marijn; Folberth, Christian; Balkovič, Juraj; Ciais, Philippe; Fritz, Steffen; Janssens, Ivan A.; Obersteiner, Michael; See, Linda; Skalský, Rastislav; Xiong, Wei; Peñuelas, Josep
2014-05-01
The impact of soil nutrient depletion on crop production has been known for decades, but robust assessments of the impact of increasingly unbalanced nitrogen (N) and phosphorus (P) application rates on crop production are lacking. Here, we use crop response functions based on 741 FAO maize crop trials and EPIC crop modeling across Africa to examine maize yield deficits resulting from unbalanced N:P applications under low, medium, and high input scenarios, for past (1975), current, and future N:P mass ratios of, respectively, 1:0.29, 1:0.15, and 1:0.05. At low N inputs (10 kg/ha), current yield deficits amount to 10% but will increase up to 27% under the assumed future N:P ratio, while at medium N inputs (50 kg N/ha), future yield losses could amount to over 40%. The EPIC crop model was then used to simulate maize yields across Africa. The model results showed relative median future yield reductions at low N inputs of 40%, and 50% at medium and high inputs, albeit with large spatial variability. Dominant low-quality soils such as Ferralsols, which strongly adsorb P, and Arenosols, with a low nutrient retention capacity, are associated with a strong yield decline, although Arenosols show very variable crop yield losses at low inputs. Optimal N:P ratios, i.e. those where the lowest amount of applied P produces the highest yield (given N input), were calculated with EPIC to be as low as 1:0.5. Finally, we estimated the additional P required given current N inputs, and given N inputs that would allow Africa to close yield gaps (ca. 70%). At current N inputs, P consumption would have to increase 2.3-fold to be optimal, and to increase 11.7-fold to close yield gaps. The P demand to overcome these yield deficits would provide a significant additional pressure on current global extraction of P resources.
NASA Astrophysics Data System (ADS)
Wainger, Lisa; Yu, Hao; Gazenski, Kim; Boynton, Walter
2016-09-01
A major question in restoring estuarine water quality is whether local actions to manage excess nutrients can be effective, given that estuaries are also responding to tidal inputs from adjacent water bodies. Several types of statistical analysis were used to examine spatially-detailed and long-term water quality monitoring data in eight sub-estuaries of Chesapeake Bay. These sub-estuaries are likely to be similar to other shallow systems with moderate to long water residence times. Statistical cluster analysis of spatial water quality data suggested that estuaries had spatially distinct water quality zones and that the peak algal biomass (as measured by chlorophyll-a) was most often controlled by local watershed inputs in all but one estuary, although mainstem inputs affected most estuaries at some times and places. An elasticity indicator that compared inter-annual changes in sub-estuaries to parallel changes in the mainstem Chesapeake Bay supported the idea that water quality in sub-estuaries was not strongly coupled to the mainstem. A cross-channel zonation of water quality observed near the mouth of estuaries suggested that Bay influences were stronger on the right side of the lower channel (looking up estuary) at times in all estuaries, and was most common in small estuaries closest to the mouth of the primary water source to the estuary. Where Bay influences were strong, estuarine water quality would be expected to be less responsive to nutrient reductions made in the local watershed. Regression analysis was used to evaluate hypothesized relationships between environmental driver variables and average chlorophyll-a (chl-a) concentrations. Chl-a values were calculated from unusually detailed levels of spatial sampling, potentially providing a more comprehensive view of system conditions than that provided by traditional sparse sampling networks. 
The univariate models with the best data support to explain variability in averaged chl-a concentration were those that reflected water residence time. Of the land cover variables tested, septic density in the riparian zone explained the most variance in chl-a. The multivariate models that most improved upon the residence time effect added TN or TP flows (normalized by volume) and suggested that chl-a will be less responsive to nutrient reductions in estuaries that are poorly flushed.
NASA Astrophysics Data System (ADS)
Pozzi, W.; Fekete, B.; Piasecki, M.; McGuinness, D.; Fox, P.; Lawford, R.; Vorosmarty, C.; Houser, P.; Imam, B.
2008-12-01
The inadequacies of water cycle observations for monitoring long-term changes in the global water system, as well as their feedback into the climate system, pose a major constraint on sustainable development of water resources and improvement of water management practices. Hence, the Group on Earth Observations (GEO) has established Task WA-08-01, "Integration of in situ and satellite data for water cycle monitoring," an integrative initiative combining different types of satellite and in situ observations related to key variables of the water cycle with model outputs for improved accuracy and global coverage. This presentation proposes development of the Rapid, Integrated Monitoring System for the Water Cycle (Global-RIMS)--already employed by the GEO Global Terrestrial Network for Hydrology (GTN-H)--as either one of the main components or linked with the Asian system to constitute the modeling system of GEOSS for water cycle monitoring. We further propose expanded, augmented capability to run multiple grids to embrace some of the heterogeneous methods and formats of the Earth Science, Hydrology, and Hydraulic Engineering communities. Different methodologies are employed by the Earth Science (land surface modeling), Hydrological (GIS), and Hydraulic Engineering communities, with each community employing models that require different input data. Data will be routed as input variables to the models through web services, allowing satellite and in situ data to be integrated within the modeling framework. Semantic data integration will provide the automation to enable this system to operate in near-real-time. Multiple data collections for ground water, precipitation, soil moisture satellite data, such as SMAP, and lake data will require multiple low-level ontologies, and an upper-level ontology will permit user-friendly water management knowledge to be synthesized. These ontologies will have to have overlapping terms mapped and linked together so that they can cover an even wider net of data sources. The goal is to develop the means to link together the upper-level and lower-level ontologies and to have these registered within the GEOSS Registry. Actual operational ontologies that would link to models or to data collections containing input variables required by models would have to be nested underneath this top-level ontology, analogous to the mapping that has been carried out among ontologies within GEON.
Application of neural networks and sensitivity analysis to improved prediction of trauma survival.
Hunter, A; Kennedy, L; Henry, J; Ferguson, I
2000-05-01
The performance of trauma departments is widely audited by applying predictive models that assess probability of survival, and examining the rate of unexpected survivals and deaths. Although the TRISS methodology, a logistic regression modelling technique, is still the de facto standard, it is known that neural network models perform better. A key issue when applying neural network models is the selection of input variables. This paper proposes a novel form of sensitivity analysis, which is simpler to apply than existing techniques, and can be used for both numeric and nominal input variables. The technique is applied to the audit survival problem, and used to analyse the TRISS variables. The conclusions discuss the implications for the design of further improved scoring schemes and predictive models.
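The abstract does not reproduce the sensitivity technique itself; as a rough sketch of the general idea only, a permutation-style analysis ranks each input (numeric or nominal) by how much the model's score degrades when that input's column is shuffled. The `model.score(X, y)` interface below is an assumption in the scikit-learn style, not the paper's API:

```python
import numpy as np

def permutation_sensitivity(model, X, y, n_repeats=20, rng=None):
    """Rank input variables by the mean drop in model score when each
    column is randomly permuted. Shuffling works for numeric and
    nominal columns alike, since it preserves the marginal distribution
    while destroying the column's relationship to the target."""
    rng = np.random.default_rng(rng)
    base = model.score(X, y)                    # score on intact data
    sensitivity = {}
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])               # scramble column j only
            drops.append(base - model.score(Xp, y))
        sensitivity[j] = float(np.mean(drops))
    return sensitivity                          # column index -> mean score drop
```

Inputs with a near-zero score drop contribute little and would be candidates for removal from an improved scoring scheme.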
USDA-ARS?s Scientific Manuscript database
Precipitation patterns and nutrient inputs impact transport of nitrate (NO3-N) and phosphorus (TP) from Midwest watersheds. Nutrient concentrations and yields from two subsurface-drained watersheds, the Little Cobb River (LCR) in southern Minnesota and the South Fork Iowa River (SFIR) in northern Io...
Software development guidelines
NASA Technical Reports Server (NTRS)
Kovalevsky, N.; Underwood, J. M.
1979-01-01
Analysis, modularization, flowcharting, existing programs and subroutines, compatibility, input and output data, adaptability to checkout, and general-purpose subroutines are summarized. Statement ordering and numbering, specification statements, variable names, arrays, arithmetic expressions and statements, control statements, input/output, and subroutines are outlined. Intermediate results, desk checking, checkout data, dumps, storage maps, diagnostics, and program timing are reviewed.
Testing an Instructional Model in a University Educational Setting from the Student's Perspective
ERIC Educational Resources Information Center
Betoret, Fernando Domenech
2006-01-01
We tested a theoretical model that hypothesized relationships between several variables from input, process and product in an educational setting, from the university student's perspective, using structural equation modeling. In order to carry out the analysis, we measured in sequential order the input (referring to students' personal…
NASA Technical Reports Server (NTRS)
Dash, S. M.; Pergament, H. S.
1978-01-01
The basic code structure is discussed, including the overall program flow and a brief description of all subroutines. Instructions on the preparation of input data, definitions of key FORTRAN variables, sample input and output, and a complete listing of the code are presented.
NASA Astrophysics Data System (ADS)
Han, Feng; Zheng, Yi
2018-06-01
Significant input uncertainty is a major source of error in watershed water quality (WWQ) modeling, and it remains challenging to address it in a rigorous Bayesian framework. This study develops the Bayesian Analysis of Input and Parametric Uncertainties (BAIPU), an approach for the joint analysis of input and parametric uncertainties through a tight coupling of Markov Chain Monte Carlo (MCMC) analysis and Bayesian Model Averaging (BMA). The formal likelihood function for this approach is derived considering a lag-1 autocorrelated, heteroscedastic, and Skew Exponential Power (SEP) distributed error model. A series of numerical experiments were performed based on a synthetic nitrate pollution case and on a real study case in the Newport Bay Watershed, California. The Soil and Water Assessment Tool (SWAT) and Differential Evolution Adaptive Metropolis (DREAM(ZS)) were used as the representative WWQ model and MCMC algorithm, respectively. The major findings include the following: (1) the BAIPU can be implemented and used to appropriately identify the uncertain parameters and characterize the predictive uncertainty; (2) the compensation effect between the input and parametric uncertainties can seriously mislead modeling-based management decisions if the input uncertainty is not explicitly accounted for; (3) the BAIPU accounts for the interaction between the input and parametric uncertainties and therefore provides more accurate calibration and uncertainty results than a sequential analysis of the uncertainties; and (4) the BAIPU quantifies the credibility of different input assumptions on a statistical basis and can be implemented as an effective inverse modeling approach to the joint inference of parameters and inputs.
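As a hedged illustration only: a Gaussian simplification of such a formal likelihood (lag-1 autocorrelated, heteroscedastic residuals, but without the SEP skew and kurtosis parameters) can be written down compactly. The names `sigma0`, `sigma1`, and `phi` are illustrative error-model parameters, not the paper's fitted values:

```python
import numpy as np

def log_likelihood(obs, sim, sigma0, sigma1, phi):
    """Gaussian log-likelihood with heteroscedastic scale
    sigma_t = sigma0 + sigma1*|sim_t| and lag-1 autocorrelation phi
    on the standardised residuals (unit-variance AR(1) process)."""
    res = obs - sim
    sigma = sigma0 + sigma1 * np.abs(sim)      # heteroscedastic scale
    a = res / sigma                            # standardised residuals
    s_e = np.sqrt(1.0 - phi ** 2)              # AR(1) innovation std. dev.
    e = np.concatenate(([a[0]], (a[1:] - phi * a[:-1]) / s_e))
    # N(0,1) log-density of the innovations, plus the Jacobian terms of
    # the residual-to-innovation transform
    return float(-0.5 * np.sum(e ** 2)
                 - 0.5 * e.size * np.log(2.0 * np.pi)
                 - np.sum(np.log(sigma))
                 - (e.size - 1) * np.log(s_e))
```

In an MCMC sampler such as DREAM(ZS), a function of this form would be evaluated for every proposed set of parameters and inputs.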
History of nutrient inputs to the northeastern United States, 1930-2000
NASA Astrophysics Data System (ADS)
Hale, Rebecca L.; Hoover, Joseph H.; Wollheim, Wilfred M.; Vörösmarty, Charles J.
2013-04-01
Humans have dramatically altered nutrient cycles at local to global scales. We examined changes in anthropogenic nutrient inputs to the northeastern United States (NE) from 1930 to 2000. We created a comprehensive time series of anthropogenic N and P inputs to 437 counties in the NE at 5 year intervals. Inputs included atmospheric N deposition, biological N2 fixation, fertilizer, detergent P, livestock feed, and human food. Exports comprised shipments of feed and food and volatilization of ammonia. N inputs to the NE increased throughout the study period, primarily due to increases in atmospheric deposition and fertilizer. P inputs increased until 1970 and then declined due to decreased fertilizer and detergent inputs. Livestock consistently consumed the majority of nutrient inputs over time and space. The area of crop agriculture declined during the study period but consumed more nutrients as fertilizer. We found that the stoichiometry (N:P) of inputs and the absolute amounts of N matched nutritional needs (livestock, humans, crops) when atmospheric components (N deposition, N2 fixation) were not included. Differences between N and P led to major changes in N:P stoichiometry over time, consistent with global trends. N:P decreased from 1930 to 1970 due to increased inputs of P, and increased from 1970 to 2000 due to increased N deposition and fertilizer and decreases in P fertilizer and detergent use. We found that nutrient use is a dynamic product of social, economic, political, and environmental interactions. Therefore, future nutrient management must take these factors into account to design successful and effective nutrient reduction measures.
Model predictive controller design for boost DC-DC converter using T-S fuzzy cost function
NASA Astrophysics Data System (ADS)
Seo, Sang-Wha; Kim, Yong; Choi, Han Ho
2017-11-01
This paper proposes a Takagi-Sugeno (T-S) fuzzy method to select cost function weights of finite control set model predictive DC-DC converter control algorithms. The proposed method updates the cost function weights at every sample time by using T-S type fuzzy rules derived from the common optimal control engineering knowledge that a state or input variable with an excessively large magnitude can be penalised by increasing the weight corresponding to the variable. The best control input is determined via the online optimisation of the T-S fuzzy cost function for all the possible control input sequences. This paper implements the proposed model predictive control algorithm in real time on a Texas Instruments TMS320F28335 floating-point Digital Signal Processor (DSP). Some experimental results are given to illuminate the practicality and effectiveness of the proposed control system under several operating conditions. The results verify that our method can yield not only good transient and steady-state responses (fast recovery time, small overshoot, zero steady-state error, etc.) but also insensitiveness to abrupt load or input voltage parameter variations.
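A minimal sketch of the weight-update idea, with hypothetical membership breakpoints, penalty weights, and a caller-supplied one-step `predict` model (the paper's actual rule base and boost-converter model are not reproduced here):

```python
import numpy as np

def ts_weight(err, small=0.1, large=1.0, w_lo=1.0, w_hi=10.0):
    """Takagi-Sugeno-style rule: the membership of |err| between the
    fuzzy sets 'small' and 'large' blends a low and a high penalty
    weight, so a state or input variable with an excessively large
    magnitude is penalised more heavily."""
    mu = float(np.clip((abs(err) - small) / (large - small), 0.0, 1.0))
    return (1.0 - mu) * w_lo + mu * w_hi

def fcs_mpc_step(x, x_ref, predict, inputs):
    """Finite-control-set MPC step: evaluate the fuzzy-weighted cost for
    every admissible control input and return the best one."""
    best_u, best_cost = None, np.inf
    for u in inputs:
        err = predict(x, u) - x_ref             # predicted tracking error
        cost = sum(ts_weight(e) * e * e for e in err)
        if cost < best_cost:
            best_u, best_cost = u, cost
    return best_u
```

On a DSP, this online optimisation over the finite set of switch states would run once per sample time, with the weights refreshed from the fuzzy rules at each step.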
NASA Astrophysics Data System (ADS)
Walley, Yasmin; Tunnicliffe, Jon; Brierley, Gary
2018-04-01
Lateral inputs from hillslopes and tributaries exert a variable impact upon the longitudinal connectivity of sediment transfer in river systems with differing drainage network configurations. Network topology influences channel slope and confinement at confluence zones, thereby affecting patterns of sediment storage and the conveyance of sediments through catchments. Rates of disturbance response, patterns of sediment propagation, and the implications for connectivity and recovery were assessed in two neighbouring catchments with differing network configurations on the East Cape of New Zealand. Both catchments were subject to forest clearing in the late 1940s and a major cyclonic storm in 1988. However, reconstruction of landslide runout pathways, and characterization of connectivity using a Tokunaga framework, demonstrates different patterns and rates of sediment transfer and storage in a dendritic network relative to a more elongate, herringbone drainage network. The dendritic network has a higher rate of sediment transfer between storage sites in successive Strahler orders, whereas longitudinal connectivity along the fourth-order mainstem is disrupted by lateral sediment inputs from multiple low-order tributaries in the more elongate, herringbone network. In both cases the most dynamic ('hotspot') reaches are associated with a high degree of network side-branching.
Interannual Variability in Intercontinental Transport
NASA Technical Reports Server (NTRS)
Gupta, Mohan; Douglass, Anne; Kawa, S. Randy; Pawson, Steven
2003-01-01
We have investigated the importance of intercontinental transport using source-receptor relationships. A global radon-like tracer and seven regional tracers were used in three-dimensional model simulations to quantify their contributions to column burdens and vertical profiles at world-wide receptors. Sensitivity of these contributions to meteorological input was examined using different years of meteorology in two atmospheric simulations. Results show that Asian emission influences tracer distributions in its eastern downwind regions extending as far as Europe, with major contributions in the mid- and upper troposphere. On the western and eastern sides of the US, Asian contributions to annual average column burdens are 37% and 5% respectively, with strong monthly variations. At an altitude of 10 km, these contributions are 75% and 25% respectively. North American emissions contribute more than 15% to the annual average column burden and about 50% at 8 km altitude over the European region. Contributions from tropical African emissions are wide-spread in both hemispheres. Differences in meteorological input cause non-uniform redistribution of tracer mass throughout the troposphere at all receptors. We also show that in model-model and model-data comparisons, correlation analysis of a tracer's spatial gradients provides an added measure of a model's performance.
Evaluation of a spatially-distributed Thornthwaite water-balance model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lough, J.A.
1993-03-01
A small watershed of low relief in coastal New Hampshire was divided into hydrologic sub-areas in a geographic information system on the basis of soils, sub-basins, and remotely-sensed landcover. Three variables were spatially modeled for input to 49 individual water-balances: available water content of the root zone, water input, and potential evapotranspiration (PET). The individual balances were weight-summed to generate the aggregate watershed balance, which yielded 9% (48-50 mm) less annual actual evapotranspiration (AET) than a lumped approach. Analysis of streamflow coefficients suggests that the spatially-distributed approach is more representative of the basin dynamics. Variation of PET by landcover accounted for the majority of the 9% AET reduction; variation of soils played a near-negligible role. As a consequence, estimates of landcover proportions and annual PET by landcover are sufficient to correct a lumped water-balance in the Northeast. If remote sensing is used to estimate the landcover area, a sensor with a high spatial resolution is required. Finally, while the lower Thornthwaite model has conceptual limitations for distributed application, the upper Thornthwaite model is highly adaptable to distributed problems and may prove useful in many earth-system models.
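The Thornthwaite PET estimate at the heart of such a balance depends only on mean monthly temperature; a minimal sketch (omitting the day-length and month-length correction factors applied in practice) is:

```python
def thornthwaite_pet(monthly_temp_c):
    """Unadjusted Thornthwaite potential evapotranspiration (mm/month)
    from 12 mean monthly temperatures in deg C."""
    # Annual heat index from months with above-freezing temperatures
    I = sum((t / 5.0) ** 1.514 for t in monthly_temp_c if t > 0)
    # Empirical exponent from the standard cubic fit in I
    a = 6.75e-7 * I**3 - 7.71e-5 * I**2 + 1.792e-2 * I + 0.49239
    return [16.0 * (10.0 * t / I) ** a if t > 0 else 0.0
            for t in monthly_temp_c]
```

In a distributed application, this computation would be repeated for each hydrologic sub-area with its own temperature (and correction-factor) inputs before weight-summing.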
NASA Astrophysics Data System (ADS)
Kang, Dongwoo; Dall'erba, Sandy
2016-04-01
Griliches' knowledge production function has been increasingly adopted at the regional level where location-specific conditions drive the spatial differences in knowledge creation dynamics. However, the large majority of such studies rely on a traditional regression approach that assumes spatially homogenous marginal effects of knowledge input factors. This paper extends the authors' previous work (Kang and Dall'erba in Int Reg Sci Rev, 2015. doi: 10.1177/0160017615572888) to investigate the spatial heterogeneity in the marginal effects by using nonparametric local modeling approaches such as geographically weighted regression (GWR) and mixed GWR with two distinct samples of the US Metropolitan Statistical Area (MSA) and non-MSA counties. The results indicate a high degree of spatial heterogeneity in the marginal effects of the knowledge input variables, more specifically for the local and distant spillovers of private knowledge measured across MSA counties. On the other hand, local academic knowledge spillovers are found to display spatially homogenous elasticities in both MSA and non-MSA counties. Our results highlight the strengths and weaknesses of each county's innovation capacity and suggest policy implications for regional innovation strategies.
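The GWR idea can be sketched in a few lines: each location gets its own weighted least-squares fit, with observations down-weighted by distance through a kernel. The Gaussian kernel and fixed `bandwidth` below are common defaults, not necessarily the authors' specification:

```python
import numpy as np

def gwr_coefficients(coords, X, y, bandwidth):
    """Basic geographically weighted regression: fit weighted least
    squares at every location, so the marginal effects (coefficients)
    are allowed to vary over space."""
    betas = []
    for c in coords:
        d = np.linalg.norm(coords - c, axis=1)   # distances to location c
        w = np.exp(-0.5 * (d / bandwidth) ** 2)  # Gaussian kernel weights
        Xw = X * w[:, None]                      # rows scaled by weight
        # Solve (X'WX) beta = X'Wy for this location
        beta, *_ = np.linalg.lstsq(Xw.T @ X, Xw.T @ y, rcond=None)
        betas.append(beta)
    return np.array(betas)                       # one coefficient row per location
```

A mixed GWR would additionally hold some columns' coefficients fixed across locations, matching the finding that academic spillovers are spatially homogeneous while private-knowledge spillovers are not.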
Climate Change Mitigation Challenge for Wood Utilization-The Case of Finland.
Soimakallio, Sampo; Saikku, Laura; Valsta, Lauri; Pingoud, Kim
2016-05-17
The urgent need to mitigate climate change invokes both opportunities and challenges for forest biomass utilization. Fossil fuels can be substituted by using wood products in place of alternative materials and energy, but wood harvesting reduces the forest carbon sink and processing of wood products requires material and energy inputs. We assessed the extended life cycle carbon emissions, considering substitution impacts, for various wood utilization scenarios over 100 years from 2010 onward for Finland. The scenarios were based on various but constant wood utilization structures reflecting the current and anticipated mix of wood utilization activities. We applied stochastic simulation to deal with the uncertainty in a number of required input variables. According to our analysis, wood utilization decreases net carbon emissions with a probability lower than 40% for each of the studied scenarios. Furthermore, large emission reductions were exceptionally unlikely. The uncertainty of the results was influenced most strongly by the reduction in the forest carbon sink. There is a significant trade-off between avoiding emissions through fossil fuel substitution and reducing the forest carbon sink through wood harvesting. This creates a major challenge for forest management practices and wood utilization activities in responding to ambitious climate change mitigation targets.
Poças, Maria F; Oliveira, Jorge C; Brandsch, Rainer; Hogg, Timothy
2010-07-01
The use of probabilistic approaches in exposure assessments of contaminants migrating from food packages is of increasing interest, but the lack of concentration or migration data is often cited as a limitation. Data accounting for the variability and uncertainty that can be expected in migration, for example, due to heterogeneity in the packaging system, variation of the temperature along the distribution chain, and different times of consumption of each individual package, are required for probabilistic analysis. The objective of this work was to characterize quantitatively the uncertainty and variability in estimates of migration. A Monte Carlo simulation was applied to a typical solution of Fick's law with given variability in the input parameters. The analysis was performed based on experimental data for a model system (migration of Irgafos 168 from polyethylene into isooctane) and illustrates how important sources of variability and uncertainty can be identified in order to refine analyses. For long migration times and controlled temperature conditions, the affinity of the migrant for the food can be the major factor determining the variability in the migration values (more than 70% of variance). In situations where both the time of consumption and temperature can vary, these factors can be responsible, respectively, for more than 60% and 20% of the variance in the migration estimates. The approach presented can be used with databases from consumption surveys to yield a true probabilistic estimate of exposure.
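A sketch of the approach under simplified assumptions: the short-time solution of Fick's second law for a semi-infinite film gives m/A = 2*c0*rho*sqrt(D*t/pi), and Monte Carlo sampling of the inputs propagates their variability into the migration estimate. The distributions and constants below are illustrative, not the fitted Irgafos 168 values:

```python
import numpy as np

def migration_mc(n=10000, rng=None):
    """Monte Carlo sample of specific migration (mg/dm^2) using the
    short-time Fickian solution m/A = 2*c0*rho*sqrt(D*t/pi)."""
    rng = np.random.default_rng(rng)
    c0 = rng.normal(1000.0, 100.0, n)         # initial conc. in polymer, mg/kg
    rho = 0.92e-3                             # polymer density, kg/cm^3
    D = 10.0 ** rng.normal(-8.5, 0.3, n)      # diffusion coefficient, cm^2/s
    t = rng.uniform(5.0, 30.0, n) * 86400.0   # time of consumption, 5-30 days, in s
    m = 2.0 * c0 * rho * np.sqrt(D * t / np.pi)   # migrated mass per area, mg/cm^2
    return m * 100.0                          # convert to mg/dm^2
```

Correlating each sampled input with the output then apportions the output variance among the inputs, which is how the dominant sources of variability can be identified.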
Examining Equity Sensitivity: An Investigation Using the Big Five and HEXACO Models of Personality.
Woodley, Hayden J R; Bourdage, Joshua S; Ogunfowora, Babatunde; Nguyen, Brenda
2015-01-01
The construct of equity sensitivity describes an individual's preference about his/her desired input to outcome ratio. Individuals high on equity sensitivity tend to be more input oriented, and are often called "Benevolents." Individuals low on equity sensitivity are more outcome oriented, and are described as "Entitleds." Given that equity sensitivity has often been described as a trait, the purpose of the present study was to examine major personality correlates of equity sensitivity, so as to inform both the nature of equity sensitivity, and the potential processes through which certain broad personality traits may relate to outcomes. We examined the personality correlates of equity sensitivity across three studies (total N = 1170), two personality models (i.e., the Big Five and HEXACO), the two most common measures of equity sensitivity (i.e., the Equity Preference Questionnaire and Equity Sensitivity Inventory), and using both self and peer reports of personality (in Study 3). Although results varied somewhat across samples, the personality variables of Conscientiousness and Honesty-Humility, followed by Agreeableness, were the most robust predictors of equity sensitivity. Individuals higher on these traits were more likely to be Benevolents, whereas those lower on these traits were more likely to be Entitleds. Although some associations between Extraversion, Openness, and Neuroticism and equity sensitivity were observed, these were generally not robust. Overall, it appears that there are several prominent personality variables underlying equity sensitivity, and that the addition of the HEXACO model's dimension of Honesty-Humility substantially contributes to our understanding of equity sensitivity.
NASA Astrophysics Data System (ADS)
Mugo, R. M.; Limaye, A. S.; Nyaga, J. W.; Farah, H.; Wahome, A.; Flores, A.
2016-12-01
The water quality of inland lakes is largely influenced by land use and land cover changes within the lake's catchment. In Africa, some of the major land use changes are driven by a number of factors, which include urbanization, intensification of agricultural practices, unsustainable farm management practices, deforestation, land fragmentation, and degradation. Often, the impacts of these factors are observable in changes in the land cover, and eventually in the hydrological systems. When the natural vegetation cover is reduced or changed, the surface water flow patterns and the water and nutrient retention capacities are also changed. This can lead to high nutrient inputs into lakes, leading to eutrophication, siltation, and infestation of floating aquatic vegetation. To assess the relationship between land use and land cover changes in part of the Lake Victoria Basin, a series of land cover maps were derived from Landsat imagery. Changes in land cover were identified through change maps and statistics. Further, the surface water chlorophyll-a concentration and turbidity were derived from MODIS-Aqua data for Lake Victoria. Chlorophyll-a and turbidity are good proxy indicators of nutrient inputs and siltation, respectively. The trends in chlorophyll-a and turbidity concentrations were analyzed and compared to the land cover changes over time. Certain land cover changes related to agriculture and urban development were clearly identifiable. While these changes might not be solely responsible for variability in chlorophyll-a and turbidity concentrations in the lake, they are potentially contributing factors to this problem. This work illustrates the importance of addressing watershed degradation while seeking to solve water quality related problems.
[Variability in nursing workload within Swiss Diagnosis Related Groups].
Baumberger, Dieter; Bürgin, Reto; Bartholomeyczik, Sabine
2014-04-01
Nursing care inputs represent one of the major cost components in the Swiss Diagnosis Related Group (DRG) structure. High and low nursing workloads in individual cases are supposed to balance out within a DRG group. Research results indicating possible problems in this area cannot be reliably extrapolated to SwissDRG. An analysis of nursing workload figures with DRG indicators was carried out in order to decide whether there is a need to develop SwissDRG classification criteria that are specific to nursing care. The case groups were determined with SwissDRG 0.1, and nursing workload with LEP Nursing 2. Robust statistical methods were used. The evaluation of classification accuracy was carried out with R2 as the measurement of variance reduction and with the coefficient of homogeneity (CH). To ensure reliable conclusions, statistical tests with bootstrapping methods were performed. The sample included 213 groups with a total of 73930 cases from ten hospitals. The DRG classification was seen to have limited explanatory power for variability in nursing workload inputs, both for all cases (R2 = 0.16) and for inliers (R2 = 0.32). Nursing workload homogeneity was statistically significantly unsatisfactory (CH < 0.67) in 123 groups, including 24 groups in which it was significantly defective (CH < 0.60). Therefore, there is a high risk of high and low nursing workloads not balancing out in these groups and, as a result, of financial resources being wrongly allocated. The development of nursing-care-specific SwissDRG classification criteria for improved homogeneity and variance reduction is therefore indicated.
Kibria, Golam; Lau, T C; Wu, Rudolf
2012-12-01
The "Artificial mussel" (AM), a novel passive sampling technology, was used for the first time in Australia in freshwater to monitor and assess the risk of trace metals (Cd, Cu, Hg, Pb, and Zn). AMs were deployed at 10 sites within the Goulburn-Murray Water catchments, Victoria, Australia during a dry year (2009-2010) and a wet year (2010-2011). Our results showed that the AMs accumulated all five metals. Cd, Pb, and Hg were detected during the wet year but were below detection limits during the dry year. At some sites close to orchards, vineyards, and farming areas, elevated levels of Cu were clearly evident during the dry year, while elevated levels of Zn were found during the wet year; the Cu indicates localized inputs from the agricultural application of copper fungicide. The impacts from old mines were significantly less than at 'hot spots'. Our study demonstrated that climate variability (dry and wet years) can influence metal inputs to waterways via different transport pathways. Using the AMs, we were able to identify various 'hot spots' of heavy metals, which may pose a potential risk to aquatic ecosystems (sub-lethal effects on fish) and the public (via food chain metal bioaccumulation and biomagnification) in the Goulburn-Murray Water catchments. The State Protection Policy exempted artificial channels and drains from protection of beneficial use (including protection of aquatic ecosystems), and the majority of sites ('hot spots') were located within artificial irrigation channels.
Connecting the Mississippi River with Carbon Variability in the Gulf of Mexico
NASA Astrophysics Data System (ADS)
Xue, Z. G.; He, R.; Fennel, K.; Cai, W. J.; Lohrenz, S. E.; Huang, W. J.; Tian, H.; Ren, W.
2016-02-01
To understand the linkage between land-use/land-cover change within the Mississippi basin and carbon dynamics in the Gulf of Mexico, a three-dimensional coupled physical-biogeochemical model was used to examine the temporal and spatial variability of surface ocean pCO2 in the Gulf of Mexico (GoM). The model is driven by realistic atmospheric forcing, open boundary conditions from a data-assimilative global ocean circulation model, and freshwater and terrestrial nutrient and carbon input from major rivers provided by the Dynamic Land Ecosystem Model (DLEM). A seven-year model hindcast (2004-2010) was performed and was validated against the recently updated Lamont-Doherty Earth Observatory global ocean carbon dataset. Model-simulated seawater pCO2 and air-sea CO2 flux are in good agreement with in-situ measurements. An inorganic carbon budget was estimated based on the multi-year mean of the model results. Overall, the GoM is a sink of atmospheric CO2 with a flux of 0.92 × 10^12 mol C yr^-1, which, together with the enormous fluvial carbon input, is balanced by carbon export through the Loop Current. In a sensitivity experiment with all biological sources and sinks of carbon disabled, surface pCO2 was elevated by 70 ppm, suggesting that biological uptake is the most important driver of the simulated CO2 sink. The impact of land-use and land-cover changes within the Mississippi River basin on coastal pCO2 dynamics is also discussed based on a scenario run driven by river conditions during 1904-1910 provided by the DLEM model.
Laufer, Yocheved; Elboim-Gabyzon, Michal
2011-01-01
Somatosensory input may lead to long-lasting cortical plasticity and enhanced motor recovery in patients with neurological impairments. Sensory transcutaneous electrical stimulation (TENS) is a relatively risk-free and easy-to-implement modality for rehabilitation. The authors systematically examine the effects of sensory TENS on motor recovery after stroke. Eligible randomized or quasi-randomized trials were identified via searches of computerized databases. Two assessors independently reviewed the eligibility and methodological quality of the retrieved articles. In all, 15 articles satisfied the inclusion criteria. Methodological quality was generally good, with a mean (standard deviation) PEDro score of 6.7/10 (1.2). Although the majority of studies reported significant effects on at least 1 outcome measure, effect sizes were generally small. Meta-analysis could not be performed for the majority of outcome measures because of variability between studies and insufficient data. A moderate effect was determined for force production of the ankle dorsiflexors and for the Timed Up and Go test. Sensory stimulation via TENS may be beneficial in enhancing aspects of motor recovery following a stroke, particularly when used in combination with active training. Because of the great variability between studies, particularly in terms of the timing of the intervention after the stroke, the outcome measures used, and the stimulation protocols, insufficient data are available to provide guidelines about strategies and efficacy.
NASA Astrophysics Data System (ADS)
Grafton, Q.
2014-12-01
This presentation reviews the pressures, threats, and risks to food availability and water based on projected global population growth to 2050. An original model, the Global Food and Water System (GWFS) Platform, is introduced and used to explore food deficits under various scenarios and also the implications for future water gaps. The GWFS Platform can assess the effects of crop productivity on food production and incorporates data from 19 major food producing nations to generate a global projection of food and water gaps. Preliminary results indicate that while crop food supply is able to meet global crop food demand by 2050, this is possible only with 'input intensification' that includes increased average rates of water and fertiliser use per hectare and at least a 20% once-and-for-all increase in average yield productivity. Increased water withdrawals for agriculture with input intensification would, absent any increases in withdrawals for manufacturing or household uses, place the world very close to the limits of a safe operating space in terms of overall water use by 2050. While global crop food supply can meet projected global demand with input intensification, this still results in large and growing crop food deficits to 2050 in some countries, especially in South Asia, where climate change is expected to increase variability of rainfall and, in some places, reduce overall freshwater availability. While beyond the confines of the GWFS Platform, the implications of expected water withdrawals for the environment in particular locations are also briefly reviewed.
Stottlemyer, R.; Edmonds, R.; Scherbarth, L.; Urbanczyk, K.; Van Miegroet, H.; Zak, J.
2002-01-01
In 1998, the USGS Global Change program funded research for a network of Long-Term Reference Ecosystems initially established in national parks and funded by the National Park Service. The network included Noland Divide, Great Smoky Mountains National Park, Tennessee; Pine Canyon, Big Bend National Park, Texas; West Twin Creek, Olympic National Park, Washington; Wallace Lake, Isle Royale National Park, Michigan; and the Asik watershed, Noatak National Preserve, Alaska. The watershed ecosystem model was used since this approach permits additional statistical power in detecting trends among variables, and the watershed is increasingly a land unit used in resource management and planning. The ecosystems represent a major fraction of lands administered by the National Park Service and were chosen generally for the contrasts among sites. For example, two of the sites, Noland and West Twin, are characterized by high precipitation amounts, but Noland receives some of the highest atmospheric nitrogen (N) inputs in North America. In contrast, Pine Canyon and Asik are warm and cold desert sites, respectively. The Asik watershed receives <1% of the atmospheric N inputs that Noland receives. The Asik site is at the northern extent (treeline) of the boreal biome in North America, while Wallace is at the southern ecotone between boreal and northern hardwoods. The research goal for these sites is to gain a basic understanding of ecosystem structure and function, and of the response to global change, especially atmospheric inputs and climate.
Wellard Kelly, Holly A.; Rosi-Marshall, Emma J.; Kennedy, Theodore A.; Hall, Robert O.; Cross, Wyatt F.; Baxter, Colden V.
2013-01-01
Physical changes to rivers associated with large dams (e.g., water temperature) directly alter macroinvertebrate assemblages. Large dams also may indirectly alter these assemblages by changing the food resources available to support macroinvertebrate production. We examined the diets of the 4 most common macroinvertebrate taxa in the Colorado River through Glen and Grand Canyons, seasonally, at 6 sites for 2.5 y. We compared macroinvertebrate diet composition to the composition of epilithon (rock and cliff faces) communities and suspended organic seston to evaluate the degree to which macroinvertebrate diets tracked downstream changes in resource availability. Diets contained greater proportions of algal resources in the tailwater of Glen Canyon Dam and more terrestrial-based resources at sites downstream of the 1st major tributary. As predicted, macroinvertebrate diets tracked turbidity-driven changes in resource availability, and river turbidity partially explained variability in macroinvertebrate diets. The relative proportions of resources assimilated by macroinvertebrates ranged from dominance by algae to terrestrial-based resources, despite greater assimilation efficiencies for algal than terrestrial C. Terrestrial resources were most important during high turbidity conditions, which occurred during the late-summer monsoon season (July–October) when tributaries contributed large amounts of organic matter to the mainstem and suspended sediments reduced algal production. Macroinvertebrate diets were influenced by seasonal changes in tributary inputs and turbidity, a result suggesting macroinvertebrate diets in regulated rivers may be temporally dynamic and driven by tributary inputs.
USDA-ARS?s Scientific Manuscript database
Soil organic matter (SOM) is a very important compartment of the biosphere: it represents the largest dynamic carbon (C) pool, where C is stored for the longest time period. Root inputs, such as exudates and root sloughing, represent a major, if not the largest, annual contribution to soil C input. Roo...
Learning from adaptive neural dynamic surface control of strict-feedback systems.
Wang, Min; Wang, Cong
2015-06-01
Learning plays an essential role in autonomous control systems. However, how to achieve learning in a nonstationary environment for nonlinear systems is a challenging problem. In this paper, we present a learning method for a class of nth-order strict-feedback systems based on adaptive dynamic surface control (DSC) technology, which achieves the human-like ability of learning by doing, and doing with learned knowledge. To achieve the learning, this paper first proposes a stable adaptive DSC with auxiliary first-order filters, which ensures the boundedness of all the signals in the closed-loop system and the convergence of tracking errors in finite time. With the help of DSC, the derivative of the filter output variable is used as the neural network (NN) input instead of traditional intermediate variables. As a result, the proposed adaptive DSC method greatly reduces the dimension of the NN inputs, especially for high-order systems. After the stable DSC design, we decompose the stable closed-loop system into a series of linear time-varying perturbed subsystems. Using a recursive design, the recurrent property of the NN input variables is easily verified, since DSC overcomes the design complexity. Subsequently, the partial persistent excitation condition of the radial basis function NN is satisfied. By combining a state transformation, accurate approximations of the closed-loop system dynamics are recursively achieved in a local region along recurrent orbits. Then, a learning control method using the learned knowledge is proposed to achieve closed-loop stability and improved control performance. Simulation studies demonstrate that the proposed scheme can not only reuse the learned knowledge to achieve better control performance, with a faster tracking convergence rate and a smaller tracking error, but also greatly alleviate the computational burden by reducing the number and complexity of the NN input variables.
SAMBA: Sparse Approximation of Moment-Based Arbitrary Polynomial Chaos
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ahlfeld, R., E-mail: r.ahlfeld14@imperial.ac.uk; Belkouchi, B.; Montomoli, F.
2016-09-01
A new arbitrary Polynomial Chaos (aPC) method is presented for moderately high-dimensional problems characterised by limited input data availability. The proposed methodology improves the algorithm of aPC and extends the method, which was previously only introduced as a tensor product expansion, to moderately high-dimensional stochastic problems. The fundamental idea of aPC is to use the statistical moments of the input random variables to develop the polynomial chaos expansion. This approach provides the possibility to propagate continuous or discrete probability density functions, and also histograms (data sets), as long as their moments exist, are finite and the determinant of the moment matrix is strictly positive. For cases with limited data availability, this approach avoids bias and fitting errors caused by wrong assumptions. In this work, an alternative way to calculate the aPC is suggested, which provides the optimal polynomials, Gaussian quadrature collocation points and weights from the moments using only a handful of matrix operations on the Hankel matrix of moments. It can therefore be implemented without requiring prior knowledge about statistical data analysis or a detailed understanding of the mathematics of polynomial chaos expansions. The extension to more input variables suggested in this work is an anisotropic and adaptive version of Smolyak's algorithm that is solely based on the moments of the input probability distributions. It is referred to as SAMBA (PC), which is short for Sparse Approximation of Moment-Based Arbitrary Polynomial Chaos. It is illustrated that for moderately high-dimensional problems (up to 20 different input variables or histograms) SAMBA can significantly simplify the calculation of sparse Gaussian quadrature rules.
SAMBA's efficiency for multivariate functions with regard to data availability is further demonstrated by analysing higher order convergence and accuracy for a set of nonlinear test functions with 2, 5 and 10 different input distributions or histograms.
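The moment-to-quadrature pipeline described above (raw moments, Hankel matrix, three-term recurrence coefficients, Jacobi matrix eigendecomposition) can be sketched in a few matrix operations. This is a generic Hankel/Golub-Welsch illustration of the idea, not the authors' SAMBA implementation; the function name and the uniform-distribution example are assumptions for demonstration:

```python
import numpy as np

def gauss_from_moments(moments, n):
    """n-point Gaussian quadrature from raw moments m_0..m_2n:
    build the Hankel matrix of moments, take its Cholesky factor,
    read off the three-term recurrence coefficients, and
    diagonalize the Jacobi matrix (Golub-Welsch)."""
    H = np.array([[moments[i + j] for j in range(n + 1)] for i in range(n + 1)])
    R = np.linalg.cholesky(H).T                      # H = R^T R, R upper triangular
    alpha = np.empty(n)
    beta = np.empty(max(n - 1, 0))
    alpha[0] = R[0, 1] / R[0, 0]
    for k in range(1, n):
        alpha[k] = R[k, k + 1] / R[k, k] - R[k - 1, k] / R[k - 1, k - 1]
        beta[k - 1] = R[k, k] / R[k - 1, k - 1]
    J = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)   # Jacobi matrix
    nodes, V = np.linalg.eigh(J)
    weights = moments[0] * V[0, :] ** 2              # squared first components
    return nodes, weights

# Uniform distribution on [0, 1]: moments m_k = 1/(k+1); the 2-point rule
# should reproduce the Gauss-Legendre nodes 0.5 -/+ 1/(2*sqrt(3))
nodes, weights = gauss_from_moments([1 / (k + 1) for k in range(5)], 2)
print(nodes, weights)
```

The same construction underlies the sparse extension: only the moments of the input distribution (or histogram) are needed, never its analytic form.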
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cho, Daniel D; Wernicke, A Gabriella; Nori, Dattatreyudu
Purpose/Objective(s): The aim of this study is to build an estimator of toxicity using an artificial neural network (ANN) for head and neck (H and N) cancer patients. Materials/Methods: An ANN can combine variables into a predictive model during training while considering all possible correlations among the variables. We constructed an ANN based on data from 73 patients with advanced H and N cancer treated with external beam radiotherapy and/or chemotherapy at our institution. For the toxicity estimator we defined input data including age, sex, site, stage, pathology, chemotherapy status, technique of external beam radiation therapy (EBRT), length of treatment, dose of EBRT, postoperative status, length of follow-up, and the status of local recurrence and distant metastasis. These data were digitized based on their significance and fed to the ANN as input nodes. We used 20 hidden nodes (for the 13 input nodes) to account for the correlations among input nodes. For training the ANN, we divided the data into three subsets: a training set, a validation set and a test set. Finally, we built the estimator for toxicity from the ANN output. Results: We used 13 input variables, including the status of local recurrence and distant metastasis, and 20 hidden nodes for correlations. We used 59 patients for the training set, 7 for the validation set and 7 for the test set, and fed the inputs to the Matlab neural network fitting tool. We trained the data to within 15% error of outcome. In the end we obtained a toxicity estimation with 74% accuracy. Conclusion: We proved in principle that an ANN can be a very useful tool for predicting RT outcomes for high-risk H and N patients. Currently we are improving the results using cross-validation.
Bittner, J.W.; Biscardi, R.W.
1991-03-19
An electronic measurement circuit is disclosed for high speed comparison of the relative amplitudes of a predetermined number of electrical input signals, independent of variations in the magnitude of the sum of the signals. The circuit includes a high speed electronic switch that is operably connected to receive on its respective input terminals one of said electrical input signals and to have its common terminal serve as an input for a variable-gain amplifier-detector circuit that is operably connected to feed its output to a common terminal of a second high speed electronic switch. The respective terminals of the second high speed electronic switch are operably connected to a plurality of integrating sample and hold circuits, which in turn have their outputs connected to a summing logic circuit that is operable to develop first, second and third output voltages, the first output voltage being proportional to a predetermined ratio of sums and differences between the compared input signals, the second output voltage being proportional to a second summed ratio of predetermined sums and differences between said input signals, and the third output voltage being proportional to the sum of signals to the summing logic circuit. A servo system is operably connected to receive said third output signal and compare it with a reference voltage to develop a slowly varying feedback voltage that controls the variable-gain amplifier in said common amplifier-detector circuit, in order to make said first and second output signals independent of variations in the magnitude of the sum of said input signals.
NASA Astrophysics Data System (ADS)
Ruiz, Laurent; Varma, Murari R. R.; Kumar, M. S. Mohan; Sekhar, M.; Maréchal, Jean-Christophe; Descloitres, Marc; Riotte, Jean; Kumar, Sat; Kumar, C.; Braun, Jean-Jacques
2010-01-01
Accurate estimations of water balance are needed in semi-arid and sub-humid tropical regions, where water resources are scarce compared to water demand. Evapotranspiration plays a major role in this context, and the difficulty of quantifying it precisely leads to major uncertainties in groundwater recharge assessment, especially in forested catchments. In this paper, we propose to assess the importance of deep unsaturated regolith and water uptake by deep tree roots in the groundwater recharge process by using a lumped conceptual model (COMFORT). The model is calibrated using a 5 year hydrological monitoring of an experimental watershed under dry deciduous forest in South India (Mule Hole watershed). The model was able to simulate the stream discharge as well as the contrasted behaviour of the groundwater table along the hillslope. The water balance simulated for a 32 year climatic time series displayed a large year-to-year variability, with an alternation of dry and wet phases with a period of approximately 14 years. On average, input by rainfall was 1090 mm year-1 and evapotranspiration was about 900 mm year-1, of which 100 mm year-1 was taken up from the deep saprolite horizons. The stream flow was 100 mm year-1, while the groundwater underflow was 80 mm year-1. The simulation results suggest that (i) deciduous trees can take up a significant amount of water from the deep regolith, (ii) this uptake, combined with the spatial variability of regolith depth, can account for the variable lag time between drainage events and groundwater rise observed for the different piezometers and (iii) the water table response to recharge is buffered by the long vertical travel time through the deep vadose zone, which constitutes a major water reservoir. This study stresses the importance of long term observations for the understanding of hydrological processes in tropical forested ecosystems.
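As a rough consistency check, the mean annual fluxes reported above nearly close the balance; the small residual is attributable to storage change and rounding. This is a back-of-envelope sketch, not part of the COMFORT model:

```python
# Mean annual fluxes for the Mule Hole watershed as reported above (mm/year)
rainfall = 1090
evapotranspiration = 900        # includes ~100 mm/year uptake from deep saprolite
stream_flow = 100
groundwater_underflow = 80

residual = rainfall - (evapotranspiration + stream_flow + groundwater_underflow)
print(f"unaccounted residual: {residual} mm/year")   # storage change + rounding
```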
An exact algebraic solution of the infimum in H-infinity optimization with output feedback
NASA Technical Reports Server (NTRS)
Chen, Ben M.; Saberi, Ali; Ly, Uy-Loi
1991-01-01
This paper presents a simple and noniterative procedure for the computation of the exact value of the infimum in the standard H-infinity-optimal control with output feedback. The problem formulation is general and does not place any restrictions on the direct feedthrough terms between the control input and the controlled output variables, and between the disturbance input and the measurement output variables. The method is applicable to systems that satisfy (1) the transfer function from the control input to the controlled output is right-invertible and has no invariant zeros on the jω axis and, (2) the transfer function from the disturbance to the measurement output is left-invertible and has no invariant zeros on the jω axis. A set of necessary and sufficient conditions for the solvability of the H-infinity almost disturbance decoupling problem via measurement feedback with internal stability is also given.
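Both solvability conditions involve invariant zeros on the jω axis. As an illustration (not the paper's procedure), the invariant zeros of a state-space quadruple (A, B, C, D) can be computed as the finite generalized eigenvalues of the Rosenbrock system pencil; the example system below is an assumption for demonstration:

```python
import numpy as np
from scipy.linalg import eig

def invariant_zeros(A, B, C, D):
    """Invariant zeros of (A, B, C, D): the finite generalized eigenvalues s
    of the Rosenbrock pencil [[A - sI, B], [C, D]]."""
    n = A.shape[0]
    M = np.block([[A, B], [C, D]])
    N = np.zeros_like(M)
    N[:n, :n] = np.eye(n)            # generalized problem M v = s N v
    vals = eig(M, N, right=False)
    return vals[np.isfinite(vals)]   # drop infinite eigenvalues

# G(s) = (s + 1) / ((s + 1)(s + 2)): invariant zero at s = -1
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 1.0]])
D = np.array([[0.0]])
zeros = invariant_zeros(A, B, C, D)
print(zeros)   # contains s = -1
```

A purely imaginary zero returned here would violate the stated jω-axis conditions.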
Scenario planning for water resource management in a semi-arid zone
NASA Astrophysics Data System (ADS)
Gupta, Rajiv; Kumar, Gaurav
2018-06-01
Scenario planning for water resource management in a semi-arid zone is performed using a systems input-output approach in the time domain. This approach derives the future weights of the input variables of the hydrological system from their precedent weights. The input variables considered here are precipitation, evaporation, population and crop irrigation. Ingles & De Souza's method and the Thornthwaite model are used to estimate runoff and evaporation, respectively. The difference between precipitation inflow and the sum of runoff and evaporation is approximated as groundwater recharge. Population and crop irrigation determine the total water demand. The extent to which groundwater recharge compensates total water demand is analyzed, and further compensation is evaluated by proposing efficient methods of water conservation. The best water conservation measure to adopt is suggested based on a cost-benefit analysis. A case study of nine villages in the Chirawa region of district Jhunjhunu, Rajasthan (India) validates the model.
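A minimal sketch of the bookkeeping described above. The non-ghat form of the Ingles & De Souza runoff formula used here (precipitation in cm) and the example numbers are assumptions for illustration, not the paper's calibrated values:

```python
def ingles_desouza_runoff(p_cm: float) -> float:
    """Ingles & De Souza annual runoff estimate (cm) for non-ghat areas,
    with annual precipitation p_cm in cm (assumed form of the formula)."""
    return (p_cm - 17.8) * p_cm / 254.0

def groundwater_recharge(p_cm: float, evaporation_cm: float) -> float:
    """Recharge approximated as precipitation minus (runoff + evaporation)."""
    return p_cm - (ingles_desouza_runoff(p_cm) + evaporation_cm)

# Illustrative year: 45 cm rainfall, 35 cm estimated evaporation
runoff = ingles_desouza_runoff(45.0)
recharge = groundwater_recharge(45.0, 35.0)
print(f"runoff: {runoff:.2f} cm, recharge: {recharge:.2f} cm")
```

The recharge estimate is then compared against the total demand derived from population and crop irrigation.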
A Sensitivity Analysis Method to Study the Behavior of Complex Process-based Models
NASA Astrophysics Data System (ADS)
Brugnach, M.; Neilson, R.; Bolte, J.
2001-12-01
The use of process-based models as a tool for scientific inquiry is becoming increasingly relevant in ecosystem studies. Process-based models are artificial constructs that simulate the system by mechanistically mimicking the functioning of its component processes. Structurally, a process-based model can be characterized in terms of its processes and the relationships established among them. Each process comprises a set of functional relationships among several model components (e.g., state variables, parameters and input data). While not encoded explicitly, the dynamics of the model emerge from this set of components and interactions organized in terms of processes. It is the task of the modeler to guarantee that the dynamics generated are appropriate and semantically equivalent to the phenomena being modeled. Despite the availability of techniques to characterize and understand model behavior, they do not suffice to completely and easily understand how a complex process-based model operates. For example, sensitivity analysis studies model behavior by determining the rate of change in model output as parameters or input data are varied. One of the problems with this approach is that it considers the model as a "black box", and it focuses on explaining model behavior by analyzing the input-output relationship. Since these models have a high degree of non-linearity, understanding how the input affects an output can be an extremely difficult task. Operationally, the application of this technique may constitute a challenging task because complex process-based models are generally characterized by a large parameter space. In order to overcome some of these difficulties, we propose a method of sensitivity analysis applicable to complex process-based models. This method focuses sensitivity analysis at the process level, and it aims to determine how sensitive the model output is to variations in the processes.
Once the processes that exert the major influence on the output are identified, the causes of their variability can be found. Some of the advantages of this approach are that it reduces the dimensionality of the search space, facilitates the interpretation of results, and provides information that allows exploration of uncertainty at the process level and of how that uncertainty might affect model output. We present an example using the vegetation model BIOME-BGC.
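The process-level idea can be illustrated with a toy model (not BIOME-BGC): each process is given a multiplier, and the sensitivity index is the normalized change in model output when each process multiplier is perturbed one at a time. All names and values below are illustrative assumptions:

```python
def toy_model(scale_photosynthesis=1.0, scale_respiration=1.0):
    """Toy two-process model: net carbon = photosynthesis - respiration."""
    photosynthesis = 10.0 * scale_photosynthesis
    respiration = 4.0 * scale_respiration
    return photosynthesis - respiration

def process_sensitivity(model, process_names, delta=0.1):
    """One-at-a-time sensitivity at the process level: relative change in
    output divided by the relative perturbation of each process."""
    base = model()
    return {
        name: (model(**{name: 1.0 + delta}) - base) / (base * delta)
        for name in process_names
    }

s = process_sensitivity(toy_model, ["scale_photosynthesis", "scale_respiration"])
print(s)   # photosynthesis dominates the output variability
```

Ranking processes by these indices, rather than perturbing individual parameters, is what reduces the dimensionality of the search space.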
Automatic insulation resistance testing apparatus
Wyant, Francis J.; Nowlen, Steven P.; Luker, Spencer M.
2005-06-14
An apparatus and method for automatic measurement of insulation resistances of a multi-conductor cable. In one embodiment of the invention, the apparatus comprises a power supply source, an input measuring means, an output measuring means, a plurality of input relay controlled contacts, a plurality of output relay controlled contacts, a relay controller and a computer. In another embodiment of the invention the apparatus comprises a power supply source, an input measuring means, an output measuring means, an input switching unit, an output switching unit and a control unit/data logger. Embodiments of the apparatus of the invention may also incorporate cable fire testing means. The apparatus and methods of the present invention use either voltage or current for input and output measured variables.
A stochastic model of input effectiveness during irregular gamma rhythms.
Dumont, Grégory; Northoff, Georg; Longtin, André
2016-02-01
Gamma-band synchronization has been linked to attention and communication between brain regions, yet the underlying dynamical mechanisms are still unclear. How does the timing and amplitude of inputs to cells that generate an endogenously noisy gamma rhythm affect the network activity and rhythm? How does such "communication through coherence" (CTC) survive in the face of rhythm and input variability? We present a stochastic modelling approach to this question that yields a very fast computation of the effectiveness of inputs to cells involved in gamma rhythms. Our work is partly motivated by recent optogenetic experiments (Cardin et al., Nature, 459(7247), 663-667, 2009) that tested the gamma phase-dependence of network responses by first stabilizing the rhythm with periodic light pulses to the interneurons (I). Our computationally efficient model, an E-I network of stochastic two-state neurons, exhibits finite-size fluctuations. Using the Hilbert transform and the Kuramoto index, we study how the stochastic phase of its gamma rhythm is entrained by external pulses. We then compute how this rhythmic inhibition controls the effectiveness of external input onto pyramidal (E) cells, and how variability shapes the window of firing opportunity. For transferring the time variations of an external input to the E cells, we find a tradeoff between phase selectivity and depth of rate modulation. We also show that CTC is sensitive to the jitter in the arrival times of spikes to the E cells, and to the degree of I-cell entrainment. We further find that CTC can occur even if the underlying deterministic system does not oscillate; quasicycle-type rhythms induced by finite-size noise retain the basic CTC properties. Finally, a resonance analysis confirms the relative importance of the I-cell pacing for rhythm generation.
Analysis of whole network behaviour, including computations of synchrony, phase and shifts in excitatory-inhibitory balance, can be further sped up by orders of magnitude using two coupled stochastic differential equations, one for each population. Our work thus yields a fast tool to numerically and analytically investigate CTC in a noisy context. It shows that CTC can be quite vulnerable to rhythm and input variability, which both decrease phase preference.
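The final reduction mentioned above, two coupled stochastic differential equations (one per population), can be sketched as a noisy E-I rate model integrated with Euler-Maruyama. The parameter values and sigmoid rate function here are illustrative assumptions, not the paper's model:

```python
import math
import random

def simulate_ei(T=1.0, dt=1e-4, noise=0.05, seed=0):
    """Euler-Maruyama integration of a noisy two-population E-I rate model:
    dE = (-E + f(wee*E - wei*I + he)) dt + noise dW_E
    dI = (-I + f(wie*E - wii*I + hi)) dt + noise dW_I."""
    rng = random.Random(seed)
    f = lambda x: 1.0 / (1.0 + math.exp(-x))        # sigmoid rate function
    wee, wei, wie, wii = 12.0, 10.0, 10.0, 2.0      # illustrative couplings
    he, hi = -2.0, -4.0                             # illustrative drives
    E, I = 0.1, 0.1
    sq = noise * math.sqrt(dt)                      # noise scale per step
    traj = []
    for _ in range(int(T / dt)):
        dE = (-E + f(wee * E - wei * I + he)) * dt + sq * rng.gauss(0, 1)
        dI = (-I + f(wie * E - wii * I + hi)) * dt + sq * rng.gauss(0, 1)
        E, I = E + dE, I + dI
        traj.append((E, I))
    return traj

traj = simulate_ei()
print(len(traj), traj[-1])
```

Phase and synchrony measures (e.g., a Hilbert-transform phase) would then be computed from such trajectories.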
Mu, Zhijian; Huang, Aiying; Ni, Jiupai; Xie, Deti
2014-01-01
Organic soils are an important source of N2O, but global estimates of these fluxes remain uncertain because measurements are sparse. We tested the hypothesis that N2O fluxes can be predicted from estimates of mineral nitrogen input, calculated from readily-available measurements of CO2 flux and soil C/N ratio. From studies of organic soils throughout the world, we compiled a data set of annual CO2 and N2O fluxes which were measured concurrently. The input of soil mineral nitrogen in these studies was estimated from applied fertilizer nitrogen and organic nitrogen mineralization. The latter was calculated by dividing the rate of soil heterotrophic respiration by soil C/N ratio. This index of mineral nitrogen input explained up to 69% of the overall variability of N2O fluxes, whereas CO2 flux or soil C/N ratio alone explained only 49% and 36% of the variability, respectively. Including water table level in the model, along with mineral nitrogen input, further improved the model with the explanatory proportion of variability in N2O flux increasing to 75%. Unlike grassland or cropland soils, forest soils were evidently nitrogen-limited, so water table level had no significant effect on N2O flux. Our proposed approach, which uses the product of soil-derived CO2 flux and the inverse of soil C/N ratio as a proxy for nitrogen mineralization, shows promise for estimating regional or global N2O fluxes from organic soils, although some further enhancements may be warranted. PMID:24798347
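The proxy described above is simple arithmetic: the mineral nitrogen input index is fertilizer N plus organic N mineralization, the latter estimated as heterotrophic respiration divided by the soil C/N ratio. A sketch with made-up illustrative numbers, not values from the study:

```python
def mineral_n_input(fertilizer_n, heterotrophic_resp_c, soil_cn_ratio):
    """Index of mineral N input: applied fertilizer N plus organic N
    mineralization estimated as heterotrophic respiration C / (C/N ratio).
    Units follow whatever the fluxes are expressed in (e.g., g N m-2 yr-1)."""
    mineralized_n = heterotrophic_resp_c / soil_cn_ratio
    return fertilizer_n + mineralized_n

# Illustrative: 5 units fertilizer N, 300 units respired C, C/N = 15
index = mineral_n_input(5.0, 300.0, 15.0)
print(index)   # 25.0
```

This index, optionally together with water table level, would then enter a regression against measured N2O fluxes.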
Antanasijević, Davor Z; Pocajt, Viktor V; Povrenović, Dragan S; Ristić, Mirjana Đ; Perić-Grujić, Aleksandra A
2013-01-15
This paper describes the development of an artificial neural network (ANN) model for the forecasting of annual PM10 emissions at the national level, using widely available sustainability and economic/industrial parameters as inputs. The inputs for the model were selected and optimized using a genetic algorithm, and the ANN was trained using the following variables: gross domestic product, gross inland energy consumption, incineration of wood, motorization rate, production of paper and paperboard, sawn wood production, production of refined copper, production of aluminum, production of pig iron and production of crude steel. The wide availability of the input parameters used in this model can overcome a lack of data and basic environmental indicators in many countries, which can prevent or seriously impede PM emission forecasting. The model was trained and validated with the data for 26 EU countries for the period from 1999 to 2006. PM10 emission data, collected through the Convention on Long-range Transboundary Air Pollution (CLRTAP) and the EMEP Programme or estimated by the Regional Air Pollution Information and Simulation (RAINS) model, were obtained from Eurostat. The ANN model has shown very good performance and demonstrated that PM10 emissions can be forecast successfully and accurately up to two years ahead. The mean absolute error for the two-year PM10 emission prediction was only 10%, which is more than three times better than the predictions obtained from the conventional multi-linear regression and principal component regression models that were trained and tested using the same datasets and input variables.
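As a point of comparison, the multi-linear regression baseline mentioned above can be sketched in a few lines of NumPy. The data here are synthetic stand-ins (the Eurostat series are not reproduced), so the variable shapes and numbers are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand-ins for the ten socio-economic input series (rows = country-years)
X = rng.normal(size=(200, 10))
# Synthetic "PM10 emission" with a known linear dependence on two of the inputs
y = 50 + 5 * X[:, 0] - 3 * X[:, 1] + rng.normal(scale=0.5, size=200)

# Multi-linear regression: ordinary least squares with an intercept column
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A[:150], y[:150], rcond=None)   # fit on first 150 rows
pred = A[150:] @ coef                                      # hold-out predictions
mape = np.mean(np.abs((pred - y[150:]) / y[150:])) * 100
print(f"hold-out MAPE: {mape:.1f}%")
```

On the real series, the abstract reports that the ANN beat this kind of baseline by more than a factor of three.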
Karmakar, Chandan; Udhayakumar, Radhagayathri K.; Li, Peng; Venkatesh, Svetha; Palaniswami, Marimuthu
2017-01-01
Distribution entropy (DistEn) is a recently developed measure of complexity that is used to analyse heart rate variability (HRV) data. Its calculation requires two input parameters: the embedding dimension m, and the number of bins M, which replaces the tolerance parameter r used by the existing approximation entropy (ApEn) and sample entropy (SampEn) measures. The performance of DistEn can also be affected by the data length N. In our previous studies, we have analyzed the stability and performance of DistEn with respect to one parameter (m or M) or a combination of two parameters (N and M). However, the impact of varying all three input parameters on DistEn has not yet been studied. Since DistEn is predominantly aimed at analysing short-length heart rate variability (HRV) signals, it is important to comprehensively study the stability, consistency and performance of the measure using multiple case studies. In this study, we examined the impact of changing input parameters on DistEn for synthetic and physiological signals. We also compared the variations of DistEn and its performance in distinguishing physiological (Elderly from Young) and pathological (Healthy from Arrhythmia) conditions with ApEn and SampEn. The results showed that DistEn values are minimally affected by variations of the input parameters compared to ApEn and SampEn. DistEn also showed the most consistent and best performance in differentiating physiological and pathological conditions under varying input parameters among the reported complexity measures. In conclusion, DistEn is found to be the best measure for analysing short-length HRV time series. PMID:28979215
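A compact sketch of the DistEn computation as commonly described (delay embedding with dimension m, Chebyshev distances between all vector pairs, an M-bin histogram, and normalized Shannon entropy). The parameter defaults and the test signal are illustrative assumptions:

```python
import numpy as np

def dist_en(x, m=2, M=64):
    """Distribution entropy: normalized Shannon entropy of the M-bin
    histogram of Chebyshev distances between all pairs of m-dimensional
    embedding vectors of the series x. Returns a value in (0, 1]."""
    x = np.asarray(x, dtype=float)
    n = len(x) - m + 1
    emb = np.lib.stride_tricks.sliding_window_view(x, m)    # delay embedding
    d = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)
    dists = d[np.triu_indices(n, k=1)]                      # unique pairs only
    p, _ = np.histogram(dists, bins=M)
    p = p[p > 0] / p.sum()                                  # empirical PDF
    return float(-np.sum(p * np.log2(p)) / np.log2(M))

rng = np.random.default_rng(0)
v = dist_en(rng.normal(size=500))
print(v)   # value in (0, 1]
```

Replacing the tolerance r with the bin count M is what gives DistEn its different, and reportedly flatter, parameter-sensitivity profile.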
NASA Astrophysics Data System (ADS)
Román, Sara; Vanreusel, Ann; Romano, Chiara; Ingels, Jeroen; Puig, Pere; Company, Joan B.; Martin, Daniel
2016-11-01
We investigated the natural and anthropogenic drivers controlling the spatiotemporal distribution of the meiofauna in the submarine Blanes Canyon and its adjacent western slope (NW Mediterranean margin of the Iberian Peninsula). We analyzed the relationships between the main sedimentary environmental variables (i.e. grain size, Chl-a, Chl-a:phaeopigments, CPE, organic carbon and total nitrogen) and the density and structure of the meiofaunal assemblages along a bathymetric gradient (from 500 to 2000 m depth) in spring and autumn of 2012 and 2013. Twenty-one and 16 major taxa were identified in the canyon and on the slope, respectively, where the assemblages were always dominated by nematodes. The gradual decrease in meiofaunal densities with increasing depth on the slope showed little variability among stations and corresponded with a uniform pattern of food availability. The canyon was environmentally much more variable, and its sediments contained greater amounts of food resources (Chl-a and CPE) throughout, leading not only to increased meiofaunal densities compared to the slope, but also to different assemblages in terms of composition and structure. This variability in the canyon is only partly explained by seasonal food inputs. The high densities found at 900 m and 1200 m depth coincided with significant increases in food availability compared to shallower and deeper stations in the canyon. Our results suggest that the disruption of the expected bathymetric decrease in densities at 900-1200 m water depth coincided with noticeable changes in the environmental variables typical of disturbance and deposition events (e.g., higher sand content and CPE), evoking the hypothesis of an anthropogenic effect at these depths in the canyon.
The increased downward particle fluxes at 900-1200 m depth caused by bottom trawling along the canyon flanks, as reported in previous studies, support our hypothesis and point to a substantial anthropogenic factor influencing benthic assemblages at these depths. The possible relationships between the observed patterns and some major natural environmental (e.g., surface productivity or dense shelf water cascading) and anthropogenic (e.g., the lateral advection and downward transport of food-enriched sediments resuspended by the daily canyon-flank trawling activities) drivers are discussed.
NASA Astrophysics Data System (ADS)
Kuo, L.; Louchouarn, P.; Herbert, B.
2008-12-01
Chars/charcoals are solid combustion residues derived from biomass burning. They represent one of the major classes of pyrogenic organic residues, the so-called black carbon (BC), and are highly heterogeneous owing to the highly variable combustion conditions during biomass burning. Increasing attention has been given to characterizing and quantifying the inputs of charcoals to different environmental compartments, since charcoals also share the common features of BC, such as a recalcitrant nature and a strong sorption capacity for hydrophobic organic pollutants. Moreover, such inputs also imply the thermal alteration of terrestrial organic matter, as well as of corresponding biomarkers such as lignin. Lignin is considered to be among the best-preserved components of vascular plants after deposition, due to its relative resistance to biodegradation. This macropolymer is an important contributor to soil organic matter (SOM), and its presence in aquatic environments helps trace the input of terrigenous organic matter to such systems. The yields and specific ratios of lignin oxidation products (LOP) from the alkaline cupric oxide (CuO) oxidation method have been extensively used to identify the structure of plant lignin, estimate inputs of plant carbon to soils and aquatic systems, and evaluate the diagenetic status of lignin. Although the fate of lignin under microbiological and photochemical degradation pathways has been thoroughly addressed in the literature, studies assessing the impact of thermal degradation on lignin structure and signature are scarce. In the present study, we used three suites of lab-made chars (honey mesquite, cordgrass, and loblolly pine) to study the impact of combustion on lignin and its commonly used parameters. Our results show that combustion can greatly decrease the yields of the eight major lignin phenols (vanillyl, syringyl, and cinnamyl phenols), with no lignin phenols detected in any synthetic char produced at ≥400°C.
With increasing combustion temperature, the internal phenol ratios (S/V and C/V) show a two-stage change, with an initial increase at low temperatures followed by marked and rapid decreases when temperatures reach 200-250°C. The acid/aldehyde ratios of vanillyl phenols ((Ad/Al)v) and syringyl phenols ((Ad/Al)s) all increase with increasing combustion temperature and duration, reaching maximum values at 300-350°C regardless of plant species. The highly elevated acid/aldehyde ratios in some cases exceed the reported values for humic and fulvic acids extracted from soils and sediments. We applied these empirical data in mixing models to estimate the potential effects of charcoal inputs on the observed lignin signatures in environmental mixtures. The shifts in lignin signatures are strongly influenced both by the characteristics of the charcoal incorporated and by the proportion of charcoal in the mixture. We validated our observations with two sets of environmental samples, including soils from controlled-burning sites and a sediment core from a wetland with evidence of charcoal inputs, showing that the presence of charcoal does alter the observed lignin signals in these samples. Such a thermal "interference" with lignin parameters should thus be considered in environmental mixtures with recognized char inputs.
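The mixing-model logic described above can be sketched as a simple two-end-member calculation. The numbers and the yield-weighted mixing rule below are illustrative assumptions, not the paper's calibrated model:

```python
def mixed_lignin_signature(f_char, yield_char, ratio_char, yield_fresh, ratio_fresh):
    """Two-end-member mixing of lignin parameters (illustrative sketch).

    f_char  : mass fraction of char in the mixture (0-1)
    yield_* : lignin-phenol yield of each end member (e.g. mg / 100 mg OC)
    ratio_* : an internal phenol ratio, e.g. (Ad/Al)v, of each end member
    Yields mix linearly by mass; ratios mix weighted by phenol yield.
    """
    total_yield = f_char * yield_char + (1 - f_char) * yield_fresh
    ratio = (f_char * yield_char * ratio_char
             + (1 - f_char) * yield_fresh * ratio_fresh) / total_yield
    return total_yield, ratio
```

With a char end member of low yield but elevated (Ad/Al)v, even a modest char fraction depresses the total phenol yield and pulls the apparent ratio of the mixture toward the char value, which is the thermal "interference" the abstract warns about.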
Code Development in Coupled PARCS/RELAP5 for Supercritical Water Reactor
Hu, Po; Wilson, Paul
2014-01-01
A new capability is added to the existing coupled code package PARCS/RELAP5 in order to analyze the SCWR design under supercritical pressure with separated water coolant and moderator channels. This expansion is carried out in both codes. In PARCS, modification is focused on extending the water property tables to supercritical pressure, modifying the variable-mapping input file and the related code module for processing thermal-hydraulic information from the separated coolant/moderator channels, and modifying the neutronics feedback module to deal with the separated coolant/moderator channels. In RELAP5, modification is focused on incorporating more accurate water properties near SCWR operation/transient pressures and temperatures in the code. Confirmation tests of the modifications are presented, and the major analysis results from the extended code package are summarized.
Hoppie, Lyle O.
1982-01-12
Disclosed are several embodiments of a regenerative braking device for an automotive vehicle. The device includes a plurality of rubber rollers (24, 26) mounted for rotation between an input shaft (14) connectable to the vehicle drivetrain and an output shaft (16) which is drivingly connected to the input shaft by a variable ratio transmission (20). When the transmission ratio is such that the input shaft rotates faster than the output shaft, the rubber rollers are torsionally stressed to accumulate energy, thereby slowing the vehicle. When the transmission ratio is such that the output shaft rotates faster than the input shaft, the rubber rollers are torsionally relaxed to deliver accumulated energy, thereby accelerating or driving the vehicle.
Post, R.F.
1958-11-11
An electronic computer circuit is described for producing an output voltage proportional to the product or quotient of the voltages of a pair of input signals. In essence, the disclosed invention provides a computer having two channels adapted to receive separate input signals, each having amplifiers with like fixed amplification factors and like negative feedback amplifiers. One of the channels receives a constant signal for comparison purposes, whereby a difference signal is produced to control the amplification factors of the variable feedback amplifiers. The output of the other channel is thereby proportional to the product or quotient of the input signals, depending upon the relation of the input to the fixed signal in the first-mentioned channel.
The Army’s Local Economic Effects
2015-01-01
region. An I/O model is a representation of the linkages between major sectors of a regional economy in which each sector of the regional economy is...assumed to require inputs from the other sectors to produce output. These inputs can come from local sources within the region, from other domestic...ment of the Army in a congressional district. An I/O model is a representation of the linkages between major sectors of a regional economy (and, to a
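The input-output (I/O) model described in these excerpts is the classic Leontief formulation: each sector requires inputs from the other sectors to produce its output, so total output x must satisfy x = Ax + d for final demand d. A minimal NumPy sketch with a hypothetical 3-sector coefficient matrix (all numbers are invented for illustration):

```python
import numpy as np

# Hypothetical technical-coefficients matrix A: A[i, j] is the dollar input
# from sector i needed per dollar of sector j's output.
A = np.array([[0.10, 0.20, 0.05],
              [0.15, 0.05, 0.25],
              [0.20, 0.10, 0.05]])

final_demand = np.array([100.0, 50.0, 75.0])  # e.g. Army spending by sector

# Leontief solution: total output x satisfies x = A x + d,
# so x = (I - A)^-1 d.
x = np.linalg.solve(np.eye(3) - A, final_demand)

# Output multipliers: total regional output generated per dollar of
# final demand in each sector (column sums of the Leontief inverse).
multipliers = np.linalg.inv(np.eye(3) - A).sum(axis=0)
```

The multipliers capture the ripple effect the report relies on: each dollar of final demand generates more than a dollar of total output once inter-sector purchases within the region are counted.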
Ioannidis, J P; McQueen, P G; Goedert, J J; Kaslow, R A
1998-03-01
Complex immunogenetic associations of disease involving a large number of gene products are difficult to evaluate with traditional statistical methods and may require complex modeling. The authors evaluated the performance of feed-forward backpropagation neural networks in predicting rapid progression to acquired immunodeficiency syndrome (AIDS) for patients with human immunodeficiency virus (HIV) infection on the basis of major histocompatibility complex variables. Networks were trained on data from patients from the Multicenter AIDS Cohort Study (n = 139) and then validated on patients from the DC Gay cohort (n = 102). The outcome of interest was rapid disease progression, defined as progression to AIDS in <6 years from seroconversion. Human leukocyte antigen (HLA) variables were selected as network inputs with multivariate regression and a previously described algorithm selecting markers with extreme point estimates for progression risk. Network performance was compared with that of logistic regression. Networks with 15 HLA inputs and a single hidden layer of five nodes achieved a sensitivity of 87.5% and specificity of 95.6% in the training set, vs. 77.0% and 76.9%, respectively, achieved by logistic regression. When validated on the DC Gay cohort, networks averaged a sensitivity of 59.1% and specificity of 74.3%, vs. 53.1% and 61.4%, respectively, for logistic regression. Neural networks offer further support to the notion that HIV disease progression may be dependent on complex interactions between different class I and class II alleles and transporters associated with antigen processing variants. The effect in the current models is of moderate magnitude, and more data as well as other host and pathogen variables may need to be considered to improve the performance of the models. Artificial intelligence methods may complement linear statistical methods for evaluating immunogenetic associations of disease.
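The architecture used in this study (15 HLA inputs, one hidden layer of 5 nodes, a single output) is small enough to sketch from scratch. Below is a hedged NumPy illustration of feed-forward backpropagation on synthetic binary markers; the data generation, learning rate, and iteration count are assumptions, not the study's values:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Synthetic stand-in for 15 binary HLA markers and a rapid-progression label.
X = rng.integers(0, 2, size=(200, 15)).astype(float)
w_true = rng.normal(size=15)
y = (X @ w_true > w_true.sum() / 2).astype(float)

# 15-input, 5-hidden-node, single-output network, as in the study.
W1 = rng.normal(scale=0.5, size=(15, 5)); b1 = np.zeros(5)
W2 = rng.normal(scale=0.5, size=(5, 1));  b2 = np.zeros(1)

lr = 1.0
for _ in range(3000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2).ravel()
    # Backpropagate the mean cross-entropy error.
    dp = ((p - y) / len(y))[:, None]
    dW2, db2 = h.T @ dp, dp.sum(0)
    dh = (dp @ W2.T) * h * (1 - h)
    dW1, db1 = X.T @ dh, dh.sum(0)
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

p = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2).ravel()
train_acc = ((p > 0.5) == y).mean()
```

As in the paper, validating on a held-out cohort would give lower sensitivity and specificity than the training figures; that gap is the overfitting the authors monitored when comparing networks against logistic regression.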
Net anthropogenic nitrogen inputs and nitrogen fluxes from Indian watersheds: An initial assessment
NASA Astrophysics Data System (ADS)
Swaney, D. P.; Hong, B.; Paneer Selvam, A.; Howarth, R. W.; Ramesh, R.; Purvaja, R.
2015-01-01
In this paper, we apply an established methodology for estimating Net Anthropogenic Nitrogen Inputs (NANI) to India and its major watersheds. Our primary goal is to provide initial estimates of the major nitrogen inputs comprising NANI for India, at the country level and for major Indian watersheds, including data sources and parameter estimates, making assumptions as needed where data availability is limited. Despite these data limitations, we believe it is clear that the main anthropogenic N source is agricultural fertilizer, which is being produced and applied at a growing rate, followed by N fixation associated with rice, leguminous crops, and sugar cane. While India appears to be a net exporter of N in food/feed, as reported elsewhere (Lassaletta et al., 2013b), the balance of N associated with exports and imports of protein in food and feedstuffs is sensitive to protein content and somewhat uncertain. While correlating watershed N inputs with riverine N fluxes is problematic, due in part to the limited available riverine data, we have assembled some data for comparative purposes. We also suggest possible improvements in methods for future studies, and discuss the potential for estimating riverine N fluxes to coastal waters.
ERIC Educational Resources Information Center
Ramírez-Esparza, Nairán; García-Sierra, Adrián; Kuhl, Patricia K.
2017-01-01
This study tested the impact of child-directed language input on language development in Spanish-English bilingual infants (N = 25, 11- and 14-month-olds from the Seattle metropolitan area), across languages and independently for each language, controlling for socioeconomic status. Language input was characterized by social interaction variables,…
TRANDESNF: A computer program for transonic airfoil design and analysis in nonuniform flow
NASA Technical Reports Server (NTRS)
Chang, J. F.; Lan, C. Edward
1987-01-01
The use of a transonic airfoil code for the analysis, inverse design, and direct optimization of an airfoil immersed in a propfan slipstream is described. A summary of the theoretical method, program capabilities, input format, output variables, and program execution is given. Input data for sample test cases and the corresponding output are provided.
MURI: Impact of Oceanographic Variability on Acoustic Communications
2011-09-01
multiplexing ( OFDM ), multiple- input/multiple-output ( MIMO ) transmissions, and multi-user single-input/multiple-output (SIMO) communications. Lastly... MIMO - OFDM communications: Receiver design for Doppler distorted underwater acoustic channels,” Proc. Asilomar Conf. on Signals, Systems, and... MIMO ) will be of particular interest. Validating experimental data will be obtained during the ONR acoustic communications experiment in summer 2008
Simple Sensitivity Analysis for Orion Guidance Navigation and Control
NASA Technical Reports Server (NTRS)
Pressburger, Tom; Hoelscher, Brian; Martin, Rodney; Sricharan, Kumar
2013-01-01
The performance of Orion flight software, especially its GNC software, is being analyzed by running Monte Carlo simulations of Orion spacecraft flights. The simulated performance is analyzed for conformance with flight requirements, expressed as performance constraints. Flight requirements include guidance (e.g., touchdown distance from target) and control (e.g., control saturation) as well as performance (e.g., heat load constraints). The Monte Carlo simulations disperse hundreds of simulation input variables, for everything from mass properties to date of launch. We describe in this paper a sensitivity analysis tool ("Critical Factors Tool" or CFT) developed to find the input variables, or pairs of variables, which by themselves significantly influence satisfaction of requirements or significantly affect key performance metrics (e.g., touchdown distance from target). Knowing these factors can inform robustness analysis, indicate where engineering resources are most needed, and could even affect operations. The contributions of this paper include the introduction of novel sensitivity measures, such as estimating success probability, and a technique for determining whether pairs of factors are interacting dependently or independently. The tool identified input variables such as moments, mass, thrust dispersions, and date of launch as significant factors for the success of various requirements. Examples are shown in this paper, along with a summary and physics discussion of the EFT-1 driving factors that the tool found.
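One of the simplest sensitivity measures of the kind the Critical Factors Tool computes is the spread of the estimated success probability across bins of a single dispersed input. The sketch below is a toy stand-in with invented inputs and an invented requirement, not the CFT itself:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical Monte Carlo set: 3 dispersed inputs, 5000 simulated flights.
n = 5000
inputs = {"mass": rng.normal(0, 1, n),
          "thrust": rng.normal(0, 1, n),
          "launch_day": rng.uniform(-1, 1, n)}

# Hypothetical requirement: touchdown miss distance under a limit.
miss = (2.0 * np.abs(inputs["thrust"]) + 0.3 * np.abs(inputs["mass"])
        + rng.normal(0, 0.2, n))
success = miss < 2.5

def sensitivity(x, success, bins=10):
    """Spread of the success probability across quantile bins of one input.

    A large spread means the variable strongly influences requirement
    satisfaction; a spread near zero means it is irrelevant.
    """
    edges = np.quantile(x, np.linspace(0, 1, bins + 1))
    idx = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, bins - 1)
    p = np.array([success[idx == b].mean() for b in range(bins)])
    return p.max() - p.min()

scores = {k: sensitivity(v, success) for k, v in inputs.items()}
```

The variable driving requirement failure ("thrust" here) shows a large spread in binned success probability, while an irrelevant one ("launch_day") shows only sampling noise; a pairwise version of the same idea flags whether two factors interact dependently or independently.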
NASA Astrophysics Data System (ADS)
Carter, Jeffrey R.; Simon, Wayne E.
1990-08-01
Neural networks are trained using Recursive Error Minimization (REM) equations to perform statistical classification. Using REM equations with continuous input variables reduces the required number of training experiences by one to two orders of magnitude over standard back propagation. Replacing the continuous input variables with discrete binary representations reduces the number of connections by a factor proportional to the number of variables, reducing the required number of experiences by another order of magnitude. Undesirable effects of using recurrent experience to train neural networks for statistical classification problems are demonstrated, and nonrecurrent experience is used to avoid these undesirable effects. 1. THE 1-4I PROBLEM. The statistical classification problem which we address is that of assigning points in d-dimensional space to one of two classes. The first class has a covariance matrix of I (the identity matrix); the covariance matrix of the second class is 4I. For this reason the problem is known as the 1-4I problem. Both classes have equal probability of occurrence, and samples from both classes may appear anywhere throughout the d-dimensional space. Most samples near the origin of the coordinate system will be from the first class, while most samples away from the origin will be from the second class. Since the two classes completely overlap, it is impossible to have a classifier with zero error. The minimum possible error is known as the Bayes error and
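The 1-4I problem is easy to reproduce, and its Bayes error can be estimated directly. For equal priors, the log-likelihood ratio of N(0, 4I) to N(0, I) is positive exactly when the squared radius exceeds (8/3)·d·ln 2, so the Bayes rule is a radius threshold. A small Monte Carlo sketch (the dimension and sample count are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(2)
d, n = 4, 200_000

# Equal-prior samples: class 0 ~ N(0, I), class 1 ~ N(0, 4I).
x0 = rng.normal(size=(n, d))
x1 = 2.0 * rng.normal(size=(n, d))

# Bayes rule: pick class 1 when the log-likelihood ratio is positive,
# i.e. when ||x||^2 > (8/3) * d * ln 2.
threshold = (8.0 / 3.0) * d * np.log(2.0)
err0 = ((x0 ** 2).sum(axis=1) > threshold).mean()   # class-0 points misassigned
err1 = ((x1 ** 2).sum(axis=1) <= threshold).mean()  # class-1 points misassigned
bayes_error = 0.5 * (err0 + err1)
```

For d = 4 this estimate lands near 0.18. No classifier, trained by REM or otherwise, can beat this floor, which makes it a convenient benchmark for comparing training methods.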
Fuzzy parametric uncertainty analysis of linear dynamical systems: A surrogate modeling approach
NASA Astrophysics Data System (ADS)
Chowdhury, R.; Adhikari, S.
2012-10-01
Uncertainty propagation in engineering systems poses significant computational challenges. This paper explores the possibility of using a correlated-function-expansion-based metamodelling approach when uncertain system parameters are modeled using fuzzy variables. In particular, the application of High-Dimensional Model Representation (HDMR) is proposed for the fuzzy finite element analysis of dynamical systems. The HDMR expansion is a set of quantitative model assessment and analysis tools for capturing high-dimensional input-output system behavior, based on a hierarchy of functions of increasing dimensions. The input variables may be either finite-dimensional (i.e., a vector of parameters chosen from the Euclidean space R^M) or infinite-dimensional, as in the function space C^M[0,1]. The computational effort to determine the expansion functions using the alpha-cut method scales polynomially, rather than exponentially, with the number of variables. This logic is based on the fundamental assumption underlying the HDMR representation that only low-order correlations among the input variables are likely to have significant impacts upon the outputs of most high-dimensional complex systems. The proposed method is integrated with a commercial finite element software package. Modal analysis of a simplified aircraft wing with fuzzy parameters has been used to illustrate the generality of the proposed approach. In the numerical examples, triangular membership functions have been used, and the results have been validated against direct Monte Carlo simulations.
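The alpha-cut propagation at the heart of fuzzy finite element analysis can be illustrated without HDMR: at each membership level alpha, every triangular fuzzy parameter becomes an interval, and the output envelope is the min/max of the response over the resulting box. A toy sketch with a one-degree-of-freedom natural frequency (the numbers are invented, and the vertex-only search suffices here only because the response is monotonic in each parameter):

```python
import math

def alpha_cut(a, b, c, alpha):
    """Interval of a triangular fuzzy number (a, b, c) at membership alpha."""
    return a + alpha * (b - a), c - alpha * (c - b)

def response(k, m):
    """Natural frequency of a 1-DOF system, f = sqrt(k/m) / (2*pi)."""
    return math.sqrt(k / m) / (2 * math.pi)

# Fuzzy stiffness (N/m) and mass (kg) as triangular numbers.
k_fuzzy, m_fuzzy = (900.0, 1000.0, 1100.0), (0.9, 1.0, 1.1)

envelopes = {}
for alpha in (0.0, 0.5, 1.0):
    k_lo, k_hi = alpha_cut(*k_fuzzy, alpha)
    m_lo, m_hi = alpha_cut(*m_fuzzy, alpha)
    # Vertex search over the interval box (valid for monotonic responses).
    corners = [response(k, m) for k in (k_lo, k_hi) for m in (m_lo, m_hi)]
    envelopes[alpha] = (min(corners), max(corners))
```

Stacking the envelopes over all alpha levels reconstructs the fuzzy membership function of the output; HDMR enters when the response is an expensive finite element model, so that each min/max is found on a cheap surrogate rather than the full model.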
The method of belief scales as a means for dealing with uncertainty in tough regulatory decisions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pilch, Martin M.
Modeling and simulation is playing an increasing role in supporting tough regulatory decisions, which are typically characterized by variabilities and uncertainties in the scenarios, input conditions, failure criteria, model parameters, and even model form. Variability exists when there is a statistically significant database that is fully relevant to the application. Uncertainty, on the other hand, is characterized by some degree of ignorance. A simple algebraic problem was used to illustrate how various risk methodologies address variability and uncertainty in a regulatory context. These traditional risk methodologies include probabilistic methods (including frequentist and Bayesian perspectives) and second-order methods where variabilities and uncertainties are treated separately. Representing uncertainties with (subjective) probability distributions and using probabilistic methods to propagate subjective distributions can lead to results that are not logically consistent with available knowledge and that may not be conservative. The Method of Belief Scales (MBS) is developed as a means to logically aggregate uncertain input information and to propagate that information through the model to a set of results that are scrutable, easily interpretable by the nonexpert, and logically consistent with the available input information. The MBS, particularly in conjunction with sensitivity analyses, has the potential to be more computationally efficient than other risk methodologies. The regulatory language must be tailored to the specific risk methodology if ambiguity and conflict are to be avoided.
Feeney, Daniel F; Meyer, François G; Noone, Nicholas; Enoka, Roger M
2017-10-01
Motor neurons appear to be activated with a common input signal that modulates the discharge activity of all neurons in the motor nucleus. It has proven difficult for neurophysiologists to quantify the variability in a common input signal, but characterization of such a signal may improve our understanding of how the activation signal varies across motor tasks. Contemporary methods of quantifying the common input to motor neurons rely on compiling discrete action potentials into continuous time series, assuming the motor pool acts as a linear filter, and requiring signals to be of sufficient duration for frequency analysis. We introduce a state-space model in which the discharge activity of motor neurons is modeled as inhomogeneous Poisson processes and propose a method to quantify an abstract latent trajectory that represents the common input received by motor neurons. The approach also approximates the variation in synaptic noise in the common input signal. The model is validated with four data sets: a simulation of 120 motor units, a pair of integrate-and-fire neurons with a Renshaw cell providing inhibitory feedback, the discharge activity of 10 integrate-and-fire neurons, and the discharge times of concurrently active motor units during an isometric voluntary contraction. The simulations revealed that a latent state-space model is able to quantify the trajectory and variability of the common input signal across all four conditions. When compared with the cumulative spike train method of characterizing common input, the state-space approach was more sensitive to the details of the common input current and was less influenced by the duration of the signal. The state-space approach appears to be capable of detecting rather modest changes in common input signals across conditions.
NEW & NOTEWORTHY We propose a state-space model that explicitly delineates a common input signal sent to motor neurons and the physiological noise inherent in synaptic signal transmission. This is the first application of a deterministic state-space model to represent the discharge characteristics of motor units during voluntary contractions. Copyright © 2017 the American Physiological Society.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kurnik, Charles W; Violette, Daniel M.
Addressing other evaluation issues that have been raised in the context of energy efficiency programs, this chapter focuses on methods used to address the persistence of energy savings, which is an important input to the benefit/cost analysis of energy efficiency programs and portfolios. In addition to discussing 'persistence' (which refers to the stream of benefits over time from an energy efficiency measure or program), this chapter provides a summary treatment of these issues: synergies across programs; rebound; dual baselines; and errors in variables (the measurement and/or accuracy of input variables to the evaluation).
NASA Technical Reports Server (NTRS)
Newsom, J. R.; Mukhopadhyay, V.
1983-01-01
A method for designing robust feedback controllers for multiloop systems is presented. Robustness is characterized in terms of the minimum singular value of the system return difference matrix at the plant input. Analytical gradients of the singular values with respect to design variables in the controller are derived. A cumulative measure of the singular values and their gradients with respect to the design variables is used with a numerical optimization technique to increase the system's robustness. Both unconstrained and constrained optimization techniques are evaluated. Numerical results are presented for a two-input/two-output drone flight control system.
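The robustness metric used here, the minimum singular value of the return difference matrix I + L(jω) at the plant input, can be computed directly on a frequency grid. A sketch for a hypothetical two-input/two-output loop (the state-space matrices are invented for illustration, not the drone model from the paper):

```python
import numpy as np

# Hypothetical 2x2 loop transfer matrix L(s) = C (sI - A)^-1 B.
A = np.array([[-1.0, 0.5], [0.0, -2.0]])
B = np.eye(2)
C = np.array([[2.0, 0.0], [0.3, 1.0]])

def min_return_difference_sv(w):
    """Minimum singular value of I + L(jw); larger means more robust."""
    L = C @ np.linalg.inv(1j * w * np.eye(2) - A) @ B
    return np.linalg.svd(np.eye(2) + L, compute_uv=False).min()

# Worst case over a logarithmic frequency grid.
freqs = np.logspace(-2, 2, 200)
robustness = min(min_return_difference_sv(w) for w in freqs)
```

A design optimizer of the kind described would perturb the controller design variables, recompute this worst-case minimum over frequency, and use the analytical singular-value gradients to push it upward.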
Guo, Changning; Doub, William H; Kauffman, John F
2010-08-01
In the first part of this study, Monte Carlo simulations were applied to investigate how uncertainty in both input variables and response measurements propagates into model predictions for nasal spray product performance design of experiment (DOE) models, under an initial assumption that the models perfectly represent the relationship between the input variables and the measured responses. In this article, we discard that assumption and extend the Monte Carlo simulation study to examine the influence of both input-variable variation and product performance measurement variation on the uncertainty in DOE model coefficients. The Monte Carlo simulations presented in this article illustrate the importance of careful error propagation during product performance modeling. Our results show that the error estimates based on Monte Carlo simulation result in smaller model coefficient standard deviations than those from regression methods. This suggests that the estimated standard deviations from regression may overestimate the uncertainties in the model coefficients. Monte Carlo simulations provide a simple software solution for understanding the propagation of uncertainty in complex DOE models, so that the design space can be specified with statistically meaningful confidence levels. (c) 2010 Wiley-Liss, Inc. and the American Pharmacists Association
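The procedure described, perturbing both the input variables and the measured responses and refitting the DOE model many times, can be sketched as follows. The design, noise levels, and coefficients are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical 2-factor DOE (15 runs) with known "true" coefficients;
# both the factor settings and the response carry measurement error.
design = np.array([[-1, -1], [-1, 1], [1, -1], [1, 1], [0, 0]] * 3, dtype=float)
beta_true = np.array([10.0, 2.0, -1.5])   # intercept, factor 1, factor 2

def fit(X, y):
    """Ordinary least-squares fit of intercept + linear factor effects."""
    A = np.column_stack([np.ones(len(X)), X])
    return np.linalg.lstsq(A, y, rcond=None)[0]

coefs = []
for _ in range(2000):
    X_meas = design + rng.normal(0, 0.05, design.shape)   # input-variable error
    y = (np.column_stack([np.ones(len(design)), design]) @ beta_true
         + rng.normal(0, 0.1, len(design)))               # response error
    coefs.append(fit(X_meas, y))
coefs = np.array(coefs)

coef_mean = coefs.mean(axis=0)   # Monte Carlo coefficient estimates
coef_std = coefs.std(axis=0)     # Monte Carlo coefficient uncertainty
```

The spread of the refit coefficients (coef_std) is the Monte Carlo estimate of coefficient uncertainty; the article's point is that this spread can be smaller than the standard errors reported by ordinary regression, so regression may overstate coefficient uncertainty.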
NASA Astrophysics Data System (ADS)
Behera, Kishore Kumar; Pal, Snehanshu
2018-03-01
This paper describes a new approach towards the optimum utilisation of ferrochrome added during stainless steel making in an AOD converter. The objective of the optimisation is to enhance the end-blow chromium content of the steel and reduce the ferrochrome addition during refining. By developing a thermodynamics-based mathematical model, a study has been conducted to compute the optimum trade-off between ferrochrome addition and end-blow chromium content of stainless steel using a predator-prey genetic algorithm, trained on 100 datasets and considering different input and output variables such as the oxygen, argon, and nitrogen blowing rates, duration of blowing, initial bath temperature, chromium and carbon content, and the weight of ferrochrome added during refining. Optimisation is performed within constraints imposed on the input parameters, whose values must fall within certain ranges. The analysis of the Pareto fronts is observed to generate a set of feasible optimal solutions between the two conflicting objectives, providing an effective guideline for better ferrochrome utilisation. It is found that beyond a certain critical range, further addition of ferrochrome does not affect the chromium percentage of the steel. A single-variable response analysis is performed to study the variation and interaction of all individual input parameters on the output variables.
Variability of perceptual multistability: from brain state to individual trait
Kleinschmidt, Andreas; Sterzer, Philipp; Rees, Geraint
2012-01-01
Few phenomena are as suitable as perceptual multistability to demonstrate that the brain constructively interprets sensory input. Several studies have outlined the neural circuitry involved in generating perceptual inference but only more recently has the individual variability of this inferential process been appreciated. Studies of the interaction of evoked and ongoing neural activity show that inference itself is not merely a stimulus-triggered process but is related to the context of the current brain state into which the processing of external stimulation is embedded. As brain states fluctuate, so does perception of a given sensory input. In multistability, perceptual fluctuation rates are consistent for a given individual but vary considerably between individuals. There has been some evidence for a genetic basis for these individual differences and recent morphometric studies of parietal lobe regions have identified neuroanatomical substrates for individual variability in spontaneous switching behaviour. Moreover, disrupting the function of these latter regions by transcranial magnetic stimulation yields systematic interference effects on switching behaviour, further arguing for a causal role of these regions in perceptual inference. Together, these studies have advanced our understanding of the biological mechanisms by which the brain constructs the contents of consciousness from sensory input. PMID:22371620
Sequential Modular Position and Momentum Measurements of a Trapped Ion Mechanical Oscillator
NASA Astrophysics Data System (ADS)
Flühmann, C.; Negnevitsky, V.; Marinelli, M.; Home, J. P.
2018-04-01
The noncommutativity of position and momentum observables is a hallmark feature of quantum physics. However, this incompatibility does not extend to observables that are periodic in these base variables. Such modular-variable observables have been suggested as tools for fault-tolerant quantum computing and enhanced quantum sensing. Here, we implement sequential measurements of modular variables in the oscillatory motion of a single trapped ion, using state-dependent displacements and a heralded nondestructive readout. We investigate the commutative nature of modular variable observables by demonstrating no-signaling in time between successive measurements, using a variety of input states. Employing a different periodicity, we observe signaling in time. This also requires wave-packet overlap, resulting in quantum interference that we enhance using squeezed input states. The sequential measurements allow us to extract two-time correlators for modular variables, which we use to violate a Leggett-Garg inequality. Signaling in time and Leggett-Garg inequalities serve as efficient quantum witnesses, which we probe here with a mechanical oscillator, a system that has a natural crossover from the quantum to the classical regime.
Moreira, Fabiana Tavares; Prantoni, Alessandro Lívio; Martini, Bruno; de Abreu, Michelle Alves; Stoiev, Sérgio Biato; Turra, Alexander
2016-01-15
Microplastics such as pellets have been reported for many years on sandy beaches around the globe. Nevertheless, high variability is observed in their estimates and distribution patterns across the beach environment are still to be unravelled. Here, we investigate the small-scale temporal and spatial variability in the abundance of pellets in the intertidal zone of a sandy beach and evaluate factors that can increase the variability in data sets. The abundance of pellets was estimated during twelve consecutive tidal cycles, identifying the position of the high tide between cycles and sampling drift-lines across the intertidal zone. We demonstrate that beach dynamic processes such as the overlap of strandlines and artefacts of the methods can increase the small-scale variability. The results obtained are discussed in terms of the methodological considerations needed to understand the distribution of pellets in the beach environment, with special implications for studies focused on patterns of input. Copyright © 2015 Elsevier Ltd. All rights reserved.
Data analytics using canonical correlation analysis and Monte Carlo simulation
NASA Astrophysics Data System (ADS)
Rickman, Jeffrey M.; Wang, Yan; Rollett, Anthony D.; Harmer, Martin P.; Compson, Charles
2017-07-01
A canonical correlation analysis is a generic parametric model used in the statistical analysis of data involving interrelated or interdependent input and output variables. It is especially useful in data analytics as a dimensional-reduction strategy that simplifies a complex, multidimensional parameter space by identifying a relatively few combinations of variables that are maximally correlated. One shortcoming of the canonical correlation analysis, however, is that it provides only a linear combination of variables that maximizes these correlations. With this in mind, we describe here a versatile, Monte Carlo-based methodology that is useful in identifying non-linear functions of the variables that lead to strong input/output correlations. We demonstrate that our approach leads to a substantial enhancement of correlations, as illustrated by two experimental applications of substantial interest to the materials science community, namely: (1) determining the interdependence of processing and microstructural variables associated with doped polycrystalline aluminas, and (2) relating microstructural descriptors to the electrical and optoelectronic properties of thin-film solar cells based on CuInSe2 absorbers. Finally, we describe how this approach facilitates experimental planning and process control.
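The core idea, linear CCA plus a Monte Carlo search over nonlinear transforms of the variables, can be sketched compactly. The QR-based canonical correlation and the random power-transform search below are illustrative assumptions, not the authors' exact procedure:

```python
import numpy as np

rng = np.random.default_rng(4)

def first_canonical_corr(X, Y):
    """First canonical correlation between two data sets (QR formulation)."""
    Xc = X - X.mean(0); Yc = Y - Y.mean(0)
    qx, _ = np.linalg.qr(Xc); qy, _ = np.linalg.qr(Yc)
    return np.linalg.svd(qx.T @ qy, compute_uv=False)[0]

# Two "processing" variables and a property that depends non-linearly on them.
n = 500
x1, x2 = rng.uniform(-1, 1, n), rng.uniform(-1, 1, n)
y = x1 ** 2 + 0.5 * x2 + 0.1 * rng.normal(size=n)
X = np.column_stack([x1, x2]); Y = y[:, None]

base = first_canonical_corr(X, Y)   # plain linear CCA

# Monte Carlo over candidate power transforms of each input variable.
best, best_powers = base, (1, 1)
for _ in range(200):
    p = rng.integers(1, 4, size=2)           # random exponents in {1, 2, 3}
    Xt = np.column_stack([x1 ** p[0], x2 ** p[1]])
    r = first_canonical_corr(Xt, Y)
    if r > best:
        best, best_powers = r, tuple(p)
```

Because y depends on the square of x1, the linear canonical correlation is modest, but the random search quickly finds the squaring transform and the correlation jumps; with many processing and microstructural variables, the same search explores a much larger space of candidate functions.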
Local and Long-Range Circuit Connections to Hilar Mossy Cells in the Dentate Gyrus
Sun, Yanjun; Grieco, Steven F.; Holmes, Todd C.
2017-01-01
Hilar mossy cells are the prominent glutamatergic cell type in the dentate hilus of the dentate gyrus (DG); they have been proposed to have critical roles in the DG network. To better understand how mossy cells contribute to DG function, we have applied new viral genetic and functional circuit mapping approaches to quantitatively map and compare local and long-range circuit connections of mossy cells and dentate granule cells in the mouse. The great majority of inputs to mossy cells consist of two parallel inputs from within the DG: an excitatory input pathway from dentate granule cells and an inhibitory input pathway from local DG inhibitory neurons. Mossy cells also receive a moderate degree of excitatory and inhibitory CA3 input from proximal CA3 subfields. Long range inputs to mossy cells are numerically sparse, and they are only identified readily from the medial septum and the septofimbrial nucleus. In comparison, dentate granule cells receive most of their inputs from the entorhinal cortex. The granule cells receive significant synaptic inputs from the hilus and the medial septum, and they also receive direct inputs from both distal and proximal CA3 subfields, which has been underdescribed in the existing literature. Our slice-based physiological mapping studies further supported the identified circuit connections of mossy cells and granule cells. Together, our data suggest that hilar mossy cells are major local circuit integrators and they exert modulation of the activity of dentate granule cells as well as the CA3 region through “back-projection” pathways. PMID:28451637
Ruddy, Barbara C.; Lorenz, David L.; Mueller, David K.
2006-01-01
Nutrient input data for fertilizer use, livestock manure, and atmospheric deposition from various sources were estimated and allocated to counties in the conterminous United States for the years 1982 through 2001. These nationally consistent nutrient input data are needed by the National Water-Quality Assessment Program for investigations of stream- and ground-water quality. For nitrogen, the largest source was farm fertilizer; for phosphorus, the largest sources were farm fertilizer and livestock manure. Nutrient inputs from fertilizer use in nonfarm areas, while locally important, were an order of magnitude smaller than inputs from other sources. Nutrient inputs from all sources increased between 1987 and 1997, but the relative proportions of nutrients from each source were constant. Farm-fertilizer inputs were highest in the upper Midwest, along eastern coastal areas, and in irrigated areas of the West. Nonfarm-fertilizer use was similar in major metropolitan areas throughout the Nation, but was more extensive in the more populated Eastern and Central States and in California. Areas of greater manure inputs were located throughout the South-central and Southeastern States and in scattered areas of the West. Nitrogen deposition from the atmosphere generally increased from west to east and is related to the location of major sources and the effects of precipitation and prevailing winds. These nutrient-loading data at the county level are expected to be the fundamental basis for national and regional assessments of water quality for the National Water-Quality Assessment Program and other large-scale programs.
Biological control of appetite: A daunting complexity.
MacLean, Paul S; Blundell, John E; Mennella, Julie A; Batterham, Rachel L
2017-03-01
This review summarizes a portion of the discussions of an NIH Workshop (Bethesda, MD, 2015) titled "Self-Regulation of Appetite-It's Complicated," which focused on the biological aspects of appetite regulation. It summarizes the key biological inputs of appetite regulation and their implications for body weight regulation. These discussions offer an update of the long-held, rigid perspective of an "adipocentric" biological control, taking a broader view that also includes important inputs from the digestive tract, from lean mass, and from the chemical sensory systems underlying taste and smell. It is only beginning to be understood how these biological systems are integrated and how this integrated input influences appetite and eating behaviors. The relevance of these biological inputs was discussed primarily in the context of obesity and the problem of weight regain, touching on topics related to the biological predisposition for obesity and the impact that obesity treatments (dieting, exercise, bariatric surgery, etc.) might have on appetite and weight loss maintenance. Finally, we consider a common theme that pervaded the workshop discussions: individual variability. It is this individual variability in the predisposition for obesity and in the biological response to weight loss that makes the biological component of appetite regulation so complicated. When this individual biological variability is placed in the context of the diverse environmental and behavioral pressures that also influence eating behaviors, it is easy to appreciate the daunting complexities that arise with the self-regulation of appetite. © 2017 The Obesity Society.
Biological Control of Appetite: A Daunting Complexity
MacLean, Paul S.; Blundell, John E.; Mennella, Julie A.; Batterham, Rachel L.
2017-01-01
Objective This review summarizes a portion of the discussions of an NIH Workshop (Bethesda, MD, 2015) entitled, “Self-Regulation of Appetite, It's Complicated,” which focused on the biological aspects of appetite regulation. Methods Here we summarize the key biological inputs of appetite regulation and their implications for body weight regulation. Results These discussions offer an update of the long-held, rigid perspective of an “adipocentric” biological control, taking a broader view that also includes important inputs from the digestive tract, from lean mass, and from the chemical sensory systems underlying taste and smell. We are only beginning to understand how these biological systems are integrated and how this integrated input influences appetite and food eating behaviors. The relevance of these biological inputs was discussed primarily in the context of obesity and the problem of weight regain, touching on topics related to the biological predisposition for obesity and the impact that obesity treatments (dieting, exercise, bariatric surgery, etc.) might have on appetite and weight loss maintenance. Finally, we consider a common theme that pervaded the workshop discussions, which was individual variability. Conclusions It is this individual variability in the predisposition for obesity and in the biological response to weight loss that makes the biological component of appetite regulation so complicated. When this individual biological variability is placed in the context of the diverse environmental and behavioral pressures that also influence food eating behaviors, it is easy to appreciate the daunting complexities that arise with the self-regulation of appetite. PMID:28229538
ERIC Educational Resources Information Center
Aguilar, Jessica M.; Plante, Elena; Sandoval, Michelle
2018-01-01
Purpose: Variability in the input plays an important role in language learning. The current study examined the role of object variability for new word learning by preschoolers with specific language impairment (SLI). Method: Eighteen 4- and 5-year-old children with SLI were taught 8 new words in 3 short activities over the course of 3 sessions.…
Catecholamines alter the intrinsic variability of cortical population activity and perception
Avramiea, Arthur-Ervin; Nolte, Guido; Engel, Andreas K.; Linkenkaer-Hansen, Klaus; Donner, Tobias H.
2018-01-01
The ascending modulatory systems of the brain stem are powerful regulators of global brain state. Disturbances of these systems are implicated in several major neuropsychiatric disorders. Yet, how these systems interact with specific neural computations in the cerebral cortex to shape perception, cognition, and behavior remains poorly understood. Here, we probed into the effect of two such systems, the catecholaminergic (dopaminergic and noradrenergic) and cholinergic systems, on an important aspect of cortical computation: its intrinsic variability. To this end, we combined placebo-controlled pharmacological intervention in humans, recordings of cortical population activity using magnetoencephalography (MEG), and psychophysical measurements of the perception of ambiguous visual input. A low-dose catecholaminergic, but not cholinergic, manipulation altered the rate of spontaneous perceptual fluctuations as well as the temporal structure of “scale-free” population activity of large swaths of the visual and parietal cortices. Computational analyses indicate that both effects were consistent with an increase in excitatory relative to inhibitory activity in the cortical areas underlying visual perceptual inference. We propose that catecholamines regulate the variability of perception and cognition through dynamically changing the cortical excitation–inhibition ratio. The combined readout of fluctuations in perception and cortical activity we established here may prove useful as an efficient and easily accessible marker of altered cortical computation in neuropsychiatric disorders. PMID:29420565
DOE Office of Scientific and Technical Information (OSTI.GOV)
Armstrong, Alacia; Valverde, Angel; Ramond, Jean-Baptiste
The temporal dynamics of desert soil microbial communities are poorly understood. Given the implications for ecosystem functioning under a global change scenario, a better understanding of desert microbial community stability is crucial. Here, we sampled soils in the central Namib Desert on sixteen different occasions over a one-year period. Using Illumina-based amplicon sequencing of the 16S rRNA gene, we found that α-diversity (richness) was more variable at a given sampling date (spatial variability) than over the course of one year (temporal variability). Community composition remained essentially unchanged across the first 10 months, indicating that spatial sampling might be more important than temporal sampling when assessing β-diversity patterns in desert soils. However, a major shift in microbial community composition was found following a single precipitation event. This shift in composition was associated with a rapid increase in CO2 respiration and productivity, supporting the view that desert soil microbial communities respond rapidly to re-wetting and that this response may be the result of both taxon-specific selection and changes in the availability or accessibility of organic substrates. Recovery to quasi pre-disturbance community composition was achieved within one month after rainfall.
NASA Technical Reports Server (NTRS)
Adler, R. F.; Gu, G.; Curtis, S.; Huffman, G. J.; Bolvin, D. T.; Nelkin, E. J.
2005-01-01
The Global Precipitation Climatology Project (GPCP) 25-year precipitation data set is used to evaluate the variability and extremes of precipitation on global and regional scales. Year-to-year variability of precipitation is evaluated in relation to the overall lack of a significant global trend and to climate events such as ENSO and volcanic eruptions. The validity of conclusions and limitations of the data set are checked by comparison with independent data sets (e.g., TRMM). The GPCP data set necessarily has a heterogeneous time series of input data sources, so part of the assessment described above is to test the initial results for potential influence by major data boundaries in the record. Regional trends, or inter-decadal changes, are also analyzed to determine validity and correlation with other long-term data sets related to the hydrological cycle (e.g., clouds and ocean surface fluxes). Statistics of extremes (both wet and dry) are analyzed at the monthly time scale for the 25 years. A preliminary result of increasing frequency of extreme monthly values will be a focus to determine validity. Daily values for an eight-year period are also examined for variation in extremes and compared to the longer monthly-based study.
Application of a CROPWAT Model to Analyze Crop Yields in Nicaragua
NASA Astrophysics Data System (ADS)
Doria, R.; Byrne, J. M.
2013-12-01
Changes in climate are likely to influence crop yields due to varying evapotranspiration and precipitation over agricultural regions. In Nicaragua, agriculture is extensive, with new areas of land brought into production as the population increases. Nicaraguan staple food items (maize and beans) are produced mostly by small-scale farmers with less than 10 hectares, but they are critical for income generation and food security for rural communities. Given that the majority of these farmers depend on rain for crop irrigation, and that maize and beans are sensitive to variations in temperature and rainfall patterns, the present study was undertaken to assess the impact of climate change on these crop yields. Climate data were generated per municipio, representing the three major climatic zones of the country: the wet Pacific lowland, the cooler Central highland, and the Caribbean lowland. Historical normal climate data from 1970-2000 (baseline period) were used as input to the CROPWAT model to analyze the potential and actual evapotranspiration (ETo and ETa, respectively) that affect crop yields. Further, generated local climate data for future years (2030-2099) under various scenarios were input to the CROPWAT model to determine changes in ETo and ETa from the baseline period. Spatial variability maps of both ETo and ETa as well as crop yields were created. Results indicated significant variation in seasonal rainfall depth during the baseline period and a decreasing trend in future years that would eventually affect yields. These maps enable us to generate appropriate adaptation measures and best management practices for small-scale farmers under future climate change scenarios. KEY WORDS: Climate change, evapotranspiration, CROPWAT, yield, Nicaragua
NASA Astrophysics Data System (ADS)
O'Brien, Katherine R.; Weber, Tony R.; Leigh, Catherine; Burford, Michele A.
2016-12-01
Accurate reservoir budgets are important for understanding regional fluxes of sediment and nutrients. Here we present a comprehensive budget of sediment (based on total suspended solids, TSS), total nitrogen (TN) and total phosphorus (TP) for two subtropical reservoirs on rivers with highly intermittent flow regimes. The budget is completed from July 1997 to June 2011 on the Somerset and Wivenhoe reservoirs in southeast Queensland, Australia, using a combination of monitoring data and catchment model predictions. A major flood in January 2011 accounted for more than half of the water entering and leaving both reservoirs in that year, and approximately 30 % of water delivered to and released from Wivenhoe over the 14-year study period. The flood accounted for an even larger proportion of total TSS and nutrient loads: in Wivenhoe more than one-third of TSS inputs and two-thirds of TSS outputs between 1997 and 2011 occurred during January 2011. During non-flood years, mean historical concentrations provided reasonable estimates of TSS and nutrient loads leaving the reservoirs. Calculating loads from historical mean TSS and TP concentrations during January 2011, however, would have substantially underestimated outputs over the entire study period, by up to a factor of 10. The results have important implications for sediment and nutrient budgets in catchments with highly episodic flow. First, quantifying inputs and outputs during major floods is essential for producing reliable long-term budgets. Second, sediment and nutrient budgets are dynamic, not static. Characterizing uncertainty and variability is therefore just as important for meaningful reservoir budgets as accurate quantification of loads.
Mathematical Model of Solidification During Electroslag Casting of Pilger Roll
NASA Astrophysics Data System (ADS)
Liu, Fubin; Li, Huabing; Jiang, Zhouhua; Dong, Yanwu; Chen, Xu; Geng, Xin; Zang, Ximin
A mathematical model was developed to describe the interaction of multiple physical fields in the slag bath and the solidification process in the ingot during casting of a pilger roll, a variable cross-section component produced by the electroslag casting (ESC) process. The commercial software ANSYS was applied to calculate the electromagnetic field, magnetically driven fluid flow, buoyancy-driven flow, and heat transfer. Transport phenomena in the slag bath and solidification characteristics of the ingot are analyzed for the variable cross-section with variable input power, under the conditions of 9Cr3NiMo steel and a 70%CaF2 - 30%Al2O3 slag system. The calculated results show that the current density distribution, velocity patterns, and temperature profiles in the slag bath, as well as the metal pool profiles in the ingot, differ distinctly across the variable cross-sections owing to differences in input power and cooling conditions. The pool shape and the local solidification time (LST) during the pilger roll ESC process are also analyzed.
Spike-Threshold Adaptation Predicted by Membrane Potential Dynamics In Vivo
Fontaine, Bertrand; Peña, José Luis; Brette, Romain
2014-01-01
Neurons encode information in sequences of spikes, which are triggered when their membrane potential crosses a threshold. In vivo, the spiking threshold displays large variability, suggesting that threshold dynamics have a profound influence on how the combined input of a neuron is encoded in its spiking. Threshold variability could be explained by adaptation to the membrane potential. However, it could also be the case that most threshold variability reflects noise and processes other than threshold adaptation. Here, we investigated threshold variation in auditory neuron responses recorded in vivo in barn owls. We found that spike threshold is quantitatively predicted by a model in which the threshold adapts, tracking the membrane potential at a short timescale. As a result, in these neurons, slow voltage fluctuations do not contribute to spiking because they are filtered out by threshold adaptation. More importantly, these neurons can only respond to input spikes arriving together on a millisecond timescale. These results demonstrate that fast adaptation to the membrane potential captures spike threshold variability in vivo. PMID:24722397
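The adaptive-threshold mechanism can be illustrated with a toy simulation (a sketch, not the authors' model of barn owl neurons; all amplitudes and time constants below are invented for illustration): a threshold that low-pass filters a*V + b with a short time constant tracks slow voltage drift, so only millisecond-scale transients outrun it and trigger crossings.

```python
import numpy as np

rng = np.random.default_rng(2)
dt = 0.1                                    # ms
t = np.arange(0.0, 2000.0, dt)

# Membrane potential: slow drift (500 ms period) plus brief ~1 ms
# depolarizing transients mimicking coincident input spikes
slow = -60.0 + 5.0 * np.sin(2 * np.pi * t / 500.0)
fast = np.zeros_like(t)
for center in rng.uniform(0.0, 2000.0, 40):
    fast += 15.0 * np.exp(-0.5 * ((t - center) / 1.0) ** 2)

def count_spikes(V, a=1.0, b=9.0, tau=5.0):
    """Count upward crossings of an adaptive threshold that tracks
    a*V + b with short time constant tau (ms); parameters illustrative."""
    theta = np.empty_like(V)
    theta[0] = a * V[0] + b
    for i in range(1, len(V)):
        theta[i] = theta[i - 1] + (dt / tau) * (a * V[i] + b - theta[i - 1])
    return int(np.sum((V[1:] > theta[1:]) & (V[:-1] <= theta[:-1])))

n_slow = count_spikes(slow)          # slow fluctuations alone
n_full = count_spikes(slow + fast)   # with millisecond-scale transients
print(f"spikes from slow drift alone: {n_slow}; with fast input: {n_full}")
```

The slow component never crosses the tracking threshold, while the fast transients do, which is the qualitative behavior the abstract describes.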
A cortical motor nucleus drives the basal ganglia-recipient thalamus in singing birds
Goldberg, Jesse H.
2012-01-01
The pallido-recipient thalamus transmits information from the basal ganglia (BG) to the cortex and plays a critical role in motor initiation and learning. Thalamic activity is strongly inhibited by pallidal inputs from the BG, but the role of non-pallidal inputs, such as excitatory inputs from cortex, is unclear. We recorded simultaneously from presynaptic pallidal axon terminals and postsynaptic thalamocortical neurons in a BG-recipient thalamic nucleus necessary for vocal variability and learning in zebra finches. We found that song-locked rate modulations in the thalamus could not be explained by pallidal inputs alone, and persisted following pallidal lesion. Instead, thalamic activity was likely driven by inputs from a motor ‘cortical’ nucleus also necessary for singing. These findings suggest a role for cortical inputs to the pallido-recipient thalamus in driving premotor signals important for exploratory behavior and learning. PMID:22327474
NASA Astrophysics Data System (ADS)
Ranasinghage, P. N.; Silva, A. N.; Vlahos, P.
2016-12-01
Inorganic `reactive' nutrients are of central importance in understanding the role of limiting nutrients in the ocean, since they facilitate marine biological productivity and carbon sequestration and thereby help regulate biogeochemical climate feedbacks. Significant inorganic fractions are expected to be exported episodically to the ocean in fluvial fluxes, though this is poorly understood, and little published work exists on fluxes from Sri Lankan freshwater streams. A study was carried out to quantify the contribution of the Kelani, Kalu and Gin Rivers, three major rivers in the wet zone of Sri Lanka, to the export of major limiting-nutrient fluxes to the Indian Ocean; to understand the significance of their variability patterns with rainfall; and to understand differences in their inputs. The study was conducted during the summer monsoonal period from late August to early November at two- to three-week intervals; water samples were collected for ammonia, nitrite, nitrate, orthophosphate, silica, sulfate and iron analysis by colorimetric spectroscopy. Discharge and rainfall data were retrieved from the Department of Irrigation and the Department of Meteorology, Sri Lanka, respectively. According to a two-way ANOVA, none of the individual fluxes showed significant differences (p>0.1) in either their temporal or spatial variability, suggesting that the studied rivers respond similarly in fluvial transport owing to the similar rainfall intensities observed during the study period in the wet zone. Linear regression analysis indicates that only PO43- (p<0.01), SO42- (p<0.01) and NO2- (p<0.01 for Kelani and Kalu; …). Key words: nutrients, fluvial, fluxes, Redfield ratios
DOE Office of Scientific and Technical Information (OSTI.GOV)
Luo, Yiqi; Ahlström, Anders; Allison, Steven D.
Soil carbon (C) is a critical component of Earth system models (ESMs) and its diverse representations are a major source of the large spread across models in the terrestrial C sink from the 3rd to 5th assessment reports of the Intergovernmental Panel on Climate Change (IPCC). Improving soil C projections is of a high priority for Earth system modeling in the future IPCC and other assessments. To achieve this goal, we suggest that (1) model structures should reflect real-world processes, (2) parameters should be calibrated to match model outputs with observations, and (3) external forcing variables should accurately prescribe the environmental conditions that soils experience. Firstly, most soil C cycle models simulate C input from litter production and C release through decomposition. The latter process has traditionally been represented by 1st-order decay functions, regulated primarily by temperature, moisture, litter quality, and soil texture. While this formulation well captures macroscopic SOC dynamics, better understanding is needed of their underlying mechanisms as related to microbial processes, depth-dependent environmental controls, and other processes that strongly affect soil C dynamics. Secondly, incomplete use of observations in model parameterization is a major cause of bias in soil C projections from ESMs. Optimal parameter calibration with both pool- and flux-based datasets through data assimilation is among the highest priorities for near-term research to reduce biases among ESMs. Thirdly, external variables are represented inconsistently among ESMs, leading to differences in modeled soil C dynamics. We recommend the implementation of traceability analyses to identify how external variables and model parameterizations influence SOC dynamics in different ESMs. 
Overall, projections of the terrestrial C sink can be substantially improved when reliable datasets are available to select the most representative model structure, constrain parameters, and prescribe forcing fields.
Quasi-continuous stochastic simulation framework for flood modelling
NASA Astrophysics Data System (ADS)
Moustakis, Yiannis; Kossieris, Panagiotis; Tsoukalas, Ioannis; Efstratiadis, Andreas
2017-04-01
Typically, flood modelling in the context of everyday engineering practices is addressed through event-based deterministic tools, e.g., the well-known SCS-CN method. A major shortcoming of such approaches is the ignorance of uncertainty, which is associated with the variability of soil moisture conditions and the variability of rainfall during the storm event. In event-based modeling, the sole expression of uncertainty is the return period of the design storm, which is assumed to represent the acceptable risk of all output quantities (flood volume, peak discharge, etc.). On the other hand, the varying antecedent soil moisture conditions across the basin are represented by means of scenarios (e.g., the three AMC types by SCS), while the temporal distribution of rainfall is represented through standard deterministic patterns (e.g., the alternative blocks method). In order to address these major inconsistencies, simultaneously preserving the simplicity and parsimony of the SCS-CN method, we have developed a quasi-continuous stochastic simulation approach, comprising the following steps: (1) generation of synthetic daily rainfall time series; (2) update of potential maximum soil moisture retention, on the basis of accumulated five-day rainfall; (3) estimation of daily runoff through the SCS-CN formula, using as inputs the daily rainfall and the updated value of soil moisture retention; (4) selection of extreme events and application of the standard SCS-CN procedure for each specific event, on the basis of synthetic rainfall. This scheme requires the use of two stochastic modelling components, namely the CastaliaR model, for the generation of synthetic daily data, and the HyetosMinute model, for the disaggregation of daily rainfall to finer temporal scales. Outcomes of this approach are a large number of synthetic flood events, allowing for expressing the design variables in statistical terms and thus properly evaluating the flood risk.
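Steps (1)-(3) of the scheme can be sketched as follows. This is a minimal illustration, not the authors' CastaliaR/HyetosMinute implementation: synthetic rainfall is a crude stand-in, and the retention update uses the classical AMC I/II/III curve-number conversions and antecedent-rainfall thresholds as textbook assumptions.

```python
import numpy as np

def scs_cn_runoff(P, S):
    """Step (3): daily SCS-CN runoff depth (same units as rainfall P),
    with initial abstraction Ia = 0.2*S."""
    Ia = 0.2 * S
    return np.where(P > Ia, (P - Ia) ** 2 / (P - Ia + S), 0.0)

def update_retention(rain, cn2, window=5):
    """Step (2): adjust potential maximum retention S (mm) from the
    accumulated 5-day antecedent rainfall, via the classical AMC I/II/III
    curve-number conversions (thresholds in mm, dormant-season values)."""
    cn1 = cn2 / (2.281 - 0.01281 * cn2)   # dry (AMC I)
    cn3 = cn2 / (0.427 + 0.00573 * cn2)   # wet (AMC III)
    S = np.empty_like(rain)
    for t in range(len(rain)):
        p5 = rain[max(0, t - window):t].sum()
        cn = cn1 if p5 < 12.7 else (cn3 if p5 > 27.9 else cn2)
        S[t] = 25400.0 / cn - 254.0
    return S

rng = np.random.default_rng(1)
# Step (1) stand-in: synthetic daily rainfall (mm), mostly dry days,
# plus one engineered wet spell ending in a large storm
rain = rng.exponential(4.0, size=365) * (rng.random(365) < 0.3)
rain[100:105] = 10.0
rain[105] = 40.0

S = update_retention(rain, cn2=75.0)   # step (2)
runoff = scs_cn_runoff(rain, S)        # step (3)
print(f"annual rainfall {rain.sum():.0f} mm, runoff {runoff.sum():.0f} mm")
```

Step (4) would then select the extreme events from such a series and apply the standard event-based SCS-CN procedure to each.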
Toward more realistic projections of soil carbon dynamics by Earth system models
Luo, Y.; Ahlström, Anders; Allison, Steven D.; Batjes, Niels H.; Brovkin, V.; Carvalhais, Nuno; Chappell, Adrian; Ciais, Philippe; Davidson, Eric A.; Finzi, Adien; Georgiou, Katerina; Guenet, Bertrand; Hararuk, Oleksandra; Harden, Jennifer; He, Yujie; Hopkins, Francesca; Jiang, L.; Koven, Charles; Jackson, Robert B.; Jones, Chris D.; Lara, M.; Liang, J.; McGuire, A. David; Parton, William; Peng, Changhui; Randerson, J.; Salazar, Alejandro; Sierra, Carlos A.; Smith, Matthew J.; Tian, Hanqin; Todd-Brown, Katherine E. O; Torn, Margaret S.; van Groenigen, Kees Jan; Wang, Ying; West, Tristram O.; Wei, Yaxing; Wieder, William R.; Xia, Jianyang; Xu, Xia; Xu, Xiaofeng; Zhou, T.
2016-01-01
Soil carbon (C) is a critical component of Earth system models (ESMs), and its diverse representations are a major source of the large spread across models in the terrestrial C sink from the third to fifth assessment reports of the Intergovernmental Panel on Climate Change (IPCC). Improving soil C projections is of a high priority for Earth system modeling in the future IPCC and other assessments. To achieve this goal, we suggest that (1) model structures should reflect real-world processes, (2) parameters should be calibrated to match model outputs with observations, and (3) external forcing variables should accurately prescribe the environmental conditions that soils experience. First, most soil C cycle models simulate C input from litter production and C release through decomposition. The latter process has traditionally been represented by first-order decay functions, regulated primarily by temperature, moisture, litter quality, and soil texture. While this formulation well captures macroscopic soil organic C (SOC) dynamics, better understanding is needed of their underlying mechanisms as related to microbial processes, depth-dependent environmental controls, and other processes that strongly affect soil C dynamics. Second, incomplete use of observations in model parameterization is a major cause of bias in soil C projections from ESMs. Optimal parameter calibration with both pool- and flux-based data sets through data assimilation is among the highest priorities for near-term research to reduce biases among ESMs. Third, external variables are represented inconsistently among ESMs, leading to differences in modeled soil C dynamics. We recommend the implementation of traceability analyses to identify how external variables and model parameterizations influence SOC dynamics in different ESMs. 
Overall, projections of the terrestrial C sink can be substantially improved when reliable data sets are available to select the most representative model structure, constrain parameters, and prescribe forcing fields.
Photochemical free radical production rates in the eastern Caribbean
NASA Astrophysics Data System (ADS)
Dister, Brian; Zafiriou, Oliver C.
1993-02-01
Potential photochemical production rates of total (NO-scavengeable) free radicals were surveyed underway (> 900 points) in the eastern Caribbean and Orinoco delta in spring and fall 1988. These data document seasonal trends and large-scale (˜ 10-1000 km) variability in the pools of sunlight-generated reactive transients, which probably mediate a major portion of marine photoredox transformations. Radical production potential was detectable in all waters and was reasonably quantifiable at rates above 0.25 nmol L-1 min-1 sun-1. Radical production rates varied from ˜ 0.1-0.5 nmol L-1 min-1 of full-sun illumination in "blue water" to > 60 nmol L-1 min-1 in some estuarine waters in the high-flow season. Qualitatively, spatiotemporal potential rate distributions strikingly resembled those of "chlorophyll" (a riverine-influence tracer of uncertain specificity) in 1979-1981 CZCS images of the region [Müller-Karger et al., 1988] at all scales. Basin-scale occurrence of greatly enhanced rates in fall compared to spring is attributed to terrestrial chromophore inputs, primarily from the Orinoco River; any contributions from Amazon water and nutrient-stimulus effects could not be resolved. A major part of the functionally photoreactive colored organic matter (COM) involved in radical formation clearly mixes without massive loss out into high-salinity waters, although humic acids may flocculate in estuaries. A similar conclusion applies over smaller scales for COM as measured optically [Blough et al., this issue]. Furthermore, optical absorption and radical production rates were positively correlated in the estuarine region in fall. These cruises demonstrated that photochemical techniques are now adequate to treat terrestrial photochemical chromophore inputs as an estuarine mixing problem on a large scale, though the ancillary data base does not currently support such an analysis in this region.
Eastern Caribbean waters are not markedly more reactive at comparable salinities than waters of the Gulf of Maine and North Atlantic Bight, despite large inputs of colored waters from two large tropical rivers with substantial "black water" tributaries. Other sources of reactive COM, such as grazing, sedimentary diagenesis, and "marine humus" may increase temperate waters' photoreactivity; alternatively, northern waters may be chromophore-rich because they are light-poor and photobleaching is a major sink of photoreactive COM.
Prestressed elastomer for energy storage
Hoppie, Lyle O.; Speranza, Donald
1982-01-01
Disclosed is a regenerative braking device for an automotive vehicle. The device includes a power isolating assembly (14), an infinitely variable transmission (20) interconnecting an input shaft (16) with an output shaft (18), and an energy storage assembly (22). The storage assembly includes a plurality of elastomeric rods (44, 46) mounted for rotation and connected in series between the input and output shafts. The elastomeric rods are prestressed along their rotational or longitudinal axes to inhibit buckling of the rods due to torsional stressing of the rods in response to relative rotation of the input and output shafts.
Feeney, Daniel F; Mani, Diba; Enoka, Roger M
2018-06-07
We investigated the associations between grooved pegboard times, force steadiness (coefficient of variation for force), and variability in an estimate of the common synaptic input to motor neurons innervating the wrist extensor muscles during steady contractions performed by young and older adults. The discharge times of motor units were derived from recordings obtained with high-density surface electrodes while participants performed steady isometric contractions at 10% and 20% of maximal voluntary contraction (MVC) force. The steady contractions were performed with a pinch grip and wrist extension, both independently (single action) and concurrently (double action). The variance in common synaptic input to motor neurons was estimated with a state-space model of the latent common input dynamics. There was a statistically significant association between the coefficient of variation for force during the steady contractions and the estimated variance in common synaptic input in young (r² = 0.31) and older (r² = 0.39) adults, but not between either the mean or the coefficient of variation of the interspike interval of single motor units and the coefficient of variation for force. Moreover, the estimated variance in common synaptic input during the double-action task with the wrist extensors at the 20% target was significantly associated with grooved pegboard time (r² = 0.47) for older adults, but not young adults. These findings indicate that longer pegboard times of older adults were associated with worse force steadiness and greater fluctuations in the estimated common synaptic input to motor neurons during steady contractions. This article is protected by copyright. All rights reserved.
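The latent common-input idea can be illustrated with a deliberately simplified scalar state-space model (a sketch, not the authors' estimator; the random-walk dynamics, noise variances, and unit count are all hypothetical): a common drive shared by several motor units is observed noisily through each unit and recovered with a Kalman filter.

```python
import numpy as np

rng = np.random.default_rng(3)
T, n_units = 2000, 10
q, r = 0.01, 0.5     # process / observation noise variances (assumed)

common = np.cumsum(rng.normal(0.0, np.sqrt(q), T))   # latent common drive
obs = common[:, None] + rng.normal(0.0, np.sqrt(r), (T, n_units))

# Scalar Kalman filter: a random-walk state observed by every motor unit
x, P = 0.0, 1.0
est = np.empty(T)
for t in range(T):
    P += q                           # predict (random-walk state)
    for y in obs[t]:                 # sequential scalar measurement updates
        K = P / (P + r)
        x += K * (y - x)
        P *= 1.0 - K
    est[t] = x

rmse = np.sqrt(np.mean((est - common) ** 2))
print(f"RMSE of recovered common input: {rmse:.3f}")
```

The variance of the recovered trajectory then plays the role of the "estimated variance in common synaptic input" that the study relates to force steadiness.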
Remote media vision-based computer input device
NASA Astrophysics Data System (ADS)
Arabnia, Hamid R.; Chen, Ching-Yi
1991-11-01
In this paper, we introduce a vision-based computer input device which has been built at the University of Georgia. The user of this system gives commands to the computer without touching any physical device. The system receives input through a CCD camera; it is PC-based and is built on top of the DOS operating system. The major components of the input device are a monitor, an image-capturing board, a CCD camera, and some software (developed by us). These are interfaced with a standard PC running under the DOS operating system.
A Model of Career Success: A Longitudinal Study of Emergency Physicians
ERIC Educational Resources Information Center
Pachulicz, Sarah; Schmitt, Neal; Kuljanin, Goran
2008-01-01
Objective and subjective career success were hypothesized to mediate the relationships between sociodemographic variables, human capital indices, individual difference variables, and organizational sponsorship as inputs and a retirement decision and intentions to leave either the specialty of emergency medicine (EM) or medicine as output…
Relationship between cotton yield and soil electrical conductivity, topography, and landsat imagery
USDA-ARS?s Scientific Manuscript database
Understanding spatial and temporal variability in crop yield is a prerequisite to implementing site-specific management of crop inputs. Apparent soil electrical conductivity (ECa), soil brightness, and topography are easily obtained data that can explain yield variability. The objectives of this stu...
Input filter compensation for switching regulators
NASA Technical Reports Server (NTRS)
Kelkar, S. S.; Lee, F. C.
1983-01-01
A novel input filter compensation scheme for a buck regulator that eliminates the interaction between the input filter output impedance and the regulator control loop is presented. The scheme is implemented using a feedforward loop that senses the input filter state variables and uses this information to modulate the duty cycle signal. The feedforward design process presented is seen to be straightforward and the feedforward easy to implement. Extensive experimental data supported by analytical results show that significant performance improvement is achieved with the use of feedforward in the following performance categories: loop stability, audiosusceptibility, output impedance and transient response. The use of feedforward results in isolating the switching regulator from its power source thus eliminating all interaction between the regulator and equipment upstream. In addition the use of feedforward removes some of the input filter design constraints and makes the input filter design process simpler thus making it possible to optimize the input filter. The concept of feedforward compensation can also be extended to other types of switching regulators.
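The benefit of input feedforward can be seen even in a static, averaged buck-converter model, where choosing the duty cycle as Vref/Vin cancels input-voltage disturbances entirely. This is a drastic simplification of the paper's state-variable feedforward scheme; the names and numeric values below are purely illustrative:

```python
# Averaged (static) buck converter: the output voltage is duty * Vin.
# With a fixed duty cycle, input disturbances pass straight to the output;
# with feedforward duty = VREF / vin, the averaged output stays at VREF.
def buck_output(vin, duty):
    return duty * vin

VREF, VIN_NOM = 5.0, 12.0          # illustrative setpoint and nominal input
vins = [10.0, 12.0, 14.0]          # input-voltage disturbance sweep

fixed = [buck_output(v, VREF / VIN_NOM) for v in vins]       # no feedforward
feedforward = [buck_output(v, VREF / v) for v in vins]       # with feedforward
```

In this toy model the feedforward outputs are identical for every input voltage, while the fixed-duty outputs track the disturbance; the paper's dynamic scheme additionally senses the input-filter state variables to remove the filter/control-loop interaction.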
Production Economics of Private Forestry: A Comparison of Industrial and Nonindustrial Forest Owners
David H. Newman; David N. Wear
1993-01-01
This paper compares the production behavior of industrial and nonindustrial private forestland owners in the southeastern U.S. using a restricted profit function. Profits are modeled as a function of two outputs, sawtimber and pulpwood; one variable input, regeneration effort; and two quasi-fixed inputs, land and growing stock. Although an identical profit function is...
Nitrogen input from residential lawn care practices in suburban watersheds in Baltimore county, MD
Neely L. Law; Lawrence E. Band; J. Morgan Grove
2004-01-01
A residential lawn care survey was conducted as part of the Baltimore Ecosystem Study, a Long-term Ecological Research project funded by the National Science Foundation and collaborating agencies, to estimate the nitrogen input to urban watersheds from lawn care practices. The variability in the fertilizer N application rates and the factors affecting the application...
Darold P. Batzer; Brian J. Palik
2007-01-01
Aquatic invertebrates are crucial components of foodwebs in seasonal woodland ponds, and leaf litter is probably the most important food resource for those organisms. We quantified the influence of leaf litter inputs on aquatic invertebrates in two seasonal woodland ponds using an interception experiment. Ponds were hydrologically split using a sandbag-plastic barrier...
J.W. Hornbeck; S.W. Bailey; D.C. Buso; J.B. Shanley
1997-01-01
Chemistry of precipitation and streamwater and resulting input-output budgets for nutrient ions were determined concurrently for three years on three upland, forested watersheds located within an 80 km radius in central New England. Chemistry of precipitation and inputs of nutrients via wet deposition were similar among the three watersheds and were generally typical...
Fusion of Hard and Soft Information in Nonparametric Density Estimation
2015-06-10
Density estimation is needed for generation of input densities to simulation and stochastic optimization models, in analysis of simulation output, and when instantiating probability models. We adopt a constrained maximum... an essential step in simulation analysis and stochastic optimization is the generation of probability densities for input random variables; see for
Variable input observer for structural health monitoring of high-rate systems
NASA Astrophysics Data System (ADS)
Hong, Jonathan; Laflamme, Simon; Cao, Liang; Dodson, Jacob
2017-02-01
The development of high-rate structural health monitoring methods is intended to provide damage detection on timescales of 10 µs to 10 ms, where speed of detection is critical to maintaining structural integrity. Here, a novel Variable Input Observer (VIO) coupled with an adaptive observer is proposed as a potential solution for complex high-rate problems. The VIO is designed to adapt its input space based on real-time identification of the system's essential dynamics. By selecting appropriate time-delayed coordinates, defined by both a time delay and an embedding dimension, the proper input space is chosen, which allows more accurate estimation of the current state and faster convergence. The optimal time delay is estimated based on mutual information, and the embedding dimension is based on false nearest neighbors. A simulation of the VIO is conducted on a two degree-of-freedom system with simulated damage. Results are compared with an adaptive Luenberger observer, a fixed time-delay observer, and a Kalman filter. Under its preliminary design, the VIO converges significantly faster than the Luenberger and fixed time-delay observers. It performed similarly to the Kalman filter in terms of convergence, but with greater accuracy.
Family medicine outpatient encounters are more complex than those of cardiology and psychiatry.
Katerndahl, David; Wood, Robert; Jaén, Carlos Roberto
2011-01-01
Comparison studies suggest that the guideline-concordant care provided for specific medical conditions is less optimal in primary care than in cardiology and psychiatry settings. The purpose of this study is to estimate the relative complexity of patient encounters in general/family practice, cardiology, and psychiatry settings. A secondary analysis of the 2000 National Ambulatory Medical Care Survey data for ambulatory patients seen in general/family practice, cardiology, and psychiatry settings was performed. The complexity for each variable was estimated as the quantity weighted by variability and diversity. There is minimal difference in the unadjusted input and total encounter complexity of general/family practice and cardiology; psychiatry's input is less complex. Cardiology encounters involved more input quantitatively, but the diversity of general/family practice input eliminated the difference. Cardiology also involved more complex output. However, when the duration of visit is factored in, the complexity of care provided per hour in general/family practice is 33% greater than in cardiology and five times greater than in psychiatry. Care during family physician visits is thus more complex per hour than care during visits to cardiologists or psychiatrists. This may account for a lower rate of completion of process items measured for quality of care.
Chien, Jung Hung; Mukherjee, Mukul; Siu, Ka-Chun; Stergiou, Nicholas
2016-05-01
When maintaining postural stability temporally under increased sensory conflict, a more rigid response is used in which the available degrees of freedom are essentially frozen. The current study investigated whether such a strategy is also utilized during more dynamic situations of postural control, as is the case with walking. This study attempted to answer this question by using the Locomotor Sensory Organization Test (LSOT). This apparatus incorporates SOT-inspired perturbations of the visual and the somatosensory systems. Ten healthy young adults performed the six conditions of the traditional SOT and the corresponding six conditions on the LSOT. The temporal structure of sway variability was evaluated from all conditions. The results showed that in the anterior-posterior direction somatosensory input is crucial for postural control during both walking and standing; visual input also had an effect but was not as prominent as the somatosensory input. In the medial-lateral direction and with respect to walking, visual input has a much larger effect than somatosensory input, possibly due to the added contributions of peripheral vision during walking; in standing such contributions may not be as significant for postural control. In sum, as sensory conflict increases, more rigid and regular sway patterns are found during standing, confirming previous results in the literature; the opposite was the case with walking, where more exploratory and adaptive movement patterns were present.
A dual-input nonlinear system analysis of autonomic modulation of heart rate
NASA Technical Reports Server (NTRS)
Chon, K. H.; Mullen, T. J.; Cohen, R. J.
1996-01-01
Linear analyses of fluctuations in heart rate and other hemodynamic variables have been used to elucidate cardiovascular regulatory mechanisms. The role of nonlinear contributions to fluctuations in hemodynamic variables has not been fully explored. This paper presents a nonlinear system analysis of the effect of fluctuations in instantaneous lung volume (ILV) and arterial blood pressure (ABP) on heart rate (HR) fluctuations. To successfully employ a nonlinear analysis based on the Laguerre expansion technique (LET), we introduce an efficient procedure for broadening the spectral content of the ILV and ABP inputs to the model by adding white noise. Results from computer simulations demonstrate the effectiveness of broadening the spectral band of input signals to obtain consistent and stable kernel estimates with the use of the LET. Without broadening the band of the ILV and ABP inputs, the LET did not provide stable kernel estimates. Moreover, we extend the LET to the case of multiple inputs in order to accommodate the analysis of the combined effect of ILV and ABP on heart rate. Analyses of data based on the second-order Volterra-Wiener model reveal an important contribution of the second-order kernels to the description of the effect of lung volume and arterial blood pressure on heart rate. Furthermore, physiological effects of the autonomic blocking agents propranolol and atropine on changes in the first- and second-order kernels are also discussed.
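The spectral-broadening step can be illustrated with a toy signal: adding a small amount of white noise to a narrowband input spreads its power across the whole band. The sampling rate, noise level, and the 0.25 Hz "respiration-like" frequency below are invented for illustration, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 100.0                               # sampling rate, Hz (illustrative)
t = np.arange(0.0, 60.0, 1.0 / fs)       # 60 s record

# A narrowband input, e.g. a respiration-like oscillation at 0.25 Hz.
ilv = np.sin(2 * np.pi * 0.25 * t)

def band_power_fraction(x, fs, f_lo):
    """Fraction of total (non-DC) power at frequencies >= f_lo."""
    spec = np.abs(np.fft.rfft(x - x.mean())) ** 2
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    return float(spec[freqs >= f_lo].sum() / spec[1:].sum())

# Broaden the input spectrum by adding small white noise, as in the
# procedure described in the abstract.
ilv_broad = ilv + 0.2 * rng.standard_normal(len(t))

frac_narrow = band_power_fraction(ilv, fs, f_lo=1.0)
frac_broad = band_power_fraction(ilv_broad, fs, f_lo=1.0)
```

The narrowband signal has essentially no power above 1 Hz, while the broadened version does, which is what allows kernel estimates to be identified over the full frequency range of interest.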
He, Songjie; Xu, Y Jun
2015-01-15
This study investigated long-term (1980-2009) yields and variability of total organic carbon (TOC) from four major coastal rivers in Louisiana entering the Northern Gulf of Mexico where a large-area summer hypoxic zone has been occurring since the mid-1980s. Two of these rivers drain agriculture-intensive (>40%) watersheds, while the other two rivers drain forest-pasture dominated (>50%) watersheds. The study found that these rivers discharged a total of 13.0 × 10⁴ t TOC annually, fluctuating from 5.9 × 10⁴ to 22.8 × 10⁴ t. Seasonally, the rivers showed high TOC yield during the winter and early spring months, corresponding to the seasonal trend of river discharge. While river hydrology controlled TOC yields, land use has played an important role in fluxes, seasonal variations, and characteristics of TOC. The findings fill in a critical information gap of quantity and quality of organic carbon transport from coastal watersheds to one of the world's largest summer hypoxic zones. Copyright © 2014 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Garay, M. J.; Bull, M. A.; Witek, M. L.; Diner, D. J.; Seidel, F.
2017-12-01
Since early 2000, the Multi-angle Imaging SpectroRadiometer (MISR) instrument on NASA's Terra satellite has been providing operational Level 2 (swath-based) aerosol optical depth (AOD) and particle property retrievals at 17.6 km spatial resolution and atmospherically corrected land surface products at 1.1 km resolution. A major, multi-year development effort has led to the release of updated operational MISR Level 2 aerosol and land surface retrieval products. The spatial resolution of the aerosol product has been increased to 4.4 km, allowing more detailed characterization of aerosol spatial variability, especially near local sources and in urban areas. The product content has been simplified and updated to include more robust measures of retrieval uncertainty and other fields to benefit users. The land surface product has also been updated to incorporate the Version 23 aerosol product as input and to improve spatial coverage, particularly over mountainous terrain and snow/ice-covered surfaces. We will describe the major upgrades incorporated in Version 23, present validation of the aerosol product, and describe some of the applications enabled by these product updates.
Glaciation and Hydrologic Variability in Tropical South America During the Last 400,000 Years
NASA Astrophysics Data System (ADS)
Fritz, S. C.; Baker, P. A.; Seltzer, G. O.; Ekdahl, E. J.; Ballantyne, A.
2005-12-01
The expansion and contraction of northern continental ice sheets is a fundamental characteristic of the Quaternary. However, the extent of tropical glaciation is poorly constrained, particularly for periods prior to the Last Glacial Maximum (LGM). Similarly, the magnitude and timing of hydrologic variation in tropical South America is not clearly defined over multiple glacial cycles. Thus, the relative roles of global temperature change and insolation control of the South American Summer Monsoon (SASM) are unclear. We have reconstructed the timing of glaciation and precipitation variability in the tropical Andes of South America from drill cores from Lake Titicaca, Bolivia/Peru. The longest core (site LT01-2B, 235 m water depth) is 136 m and consists of four major silt-dominated units with high magnetic susceptibility, low organic carbon concentration, and no carbonate, which are indicative of extensive glacial activity in the cordillera surrounding the lake. These units alternate with laminated low-susceptibility units, with high carbonate and organic carbon concentrations, which reflect times when detrital input from the watershed was low and lake-level was lowered to below the outlet threshold, driving carbonate precipitation. Thus, the stratigraphy suggests that the core spans four major periods of glaciation and the subsequent interstadials. Core chronology is based on radiocarbon in the uppermost 25m, U-series dates on aragonite laminae, and tuning of the calcium carbonate stratigraphy in the lowermost sediments to the Vostok CO2 record. High-resolution (ca. 100 yr) sampling of sediments spanning the last glacial stage shows distinct millennial-scale variability from 20 - 65 kyr BP. 
This variability is evident in the periodic deposition of turbidites, which are characterized by low biogenic silica concentrations, elevated benthic diatom abundances, heavy carbon isotopic values, high C/N ratios, and an increase in mean grain size - a composite signal indicative of enhanced input to this deepwater site of material originally deposited in nearshore regions of the lake. U-series ages at the top of the penultimate (pre-Holocene) unit of laminated sediments suggest that the last major low stand of Lake Titicaca dates from MIS 5.5. Diatom data indicate that this was the most saline interval in the recovered sequence and thus suggest that MIS5.5 was the time of maximum aridity. The tuned drill-core magnetic susceptibility record suggests that glacial stages in the tropical Andes were approximately synchronous with high-latitude glacial stages and globally cold climate, with increased glacial activity in the periods 370-322, 300-238, 230-213, 188-139, and 65-15 kyr BP. Overall, the intervals of increased glaciation are periods when Lake Titicaca was deep, fresh, and overflowing, as inferred from calcium carbonate concentration, carbon isotopic values, and the diatom composition. The timing of lake-level change relative to high-latitude climate and insolation variation suggests that the water balance of the tropical Andes was at least as strongly influenced by global temperature change and global-scale boundary conditions as by insolation control of the SASM.
NASA Astrophysics Data System (ADS)
Beem-Miller, Jeffrey; Lehmann, Johannes
2017-04-01
The majority of the world's soil organic carbon (OC) stock is stored below 30 cm in depth, yet sampling for soil OC assessment rarely goes below 30 cm. Recent studies suggest that subsoil OC is distinct from topsoil OC in quantity and quality: subsoil OC concentrations are typically much lower and turnover times are much longer, but the mechanisms involved in retention and input of OC to the subsoil are not well understood. Improving our understanding of subsoil OC is essential for balancing the global carbon budget and confronting the challenge of global climate change. This study was undertaken to assess the relationship between OC stock and potential drivers of OC dynamics, including both soil properties and environmental covariates, in topsoil (0 to 30 cm) versus subsoil (30 to 75 cm). The performance of commonly used depth functions in predicting OC stock from 0 to 75 cm was also assessed. Depth functions are a useful tool for extrapolating OC stock below the depth of sampling, but may poorly model "hot spots" of OC accumulation, and be inadequate for modelling the distinct dynamics of topsoil and subsoil OC when applied with a single functional form. We collected two hundred soil cores on an arable Mollisol, sectioned into five depth increments (0-10, 10-20, 20-30, 30-50, and 50-75 cm), and performed the following analyses on each depth increment: concentration of OC, inorganic C, permanganate oxidizable carbon (POXC), and total N, as well as texture, pH, and bulk density; a digital elevation model was used to calculate elevation, slope, curvature, and soil topographic wetness index. We found that topsoil OC stocks were significantly correlated (p < 0.05) with terrain variables, texture, and pH, while subsoil OC stock was only significantly correlated with topsoil OC stock and soil pH. Total OC stock was highly spatially variable, and the relationship between surface soil properties, terrain variables, and subsoil OC stock was spatially variable as well. 
Hot spots of subsoil OC accumulation were correlated with higher pH (> 7.0), flat topography, a high OC to total N ratio, and a high ratio of POXC to OC. These findings suggest that at this site, topsoil OC stock is input driven, while OC accumulation in the subsoil is retention dominated. Accordingly, a new depth function is proposed that uses a linear relationship to model OC stock in topsoil and a power function to model OC stock in the subsoil. The combined depth function performed better than did negative exponential, power, and linear functions alone.
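The proposed combined depth function can be sketched as follows. The breakpoint at 30 cm matches the study's topsoil/subsoil split, but all coefficient values here are invented for illustration and are not fitted to the study's data:

```python
import numpy as np

def combined_depth_function(z, a, b, c, k, z_break=30.0):
    """Piecewise OC-density depth model: linear above z_break (cm, topsoil),
    power-law below (subsoil). a, b, c, k are illustrative fit parameters."""
    z = np.asarray(z, dtype=float)
    top = a + b * z                # linear topsoil segment
    sub = c * z ** (-k)            # power-law subsoil segment
    return np.where(z <= z_break, top, sub)

# Hypothetical OC densities at the midpoints of the study's five sampling
# increments (0-10, 10-20, 20-30, 30-50, and 50-75 cm).
depths = np.array([5.0, 15.0, 25.0, 40.0, 62.5])
oc = combined_depth_function(depths, a=30.0, b=-0.5, c=320.0, k=0.9)
```

In practice the two segments would be fitted separately (e.g. least squares on the topsoil and subsoil increments), with the coefficients chosen so the profile stays continuous at the breakpoint.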
NASA Astrophysics Data System (ADS)
Mori, Ryuhei
2016-11-01
Brassard et al. [Phys. Rev. Lett. 96, 250401 (2006), 10.1103/PhysRevLett.96.250401] showed that shared nonlocal boxes with a CHSH (Clauser, Horne, Shimony, and Holt) probability greater than (3 + √6)/6 yield trivial communication complexity. There still exists a gap with the maximum CHSH probability (2 + √2)/4 achievable by quantum mechanics. It is an interesting open question to determine the exact threshold for trivial communication complexity. Brassard et al.'s idea is based on recursive bias amplification by the three-input majority function. It was not obvious whether another choice of function exhibits stronger bias amplification. We show that the three-input majority function is the unique optimal function, so that one cannot improve the threshold (3 + √6)/6 by Brassard et al.'s bias amplification. In this work, protocols for computing the function used for the bias amplification are restricted to nonadaptive protocols or a particular adaptive protocol inspired by Pawłowski et al.'s protocol for information causality [Nature (London) 461, 1101 (2009), 10.1038/nature08400]. We first show an adaptive protocol inspired by Pawłowski et al.'s protocol, and then show that the adaptive protocol improves upon nonadaptive protocols. Finally, we show that the three-input majority function is the unique optimal function for the bias amplification if we apply the adaptive protocol to each step of the bias amplification.
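The core amplification idea can be illustrated generically: a three-input majority vote pushes any per-bit success probability away from 1/2. The sketch below shows the plain i.i.d. majority recursion, not the nonlocal-box-specific recursion whose fixed-point analysis yields the (3 + √6)/6 threshold:

```python
# If three independent estimates of a bit are each correct with probability p,
# their majority vote is correct with probability m(p) = p^3 + 3 p^2 (1 - p).
def maj3(p):
    return p**3 + 3 * p**2 * (1 - p)

def amplify(p, rounds):
    """Iterate the majority recursion; bias above 1/2 is driven toward 1."""
    for _ in range(rounds):
        p = maj3(p)
    return p
```

The fixed points of this map are 0, 1/2, and 1, so any bias above 1/2 is recursively amplified toward certainty; the paper's question is which Boolean function makes the corresponding nonlocal-box recursion amplify fastest.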
Effects of urbanization on groundwater evolution in an urbanizing watershed
NASA Astrophysics Data System (ADS)
Reyes, D.; Banner, J. L.; Bendik, N.
2011-12-01
The Jollyville Plateau Salamander (Eurycea tonkawae), a candidate species for listing under the Endangered Species Act, is endemic to springs and caves within the Bull Creek Watershed of Austin, Texas. Rapid urbanization endangers known populations of this salamander. Conservation strategies lack information on the extent of groundwater contamination from anthropogenic sources in this karst watershed. Spring water was analyzed for strontium (Sr) isotopes and major ions from sites classified as "urban" or "rural" based on impervious cover estimates. Previous studies have shown that the 87Sr/86Sr value of municipal water is significantly higher than values for natural streamwater, which are similar to those for the Cretaceous limestone bedrock of the region's watersheds. We investigate the application of this relationship to understanding the effects of urbanization on groundwater quality. The use of Sr isotopes as hydrochemical tracers is complemented by major ion concentrations, specifically the dominant ions in natural groundwater (Ca and HCO3) and the ions associated with the addition of wastewater (Na and Cl). To identify high priority salamander-inhabited springs for water quality remediation, we explore the processes controlling the chemical evolution of groundwater such as municipal water inputs, groundwater-soil interactions, and solution/dissolution reactions. 87Sr/86Sr values for water samples from within the watershed range from 0.70760 to 0.70875, the highest values corresponding to sites located in the urbanized areas of the watershed. Analyses of the covariation of Sr isotopes with major ion concentrations help elucidate controls on spring water evolution. Springs located in rural portions of the watershed have low 87Sr/86Sr, high concentrations of Ca and HCO3, and low concentrations of Na and Cl. This is consistent with small inputs of municipal water. 
Three springs located in urban portions of the watershed have high 87Sr/86Sr, low Ca and HCO3, and high Na and Cl. This is consistent with large inputs of municipal water. The other five springs located in urban portions have low 87Sr/86Sr, low concentrations of Ca and HCO3, and high concentrations of Na and Cl. This reflects a process other than an input of municipal water. Groundwater interaction with soils generally results in higher Na concentrations relative to Ca. 87Sr/86Sr values in this scenario may increase or decrease, depending on the Sr isotope variability of the local soils. Alternatively, precipitation of calcite from groundwater would decrease the concentration of Ca without necessarily decreasing 87Sr/86Sr values. The results suggest more anthropogenic water in urban springs than rural springs. These data serve to identify sources of spring recharge, including better constraints on the location(s) of urban leakage.
NASA Astrophysics Data System (ADS)
Pamplona, Fábio Campos; Paes, Eduardo Tavares; Nepomuceno, Aguinaldo
2013-11-01
The temporal and spatial variability of dissolved inorganic nutrients (NO3-, NO2-, NH4+, PO43- and DSi), total nitrogen (TN), total phosphorus (TP), nutrient ratios, suspended particulate matter (SPM) and Chlorophyll-a (Chl-a) were evaluated for the macrotidal estuarine mangrove system of the Quatipuru river (QUATIES), east Amazon coast, North Brazil. Temporal variability was assessed by fortnightly sampling at a fixed station within the middle portion of the estuary, from November 2009 to November 2010. Spatial variability was investigated from two field surveys conducted in November 2009 (dry season) and May 2010 (rainy season), along the salinity gradient of the system. The average DIN (NO3- + NO2- + NH4+) concentration of 9 μM in the dry season was approximately threefold greater than in the rainy season. NH4+ was the main form of DIN in the dry season, while NO3- predominated in the rainy season. The NH4+ concentrations in the water column during the dry season are largely attributed to release by tidal wash-out of the anoxic interstitial waters of the surficial mangrove sediments. On the other hand, the higher NO3- levels during the rainy season suggested that both freshwater inputs and nitrification processes in the water column acted in concert. The river PO43- concentrations (DIP < 1 μM) were low and similar throughout the year. DIN was thus responsible for the major temporal and spatial variability of the dissolved DIN:DIP (N:P) molar ratios, and nitrogen corresponded, in general, to the prime limiting nutrient for the sustenance of phytoplankton biomass in the estuary. During the dry season, P-limitation was detected in the upper estuary. PO43- adsorption to SPM was detected during the rainy season and desorption during the dry season along the salinity gradient. In general, the average Chl-a level (14.8 μg L⁻¹) in the rainy season was 2.5 times higher than in the dry season (5.9 μg L⁻¹).
On average, Chl-a levels reached maxima about 14 km from the estuary's mouth, but the position of the maximum Chl-a zone was subject to dynamic displacement influenced by the tidal regime and the seasonality of freshwater input. Our results showed that the potential phytoplankton productivity in QUATIES was subject to temporal and spatial variability between N and P limitation. The mangrove forests also played a relevant role as a nutrient source, as established by the high variability of nutrient behaviour along the estuarine gradient, consequently affecting the productivity in QUATIES.
WATER LEVEL AND OXYGEN DELIVERY/UTILIZATION IN POROUS SALT MARSH SEDIMENTS
Increasing terrestrial nutrient input to coastal waters is a critical water quality issue worldwide, and salt marshes may provide a valuable nutrient buffer, either by direct removal or by smoothing out pulse inputs between sources and sensitive estuarine habitats. A major challen...
Hydrology in a peaty high marsh: hysteretic flow and biogeochemical implications
Terrestrial nutrient input to coastal waters is a critical water quality problem worldwide, and salt marshes may provide a valuable nutrient buffer (either by removal or by smoothing out pulse inputs) between terrestrial sources and sensitive estuarine habitats. One of the major...
Sedimentation in the chaparral: how do you handle unusual events?
Raymond M. Rice
1982-01-01
Abstract - Processes of erosion and sedimentation in steep chaparral drainage basins of southern California are described. The word "hyperschedastic" is coined to describe the sedimentation regime, which is highly variable because of the interaction of marginally stable drainage basins, great variability in storm inputs, and the random occurrence...
NASA Astrophysics Data System (ADS)
Fioretti, Guido
2007-02-01
The production function maps the inputs of a firm or a productive system onto its outputs. This article expounds generalizations of the production function that include state variables, organizational structures, and increasing returns to scale. These extensions are needed in order to explain the regularities of the empirical distributions of certain economic variables.
Guo, Weixing; Langevin, C.D.
2002-01-01
This report documents a computer program (SEAWAT) that simulates variable-density, transient, ground-water flow in three dimensions. The source code for SEAWAT was developed by combining MODFLOW and MT3DMS into a single program that solves the coupled flow and solute-transport equations. The SEAWAT code follows a modular structure, and thus, new capabilities can be added with only minor modifications to the main program. SEAWAT reads and writes standard MODFLOW and MT3DMS data sets, although some extra input may be required for some SEAWAT simulations. This means that many of the existing pre- and post-processors can be used to create input data sets and analyze simulation results. Users familiar with MODFLOW and MT3DMS should have little difficulty applying SEAWAT to problems of variable-density ground-water flow.
Trends in Solidification Grain Size and Morphology for Additive Manufacturing of Ti-6Al-4V
NASA Astrophysics Data System (ADS)
Gockel, Joy; Sheridan, Luke; Narra, Sneha P.; Klingbeil, Nathan W.; Beuth, Jack
2017-12-01
Metal additive manufacturing (AM) is used for both prototyping and production of final parts. Therefore, there is a need to predict and control the microstructural size and morphology. Process mapping is an approach that represents AM process outcomes in terms of input variables. In this work, analytical, numerical, and experimental approaches are combined to provide a holistic view of trends in the solidification grain structure of Ti-6Al-4V across a wide range of AM process input variables. The thermal gradient is shown to vary significantly through the depth of the melt pool, which precludes development of fully equiaxed microstructure throughout the depth of the deposit within any practical range of AM process variables. A strategy for grain size control is demonstrated based on the relationship between melt pool size and grain size across multiple deposit geometries, and additional factors affecting grain size are discussed.
Analysis of health in health centers area in Depok using correspondence analysis and scan statistic
NASA Astrophysics Data System (ADS)
Basir, C.; Widyaningsih, Y.; Lestari, D.
2017-07-01
Hotspots indicate areas with a higher case intensity than others. In the health problems of an area, for example, the number of illness cases in a region can be used as a parameter describing the conditions that determine the area's severity. If this condition is known early, it can be addressed preventively. Many factors affect the severity level of an area. The health factors considered in this study are the numbers of infants with low birth weight, malnourished children under five years old, deaths of children under five years old, maternal deaths, births without the help of health personnel, infants without infant health care, and infants without basic immunization. The number of cases is based on each public health center area in Depok. Correspondence analysis provides graphical information about the relationship between two nominal variables: it creates a plot based on row and column scores, placing strongly related categories at close distances. The scan statistic method is used to detect hotspots based on selected variables occurring in the study area, and correspondence analysis is used to picture the association between the regions and the variables. Using the SaTScan software, the Sukatani health center is identified as a point hotspot, and correspondence analysis shows that the health centers and the seven variables have a highly significant relationship, with the majority of health centers lying close to all variables except Cipayung, which is distantly related to the number of maternal deaths. These results can be used as input for government agencies to improve the health level in the area.
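Correspondence analysis as used above can be sketched via an SVD of the standardized residuals of a contingency table. The table values below (health centers × indicators) are invented for illustration, not the Depok data:

```python
import numpy as np

# Toy contingency table: rows = health centers, columns = health indicators.
N = np.array([[20.0, 5.0, 3.0],
              [4.0, 18.0, 6.0],
              [2.0, 7.0, 15.0]])

P = N / N.sum()                                      # correspondence matrix
r = P.sum(axis=1)                                    # row masses
c = P.sum(axis=0)                                    # column masses
S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))   # standardized residuals
U, s, Vt = np.linalg.svd(S, full_matrices=False)

row_scores = (U * s) / np.sqrt(r)[:, None]           # principal row coordinates
col_scores = (Vt.T * s) / np.sqrt(c)[:, None]        # principal column coordinates
inertia = float((s ** 2).sum())                      # total inertia = chi2 / n
```

Plotting the first two columns of `row_scores` and `col_scores` on the same axes gives the usual biplot in which strongly associated row and column categories appear close together.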
NASA Astrophysics Data System (ADS)
Bauer, Johannes; Dávila-Chacón, Jorge; Wermter, Stefan
2015-10-01
Humans and other animals have been shown to perform near-optimally in multi-sensory integration tasks. Probabilistic population codes (PPCs) have been proposed as a mechanism by which optimal integration can be accomplished. Previous approaches have focussed on how neural networks might produce PPCs from sensory input or perform calculations using them, like combining multiple PPCs. Less attention has been given to the question of how the necessary organisation of neurons can arise and how the required knowledge about the input statistics can be learned. In this paper, we propose a model of learning multi-sensory integration based on an unsupervised learning algorithm in which an artificial neural network learns the noise characteristics of each of its sources of input. Our algorithm borrows from the self-organising map the ability to learn latent-variable models of the input and extends it to learning to produce a PPC approximating a probability density function over the latent variable behind its (noisy) input. The neurons in our network are only required to perform simple calculations and we make few assumptions about input noise properties and tuning functions. We report on a neurorobotic experiment in which we apply our algorithm to multi-sensory integration in a humanoid robot to demonstrate its effectiveness and compare it to human multi-sensory integration on the behavioural level. We also show in simulations that our algorithm performs near-optimally under certain plausible conditions, and that it reproduces important aspects of natural multi-sensory integration on the neural level.
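The paper's algorithm extends the self-organising map to learn per-unit noise statistics and emit a probabilistic population code; the sketch below shows only the underlying SOM step, in which a 1-D array of units learns a latent-variable model of its input distribution. All parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def train_som(data, n_units=20, epochs=40, lr0=0.5, sigma0=3.0):
    """Minimal 1-D self-organising map: each sample pulls the best-matching
    unit and its neighbours toward it, with a Gaussian neighbourhood that
    shrinks over training, so the unit array comes to tile the input
    distribution (a latent-variable model of the input)."""
    w = rng.uniform(data.min(), data.max(), n_units)
    idx = np.arange(n_units)
    for t in range(epochs):
        lr = lr0 * (1.0 - t / epochs)
        sigma = max(sigma0 * (1.0 - t / epochs), 0.5)
        for x in rng.permutation(data):
            bmu = np.argmin(np.abs(w - x))                  # best-matching unit
            h = np.exp(-(idx - bmu) ** 2 / (2 * sigma**2))  # neighbourhood
            w += lr * h * (x - w)
    return w

samples = rng.normal(0.0, 1.0, 400)
weights = train_som(samples)
```

The paper's contribution, not reproduced here, is to additionally learn each input channel's noise characteristics so that the population activity approximates a posterior density over the latent variable.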
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ide, Toshiki; Hofmann, Holger F.; JST-CREST, Graduate School of Advanced Sciences of Matter, Hiroshima University, Kagamiyama 1-3-1, Higashi Hiroshima 739-8530
The information encoded in the polarization of a single photon can be transferred to a remote location by two-channel continuous-variable quantum teleportation. However, the finite entanglement used in the teleportation causes random changes in photon number. If more than one photon appears in the output, the continuous-variable teleportation accidentally produces clones of the original input photon. In this paper, we derive the polarization statistics of the N-photon output components and show that they can be decomposed into an optimal cloning term and completely unpolarized noise. We find that the accidental cloning of the input photon is nearly optimal at experimentally feasible squeezing levels, indicating that the loss of polarization information is partially compensated by the availability of clones.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Haixia; Zhang, Jing
We propose a scheme for continuous-variable quantum cloning of coherent states with phase-conjugate input modes using linear optics. The quantum cloning machine yields M identical optimal clones from N replicas of a coherent state and N replicas of its phase conjugate. This scheme can be straightforwardly implemented with the setups accessible at present since its optical implementation only employs simple linear optical elements and homodyne detection. Compared with the original scheme for continuous-variable quantum cloning with phase-conjugate input modes proposed by Cerf and Iblisdir [Phys. Rev. Lett. 87, 247903 (2001)], which utilized a nondegenerate optical parametric amplifier, our scheme loses the output of phase-conjugate clones and is regarded as irreversible quantum cloning.
Computer program for design analysis of radial-inflow turbines
NASA Technical Reports Server (NTRS)
Glassman, A. J.
1976-01-01
A computer program written in FORTRAN that may be used for the design analysis of radial-inflow turbines was documented. The following information is included: loss model (estimation of losses), the analysis equations, a description of the input and output data, the FORTRAN program listing and list of variables, and sample cases. The input design requirements include the power, mass flow rate, inlet temperature and pressure, and rotational speed. The program output data includes various diameters, efficiencies, temperatures, pressures, velocities, and flow angles for the appropriate calculation stations. The design variables include the stator-exit angle, rotor radius ratios, and rotor-exit tangential velocity distribution. The losses are determined by an internal loss model.
Gustafson, William Jr; Vogelmann, Andrew; Endo, Satoshi; Toto, Tami; Xiao, Heng; Li, Zhijin; Cheng, Xiaoping; Kim, Jinwon; Krishna, Bhargavi
2015-08-31
The Alpha 2 release is the second release from the LASSO Pilot Phase and builds upon the Alpha 1 release. Alpha 2 contains additional diagnostics in the data bundles and focuses on cases from spring-summer 2016. A data bundle is a unified package consisting of LASSO LES input and output, observations, evaluation diagnostics, and model skill scores. LES inputs include model configuration information and forcing data. LES output includes profile statistics and full domain fields of cloud and environmental variables. Model evaluation data consist of LES output and ARM observations co-registered on the same grid and sampling frequency. Model performance is quantified by skill scores and diagnostics in terms of cloud and environmental variables.
Optimization of a GO2/GH2 Swirl Coaxial Injector Element
NASA Technical Reports Server (NTRS)
Tucker, P. Kevin; Shyy, Wei; Vaidyanathan, Rajkumar
1999-01-01
An injector optimization methodology, method i, is used to investigate optimal design points for a gaseous oxygen/gaseous hydrogen (GO2/GH2) swirl coaxial injector element. The element is optimized in terms of design variables such as fuel pressure drop, DELTA P(sub f), oxidizer pressure drop, DELTA P(sub o), combustor length, L(sub comb), and full cone swirl angle, theta, for a given mixture ratio and chamber pressure. Dependent variables such as energy release efficiency, ERE, wall heat flux, Q(sub w), injector heat flux, Q(sub inj), relative combustor weight, W(sub rel), and relative injector cost, C(sub rel), are calculated and then correlated with the design variables. An empirical design methodology is used to generate these responses for 180 combinations of input variables. Method i is then used to generate response surfaces for each dependent variable. Desirability functions based on dependent variable constraints are created and used to facilitate development of composite response surfaces representing some, or all, of the five dependent variables in terms of the input variables. Two examples illustrating the utility and flexibility of method i are discussed in detail. First, joint response surfaces are constructed by sequentially adding dependent variables. Optimum designs are identified after addition of each variable, and the effect each variable has on the design is shown. This stepwise demonstration also highlights the importance of including variables such as weight and cost early in the design process. Second, using the composite response surface that includes all five dependent variables, unequal weights are assigned to emphasize certain variables relative to others. Here, method i is used to enable objective trade studies on design issues such as component life and thrust-to-weight ratio.
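A hedged sketch of the response-surface-plus-desirability workflow described above. The quadratic surfaces, the two stand-in responses (an "efficiency" to maximize and a "heat flux" to minimize), and all bounds are invented for illustration; they are not the paper's correlations or method i itself:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-ins: two coded design inputs and two responses
# sampled at 180 design points, mimicking the paper's setup size.
X = rng.uniform(-1, 1, size=(180, 2))
eff = 0.9 - 0.1 * (X[:, 0] - 0.3) ** 2 - 0.05 * X[:, 1] ** 2 \
      + 0.01 * rng.normal(size=180)
qw = 1.0 + 0.5 * X[:, 0] + 0.3 * X[:, 1] ** 2 + 0.02 * rng.normal(size=180)

def quad_features(X):
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones(len(X)), x1, x2, x1 * x2, x1**2, x2**2])

def fit_rs(X, y):
    """Least-squares quadratic response surface."""
    beta, *_ = np.linalg.lstsq(quad_features(X), y, rcond=None)
    return lambda Xq: quad_features(Xq) @ beta

f_eff, f_qw = fit_rs(X, eff), fit_rs(X, qw)

def desirability(y, lo, hi, maximize=True):
    """Linear Derringer-Suich desirability in [0, 1]."""
    d = (y - lo) / (hi - lo) if maximize else (hi - y) / (hi - lo)
    return np.clip(d, 0.0, 1.0)

# Composite desirability (geometric mean) on a design grid.
g = np.linspace(-1, 1, 41)
grid = np.array([(a, b) for a in g for b in g])
D = np.sqrt(desirability(f_eff(grid), 0.7, 0.95) *
            desirability(f_qw(grid), 0.8, 1.8, maximize=False))
best = grid[np.argmax(D)]
```

The geometric mean is the usual composite: any response with zero desirability vetoes the design point, which is how hard constraints enter the joint surface.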
Environmental efficiency of alternative dairy systems: a productive efficiency approach.
Toma, L; March, M; Stott, A W; Roberts, D J
2013-01-01
Agriculture across the globe needs to produce "more with less." Productivity should be increased in a sustainable manner so that the environment is not further degraded, management practices are both socially acceptable and economically favorable, and future generations are not disadvantaged. The objective of this paper was to compare the environmental efficiency of 2 divergent strains of Holstein-Friesian cows across 2 contrasting dairy management systems (grazing and nongrazing) over multiple years and so expose any genetic × environment (G × E) interaction. The models were an extension of the traditional efficiency analysis to account for undesirable outputs (pollutants), and estimate efficiency measures that allow for the asymmetric treatment of desirable outputs (i.e., milk production) and undesirable outputs. Two types of models were estimated, one considering production inputs (land, nitrogen fertilizers, feed, and cows) and the other not, thus allowing the assessment of the effect of inputs by comparing efficiency values and rankings between models. Each model type had 2 versions, one including 2 types of pollutants (greenhouse gas emissions, nitrogen surplus) and the other 3 (greenhouse gas emissions, nitrogen surplus, and phosphorus surplus). Significant differences were found between efficiency scores among the systems. Results indicated no G × E interaction; however, even though the select genetic merit herd consuming a diet with a higher proportion of concentrated feeds was most efficient in the majority of models, cows of the same genetic merit on higher forage diets could be just as efficient. Efficiency scores for the low forage groups were less variable from year to year, which reflected the uniformity of purchased concentrate feeds. 
The results also indicate that inputs play an important role in the measurement of environmental efficiency of dairy systems and that animal health variables (incidence of udder health disorders and body condition score) have a significant effect on the environmental efficiency of each dairy system. We conclude that traditional narrow measures of performance may not always distinguish dairy farming systems best fitted to future requirements. Copyright © 2013 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
Shoe inserts and orthotics for sport and physical activities.
Nigg, B M; Nurse, M A; Stefanyshyn, D J
1999-07-01
The purposes of this paper were to discuss the perceived benefits of inserts and orthotics for sport activities and to propose a new concept for inserts and orthotics. There is evidence that inserts or orthotics reduce or prevent movement-related injuries. However, there is limited knowledge about the specific functioning an orthotic or insert provides. The same orthotic or insert is often proposed for different problems. Changes in skeletal movement due to inserts or orthotics seem to be small and not systematic. Based on the results of a study using bone pins, one may question the idea that a major function of orthotics or inserts consists in aligning the skeleton. Impact cushioning with shoe inserts or orthotics is typically below 10%. Such small reductions might not be important for injury reduction. It has been suggested that changes in material properties might produce adjustments in the muscular response of the locomotor system. The foot has various sensors to detect input signals with subject specific thresholds. Subjects with similar sensitivity threshold levels seem to respond in their movement pattern in a similar way. Comfort is an important variable. From a biomechanical point of view, comfort may be related to fit, additional stabilizing muscle work, fatigue, and damping of soft tissue vibrations. Based on the presented evidence, the concept of minimizing muscle work is proposed when using orthotics or inserts. A force signal acts as an input variable on the shoe. The shoe sole acts as a first filter, the insert or orthotic as a second filter, the plantar surface of the foot as a third filter for the force input signal. The filtered information is transferred to the central nervous system that provides a subject specific dynamic response. The subject performs the movement for the task at hand. For a given movement task, the skeleton has a preferred path. 
If an intervention supports/counteracts the preferred movement path, muscle activity can/must be reduced/increased. Based on this concept, an optimal insert or orthotic would reduce muscle activity, feel comfortable, and should increase performance.
Salinity and hydrology of closed lakes
Langbein, Walter Basil
1961-01-01
Lakes without outlets, called closed lakes, are exclusively features of the arid and semiarid zones where annual evaporation exceeds rainfall. The number of closed lakes increases with aridity, so there are relatively few perennial closed lakes, but "dry" lakes that rarely contain water are numerous. Closed lakes fluctuate in level to a much greater degree than the open lakes of the humid zone, because variations in inflow can be compensated only by changes in surface area. Since the variability of inflow increases with aridity, it is possible to derive an approximate relationship for the coefficient of variation of lake area in terms of data on rates of evaporation, lake area, lake depth, and drainage area. The salinity of closed lakes is highly variable, ranging from less than 1 percent to over 25 percent by weight of salts. Some evidence suggests that the tonnage of salts in a lake solution is substantially less than the total input of salts into the lake over the period of existence of the closed lake. This evidence suggests further that the salts in a lake solution represent a kind of long-term balance between factors of gain and loss of salts from the solution. Possible mechanisms for the loss of salts dissolved in the lake include deposition in marginal bays, entrapment in sediments, and removal by wind. Transport of salt from the lake surface in wind spray is also a contributing, but seemingly not major, factor. The hypothesis of a long-term balance between input to and losses from the lake solution is checked by deriving a formula for the equilibrium concentration and comparing the results with the salinity data.
The results indicate that the reported salinities seemingly can be explained in terms of their geometric properties and hydrologic environment. The time for accumulation of salts in the lake solution (the ratio between the mass of salts in the solution and the annual input) may also be estimated from the geometric and hydrologic factors, in the absence of data on the salt content of the lake or of the inflow.
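The long-term balance argument can be sketched numerically. Treating all loss mechanisms (marginal-bay deposition, entrapment in sediments, wind removal) as a single first-order sink is an assumption of this sketch, not Langbein's exact formula, and the rate constants are hypothetical:

```python
def equilibrium_salt_mass(annual_input, loss_rate):
    """Steady state of dM/dt = S - k*M, with S the annual salt input and k
    the first-order loss rate (fraction of dissolved salt lost per year):
    M_eq = S / k.  The accumulation time M_eq / S is then 1 / k."""
    return annual_input / loss_rate

def accumulate(S, k, years, M0=0.0):
    """Yearly Euler steps of dM/dt = S - k*M, showing the approach of the
    dissolved salt mass toward its equilibrium value."""
    M = M0
    for _ in range(years):
        M += S - k * M
    return M
```

With, say, S = 1000 t/yr and k = 0.001/yr, the equilibrium mass is 10^6 t and the accumulation time is 1000 years; after ten e-folding times the simulated mass is within a fraction of a percent of equilibrium.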
Mix or un-mix? Trace element segregation from a heterogeneous mantle, simulated.
NASA Astrophysics Data System (ADS)
Katz, R. F.; Keller, T.; Warren, J. M.; Manley, G.
2016-12-01
Incompatible trace-element concentrations vary in mid-ocean ridge lavas and melt inclusions by an order of magnitude or more, even in samples from the same location. This variability has been attributed to channelised melt flow [Spiegelman & Kelemen, 2003], which brings enriched, low-degree melts to the surface in relative isolation from depleted inter-channel melts. We re-examine this hypothesis using a new melting-column model that incorporates mantle volatiles [Keller & Katz 2016]. Volatiles cause a deeper onset of channelisation: their corrosivity is maximum at the base of the silicate melting regime. We consider how source heterogeneity and melt transport shape trace-element concentrations in basaltic lavas. We use both equilibrium and non-equilibrium formulations [Spiegelman 1996]. In particular, we evaluate the effect of melt transport on probability distributions of trace element concentration, comparing the inflow distribution in the mantle with the outflow distribution in the magma. Which features of melt transport preserve, erase or overprint input correlations between elements? To address this we consider various hypotheses about mantle heterogeneity, allowing for spatial structure in major components, volatiles and trace elements. Of interest are the roles of wavelength, amplitude, and correlation of heterogeneity fields. To investigate how different modes of melt transport affect input distributions, we compare melting models that produce either shallow or deep channelisation, or none at all.
References:
Keller & Katz (2016). The Role of Volatiles in Reactive Melt Transport in the Asthenosphere. Journal of Petrology, http://doi.org/10.1093/petrology/egw030.
Spiegelman (1996). Geochemical consequences of melt transport in 2-D: The sensitivity of trace elements to mantle dynamics. Earth and Planetary Science Letters, 139, 115-132.
Spiegelman & Kelemen (2003). Extreme chemical variability as a consequence of channelized melt transport. Geochemistry Geophysics Geosystems, http://doi.org/10.1029/2002GC000336.
NASA Technical Reports Server (NTRS)
Salvucci, Guido D.
2000-01-01
The overall goal of this research is to examine the feasibility of applying a newly developed diagnostic model of soil water evaporation to large land areas using remotely sensed input parameters. The model estimates the rate of soil evaporation during periods when it is limited by the net transport resulting from competing effects of capillary rise and drainage. The critical soil hydraulic properties are implicitly estimated via the intensity and duration of the first stage (energy limited) evaporation, removing a major obstacle in the remote estimation of evaporation over large areas. This duration, or 'time to drying' (t(sub d)) is revealed through three signatures detectable in time series of remote sensing variables. The first is a break in soil albedo that occurs as a small vapor transmission zone develops near the surface. The second is a break in either surface to air temperature differences or in the diurnal surface temperature range, both of which indicate increased sensible heat flux (and/or storage) required to balance the decrease in latent heat flux. The third is a break in the temporal pattern of near surface soil moisture. Soil moisture tends to decrease rapidly during stage I drying (as water is removed from storage), and then become more or less constant during soil limited, or 'stage II' drying (as water is merely transmitted from deeper soil storage). The research tasks address: (1) improvements in model structure, including extensions to transpiration and aggregation over spatially variable soil and topographic landscape attributes; and (2) applications of the model using remotely sensed input parameters.
Surface Waves as Major Controls on Particle Backscattering in Southern California Coastal Waters
NASA Astrophysics Data System (ADS)
Henderikx Freitas, F.; Fields, E.; Maritorena, S.; Siegel, D.
2016-02-01
Satellite observations of particle loads and optical backscattering coefficients (bbp) in the Southern California Bight (SCB) have been thought to be driven by episodic inputs from storm runoff. Here we show, however, that surface waves have a larger role in controlling remotely sensed bbp values than previously considered. More than 14 years of 2-km resolution SeaWiFS, MODIS and MERIS satellite imagery, spectrally merged with the Garver-Siegel-Maritorena bio-optical model, were used to assess the relative importance of terrestrial runoff and surface wave forcings in determining changes in particle load in the SCB. The space-time distributions of particle backscattering at 443 nm and chlorophyll concentration estimates from the model were analyzed using Empirical Orthogonal Function analysis, and patterns were compared with several environmental variables. While offshore values of bbp are tightly related to chlorophyll concentrations, as expected for productive Case-1 waters, values of bbp in a 10-km band near the coast are primarily modulated by surface waves. The relationship with waves holds throughout all seasons and is most apparent around the 40-m isobath, but extends offshore to about 100 m depth. Riverine inputs are associated with elevated bbp near the coast mostly during the larger El Niño events of 1997/1998 and 2005. These findings are consistent with bio-optical glider and field observations from the Santa Barbara Channel taken as part of the Santa Barbara Coastal Long-Term Ecological Research and Plumes and Blooms programs. The implication of surface waves determining bbp variability beyond the surf zone has large consequences for the life cycle of many marine organisms, as well as for the interpretation of remote sensing signals near the coast.
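Empirical Orthogonal Function analysis, as used above, is a principal-component decomposition of a space-time field. A minimal sketch on a synthetic field (not the SeaWiFS/MODIS/MERIS data) with one dominant coherent pattern:

```python
import numpy as np

rng = np.random.default_rng(4)

def eof_analysis(field, n_modes=3):
    """EOF analysis of a (time, space) array: remove the time mean at each
    location, then take the SVD.  Returns spatial patterns (EOFs),
    principal-component time series, and explained-variance fractions."""
    anom = field - field.mean(axis=0)
    U, s, Vt = np.linalg.svd(anom, full_matrices=False)
    var_frac = s**2 / np.sum(s**2)
    return Vt[:n_modes], U[:, :n_modes] * s[:n_modes], var_frac[:n_modes]

# Synthetic field: a single spatial pattern modulated in time, plus noise.
t_series = rng.normal(size=120)
pattern = np.sin(np.linspace(0, np.pi, 50))
field = np.outer(t_series, pattern) + 0.1 * rng.normal(size=(120, 50))
eofs, pcs, var = eof_analysis(field)
```

The leading PC time series is what gets correlated against environmental variables such as wave height or river discharge; note that EOF signs are arbitrary, so correlations should be compared in absolute value.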
NASA Technical Reports Server (NTRS)
Salvucci, Guido D.
1997-01-01
The overall goal of this research is to examine the feasibility of applying a newly developed diagnostic model of soil water evaporation to large land areas using remotely sensed input parameters. The model estimates the rate of soil evaporation during periods when it is limited by the net transport resulting from competing effects of capillary rise and drainage. The critical soil hydraulic properties are implicitly estimated via the intensity and duration of the first stage (energy limited) evaporation, removing a major obstacle in the remote estimation of evaporation over large areas. This duration, or "time to drying" (t(sub d)), is revealed through three signatures detectable in time series of remote sensing variables. The first is a break in soil albedo that occurs as a small vapor transmission zone develops near the surface. The second is a break in either surface to air temperature differences or in the diurnal surface temperature range, both of which indicate increased sensible heat flux (and/or storage) required to balance the decrease in latent heat flux. The third is a break in the temporal pattern of near surface soil moisture. Soil moisture tends to decrease rapidly during stage 1 drying (as water is removed from storage), and then become more or less constant during soil limited, or "stage 2" drying (as water is merely transmitted from deeper soil storage). The research tasks address: (1) improvements in model structure, including extensions to transpiration and aggregation over spatially variable soil and topographic landscape attributes; and (2) applications of the model using remotely sensed input parameters.
Approach for Uncertainty Propagation and Robust Design in CFD Using Sensitivity Derivatives
NASA Technical Reports Server (NTRS)
Putko, Michele M.; Newman, Perry A.; Taylor, Arthur C., III; Green, Lawrence L.
2001-01-01
This paper presents an implementation of the approximate statistical moment method for uncertainty propagation and robust optimization for a quasi 1-D Euler CFD (computational fluid dynamics) code. Given uncertainties in statistically independent, random, normally distributed input variables, a first- and second-order statistical moment matching procedure is performed to approximate the uncertainty in the CFD output. Efficient calculation of both first- and second-order sensitivity derivatives is required. In order to assess the validity of the approximations, the moments are compared with statistical moments generated through Monte Carlo simulations. The uncertainties in the CFD input variables are also incorporated into a robust optimization procedure. For this optimization, statistical moments involving first-order sensitivity derivatives appear in the objective function and system constraints. Second-order sensitivity derivatives are used in a gradient-based search to successfully execute a robust optimization. The approximate methods used throughout the analyses are found to be valid when considering robustness about input parameter mean values.
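The first-order moment-matching idea can be sketched on a toy function. The quasi-1-D Euler code itself is not reproduced; a smooth two-input function stands in for the CFD output, and the Monte Carlo run plays the role of the paper's validity check:

```python
import numpy as np

def propagate_first_order(f, grad_f, mu, sigma):
    """Approximate moment matching for independent normal inputs:
    mean(f) ~ f(mu),  var(f) ~ sum_i (df/dx_i)^2 * sigma_i^2,
    using the sensitivity derivatives evaluated at the input means."""
    g = grad_f(mu)
    return f(mu), float(np.sum((g * sigma) ** 2))

# Toy stand-in for the CFD output: a smooth nonlinear function.
f = lambda x: x[0] ** 2 + np.sin(x[1])
grad_f = lambda x: np.array([2.0 * x[0], np.cos(x[1])])

mu = np.array([1.0, 0.5])
sigma = np.array([0.05, 0.05])
mean_approx, var_approx = propagate_first_order(f, grad_f, mu, sigma)

# Monte Carlo cross-check of the approximate moments.
rng = np.random.default_rng(2)
x_mc = rng.normal(mu, sigma, size=(200_000, 2)).T
y_mc = f(x_mc)
```

As in the paper, the approximation is good when the input standard deviations are small relative to the curvature of f about the mean; second-order matching adds Hessian terms to correct the mean and variance.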
NASA Technical Reports Server (NTRS)
He, Yuning
2015-01-01
Safety of unmanned aerial systems (UAS) is paramount, but the large number of dynamically changing controller parameters makes it hard to determine if the system is currently stable, and the time before loss of control if not. We propose a hierarchical statistical model using Treed Gaussian Processes to predict (i) whether a flight will be stable (success) or become unstable (failure), (ii) the time-to-failure if unstable, and (iii) time series outputs for flight variables. We first classify the current flight input into success or failure types, and then use separate models for each class to predict the time-to-failure and time series outputs. As different inputs may cause failures at different times, we have to model variable length output curves. We use a basis representation for curves and learn the mappings from input to basis coefficients. We demonstrate the effectiveness of our prediction methods on a NASA neuro-adaptive flight control system.
Artificial neural network modeling of dissolved oxygen in the Heihe River, Northwestern China.
Wen, Xiaohu; Fang, Jing; Diao, Meina; Zhang, Chuanqi
2013-05-01
Identification and quantification of dissolved oxygen (DO) profiles of a river is one of the primary concerns for water resources managers. In this research, an artificial neural network (ANN) was developed to simulate DO concentrations in the Heihe River, Northwestern China. A three-layer back-propagation ANN was used with the Bayesian regularization training algorithm. The input variables of the neural network were pH, electrical conductivity, chloride (Cl(-)), calcium (Ca(2+)), total alkalinity, total hardness, nitrate nitrogen (NO3-N), and ammoniacal nitrogen (NH4-N). An ANN structure with 14 hidden neurons proved the best selection. Comparison of the ANN results with the measured data on the basis of the correlation coefficient (r) and root mean square error (RMSE) showed a good fit to the DO values, indicating the effectiveness of the neural network model. The r values for the training, validation, and test sets were 0.9654, 0.9841, and 0.9680, and the corresponding RMSE values were 0.4272, 0.3667, and 0.4570, respectively. Sensitivity analysis was used to determine the influence of the input variables on the dependent variable. The most effective inputs were pH, NO3-N, NH4-N, and Ca(2+), while Cl(-) was the least effective variable in the proposed model. The identified ANN model can be used to simulate water quality parameters.
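A hedged sketch of a three-layer back-propagation network of the kind described. The data are synthetic stand-ins for the eight water-quality predictors (the Heihe River measurements are not reproduced), and plain L2 weight decay stands in for Bayesian regularization, which adapts the decay strength during training:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic stand-in for the water-quality data: 8 predictors, 1 target.
X = rng.normal(size=(300, 8))
w_true = np.array([1.5, 0.2, 0.1, 0.8, 0.1, 0.1, 1.0, 0.9])
y = np.tanh(X @ w_true) + 0.05 * rng.normal(size=300)

# Three-layer back-propagation network with 14 hidden neurons.
n_h, lr, lam = 14, 0.1, 1e-4
W1 = rng.normal(size=(8, n_h)) * 0.5
b1 = np.zeros(n_h)
W2 = rng.normal(size=n_h) * 0.1
b2 = 0.0

def forward(X):
    H = np.tanh(X @ W1 + b1)
    return H, H @ W2 + b2

_, pred = forward(X)
loss0 = np.mean((pred - y) ** 2)
for _ in range(3000):
    H, pred = forward(X)
    err = pred - y
    dW2 = H.T @ err / len(y) + lam * W2          # output-layer gradient
    db2 = err.mean()
    dH = np.outer(err, W2) * (1.0 - H ** 2)      # back-propagated error
    dW1 = X.T @ dH / len(y) + lam * W1
    db1 = dH.mean(axis=0)
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

_, pred = forward(X)
r = np.corrcoef(pred, y)[0, 1]
loss1 = np.mean((pred - y) ** 2)
```

A held-out validation and test split, as used in the paper, would be needed before quoting r values comparable to those reported.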
Unsteady Aerodynamic Testing Using the Dynamic Plunge Pitch and Roll Model Mount
NASA Technical Reports Server (NTRS)
Lutze, Frederick H.; Fan, Yigang
1999-01-01
A final report on the DyPPiR tests that were run is presented. Essentially it consists of two parts: a description of the data reduction techniques and the results. Three data reduction methods were considered: 1) signal processing of wind-on minus wind-off data; 2) using wind-on data in conjunction with accelerometer measurements; and 3) using a dynamic model of the sting to predict the sting oscillations and determining the aerodynamic inputs through an optimization process. After trying all three, we ended up using method 1, mainly because of its simplicity and our confidence in its accuracy. The results section consists of time history plots of the input variables (angle of attack, roll angle, and/or plunge position) and the corresponding time histories of the output variables C(sub L), C(sub D), C(sub Y), C(sub l), C(sub m), C(sub n). Also included are phase plots of one or more of the output variables vs. an input variable. Typically of interest is the pitching moment coefficient vs. angle of attack for an oscillatory motion, where the hysteresis loops can be observed. These plots are useful for identifying the "more interesting" cases. Samples of the data as they appear on the disk are presented at the end of the report. The last maneuver, a rolling pull-up, is indicative of the unique capabilities of the DyPPiR, allowing combinations of motions to be exercised at the same time.
QKD Via a Quantum Wavelength Router Using Spatial Soliton
NASA Astrophysics Data System (ADS)
Kouhnavard, M.; Amiri, I. S.; Afroozeh, A.; Jalil, M. A.; Ali, J.; Yupapin, P. P.
2011-05-01
A system for continuous-variable quantum key distribution (QKD) via a wavelength router is proposed. The Kerr nonlinearity of light in a nonlinear microring resonator (NMRR) induces chaotic behavior. In the proposed system, chaotic signals are generated by an optical soliton or Gaussian pulse within the NMRR system. Parameters such as the input power, MRR radii, and coupling coefficients can be varied and play an important role in determining the results, in which continuous signals are generated spreading over the spectrum. Large-bandwidth optical soliton signals are generated by the input pulse propagating within the MRRs, which allows the formation of continuous wavelengths or frequencies with large tunable channel capacity. The continuous-variable QKD is formed by using localized spatial soliton pulses via a quantum router and networks. A selected optical spatial pulse can be used to implement a secure communication network. Here the entangled photons generated by the chaotic signals are analyzed. Continuous entangled photons are generated by a polarization control unit incorporated into the MRRs, as required to provide continuous-variable QKD. The results show that such a system for simultaneous continuous-variable quantum cryptography could be used in mobile telephone handsets and networks. In this study, frequency bands of 500 MHz and 2.0 GHz and wavelengths of 775 nm, 2,325 nm, and 1.55 μm are obtained for QKD use with input optical soliton and Gaussian beams, respectively.
Sustainable Development in Indian Automotive Component Clusters
NASA Astrophysics Data System (ADS)
Bhaskaran, E.
2013-01-01
India is the world's second fastest growing auto market and boasts the sixth largest automobile industry after China, the US, Germany, Japan and Brazil. The Indian auto component industry recorded its highest year-on-year growth of 34.2 % in 2010-2011, raking in revenue of US $39.9 billion, with major contributions coming from exports at US $5 billion and fresh investment from the US at around US $2 billion. For inclusive growth and sustainable development, most of the auto component manufacturers have adopted the cluster development approach. The objective is to study the technical efficiency (θ), peer weights (λ i), input slacks (S-) and output slacks (S+) of four Auto Component Clusters (ACC) in India. The methodology adopted is Data Envelopment Analysis with the input-oriented Banker-Charnes-Cooper model, taking the number of units and employment as inputs and sales and exports (in crores) as outputs. The non-zero λ i 's represent the weights of the efficient peer clusters. The S > 0 obtained for one ACC reveals an excess number of units (S-) and employment (S-) and a shortfall in sales (S+) and exports (S+). The variable returns to scale are increasing for three clusters and constant for the fourth, with none decreasing. To conclude, for inclusive growth and sustainable development, the inefficient ACC should increase their turnover and exports, as a decrease in the number of enterprises and employment is practically not possible. Moreover, for sustainable development, the ACC should strengthen infrastructure, technology, procurement, production and marketing interrelationships to increase productivity and efficiency to compete in the world market.
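The input-oriented Banker-Charnes-Cooper model can be sketched as a small linear program. This is a generic envelopment-form implementation on invented toy data (three DMUs, one input, one output), not the four ACC clusters' figures, and it assumes SciPy's linprog is available:

```python
import numpy as np
from scipy.optimize import linprog

def bcc_input_efficiency(X, Y, o):
    """Input-oriented Banker-Charnes-Cooper (variable returns to scale)
    DEA score for decision-making unit o.  X is (n_dmus, n_inputs), Y is
    (n_dmus, n_outputs); the decision vector is z = [theta, lambda_1..n].
    Minimize theta subject to:
        sum_j lambda_j x_ij <= theta * x_oi   (inputs)
        sum_j lambda_j y_rj >= y_or           (outputs)
        sum_j lambda_j = 1, lambda >= 0       (VRS convexity)"""
    n, m = X.shape
    s = Y.shape[1]
    c = np.zeros(n + 1)
    c[0] = 1.0                                 # minimize theta
    rows, rhs = [], []
    for i in range(m):
        rows.append(np.concatenate(([-X[o, i]], X[:, i])))
        rhs.append(0.0)
    for r in range(s):
        rows.append(np.concatenate(([0.0], -Y[:, r])))
        rhs.append(-Y[o, r])
    A_eq = np.concatenate(([0.0], np.ones(n))).reshape(1, -1)
    res = linprog(c, A_ub=np.array(rows), b_ub=rhs, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (n + 1))
    return res.x[0]

# Toy data: DMU 1 uses twice the input of DMU 0 for the same output.
X = np.array([[2.0], [4.0], [3.0]])
Y = np.array([[2.0], [2.0], [3.0]])
scores = [bcc_input_efficiency(X, Y, o) for o in range(3)]
```

Efficient units score theta = 1; the dominated unit here scores 0.5, meaning it could radially shrink its inputs by half and still be enveloped by its peers. The optimal lambdas (res.x[1:]) give the peer weights discussed in the abstract.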
Global biogeochemical implications of mercury discharges from rivers and sediment burial.
Amos, Helen M; Jacob, Daniel J; Kocman, David; Horowitz, Hannah M; Zhang, Yanxu; Dutkiewicz, Stephanie; Horvat, Milena; Corbitt, Elizabeth S; Krabbenhoft, David P; Sunderland, Elsie M
2014-08-19
Rivers are an important source of mercury (Hg) to marine ecosystems. Based on an analysis of compiled observations, we estimate global present-day Hg discharges from rivers to ocean margins are 27 ± 13 Mmol a(-1) (5500 ± 2700 Mg a(-1)), of which 28% reaches the open ocean and the rest is deposited to ocean margin sediments. Globally, the source of Hg to the open ocean from rivers amounts to 30% of atmospheric inputs. This is larger than previously estimated due to accounting for elevated concentrations in Asian rivers and variability in offshore transport across different types of estuaries. Riverine inputs of Hg to the North Atlantic have decreased several-fold since the 1970s while inputs to the North Pacific have increased. These trends have large effects on Hg concentrations at ocean margins but are too small in the open ocean to explain observed declines of seawater concentrations in the North Atlantic or increases in the North Pacific. Burial of Hg in ocean margin sediments represents a major sink in the global Hg biogeochemical cycle that has not been previously considered. We find that including this sink in a fully coupled global biogeochemical box model helps to balance the large anthropogenic release of Hg from commercial products recently added to global inventories. It also implies that legacy anthropogenic Hg can be removed from active environmental cycling on a faster time scale (centuries instead of millennia). Natural environmental Hg levels are lower than previously estimated, implying a relatively larger impact from human activity.
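The faster removal timescale claimed above follows from adding a sink to a first-order box model. A minimal sketch with hypothetical rate constants (not the paper's coupled multi-box model or its inventories):

```python
def legacy_decay(M0, k_existing, k_burial, years):
    """Yearly Euler steps of dM/dt = -(k_existing + k_burial) * M for a
    legacy-mercury reservoir.  Adding the ocean-margin burial sink
    shortens the e-folding removal time from 1/k_existing to
    1/(k_existing + k_burial)."""
    M, k = M0, k_existing + k_burial
    for _ in range(years):
        M -= k * M
    return M

# Hypothetical rates: without burial the timescale is 1/0.001 = 1000 yr
# (millennia); with burial it drops to 1/0.005 = 200 yr (centuries).
without_burial = legacy_decay(1.0, 0.001, 0.0, 500)
with_burial = legacy_decay(1.0, 0.001, 0.004, 500)
```

After one e-folding time (200 years with burial), the normalized reservoir mass is close to 1/e, illustrating why including the margin sediment sink moves the recovery of legacy Hg from millennial to centennial scales.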
Simpson, James J.; Hufford, Gary L.; Daly, Christopher; Berg, Jared S.; Fleming, Michael D.
2005-01-01
Maps of mean monthly surface temperature and precipitation for Alaska and adjacent areas of Canada, produced by Oregon State University's Spatial Climate Analysis Service (SCAS) and the Alaska Geospatial Data Clearinghouse (AGDC), were analyzed. Because both sets of maps are generally available and in use by the community, there is a need to document differences between the processes and input data sets used by the two groups to produce their respective set of maps and to identify similarities and differences between the two sets of maps and possible reasons for the differences. These differences do not affect the observed large-scale patterns of seasonal and annual variability. Alaska is divided into interior and coastal zones, with consistent but different variability, separated by a transition region. The transition region has high interannual variability but low long-term mean variability. Both data sets support the four major ecosystems and ecosystem transition zone identified in our earlier work. Differences between the two sets of maps do occur, however, on the regional scale; they reflect differences in physiographic domains and in the treatment of these domains by the two groups (AGDC, SCAS). These differences also provide guidance for an improved observational network for Alaska. On the basis of validation with independent in situ data, we conclude that the data set produced by SCAS provides the best spatial coverage of Alaskan long-term mean monthly surface temperature and precipitation currently available. © The Arctic Institute of North America.
The impact of 14nm photomask variability and uncertainty on computational lithography solutions
NASA Astrophysics Data System (ADS)
Sturtevant, John; Tejnil, Edita; Buck, Peter D.; Schulze, Steffen; Kalk, Franklin; Nakagawa, Kent; Ning, Guoxiang; Ackmann, Paul; Gans, Fritz; Buergel, Christian
2013-09-01
Computational lithography solutions rely upon accurate process models to faithfully represent the imaging system output for a defined set of process and design inputs. These models rely upon the accurate representation of multiple parameters associated with the scanner and the photomask. Many input variables for simulation are based upon designed or recipe-requested values or independent measurements. It is known, however, that certain measurement methodologies, while precise, can have significant inaccuracies. Additionally, there are known errors associated with the representation of certain system parameters. With shrinking total CD control budgets, appropriate accounting for all sources of error becomes more important, and the cumulative consequence of input errors to the computational lithography model can become significant. In this work, we examine, via simulation, the impact of errors in the representation of photomask properties including CD bias, corner rounding, refractive index, thickness, and sidewall angle. The factors that are most critical to represent accurately in the model are cataloged. CD bias values are based on state-of-the-art mask manufacturing data, while changes in the other variables are postulated, highlighting the need for improved metrology and communication between mask and OPC model experts. The simulations ignore the wafer photoresist model and show the sensitivity of predictions to the various model inputs associated with the mask. It is shown that the wafer simulations depend strongly upon the 1D/2D representation of the mask and, for 3D, that the mask sidewall angle is a very sensitive factor influencing simulated wafer CD results.
Monitoring water quality in Northwest Atlantic coastal waters using dinoflagellate cysts
Nutrient pollution is a major environmental problem in many coastal waters around the US. Determining the total input of nutrients to estuaries is a challenge. One method to evaluate nutrient input is through nutrient loading models. Another method relies upon using indicators as...
NASA Astrophysics Data System (ADS)
Damé, Luc; Keckhut, Philippe; Hauchecorne, Alain; Meftah, Mustapha; Bekki, Slimane
2016-07-01
We present the SUITS/SWUSV microsatellite mission investigation: "Solar Ultraviolet Influence on Troposphere/Stratosphere, a Space Weather & Ultraviolet Solar Variability" mission. SUITS/SWUSV was developed to determine the origins of the Sun's activity, understand the flaring process (high energy flare characterization) and the onset of CMEs (forecasting). Another major objective is to determine the dynamics and coupling of Earth's atmosphere and its response to solar variability (in particular UV) and terrestrial inputs. It therefore includes the prediction and detection of major eruptions and coronal mass ejections (Lyman-Alpha and Herzberg continuum imaging), and the solar forcing on the climate through radiation and its interactions with the local stratosphere (UV spectral irradiance measurements from 170 to 400 nm). The mission is proposed on an 18h-6h sun-synchronous polar orbit (for nearly constant observing) and carries a seven-instrument model payload of 65 kg and 65 W comprising: SUAVE (Solar Ultraviolet Advanced Variability Experiment), an optimized telescope for FUV (Lyman-Alpha) and MUV (200-220 nm Herzberg continuum) imaging (sources of variability); SOLSIM (Solar Spectral Irradiance Monitor), a spectrometer with 0.65 nm spectral resolution from 170 to 340 nm; SUPR (Solar Ultraviolet Passband Radiometers), with UV filter radiometers at Lyman-Alpha, Herzberg, MgII index, CN bandhead, and UV bands coverage up to 400 nm; HEBS (High Energy Burst Spectrometers), a large energy coverage (a few tens of keV to a few hundreds of MeV) instrument to characterize large flares; EPT-HET (Electron-Proton Telescope - High Energy Telescope), measuring electrons, protons, and heavy ions over a large energy range; ERBO (Earth Radiative Budget and Ozone), NADIR oriented; and a vector magnetometer. The complete payload has been successfully accommodated on a PROBA-type platform.
Heritage is important both for the instruments (SODISM and PREMOS on PICARD, LYRA on PROBA-2, SOLSPEC on ISS, ...) and the platform (PROBA-2, PROBA-V, ...), leading to high TRL levels (>7). SUITS/SWUSV was initially designed in view of the ESA/CAS AO for a Small Mission; it is now envisaged for a joint CNES/NASA opportunity with European and American partners, for a possible flight in 2021.
NASA Astrophysics Data System (ADS)
Kamynin, V. L.; Bukharova, T. I.
2017-01-01
We prove estimates of stability with respect to perturbations of the input data for solutions of inverse problems for degenerate parabolic equations with unbounded coefficients. An important feature of these estimates is that the constants are written out explicitly in terms of the input data of the problem.
Sensitivity analysis of radionuclides atmospheric dispersion following the Fukushima accident
NASA Astrophysics Data System (ADS)
Girard, Sylvain; Korsakissok, Irène; Mallet, Vivien
2014-05-01
Atmospheric dispersion models are used in response to accidental releases with two purposes: - minimising the population exposure during the accident; - complementing field measurements for the assessment of short and long term environmental and sanitary impacts. The predictions of these models are subject to considerable uncertainties of various origins. Notably, input data, such as meteorological fields or estimations of emitted quantities as a function of time, are highly uncertain. The case studied here is the atmospheric release of radionuclides following the Fukushima Daiichi disaster. The model used in this study is Polyphemus/Polair3D, from which IRSN's operational long-distance atmospheric dispersion model ldX is derived. A sensitivity analysis was conducted in order to estimate the relative importance of a set of identified uncertainty sources. The complexity of this task was increased by four characteristics shared by most environmental models: - high dimensional inputs; - correlated inputs or inputs with complex structures; - high dimensional output; - multiplicity of purposes that require sophisticated and non-systematic post-processing of the output. The sensitivities of a set of outputs were estimated with the Morris screening method. The input ranking was highly dependent on the considered output. Yet, a few variables, such as the horizontal diffusion coefficient or cloud thickness, were found to have a weak influence on most of them and could be discarded from further studies. The sensitivity analysis procedure was also applied to indicators of the model performance computed on a set of gamma dose rate observations. This original approach is of particular interest since observations could be used later to calibrate the input variables' probability distributions. Indeed, only the variables that are influential on performance scores are likely to allow for calibration.
An indicator based on time matching of emission peaks was elaborated to complement classical statistical scores, which were dominated by deposit dose rates and almost insensitive to lower-atmosphere dose rates. The substantial sensitivity of these performance indicators is auspicious for future calibration attempts and indicates that the simple perturbations used here may be sufficient to represent an essential part of the overall uncertainty.
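The Morris screening method used above ranks inputs by their mean absolute elementary effect. The following is a simplified one-at-a-time sketch (not the study's implementation), assuming inputs scaled to the unit hypercube; the test function is invented:

```python
import numpy as np

def morris_mu_star(f, n_vars, n_traj=20, delta=0.5, seed=0):
    """Morris screening: build random one-at-a-time trajectories and
    return mu* (mean absolute elementary effect) per input variable,
    a cheap global sensitivity ranking."""
    rng = np.random.default_rng(seed)
    ee = [[] for _ in range(n_vars)]          # elementary effects per input
    for _ in range(n_traj):
        x = rng.uniform(0, 1 - delta, size=n_vars)   # leave room to add delta
        y = f(x)
        for i in rng.permutation(n_vars):     # move each input once, random order
            x2 = x.copy()
            x2[i] += delta
            y2 = f(x2)
            ee[i].append((y2 - y) / delta)
            x, y = x2, y2                     # continue the trajectory
    return np.array([np.mean(np.abs(e)) for e in ee])
```

For a linear model the elementary effects equal the coefficients exactly, so a strongly weighted input ranks far above a weakly weighted one, which is how weakly influential variables (like the diffusion coefficient above) get screened out.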
Forecasting of cyanobacterial density in Torrão reservoir using artificial neural networks.
Torres, Rita; Pereira, Elisa; Vasconcelos, Vítor; Teles, Luís Oliva
2011-06-01
The ability of general regression neural networks (GRNN) to forecast the density of cyanobacteria in the Torrão reservoir (Tâmega river, Portugal) over a period of 15 days, based on three years of collected physical and chemical data, was assessed. Several models were developed, and 176 were selected based on their correlation values for the verification series. A time lag of 11 was used, equivalent to one sample (periods of 15 days in the summer and 30 days in the winter). Several combinations of the series were used. Input and output data collected from three depths of the reservoir were applied (surface, euphotic zone limit, and bottom). The model with the highest average correlation value achieved correlations of 0.991, 0.843, and 0.978 for the training, verification, and test series, respectively. This model had the three series independent in time: first the test series, then the verification series and, finally, the training series. Only six input variables were considered significant to the performance of this model: ammonia, phosphates, dissolved oxygen, water temperature, pH, and water evaporation, physical and chemical parameters referring to the three depths of the reservoir. These variables are common to the next four best models produced and, although these included other input variables, their performance was not better than that of the selected best model.
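A GRNN is essentially Nadaraya-Watson kernel regression: each prediction is a Gaussian-weighted average of the training targets. A minimal sketch (illustrative data, not the reservoir series):

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma=0.5):
    """General regression neural network prediction: Gaussian kernel
    weights on distances to training points, normalized average of
    the training targets (Nadaraya-Watson kernel regression)."""
    X_train = np.asarray(X_train, dtype=float)
    y_train = np.asarray(y_train, dtype=float)
    preds = []
    for x in np.atleast_2d(np.asarray(X_query, dtype=float)):
        d2 = np.sum((X_train - x) ** 2, axis=1)       # squared distances
        w = np.exp(-d2 / (2.0 * sigma ** 2))          # kernel weights
        preds.append(np.sum(w * y_train) / np.sum(w)) # weighted average
    return np.array(preds)
```

The single smoothing parameter sigma controls how local the average is, which is why GRNN model selection (as in the 176 candidate models above) often reduces to choosing inputs and smoothing.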
Poisson process stimulation of an excitable membrane cable model.
Goldfinger, M D
1986-01-01
The convergence of multiple inputs within a single-neuronal substrate is a common design feature of both peripheral and central nervous systems. Typically, the result of such convergence impinges upon an intracellularly contiguous axon, where it is encoded into a train of action potentials. The simplest representation of the result of convergence of multiple inputs is a Poisson process; a general representation of axonal excitability is the Hodgkin-Huxley/cable theory formalism. The present work addressed multiple input convergence upon an axon by applying Poisson process stimulation to the Hodgkin-Huxley axonal cable. The results showed that both absolute and relative refractory periods yielded in the axonal output a random but non-Poisson process. While smaller amplitude stimuli elicited a type of short-interval conditioning, larger amplitude stimuli elicited impulse trains approaching Poisson criteria except for the effects of refractoriness. These results were obtained for stimulus trains consisting of pulses of constant amplitude and constant or variable durations. By contrast, with or without stimulus pulse shape variability, the post-impulse conditional probability for impulse initiation in the steady-state was a Poisson-like process. For stimulus variability consisting of randomly smaller amplitudes or randomly longer durations, mean impulse frequency was attenuated or potentiated, respectively. Limitations and implications of these computations are discussed. PMID:3730505
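The Poisson stimulus trains described above can be generated from exponential inter-arrival times. In this sketch, an absolute refractory period is imposed by simply discarding events that follow too closely, a crude stand-in for the Hodgkin-Huxley cable dynamics of the paper:

```python
import numpy as np

def poisson_train(rate_hz, duration_s, refractory_s=0.0, seed=0):
    """Event times of a homogeneous Poisson process (exponential
    inter-arrival times). An absolute refractory period is imposed
    by rejecting events closer than refractory_s to the previous
    accepted event, yielding a random but non-Poisson output train."""
    rng = np.random.default_rng(seed)
    t, events = 0.0, []
    while True:
        t += rng.exponential(1.0 / rate_hz)   # next candidate arrival
        if t > duration_s:
            break
        if not events or t - events[-1] >= refractory_s:
            events.append(t)                  # accepted (non-refractory)
    return np.array(events)
```

Filtering by refractoriness can only remove events, so the refractory train is a thinned subset of the free Poisson train, illustrating the attenuation of output rate the abstract describes.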
Constructing general partial differential equations using polynomial and neural networks.
Zjavka, Ladislav; Pedrycz, Witold
2016-01-01
Sum fraction terms can approximate multi-variable functions on the basis of discrete observations, replacing a partial differential equation definition with polynomial elementary data relation descriptions. Artificial neural networks commonly transform the weighted sum of inputs to describe overall similarity relationships of trained and new testing input patterns. Differential polynomial neural networks form a new class of neural networks, which construct and solve an unknown general partial differential equation of a function of interest with selected substitution relative terms using non-linear multi-variable composite polynomials. The layers of the network generate simple and composite relative substitution terms whose convergent series combinations can describe partial dependent derivative changes of the input variables. This regression is based on trained generalized partial derivative data relations, decomposed into a multi-layer polynomial network structure. The sigmoidal function, commonly used as a nonlinear activation of artificial neurons, may transform some polynomial terms together with the parameters, with the aim of improving the ability of the polynomial derivative term series to approximate complicated periodic functions, as simple low-order polynomials cannot fully make up the complete cycles. The similarity analysis facilitates substitutions for differential equations or can form dimensional units from data samples to describe real-world problems. Copyright © 2015 Elsevier Ltd. All rights reserved.
Fuzzy set approach to quality function deployment: An investigation
NASA Technical Reports Server (NTRS)
Masud, Abu S. M.
1992-01-01
The final report of the 1992 NASA/ASEE Summer Faculty Fellowship at the Space Exploration Initiative Office (SEIO) in Langley Research Center is presented. Quality Function Deployment (QFD) is a process focused on facilitating the integration of the customer's voice in the design and development of a product or service. Various inputs, in the form of judgements and evaluations, are required during the QFD analyses. All the input variables in these analyses are treated as numeric variables. The purpose of the research was to investigate how QFD analyses can be performed when some or all of the input variables are treated as linguistic variables with values expressed as fuzzy numbers. The reason for this consideration is that human judgement, perception, and cognition are often ambiguous and are better represented as fuzzy numbers. Two approaches for using fuzzy sets in QFD have been proposed. In both cases, all the input variables are considered as linguistic variables with values indicated as linguistic expressions. These expressions are then converted to fuzzy numbers. The difference between the two approaches is due to how the QFD computations are performed with these fuzzy numbers. In Approach 1, the fuzzy numbers are first converted to their equivalent crisp scores and then the QFD computations are performed using these crisp scores. As a result, the outputs of this approach are crisp numbers, similar to those in traditional QFD. In Approach 2, all the QFD computations are performed with the fuzzy numbers, and the outputs are fuzzy numbers as well. Both approaches have been explained with the help of illustrative examples of QFD application. Approach 2 has also been applied in a QFD application exercise in SEIO, involving a 'mini moon rover' design. The mini moon rover is a proposed tele-operated vehicle that will traverse and perform various tasks, including autonomous operations, on the moon surface.
The output of the moon rover application exercise is a ranking of the rover functions so that a subset of these functions can be targeted for design improvement. The illustrative examples and the mini rover application exercise confirm that the proposed approaches for using fuzzy sets in QFD are viable. However, further research is needed to study the various issues involved and to verify/validate the methods proposed.
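Approach 1 above (convert linguistic values to fuzzy numbers, then defuzzify to crisp scores) can be sketched with triangular fuzzy numbers. The linguistic scale below is hypothetical, not the one used in the report:

```python
# Triangular fuzzy numbers (TFNs) represented as (low, mode, high) tuples.

def tfn_add(a, b):
    """Sum of two triangular fuzzy numbers (componentwise)."""
    return (a[0] + b[0], a[1] + b[1], a[2] + b[2])

def tfn_scale(k, a):
    """Multiply a TFN by a non-negative crisp weight."""
    return (k * a[0], k * a[1], k * a[2])

def tfn_crisp(a):
    """Centroid defuzzification: equivalent crisp score of a TFN."""
    return (a[0] + a[1] + a[2]) / 3.0

# Hypothetical linguistic scale (labels and supports are illustrative):
SCALE = {"weak": (0, 1, 3), "moderate": (3, 5, 7), "strong": (7, 9, 10)}
```

Approach 2 simply keeps computing with the TFN tuples and defuzzifies only for final ranking, if at all; the ambiguity of each judgement stays visible in the spread between low and high.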
NASA Astrophysics Data System (ADS)
Felkner, John Sames
The scale and extent of global land use change is massive, and has potentially powerful effects on the global climate and global atmospheric composition (Turner & Meyer, 1994). Because of this tremendous change and impact, there is an urgent need for quantitative, empirical models of land use change, especially predictive models with an ability to capture the trajectories of change (Agarwal, Green, Grove, Evans, & Schweik, 2000; Lambin et al., 1999). For this research, a spatial statistical predictive model of land use change was created and run in two provinces of Thailand. The model utilized an extensive spatial database and used a classification tree approach for explanatory model creation and future land use prediction (Breiman, Friedman, Olshen, & Stone, 1984). Eight input variables were used, and the trees were run on a dependent variable of land use change measured from 1979 to 1989 using classified satellite imagery. The derived tree models were used to create probability-of-change surfaces, and these were then used to create predicted land cover maps for 1999. These predicted 1999 maps were compared with actual 1999 land cover derived from 1999 Landsat 7 imagery. The primary research hypothesis was that an explanatory model using both economic and environmental input variables would better predict future land use change than would either a model using only economic variables or a model using only environmental variables. Thus, the eight input variables included four economic and four environmental variables. The results indicated a very slight superiority of the full models in predicting future agricultural change and future deforestation, but a slight superiority of the economic models in predicting future built change. However, the margins of superiority were too small to be statistically significant. The resulting tree structures were used, however, to derive a series of principles or "rules" governing land use change in both provinces.
The model was able to predict future land use, given a series of assumptions, with 90 percent overall accuracy. The model can be used in other developing or developed country locations for future land use prediction, determination of future threatened areas, or to derive "rules" or principles driving land use change.
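The classification tree machinery behind this model grows by repeatedly choosing the split that most reduces class impurity. A minimal CART-style split search, with invented data (not the Thai land use variables):

```python
def gini(labels):
    """Gini impurity of a list of class labels."""
    n = len(labels)
    if n == 0:
        return 0.0
    return 1.0 - sum((labels.count(c) / n) ** 2 for c in set(labels))

def best_split(values, labels):
    """Best threshold on one input variable by Gini impurity decrease,
    the criterion used to grow CART-style classification trees.
    Returns (threshold, impurity_decrease)."""
    parent = gini(labels)
    best = (None, 0.0)
    for t in sorted(set(values)):
        left = [l for v, l in zip(values, labels) if v <= t]
        right = [l for v, l in zip(values, labels) if v > t]
        if not left or not right:
            continue                      # degenerate split, skip
        w = len(left) / len(labels)
        gain = parent - (w * gini(left) + (1 - w) * gini(right))
        if gain > best[1]:
            best = (t, gain)
    return best
```

Recursing this search over all eight input variables and the resulting partitions yields the tree; reading the chosen thresholds off the tree is exactly how the "rules" governing land use change were derived.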
Hülsheger, Ute R; Anderson, Neil; Salgado, Jesus F
2009-09-01
This article presents a meta-analysis of team-level antecedents of creativity and innovation in the workplace. Using a general input-process-output model, the authors examined 15 team-level variables researched in primary studies published over the last 30 years and their relation to creativity and innovation. An exhaustive search of the international innovation literature resulted in a final sample (k) of 104 independent studies. Results revealed that team process variables of support for innovation, vision, task orientation, and external communication displayed the strongest relationships with creativity and innovation (rhos between 0.4 and 0.5). Input variables (i.e., team composition and structure) showed weaker effect sizes. Moderator analyses confirmed that relationships differ substantially depending on measurement method (self-ratings vs. independent ratings of innovation) and measurement level (individual vs. team innovation). Team variables displayed considerably stronger relationships with self-report measures of innovation compared with independent ratings and objective criteria. Team process variables were more strongly related to creativity and innovation measured at the team than the individual level. Implications for future research and pragmatic ramifications for organizational practice are discussed in conclusion.
Fleury, Marie-Josée; Grenier, Guy; Bamvita, Jean-Marie; Chiocchio, François
2018-06-01
Using a structural analysis, this study examines the relationship between job satisfaction among 315 mental health professionals from the province of Quebec (Canada) and a wide range of variables related to provider characteristics, team characteristics, team processes, emergent states, and organizational culture. We used the Job Satisfaction Survey to assess job satisfaction. Our conceptual framework integrated numerous independent variables adapted from the input-mediator-output-input (IMOI) model and the Integrated Team Effectiveness Model (ITEM). The structural equation model predicted 47% of the variance of job satisfaction. Job satisfaction was associated with eight variables: strong team support, participation in the decision-making process, closer collaboration, fewer conflicts among team members, modest knowledge production (team processes), firm affective commitment, multifocal identification (emergent states), and belonging to the nursing profession (provider characteristics). Team climate had an impact on six job satisfaction variables (team support, knowledge production, conflicts, affective commitment, collaboration, and multifocal identification). Results show that team processes and emergent states were mediators between team climate and job satisfaction. To increase job satisfaction among professionals, health managers need to pursue strategies that foster a positive climate within mental health teams.
Matrix completion by deep matrix factorization.
Fan, Jicong; Cheng, Jieyu
2018-02-01
Conventional methods of matrix completion are linear methods that are not effective in handling data of nonlinear structures. Recently a few researchers attempted to incorporate nonlinear techniques into matrix completion but there still exists considerable limitations. In this paper, a novel method called deep matrix factorization (DMF) is proposed for nonlinear matrix completion. Different from conventional matrix completion methods that are based on linear latent variable models, DMF is on the basis of a nonlinear latent variable model. DMF is formulated as a deep-structure neural network, in which the inputs are the low-dimensional unknown latent variables and the outputs are the partially observed variables. In DMF, the inputs and the parameters of the multilayer neural network are simultaneously optimized to minimize the reconstruction errors for the observed entries. Then the missing entries can be readily recovered by propagating the latent variables to the output layer. DMF is compared with state-of-the-art methods of linear and nonlinear matrix completion in the tasks of toy matrix completion, image inpainting and collaborative filtering. The experimental results verify that DMF is able to provide higher matrix completion accuracy than existing methods do and DMF is applicable to large matrices. Copyright © 2017 Elsevier Ltd. All rights reserved.
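The core DMF idea can be sketched in a few dozen lines: low-dimensional latent inputs are decoded through a nonlinear hidden layer, and both the latents and the decoder weights are trained by gradient descent on the observed entries only. This is a minimal illustrative sketch (single tanh layer, no biases, plain gradient descent), not the authors' implementation:

```python
import numpy as np

def dmf_complete(X, M, k=2, h=8, lr=0.01, iters=300, seed=0):
    """Deep-matrix-factorization sketch for matrix completion.

    X: data matrix with arbitrary values at unobserved entries.
    M: binary mask (1 = observed). Latent inputs Z and decoder
    weights W1, W2 are jointly optimized to fit observed entries;
    missing entries are recovered by propagating Z forward."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    Z = rng.normal(scale=0.1, size=(n, k))    # latent variables (inputs)
    W1 = rng.normal(scale=0.1, size=(k, h))   # hidden-layer weights
    W2 = rng.normal(scale=0.1, size=(h, d))   # output-layer weights
    losses = []
    for _ in range(iters):
        H = np.tanh(Z @ W1)                   # hidden activations
        Xhat = H @ W2                         # reconstruction
        E = M * (Xhat - X)                    # error on observed entries only
        losses.append(0.5 * np.sum(E ** 2))
        dW2 = H.T @ E                         # backpropagate the masked error
        dA = (E @ W2.T) * (1.0 - H ** 2)
        dW1 = Z.T @ dA
        dZ = dA @ W1.T
        W2 -= lr * dW2
        W1 -= lr * dW1
        Z -= lr * dZ
    return np.tanh(Z @ W1) @ W2, losses
```

Because the latents feed a nonlinear decoder, the completed matrix need not be low rank in the linear sense, which is the advantage DMF claims over conventional completion methods.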
Influence of estuarine processes on spatiotemporal variation in bioavailable selenium
Stewart, Robin; Luoma, Samuel N.; Elrick, Kent A.; Carter, James L.; van der Wegen, Mick
2013-01-01
Dynamic processes (physical, chemical and biological) challenge our ability to quantify and manage the ecological risk of chemical contaminants in estuarine environments. Selenium (Se) bioavailability (defined by bioaccumulation), stable isotopes and molar carbon-to-nitrogen ratios in the benthic clam Potamocorbula amurensis, an important food source for predators, were determined monthly for 17 yr in northern San Francisco Bay. Se concentrations in the clams ranged from a low of 2 to a high of 22 μg g⁻¹ over space and time. Little of that variability was stochastic, however. Statistical analyses and preliminary hydrodynamic modeling showed that a constant mid-estuarine input of Se, which was dispersed up- and down-estuary by tidal currents, explained the general spatial patterns in accumulated Se among stations. Regression of Se bioavailability against river inflows suggested that processes driven by inflows were the primary driver of seasonal variability. River inflow also appeared to explain interannual variability but within the range of Se enrichment established at each station by source inputs. Evaluation of risks from Se contamination in estuaries requires the consideration of spatial and temporal variability on multiple scales and of the processes that drive that variability.
van der Zijden, A M; Groen, B E; Tanck, E; Nienhuis, B; Verdonschot, N; Weerdesteyn, V
2017-03-21
Many research groups have studied fall impact mechanics to understand how fall severity can be reduced to prevent hip fractures. Yet, direct impact force measurements with force plates are restricted to a very limited repertoire of experimental falls. The purpose of this study was to develop a generic model for estimating hip impact forces (i.e. fall severity) in in vivo sideways falls without the use of force plates. Twelve experienced judokas performed sideways Martial Arts (MA) and Block ('natural') falls on a force plate, both with and without a mat on top. Data were analyzed to determine the hip impact force and to derive 11 selected (subject-specific and kinematic) variables. Falls from kneeling height were used to perform a stepwise regression procedure to assess the effects of these input variables and build the model. The final model includes four input variables, involving one subject-specific measure and three kinematic variables: maximum upper body deceleration, body mass, shoulder angle at the instant of 'maximum impact' and maximum hip deceleration. The results showed that estimated and measured hip impact forces were linearly related (explained variances ranging from 46 to 63%). Hip impact forces of MA falls onto the mat from a standing position (3650±916N) estimated by the final model were comparable with measured values (3698±689N), even though these data were not used for training the model. In conclusion, a generic linear regression model was developed that enables the assessment of fall severity through kinematic measures of sideways falls, without using force plates. Copyright © 2017 Elsevier Ltd. All rights reserved.
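The final model above is a linear regression of hip impact force on four predictors. A generic sketch of fitting such a model and reporting explained variance (the data here are invented, not the judoka measurements):

```python
import numpy as np

def fit_linear_model(X, y):
    """Ordinary least squares with an intercept term.

    Returns (coefficients, R^2): coefficients[0] is the intercept,
    and R^2 is the explained variance, the quantity the study reports
    (46-63%) when relating kinematic inputs to hip impact force."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float)
    A = np.column_stack([np.ones(len(X)), X])       # add intercept column
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    ss_res = resid @ resid
    ss_tot = (y - y.mean()) @ (y - y.mean())
    return coef, 1.0 - ss_res / ss_tot
```

Stepwise selection, as used in the study, would repeatedly add or drop predictor columns of X and refit, keeping the subset that best trades off fit against model size.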
Robotics control using isolated word recognition of voice input
NASA Technical Reports Server (NTRS)
Weiner, J. M.
1977-01-01
A speech input/output system is presented that can be used to communicate with a task oriented system. Human speech commands and synthesized voice output extend conventional information exchange capabilities between man and machine by utilizing audio input and output channels. The speech input facility is comprised of a hardware feature extractor and a microprocessor implemented isolated word or phrase recognition system. The recognizer offers a medium sized (100 commands), syntactically constrained vocabulary, and exhibits close to real time performance. The major portion of the recognition processing required is accomplished through software, minimizing the complexity of the hardware feature extractor.
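Isolated-word recognizers of this era typically matched an unknown utterance's feature sequence against stored templates; dynamic time warping (DTW) is the classic matching step. The report does not name its algorithm, so this is an assumed illustration with scalar features standing in for the hardware extractor's output:

```python
def dtw_distance(a, b):
    """Dynamic time warping distance between two feature sequences:
    minimum cumulative cost over all monotone alignments, tolerating
    differences in speaking rate."""
    INF = float("inf")
    n, m = len(a), len(b)
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # stretch template
                                 D[i][j - 1],      # stretch utterance
                                 D[i - 1][j - 1])  # step both
    return D[n][m]

def recognize(utterance, templates):
    """Return the vocabulary label of the nearest template under DTW."""
    return min(templates, key=lambda word: dtw_distance(utterance, templates[word]))
```

With a 100-command vocabulary, recognition is 100 such comparisons per utterance, which is consistent with the near-real-time, mostly-software processing the abstract describes.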
Mendyk, Aleksander; Güres, Sinan; Szlęk, Jakub; Wiśniowska, Barbara; Kleinebudde, Peter
2015-01-01
The purpose of this work was to develop a mathematical model of the drug dissolution (Q) from the solid lipid extrudates based on the empirical approach. Artificial neural networks (ANNs) and genetic programming (GP) tools were used. Sensitivity analysis of ANNs provided reduction of the original input vector. GP allowed creation of the mathematical equation in two major approaches: (1) direct modeling of Q versus extrudate diameter (d) and the time variable (t) and (2) indirect modeling through Weibull equation. ANNs provided also information about minimum achievable generalization error and the way to enhance the original dataset used for adjustment of the equations' parameters. Two inputs were found important for the drug dissolution: d and t. The extrudates length (L) was found not important. Both GP modeling approaches allowed creation of relatively simple equations with their predictive performance comparable to the ANNs (root mean squared error (RMSE) from 2.19 to 2.33). The direct mode of GP modeling of Q versus d and t resulted in the most robust model. The idea of how to combine ANNs and GP in order to escape ANNs' black-box drawback without losing their superior predictive performance was demonstrated. Open Source software was used to deliver the state-of-the-art models and modeling strategies. PMID:26101544
Active Learning to Understand Infectious Disease Models and Improve Policy Making
Vladislavleva, Ekaterina; Broeckhove, Jan; Beutels, Philippe; Hens, Niel
2014-01-01
Modeling plays a major role in policy making, especially for infectious disease interventions but such models can be complex and computationally intensive. A more systematic exploration is needed to gain a thorough systems understanding. We present an active learning approach based on machine learning techniques as iterative surrogate modeling and model-guided experimentation to systematically analyze both common and edge manifestations of complex model runs. Symbolic regression is used for nonlinear response surface modeling with automatic feature selection. First, we illustrate our approach using an individual-based model for influenza vaccination. After optimizing the parameter space, we observe an inverse relationship between vaccination coverage and cumulative attack rate reinforced by herd immunity. Second, we demonstrate the use of surrogate modeling techniques on input-response data from a deterministic dynamic model, which was designed to explore the cost-effectiveness of varicella-zoster virus vaccination. We use symbolic regression to handle high dimensionality and correlated inputs and to identify the most influential variables. Provided insight is used to focus research, reduce dimensionality and decrease decision uncertainty. We conclude that active learning is needed to fully understand complex systems behavior. Surrogate models can be readily explored at no computational expense, and can also be used as emulator to improve rapid policy making in various settings. PMID:24743387
Active learning to understand infectious disease models and improve policy making.
Willem, Lander; Stijven, Sean; Vladislavleva, Ekaterina; Broeckhove, Jan; Beutels, Philippe; Hens, Niel
2014-04-01
Modeling plays a major role in policy making, especially for infectious disease interventions but such models can be complex and computationally intensive. A more systematic exploration is needed to gain a thorough systems understanding. We present an active learning approach based on machine learning techniques as iterative surrogate modeling and model-guided experimentation to systematically analyze both common and edge manifestations of complex model runs. Symbolic regression is used for nonlinear response surface modeling with automatic feature selection. First, we illustrate our approach using an individual-based model for influenza vaccination. After optimizing the parameter space, we observe an inverse relationship between vaccination coverage and cumulative attack rate reinforced by herd immunity. Second, we demonstrate the use of surrogate modeling techniques on input-response data from a deterministic dynamic model, which was designed to explore the cost-effectiveness of varicella-zoster virus vaccination. We use symbolic regression to handle high dimensionality and correlated inputs and to identify the most influential variables. Provided insight is used to focus research, reduce dimensionality and decrease decision uncertainty. We conclude that active learning is needed to fully understand complex systems behavior. Surrogate models can be readily explored at no computational expense, and can also be used as emulator to improve rapid policy making in various settings.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rohatgi, U.S.; Cheng, H.S.; Khan, H.J.
This document is the User's Manual for the Boiling Water Reactor (BWR) and Simplified Boiling Water Reactor (SBWR) systems transient code RAMONA-4B. The code uses a three-dimensional neutron-kinetics model coupled with a multichannel, nonequilibrium, drift-flux, two-phase flow model of the thermal hydraulics of the reactor vessel. The code is designed to analyze a wide spectrum of BWR core and system transients. Chapter 1 gives an overview of the code's capabilities and limitations; Chapter 2 describes the code's structure, lists major subroutines, and discusses the computer requirements. Chapter 3 covers the code, auxiliary codes, and instructions for running RAMONA-4B on Sun SPARC and IBM Workstations. Chapter 4 contains component descriptions and detailed card-by-card input instructions. Chapter 5 provides samples of the tabulated output for the steady-state and transient calculations and discusses the plotting procedures for both. Three appendices contain important user and programmer information: lists of plot variables (Appendix A), listings of the input deck for the sample problem (Appendix B), and a description of the plotting program PAD (Appendix C). 24 refs., 18 figs., 11 tabs.
Mendyk, Aleksander; Güres, Sinan; Jachowicz, Renata; Szlęk, Jakub; Polak, Sebastian; Wiśniowska, Barbara; Kleinebudde, Peter
2015-01-01
The purpose of this work was to develop a mathematical model of drug dissolution (Q) from solid lipid extrudates based on an empirical approach. Artificial neural networks (ANNs) and genetic programming (GP) tools were used. Sensitivity analysis of the ANNs allowed reduction of the original input vector. GP allowed creation of the mathematical equation in two major approaches: (1) direct modeling of Q versus extrudate diameter (d) and the time variable (t) and (2) indirect modeling through the Weibull equation. The ANNs also provided information about the minimum achievable generalization error and a way to enhance the original dataset used for adjustment of the equations' parameters. Two inputs were found to be important for drug dissolution: d and t. The extrudate length (L) was found not to be important. Both GP modeling approaches allowed creation of relatively simple equations with predictive performance comparable to the ANNs (root mean squared error (RMSE) from 2.19 to 2.33). The direct mode of GP modeling of Q versus d and t resulted in the most robust model. The idea of how to combine ANNs and GP in order to escape the ANNs' black-box drawback without losing their superior predictive performance was demonstrated. Open Source software was used to deliver the state-of-the-art models and modeling strategies.
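As a hedged illustration of the indirect (Weibull) modeling route, the sketch below fits a Weibull dissolution profile, Q(t) = Qmax(1 - exp(-(t/tau)^beta)), to synthetic data. All parameter values and data points are hypothetical, not taken from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

# Weibull dissolution profile; qmax, tau, beta are hypothetical
# shape parameters, not values from the study.
def weibull_q(t, qmax, tau, beta):
    return qmax * (1.0 - np.exp(-(t / tau) ** beta))

# Synthetic dissolution data (illustrative only): time in hours,
# released fraction in percent, plus small measurement noise.
t = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 12.0, 24.0])
q_obs = weibull_q(t, 95.0, 5.0, 1.2) \
        + np.random.default_rng(0).normal(0.0, 1.0, t.size)

# Fit the three parameters; bounds keep tau and beta positive.
popt, _ = curve_fit(weibull_q, t, q_obs, p0=[100.0, 4.0, 1.0],
                    bounds=([1.0, 0.1, 0.1], [200.0, 50.0, 5.0]))
rmse = float(np.sqrt(np.mean((weibull_q(t, *popt) - q_obs) ** 2)))
```

In the paper's workflow, GP would search for the equation form itself; here the Weibull form is fixed and only its parameters are estimated.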
NASA Astrophysics Data System (ADS)
Eschenbach, Wolfram; Budziak, Dörte; Elbracht, Jörg; Höper, Heinrich; Krienen, Lisa; Kunkel, Ralf; Meyer, Knut; Well, Reinhard; Wendland, Frank
2018-06-01
Valid models for estimating nitrate emissions from agriculture to groundwater are an indispensable forecasting tool. A major challenge for model validation is the spatial and temporal inconsistency between data from groundwater monitoring points and modelled nitrate inputs into groundwater, and the fact that many existing groundwater monitoring wells cannot be used for validation. With the help of the N2/Ar-method, groundwater monitoring wells in areas with reduced groundwater can now be used for model validation. For this purpose, 484 groundwater monitoring wells were sampled in Lower Saxony. For the first time, modelled potential nitrate concentrations in groundwater recharge (from the DENUZ model) were compared with nitrate input concentrations, which were calculated using the N2/Ar method. The results show a good agreement between both methods for glacial outwash plains and moraine deposits. Although the nitrate degradation processes in groundwater and soil merge seamlessly in areas with a shallow groundwater table, the DENUZ model only calculates denitrification in the soil zone. The DENUZ model thus predicts 27% higher nitrate emissions into the groundwater than the N2/Ar method in such areas. To account for high temporal and spatial variability of nitrate emissions into groundwater, a large number of groundwater monitoring points must be investigated for model validation.
Dynamic Time Expansion and Compression Using Nonlinear Waveguides
Findikoglu, Alp T.; Hahn, Sangkoo F.; Jia, Quanxi
2004-06-22
Dynamic time expansion or compression of a small-amplitude input signal generated with an initial time scale is performed using a nonlinear waveguide. A nonlinear waveguide having a variable refractive index is connected to a bias voltage source having a bias signal amplitude that is large relative to the input signal, to vary the refractive index and concomitant speed of propagation of the nonlinear waveguide, and to an electrical circuit for applying the small-amplitude signal and the large-amplitude bias signal simultaneously to the nonlinear waveguide. The large-amplitude bias signal together with the input signal alters the speed of propagation of the small-amplitude signal with time in the nonlinear waveguide, expanding or contracting the initial time scale of the small-amplitude input signal.
Spatial uncertainty analysis: Propagation of interpolation errors in spatially distributed models
Phillips, D.L.; Marks, D.G.
1996-01-01
In simulation modelling, it is desirable to quantify model uncertainties and provide not only point estimates for output variables but confidence intervals as well. Spatially distributed physical and ecological process models are becoming widely used, with runs being made over a grid of points that represent the landscape. This requires input values at each grid point, which often have to be interpolated from irregularly scattered measurement sites, e.g., weather stations. Interpolation introduces spatially varying errors which propagate through the model. We extended established uncertainty analysis methods to a spatial domain for quantifying spatial patterns of input variable interpolation errors and how they propagate through a model to affect the uncertainty of the model output. We applied this to a model of potential evapotranspiration (PET) as a demonstration. We modelled PET for three time periods in 1990 as a function of temperature, humidity, and wind on a 10-km grid across the U.S. portion of the Columbia River Basin. Temperature, humidity, and wind speed were interpolated using kriging from 700-1000 supporting data points. Kriging standard deviations (SD) were used to quantify the spatially varying interpolation uncertainties. For each of 5693 grid points, 100 Monte Carlo simulations were done, using the kriged values of temperature, humidity, and wind, plus random error terms determined by the kriging SDs and the correlations of interpolation errors among the three variables. For the spring season example, kriging SDs averaged 2.6°C for temperature, 8.7% for relative humidity, and 0.38 m/s for wind. The resultant PET estimates had coefficients of variation (CVs) ranging from 14% to 27% for the 10-km grid cells. Maps of PET means and CVs showed the spatial patterns of PET with a measure of its uncertainty due to interpolation of the input variables.
This methodology should be applicable to a variety of spatially distributed models using interpolated inputs.
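The Monte Carlo scheme described in this abstract (kriged means plus correlated error draws pushed through the model, point by point) can be sketched for one grid point as follows. The PET function, the grid-point means, and the error-correlation matrix are illustrative assumptions; the three SDs echo the spring-season averages quoted above.

```python
import numpy as np

# Toy PET function standing in for the actual model (illustrative form only).
def pet(temp_c, rh_pct, wind_ms):
    return np.maximum(0.0, 0.3 * temp_c + 0.05 * wind_ms * (100.0 - rh_pct))

rng = np.random.default_rng(42)

# Kriged estimates at one hypothetical grid point, with kriging SDs
# matching the abstract's spring-season averages (2.6°C, 8.7%, 0.38 m/s).
mu = np.array([12.0, 60.0, 3.0])          # temperature, RH, wind speed
sd = np.array([2.6, 8.7, 0.38])
corr = np.array([[ 1.0, -0.3, 0.1],        # assumed interpolation-error
                 [-0.3,  1.0, 0.0],        # correlations among variables
                 [ 0.1,  0.0, 1.0]])
cov = np.outer(sd, sd) * corr

# 100 Monte Carlo draws, as in the study, propagated through the model.
draws = rng.multivariate_normal(mu, cov, size=100)
pet_draws = pet(draws[:, 0], draws[:, 1], draws[:, 2])

mean_pet = float(pet_draws.mean())
cv_pet = float(pet_draws.std() / mean_pet)   # coefficient of variation
```

Repeating this loop over every grid cell yields the maps of PET means and CVs described in the abstract.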
High frequency inductive lamp and power oscillator
Kirkpatrick, Douglas A.; Gitsevich, Aleksandr
2005-09-27
An oscillator includes an amplifier having an input and an output; a feedback network connected between the input and the output of the amplifier, configured to provide suitable positive feedback from the output to the input to initiate and sustain an oscillating condition; and a tuning circuit connected to the input of the amplifier. The tuning circuit is continuously variable and consists of solid-state electrical components with no mechanically adjustable devices, including a pair of diodes connected to each other at their respective cathodes, with a control voltage connected at the junction of the diodes. Another oscillator includes an amplifier having an input and an output; a feedback network connected between the input and the output of the amplifier, configured to provide suitable positive feedback from the output to the input to initiate and sustain an oscillating condition; and transmission lines connected to the input of the amplifier with an input pad and a perpendicular transmission line extending from the input pad and forming a leg of a resonant "T". The feedback network is coupled to the leg of the resonant "T".
A new conceptual model on the fate and controls of fresh and pyrolized plant litter decomposition
USDA-ARS?s Scientific Manuscript database
The leaching of dissolved organic matter (DOM) from fresh and pyrolyzed aboveground plant inputs to the soil is a major pathway by which decomposing aboveground plant material contributes to soil organic matter formation. Understanding how aboveground plant input chemical traits control the partiti...
Response and recovery of streams to an intense regional flooding event
NASA Astrophysics Data System (ADS)
Dethier, E.; Magilligan, F. J.; Renshaw, C. E.; Kantack, K. M.
2015-12-01
Determining the relative roles of frequent and infrequent events in landscape form and material transport has implications for understanding landscape development, and informs planning and infrastructure decisions. Flooding due to Tropical Storm Irene in 2011 provides a unique opportunity to examine the effects of a rare, major disturbance across a broad area (14,000 km2). Intense flooding caused variable but widespread channel and riparian reconfiguration, including 995 channel-adjacent mass-wasting events, collectively referred to here as landslides, that mostly occurred in glacial deposits. Of these, about half involved reactivation of existing scars. Landslides were generally small, ranging from 60 to 26,000 m2 in planform, and covered less than 0.01% of land in the region, yet sediment input from landslides alone (131 mm/kyr when integrated over the study area) exceeded inferred local background erosion rates by 60 times. If Irene inputs are included in a thirty-year erosion record, the estimated erosion rate, 7.2 mm/kyr, aligns closely with long-term regional rates of 5-10 mm/kyr. Landslides also input trees to streams, increasing the influence of large wood on those reaches. Combined wood and sediment inputs contributed to channel changes downstream of landslides. In the four years since Irene, terrestrial lidar and suspended sediment sampling have documented continued large wood and sediment input. Erosion occurred on each of seventeen monitored landslides during snowmelt, but was otherwise limited except during intense precipitation and/or flood events. Repeat lidar models have recorded erosion of up to 5,000 m3 on a single slide in one year, including as much as 4,000 m3 during a single event. Tree fall on scarps during erosion events creates sediment traps at the base of landslides, contributing to an observed return to equilibrium slopes. Despite trapping, substantial sediment continues to enter streams.
Ninety-five suspended sediment samples from forty sites show that landslides remain important sediment sources. Across a range of flows, 2014-2015 sediment flux for a given discharge is an order of magnitude higher than pre-Irene flux. Though landslide slope relaxation suggests incipient recovery from Irene, persistent rapid delivery of large wood and sediment indicates that recovery is still ongoing.
Real-time flood forecasts & risk assessment using a possibility-theory based fuzzy neural network
NASA Astrophysics Data System (ADS)
Khan, U. T.
2016-12-01
Globally, floods are one of the most devastating natural disasters, and improved flood forecasting methods are essential for better flood protection in urban areas. Given the availability of high-resolution real-time datasets for flood variables (e.g. streamflow and precipitation) in many urban areas, data-driven models have been effectively used to predict peak flow rates in rivers; however, the selection of input parameters for these types of models is often subjective. Additionally, the inherent uncertainty associated with data-driven models, along with errors in extreme event observations, means that uncertainty quantification is essential. Addressing these concerns will enable improved flood forecasting methods and provide more accurate flood risk assessments. In this research, a new type of data-driven model, a quasi-real-time updating fuzzy neural network, is developed to predict peak flow rates in urban riverine watersheds. A possibility-to-probability transformation is first used to convert observed data into fuzzy numbers. A possibility theory based training regime is then used to construct the fuzzy parameters and the outputs. A new entropy-based optimisation criterion is used to train the network. Two existing methods to select the optimum input parameters are modified to account for fuzzy number inputs, and compared. These methods are: Entropy-Wavelet-based Artificial Neural Network (EWANN) and Combined Neural Pathway Strength Analysis (CNPSA). Finally, an automated algorithm designed to select the optimum structure of the neural network is implemented. The overall impact of each component of training this network is to replace the traditional ad hoc network configuration methods with ones based on objective criteria. Ten years of data from the Bow River in Calgary, Canada (including two major floods in 2005 and 2013) are used to calibrate and test the network.
The EWANN method selected lagged peak flow as a candidate input, whereas the CNPSA method selected lagged precipitation and lagged mean daily flow as candidate inputs. Model performance metrics show that the CNPSA method had higher performance (with an efficiency of 0.76). Model output was used to assess the risk of extreme peak flows for a given day using an inverse possibility-to-probability transformation.
ERIC Educational Resources Information Center
Lynn, Richard; Vanhanen, Tatu
2012-01-01
This paper summarizes the results of 244 correlates of national IQs that have been published from 2002 through 2012 and include educational attainment, cognitive output, educational input, per capita income, economic growth, other economic variables, crime, political institutions, health, fertility, sociological variables, and geographic and…
Crown fuel spatial variability and predictability of fire spread
Russell A. Parsons; Jeremy Sauer; Rodman R. Linn
2010-01-01
Fire behavior predictions, as well as measures of uncertainty in those predictions, are essential in operational and strategic fire management decisions. While it is becoming common practice to assess uncertainty in fire behavior predictions arising from variability in weather inputs, uncertainty arising from the fire models themselves is difficult to assess. This is...
ERIC Educational Resources Information Center
Qian, Manman; Chukharev-Hudilainen, Evgeny; Levis, John
2018-01-01
Many types of L2 phonological perception are often difficult to acquire without instruction. These difficulties with perception may also be related to intelligibility in production. Instruction on perception contrasts is more likely to be successful with the use of phonetically variable input made available through computer-assisted pronunciation…
RAWS II: A MULTIPLE REGRESSION ANALYSIS PROGRAM,
This memorandum gives instructions for the use and operation of a revised version of RAWS, a multiple regression analysis program. The program...of preprocessed data, the directed retention of variables, listing of the matrix of the normal equations and its inverse, and the bypassing of the regression analysis to provide the input variable statistics only. (Author)
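The "matrix of the normal equations and its inverse" mentioned above is the core of any multiple-regression program. A minimal sketch on synthetic data (all values illustrative, not from RAWS II):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic design matrix: intercept column plus two predictors.
X = np.column_stack([np.ones(30), rng.normal(size=(30, 2))])
beta_true = np.array([1.0, 2.0, -0.5])
y = X @ beta_true + rng.normal(0.0, 0.1, 30)

# Normal equations (X'X) b = X'y, as listed by the program output.
xtx = X.T @ X
beta_hat = np.linalg.solve(xtx, X.T @ y)

# The inverse of the normal-equations matrix, which such programs
# also print; it is proportional to the covariance of the estimates.
xtx_inv = np.linalg.inv(xtx)
```

Solving via `np.linalg.solve` rather than multiplying by the explicit inverse is the numerically preferred route; the inverse is computed here only because the program's output lists it.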
Vegetation and environmental controls on soil respiration in a pinon-juniper woodland
Sandra A. White
2008-01-01
Soil respiration (RS) responds to changes in plant and microbial activity and environmental conditions. In arid ecosystems of the southwestern USA, soil moisture exhibits large fluctuations because annual and seasonal precipitation inputs are highly variable, with increased variability expected in the future. Patterns of soil moisture, and periodic severe drought, are...
Space sickness predictors suggest fluid shift involvement and possible countermeasures
NASA Technical Reports Server (NTRS)
Simanonok, K. E.; Moseley, E. C.; Charles, J. B.
1992-01-01
Preflight data from 64 first-time Shuttle crew members were examined retrospectively to predict space sickness severity (NONE, MILD, MODERATE, or SEVERE) by discriminant analysis. From 9 input variables relating to fluid, electrolyte, and cardiovascular status, 8 variables were chosen by discriminant analysis that correctly predicted space sickness severity with 59 pct. success by one method of cross-validation on the original sample and 67 pct. by another method. The 8 variables, in order of their importance for predicting space sickness severity, are sitting systolic blood pressure, serum uric acid, calculated blood volume, serum phosphate, urine osmolality, environmental temperature at the launch site, red cell count, and serum chloride. These results suggest the presence of predisposing physiologic factors to space sickness that implicate a fluid shift etiology. Addition of a 10th input variable, hours spent in the Weightless Environment Training Facility (WETF), improved the prediction of space sickness severity to 66 pct. success by the first method of cross-validation on the original sample and to 71 pct. by the second method. The data suggest that WETF training may reduce space sickness severity.
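The classification setup above (several continuous predictors, a four-level severity label) can be sketched with a nearest-class-centroid rule, a simplified stand-in for the discriminant analysis used in the study. The data here are synthetic, not the actual NASA measurements.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in: 64 crew members x 8 physiologic predictors, with
# a 4-level severity label driven by the first two predictors.
n = 64
X = rng.normal(size=(n, 8))
score = X[:, 0] + 0.5 * X[:, 1] + rng.normal(0.0, 0.3, n)
severity = np.digitize(score, [-1.0, 0.0, 1.0])   # 0=NONE .. 3=SEVERE

# Nearest-class-centroid classifier: assign each sample to the class
# whose mean predictor vector (centroid) is closest in Euclidean distance.
centroids = np.array([X[severity == k].mean(axis=0) for k in range(4)])
d = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
pred = d.argmin(axis=1)
accuracy = float((pred == severity).mean())
```

A proper replication would use linear discriminant analysis with cross-validation, as the abstract describes; this sketch only shows the shape of the problem.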
Value of Construction Company and its Dependence on Significant Variables
NASA Astrophysics Data System (ADS)
Vítková, E.; Hromádka, V.; Ondrušková, E.
2017-10-01
The paper deals with assessing the value of a construction company with respect to usable approaches and determinable variables. The reasons for assessing the value of a construction company vary, but the most important are the sale or purchase of the company, its liquidation, or its merger with another entity. Depending on the reason for the assessment, different theoretical approaches to valuation can be applied, chiefly the yield method and the proprietary (asset-based) method of valuation. Both approaches depend on detailed input variables, whose quality influences the final assessment of the company's value. The main objective of the paper is to suggest, based on the analysis, possible ways of determining the input variables, mainly in the form of expected cash flows or profit. The paper focuses mainly on the use of time-series analysis, regression analysis, and mathematical simulation. As the output, the results of the analysis are demonstrated on a case study.
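The yield method described above values a company as the present value of its expected cash flows. A minimal discounted-cash-flow sketch, with a Gordon-growth terminal value; all figures and rates are hypothetical, not from the paper's case study:

```python
# Yield (income) approach: present value of forecast cash flows plus a
# terminal value. All figures, the discount rate, and the terminal
# growth rate are illustrative assumptions.
def dcf_value(cash_flows, discount_rate, terminal_growth):
    # Discount each forecast-year cash flow back to the present.
    pv = sum(cf / (1 + discount_rate) ** t
             for t, cf in enumerate(cash_flows, start=1))
    # Gordon-growth terminal value, discounted from the last forecast year.
    last = cash_flows[-1] * (1 + terminal_growth)
    terminal = last / (discount_rate - terminal_growth)
    pv += terminal / (1 + discount_rate) ** len(cash_flows)
    return pv

# Cash-flow forecasts would come from the time-series or regression
# analysis the paper discusses; here they are simply assumed.
value = dcf_value([1.2, 1.3, 1.4], discount_rate=0.10, terminal_growth=0.02)
```

The sensitivity of `value` to the discount and growth rates is exactly why the paper stresses the quality of the input variables.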
Electrical resistivity tomography to delineate greenhouse soil variability
NASA Astrophysics Data System (ADS)
Rossi, R.; Amato, M.; Bitella, G.; Bochicchio, R.
2013-03-01
Appropriate management of soil spatial variability is an important tool for optimizing farming inputs, increasing yield and reducing the environmental impact of field crops. Under greenhouses, several factors such as non-uniform irrigation and localized soil compaction can severely affect yield and quality. Additionally, if soil spatial variability is not taken into account, yield deficiencies are often compensated for by extra volumes of crop inputs; as a result, over-irrigation and over-fertilization may occur in some parts of the field. Technology for spatially sound management of greenhouse crops is therefore needed to increase yield and quality and to address sustainability. In this experiment, 2D electrical resistivity tomography was used as an exploratory tool to characterize greenhouse soil variability and its relation to wild rocket yield. Soil resistivity matched biomass variation well (R2=0.70), and was linked to differences in soil bulk density (R2=0.90) and clay content (R2=0.77). Electrical resistivity tomography shows great potential in horticulture, where there is a growing demand for sustainability coupled with the necessity of stabilizing yield and product quality.