NASA Technical Reports Server (NTRS)
Myers, Jerry G.; Young, M.; Goodenow, Debra A.; Keenan, A.; Walton, M.; Boley, L.
2015-01-01
Model and simulation (MS) credibility is defined as the quality to elicit belief or trust in MS results. NASA-STD-7009 [1] delineates eight components (Verification, Validation, Input Pedigree, Results Uncertainty, Results Robustness, Use History, MS Management, People Qualifications) that address quantifying model credibility, and provides guidance to model developers, analysts, and end users for assessing MS credibility. Of the eight characteristics, input pedigree, i.e., the quality of the data used to develop model input parameters, governing functions, or initial conditions, can vary significantly. These data quality differences have varying consequences across the range of MS applications. NASA-STD-7009 requires that the lowest input data quality be used to represent the entire set of input data when scoring the input pedigree credibility of the model. This requirement provides a conservative assessment of model inputs and maximizes the communication of the potential risk of using model outputs. Unfortunately, in practice, this may result in an overly pessimistic communication of the MS output, undermining the credibility of simulation predictions to decision makers. This presentation proposes an alternative assessment mechanism that utilizes results parameter robustness, also known as model input sensitivity, to improve the credibility scoring process for specific simulations.
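The two scoring philosophies contrasted above can be sketched in a few lines. The min rule follows the standard's description; the sensitivity-weighted variant is only a hypothetical illustration of the proposed direction, not the authors' exact formulation (the 0-4 scale and the weights are assumptions):

```python
def input_pedigree_score(component_scores):
    """NASA-STD-7009-style conservative rule described above: the
    lowest-quality input represents the entire input set."""
    return min(component_scores)

def sensitivity_weighted_score(scores, sensitivities):
    """Hypothetical alternative (illustrative only): down-weight
    low-quality inputs to which the results are insensitive."""
    total = sum(sensitivities)
    return sum(s * w for s, w in zip(scores, sensitivities)) / total

scores = [4, 1, 3]         # per-input pedigree scores (assumed 0-4 scale)
sens = [0.7, 0.05, 0.25]   # assumed normalized output sensitivities
conservative = input_pedigree_score(scores)        # dominated by worst input
weighted = sensitivity_weighted_score(scores, sens)  # reflects what matters
```

Here the single low-quality input (score 1) drives the conservative score, but because the output is barely sensitive to it, the weighted score stays high.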
Sensitivity of Rainfall-runoff Model Parametrization and Performance to Potential Evaporation Inputs
NASA Astrophysics Data System (ADS)
Jayathilake, D. I.; Smith, T. J.
2017-12-01
Many watersheds of interest are confronted with insufficient data and poor process understanding. Therefore, understanding the relative importance of input data types and the impact of different qualities on model performance, parameterization, and fidelity is critically important to improving hydrologic models. In this paper, changes in model parameterization and performance are explored with respect to four different potential evapotranspiration (PET) products of varying quality. For each PET product, two widely used, conceptual rainfall-runoff models are calibrated with multiple objective functions to a sample of 20 basins included in the MOPEX data set and analyzed to understand how model behavior varied. Model results are further analyzed by classifying catchments as energy- or water-limited using the Budyko framework. The results demonstrated that model fit was largely unaffected by the quality of the PET inputs. However, model parameterizations were clearly sensitive to PET inputs, as their production parameters adjusted to counterbalance input errors. Despite this, changes in model robustness were not observed for either model across the four PET products, although robustness was affected by model structure.
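The Budyko classification used to stratify the catchments reduces to comparing the aridity index PET/P against unity. A minimal sketch (the annual totals below are hypothetical, not MOPEX values):

```python
def classify_budyko(pet, precip):
    """Classify a catchment in the Budyko framework by its aridity
    index AI = PET / P: energy-limited when AI < 1 (evaporation is
    capped by available energy), water-limited when AI > 1."""
    ai = pet / precip
    label = "energy-limited" if ai < 1.0 else "water-limited"
    return label, ai

# Hypothetical annual totals in mm.
humid = classify_budyko(pet=700.0, precip=1200.0)
arid = classify_budyko(pet=1400.0, precip=600.0)
```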
Nonstationary multivariate modeling of cerebral autoregulation during hypercapnia.
Kostoglou, Kyriaki; Debert, Chantel T; Poulin, Marc J; Mitsis, Georgios D
2014-05-01
We examined the time-varying characteristics of cerebral autoregulation and hemodynamics during a step hypercapnic stimulus by using recursively estimated multivariate (two-input) models which quantify the dynamic effects of mean arterial blood pressure (ABP) and end-tidal CO2 tension (PETCO2) on middle cerebral artery blood flow velocity (CBFV). Beat-to-beat values of ABP and CBFV, as well as breath-to-breath values of PETCO2 during baseline and sustained euoxic hypercapnia, were obtained in 8 female subjects. The multiple-input, single-output models used were based on the Laguerre expansion technique, and their parameters were updated using recursive least squares with multiple forgetting factors. The results reveal the presence of nonstationarities that confirm previously reported effects of hypercapnia on autoregulation, i.e., a decrease in the ABP phase lead, and suggest that incorporating PETCO2 as an additional model input yields less time-varying estimates of dynamic pressure autoregulation than those obtained from single-input (ABP-CBFV) models. Copyright © 2013 IPEM. Published by Elsevier Ltd. All rights reserved.
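Recursive least squares with forgetting, the estimator family used above, can be sketched as follows. This simplified version uses a single forgetting factor and a scalar drifting coefficient rather than the paper's multiple-forgetting-factor, Laguerre-based formulation:

```python
import numpy as np

def rls_forgetting(x, y, lam=0.98, delta=100.0):
    """Recursive least squares with exponential forgetting.

    Tracks time-varying coefficients theta_k in y_k ~ x_k @ theta_k.
    lam < 1 discounts old data, allowing nonstationary estimates.
    """
    n = x.shape[1]
    theta = np.zeros(n)
    P = delta * np.eye(n)          # large initial covariance
    history = []
    for xk, yk in zip(x, y):
        Px = P @ xk
        k = Px / (lam + xk @ Px)   # gain vector
        theta = theta + k * (yk - xk @ theta)
        P = (P - np.outer(k, Px)) / lam
        history.append(theta.copy())
    return np.array(history)

# Example: the true slope drifts from 1.0 to 2.0 halfway through.
rng = np.random.default_rng(0)
x = rng.normal(size=(400, 1))
true_slope = np.where(np.arange(400) < 200, 1.0, 2.0)
y = true_slope * x[:, 0] + 0.01 * rng.normal(size=400)
est = rls_forgetting(x, y)
```

The forgetting factor trades tracking speed against estimate variance; with lam = 0.98 the effective memory is roughly 50 samples.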
Patel, Mainak
2018-01-15
The spiking of barrel regular-spiking (RS) cells is tuned for both whisker deflection direction and velocity. Velocity tuning arises due to thalamocortical (TC) synchrony (but not spike quantity) varying with deflection velocity, coupled with feedforward inhibition, while direction selectivity is not fully understood, though may be due partly to direction tuning of TC spiking. Data show that as deflection direction deviates from the preferred direction of an RS cell, excitatory input to the RS cell diminishes minimally, but temporally shifts to coincide with the time-lagged inhibitory input. This work constructs a realistic large-scale model of a barrel; model RS cells exhibit velocity and direction selectivity due to TC input dynamics, with the experimentally observed sharpening of direction tuning with decreasing velocity. The model puts forth the novel proposal that RS→RS synapses can naturally and simply account for the unexplained direction dependence of RS cell inputs - as deflection direction deviates from the preferred direction of an RS cell, and TC input declines, RS→RS synaptic transmission buffers the decline in total excitatory input and causes a shift in timing of the excitatory input peak from the peak in TC input to the delayed peak in RS input. The model also provides several experimentally testable predictions on the velocity dependence of RS cell inputs. This model is the first, to my knowledge, to study the interaction of direction and velocity and propose physiological mechanisms for the stimulus dependence in the timing and amplitude of RS cell inputs. Copyright © 2017 IBRO. Published by Elsevier Ltd. All rights reserved.
Combined non-parametric and parametric approach for identification of time-variant systems
NASA Astrophysics Data System (ADS)
Dziedziech, Kajetan; Czop, Piotr; Staszewski, Wieslaw J.; Uhl, Tadeusz
2018-03-01
Identification of systems, structures and machines with variable physical parameters is a challenging task, especially when time-varying vibration modes are involved. The paper proposes a new combined, two-step - i.e. non-parametric and parametric - modelling approach in order to determine time-varying vibration modes based on input-output measurements. Single-degree-of-freedom (SDOF) vibration modes from the multi-degree-of-freedom (MDOF) non-parametric system representation are extracted in the first step with the use of time-frequency wavelet-based filters. The second step involves time-varying parametric representation of the extracted modes with the use of recursive linear autoregressive-moving-average with exogenous inputs (ARMAX) models. The combined approach is demonstrated using system identification analysis based on an experimental mass-varying MDOF frame-like structure subjected to random excitation. The results show that the proposed combined method correctly captures the dynamics of the analysed structure, using minimal a priori information about the model.
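The first, non-parametric step (isolating one SDOF mode from a MDOF response) can be illustrated with a crude frequency-domain filter; the paper uses wavelet-based time-frequency filters instead, so this is only a stand-in under simplifying assumptions (stationary signal, exactly periodic record):

```python
import numpy as np

def fft_bandpass(sig, fs, f_lo, f_hi):
    """Isolate a single vibration mode by zeroing FFT bins outside a
    band. A simplistic stand-in for time-frequency wavelet filtering:
    it extracts an SDOF component from a MDOF response."""
    spec = np.fft.rfft(sig)
    freqs = np.fft.rfftfreq(len(sig), d=1.0 / fs)
    mask = (freqs >= f_lo) & (freqs <= f_hi)
    return np.fft.irfft(spec * mask, n=len(sig))

# Synthetic two-mode response: 5 Hz and 30 Hz components.
fs = 200.0
t = np.arange(0, 4.0, 1.0 / fs)
two_modes = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 30 * t)
mode1 = fft_bandpass(two_modes, fs, 2.0, 10.0)   # recover the 5 Hz mode
```

The extracted SDOF record could then be fed to a recursive ARMAX estimator for the parametric second step.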
Investigation of Effects of Varying Model Inputs on Mercury Deposition Estimates in the Southwest US
The Community Multiscale Air Quality (CMAQ) model version 4.7.1 was used to simulate mercury wet and dry deposition for a domain covering the continental United States (US). The simulations used MM5-derived meteorological input fields and the US Environmental Protection Agency (E...
Conditional parametric models for storm sewer runoff
NASA Astrophysics Data System (ADS)
Jonsdottir, H.; Nielsen, H. Aa; Madsen, H.; Eliasson, J.; Palsson, O. P.; Nielsen, M. K.
2007-05-01
The method of conditional parametric modeling is introduced for flow prediction in a sewage system. It is a well-known fact that in hydrological modeling the response (runoff) to input (precipitation) varies depending on soil moisture and several other factors. Consequently, nonlinear input-output models are needed. The model formulation described in this paper is similar to traditional linear models like finite impulse response (FIR) and autoregressive exogenous (ARX) models, except that the parameters vary as a function of some external variables. The parameter variation is modeled by local lines, using kernels for local linear regression. As such, the method might be referred to as a nearest neighbor method. The results achieved in this study were compared to results from the conventional linear methods, FIR and ARX. The increase in the coefficient of determination is substantial. Furthermore, the new approach conserves the mass balance better. Hence this new approach looks promising for various hydrological models and analyses.
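The core idea (coefficients that vary smoothly with an external variable, estimated by kernel-weighted local regression) can be sketched as follows. This uses a locally constant gain rather than the paper's local lines, and the wetness variable and numbers are illustrative assumptions:

```python
import numpy as np

def kernel_gain(u, x, y, u0, h=0.2):
    """Estimate the input-output gain a(u0) in y = a(u) * x by
    Gaussian-kernel-weighted least squares around the conditioning
    value u0. Mimics the conditional-parametric idea: an FIR/ARX-style
    coefficient varying smoothly with an external variable u
    (e.g. soil moisture). h is the kernel bandwidth."""
    w = np.exp(-0.5 * ((u - u0) / h) ** 2)
    return np.sum(w * x * y) / np.sum(w * x * x)

rng = np.random.default_rng(1)
u = rng.uniform(0, 1, 2000)      # external variable (e.g. wetness)
x = rng.normal(size=2000)        # input (precipitation anomaly)
y = (0.2 + 0.6 * u) * x          # runoff gain grows with wetness
g_dry = kernel_gain(u, x, y, u0=0.1)
g_wet = kernel_gain(u, x, y, u0=0.9)
```

The recovered gain is larger near wet conditions than dry ones, reproducing the state-dependent response the abstract describes.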
From Spiking Neuron Models to Linear-Nonlinear Models
Ostojic, Srdjan; Brunel, Nicolas
2011-01-01
Neurons transform time-varying inputs into action potentials emitted stochastically at a time dependent rate. The mapping from current input to output firing rate is often represented with the help of phenomenological models such as the linear-nonlinear (LN) cascade, in which the output firing rate is estimated by applying to the input successively a linear temporal filter and a static non-linear transformation. These simplified models leave out the biophysical details of action potential generation. It is not a priori clear to which extent the input-output mapping of biophysically more realistic, spiking neuron models can be reduced to a simple linear-nonlinear cascade. Here we investigate this question for the leaky integrate-and-fire (LIF), exponential integrate-and-fire (EIF) and conductance-based Wang-Buzsáki models in presence of background synaptic activity. We exploit available analytic results for these models to determine the corresponding linear filter and static non-linearity in a parameter-free form. We show that the obtained functions are identical to the linear filter and static non-linearity determined using standard reverse correlation analysis. We then quantitatively compare the output of the corresponding linear-nonlinear cascade with numerical simulations of spiking neurons, systematically varying the parameters of input signal and background noise. We find that the LN cascade provides accurate estimates of the firing rates of spiking neurons in most of parameter space. For the EIF and Wang-Buzsáki models, we show that the LN cascade can be reduced to a firing rate model, the timescale of which we determine analytically. Finally we introduce an adaptive timescale rate model in which the timescale of the linear filter depends on the instantaneous firing rate. This model leads to highly accurate estimates of instantaneous firing rates. PMID:21283777
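A minimal LN cascade of the kind discussed above can be sketched as follows; the exponential filter and threshold-linear nonlinearity are generic assumptions, not the parameter-free forms derived analytically in the paper:

```python
import numpy as np

def ln_cascade(stim, dt, tau=0.02, gain=50.0, theta=0.0):
    """Minimal linear-nonlinear (LN) cascade rate model (illustrative).

    Linear stage: causal exponential filter with timescale tau.
    Nonlinear stage: static threshold-linear map from the filtered
    signal to a nonnegative firing rate (Hz)."""
    t = np.arange(0, 5 * tau, dt)
    kernel = np.exp(-t / tau)
    kernel /= kernel.sum() * dt          # unit DC gain
    filtered = np.convolve(stim, kernel, mode="full")[: len(stim)] * dt
    return gain * np.maximum(filtered - theta, 0.0)   # rate >= 0

dt = 0.001
time = np.arange(0, 1.0, dt)
stimulus = np.sin(2 * np.pi * 2 * time)   # slow 2 Hz input current
rate = ln_cascade(stimulus, dt)
```

A slow input passes the filter nearly unchanged, so the predicted rate follows the rectified stimulus; faster inputs would be attenuated by the linear stage, which is the regime where the paper's adaptive-timescale refinement matters.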
Sivaramakrishnan, Shyam; Rajamani, Rajesh; Johnson, Bruce D
2009-01-01
Respiratory CO2 measurement (capnography) is an important diagnostic tool that lacks inexpensive and wearable sensors. This paper develops techniques to enable the use of inexpensive but slow CO2 sensors for breath-by-breath tracking of CO2 concentration. This is achieved by mathematically modeling the dynamic response and using model-inversion techniques to predict the input CO2 concentration from the slow-varying output. Experiments are designed to identify the model dynamics and extract relevant model parameters for a solid-state room-monitoring CO2 sensor. A second-order model that accounts for flow through the sensor's filter and casing is found to be accurate in describing the sensor's slow response. The resulting estimate is compared with a standard-of-care respiratory CO2 analyzer and shown to effectively track variation in breath-by-breath CO2 concentration. This methodology is potentially useful for measuring fast-varying inputs to any slow sensor.
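The model-inversion idea can be sketched with a first-order sensor for brevity (the paper fits a second-order model covering the filter and casing; the time constant and signal below are assumptions):

```python
import numpy as np

def invert_first_order(y, dt, tau):
    """Recover a fast-varying input from a slow sensor's output by
    model inversion: for tau * dy/dt = u - y, we get
    u = y + tau * dy/dt (derivative by finite differences)."""
    dydt = np.gradient(y, dt)
    return y + tau * dydt

# Simulate a slow sensor (tau = 5 s) sampling a 0.25 Hz "breath" signal.
dt, tau = 0.01, 5.0
t = np.arange(0, 40, dt)
u_true = 4.0 + np.sin(2 * np.pi * 0.25 * t)     # true CO2 (%)
y = np.empty_like(t)
y[0] = u_true[0]
for k in range(1, len(t)):                       # Euler integration
    y[k] = y[k-1] + dt * (u_true[k-1] - y[k-1]) / tau
u_est = invert_first_order(y, dt, tau)
```

The raw output is heavily attenuated, yet the inverted estimate restores nearly the full breath-by-breath swing; in practice differentiation amplifies measurement noise, so smoothing or a Kalman-style estimator would be needed.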
Aircraft Fault Detection Using Real-Time Frequency Response Estimation
NASA Technical Reports Server (NTRS)
Grauer, Jared A.
2016-01-01
A real-time method for estimating time-varying aircraft frequency responses from input and output measurements was demonstrated. The Bat-4 subscale airplane was used with NASA Langley Research Center's AirSTAR unmanned aerial flight test facility to conduct flight tests and collect data for dynamic modeling. Orthogonal phase-optimized multisine inputs, summed with pilot stick and pedal inputs, were used to excite the responses. The aircraft was tested in its normal configuration and with emulated failures, which included a stuck left ruddervator and an increased command path latency. No prior knowledge of a dynamic model was used or available for the estimation. The longitudinal short period dynamics were investigated in this work. Time-varying frequency responses and stability margins were tracked well using a 20 second sliding window of data, as compared to a post-flight analysis using output error parameter estimation and a low-order equivalent system model. This method could be used in a real-time fault detection system, or for other applications of dynamic modeling such as real-time verification of stability margins during envelope expansion tests.
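The heart of frequency response estimation with multisine excitation can be sketched as a ratio of Fourier coefficients at the probe frequencies. This omits the 20-second sliding window and noise handling of the flight-test method, and the first-order test system is an assumption:

```python
import numpy as np

def freq_response(u, y, fs, freqs):
    """Estimate a frequency response at excited frequencies as the
    ratio of output to input Fourier coefficients, Y(f)/U(f). Each
    multisine probe frequency carries input power, so the ratio is
    well posed there (simplified: whole-record FFT, no averaging)."""
    n = len(u)
    U, Y = np.fft.rfft(u), np.fft.rfft(y)
    bins = np.round(np.asarray(freqs) * n / fs).astype(int)
    return Y[bins] / U[bins]

# Known first-order lag; continuous FRF is a / (a + j*2*pi*f).
fs, a = 100.0, 5.0
t = np.arange(0, 20, 1 / fs)
probes = [0.5, 1.0, 2.0]                          # multisine frequencies
u = sum(np.sin(2 * np.pi * f * t + p) for f, p in zip(probes, [0, 1, 2]))
y = np.empty_like(u)
y[0] = 0.0
phi = np.exp(-a / fs)                             # zero-order-hold step
for k in range(1, len(u)):
    y[k] = phi * y[k-1] + (1 - phi) * u[k-1]
H = freq_response(u, y, fs, probes)
```

Applying the same ratio over a sliding window, as in the flight tests, yields time-varying gain and phase from which stability margins can be tracked.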
Lowet, Eric; Roberts, Mark; Hadjipapas, Avgis; Peter, Alina; van der Eerden, Jan; De Weerd, Peter
2015-02-01
Fine-scale temporal organization of cortical activity in the gamma range (∼25-80Hz) may play a significant role in information processing, for example by neural grouping ('binding') and phase coding. Recent experimental studies have shown that the precise frequency of gamma oscillations varies with input drive (e.g. visual contrast) and that it can differ among nearby cortical locations. This has challenged theories assuming widespread gamma synchronization at a fixed common frequency. In the present study, we investigated which principles govern gamma synchronization in the presence of input-dependent frequency modulations and whether they are detrimental for meaningful input-dependent gamma-mediated temporal organization. To this aim, we constructed a biophysically realistic excitatory-inhibitory network able to express different oscillation frequencies at nearby spatial locations. Similarly to cortical networks, the model was topographically organized with spatially local connectivity and spatially-varying input drive. We analyzed gamma synchronization with respect to phase-locking, phase-relations and frequency differences, and quantified the stimulus-related information represented by gamma phase and frequency. By stepwise simplification of our models, we found that the gamma-mediated temporal organization could be reduced to basic synchronization principles of weakly coupled oscillators, where input drive determines the intrinsic (natural) frequency of oscillators. The gamma phase-locking, the precise phase relation and the emergent (measurable) frequencies were determined by two principal factors: the detuning (intrinsic frequency difference, i.e. local input difference) and the coupling strength. In addition to frequency coding, gamma phase contained complementary stimulus information. Crucially, the phase code reflected input differences, but not the absolute input level. 
This property of relative input-to-phase conversion, contrasting with latency codes or slower oscillation phase codes, may resolve conflicting experimental observations on gamma phase coding. Our modeling results offer clear testable experimental predictions. We conclude that input-dependency of gamma frequencies could be essential rather than detrimental for meaningful gamma-mediated temporal organization of cortical activity.
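The reduction to weakly coupled oscillators can be illustrated with two Kuramoto phase oscillators, where the locked phase relation is set by detuning and coupling strength; the frequencies and coupling below are illustrative values, not fitted network parameters:

```python
import numpy as np

def two_oscillators(w1, w2, K, dt=0.001, steps=20000):
    """Two weakly coupled phase (Kuramoto) oscillators. Locking
    requires |w1 - w2| <= 2K, and the locked phase difference is
    asin((w1 - w2) / (2K)): detuning (local input difference) is
    converted into a phase relation."""
    p1, p2 = 0.0, 0.0
    for _ in range(steps):
        d1 = w1 + K * np.sin(p2 - p1)
        d2 = w2 + K * np.sin(p1 - p2)
        p1 += dt * d1
        p2 += dt * d2
    return p1 - p2

# Detuning stands in for a local input-drive (e.g. contrast) difference:
# 42 Hz vs 40 Hz gamma oscillators with 3 Hz-scale coupling.
dphi = two_oscillators(w1=2 * np.pi * 42, w2=2 * np.pi * 40, K=2 * np.pi * 3)
```

The simulated phase difference settles at asin(1/3), illustrating the relative input-to-phase conversion described above: the phase lag encodes the input difference, not the absolute input level.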
Numerical modeling of rapidly varying flows using HEC-RAS and WSPG models.
Rao, Prasada; Hromadka, Theodore V
2016-01-01
The performance of two popular hydraulic models (HEC-RAS and WSPG) for modeling hydraulic jumps in an open channel is investigated. The numerical solutions are compared with a new experimental data set obtained for varying channel bottom slopes and flow rates. Both models satisfactorily predict the flow depths and the location of the jump. The results indicate that the models' output is sensitive to the chosen value of the roughness coefficient. For this application, the WSPG model is easier to implement, requiring fewer input variables.
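The hydraulic jump both models must capture is governed by the momentum (Bélanger) equation relating sequent depths in a rectangular channel; a quick sketch with hypothetical channel numbers:

```python
import numpy as np

def conjugate_depth(y1, q, g=9.81):
    """Sequent (conjugate) depth of a hydraulic jump in a rectangular
    channel from the momentum equation:
        y2 / y1 = ((1 + 8 * Fr1**2) ** 0.5 - 1) / 2,
    with Fr1 = q / (y1 * sqrt(g * y1)) the upstream Froude number and
    q the discharge per unit width (m^2/s)."""
    fr1 = q / (y1 * np.sqrt(g * y1))
    return y1 * 0.5 * (np.sqrt(1.0 + 8.0 * fr1**2) - 1.0), fr1

# Hypothetical supercritical approach flow: 0.2 m deep, 0.5 m^2/s.
y2, fr1 = conjugate_depth(y1=0.2, q=0.5)
```

A model's predicted jump location is where the computed supercritical and subcritical profiles satisfy this depth relation, which is why the roughness coefficient (through the profiles) shifts the jump.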
Models for forecasting energy use in the US farm sector
NASA Astrophysics Data System (ADS)
Christensen, L. R.
1981-07-01
Econometric models were developed and estimated for the purpose of forecasting electricity and petroleum demand in US agriculture. A structural approach is pursued which takes account of the fact that the quantity demanded of any one input is a decision made in conjunction with other input decisions. Three different functional forms of varying degrees of complexity are specified for the structural cost function, which describes the cost of production as a function of the level of output and factor prices. Demand for materials (all purchased inputs) is derived from these models. A separate model, which breaks this demand up into demand for the four components of materials, is used to produce forecasts of electricity and petroleum in a stepwise manner.
Li, Zhijun; Su, Chun-Yi
2013-09-01
In this paper, adaptive neural network control is investigated for single-master-multiple-slaves teleoperation in consideration of time delays and input dead-zone uncertainties for multiple mobile manipulators carrying a common object in a cooperative manner. Firstly, concise dynamics of teleoperation systems consisting of a single master robot, multiple coordinated slave robots, and the object are developed in the task space. To handle asymmetric time-varying delays in communication channels and unknown asymmetric input dead zones, the nonlinear dynamics of the teleoperation system are transformed into two subsystems through feedback linearization: local master or slave dynamics including the unknown input dead zones and delayed dynamics for the purpose of synchronization. Then, a model reference neural network control strategy based on linear matrix inequalities (LMI) and adaptive techniques is proposed. The developed control approach ensures that the defined tracking errors converge to zero whereas the coordination internal force errors remain bounded and can be made arbitrarily small. Throughout this paper, stability analysis is performed via explicit Lyapunov techniques under specific LMI conditions. The proposed adaptive neural network control scheme is robust against motion disturbances, parametric uncertainties, time-varying delays, and input dead zones, which is validated by simulation studies.
Improved LTVMPC design for steering control of autonomous vehicle
NASA Astrophysics Data System (ADS)
Velhal, Shridhar; Thomas, Susy
2017-01-01
An improved linear time-varying model predictive control (LTVMPC) scheme for steering control of an autonomous vehicle running on a slippery road is presented. The control strategy is designed such that the vehicle follows a predefined trajectory with the highest possible entry speed. In linear time-varying model predictive control, the nonlinear vehicle model is successively linearized at each sampling instant. This linear time-varying model is used to design the MPC, which predicts over the future horizon. By incorporating the predicted input horizon in each successive linearization, the effectiveness of the controller is improved. Tracking performance using front-wheel steering and four-wheel braking is presented to illustrate the effectiveness of the proposed method.
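The successive-linearization step at the core of LTVMPC can be sketched as finite-difference Jacobians of the discrete dynamics at the current operating point. The toy pendulum-like model below is an assumption standing in for the vehicle model:

```python
import numpy as np

def linearize(f, x0, u0, eps=1e-6):
    """One successive-linearization step: central finite-difference
    Jacobians A = df/dx and B = df/du of discrete dynamics at (x0, u0),
    giving the local linear model x+ ~ A dx + B du used by the MPC."""
    x0, u0 = np.asarray(x0, float), np.asarray(u0, float)
    m = len(f(x0, u0))
    A = np.empty((m, len(x0)))
    B = np.empty((m, len(u0)))
    for i in range(len(x0)):
        dx = np.zeros_like(x0)
        dx[i] = eps
        A[:, i] = (f(x0 + dx, u0) - f(x0 - dx, u0)) / (2 * eps)
    for j in range(len(u0)):
        du = np.zeros_like(u0)
        du[j] = eps
        B[:, j] = (f(x0, u0 + du) - f(x0, u0 - du)) / (2 * eps)
    return A, B

# Toy nonlinear discrete model (not the paper's vehicle model):
# x = [angle, rate], input u = [torque], Euler step dt.
dt = 0.05
def f(x, u):
    return np.array([x[0] + dt * x[1], x[1] + dt * (u[0] - np.sin(x[0]))])

A, B = linearize(f, [0.3, 0.0], [0.0])
```

Repeating this at every sampling instant along the predicted trajectory yields the time-varying (A, B) sequence the MPC optimizes over.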
Performance Optimizing Adaptive Control with Time-Varying Reference Model Modification
NASA Technical Reports Server (NTRS)
Nguyen, Nhan T.; Hashemi, Kelley E.
2017-01-01
This paper presents a new adaptive control approach that involves a performance optimization objective. The control synthesis involves the design of a performance optimizing adaptive controller from a subset of control inputs. The resulting effect of the performance optimizing adaptive controller is to modify the initial reference model into a time-varying reference model which satisfies the performance optimization requirement obtained from an optimal control problem. The time-varying reference model modification is accomplished by the real-time solutions of the time-varying Riccati and Sylvester equations coupled with the least-squares parameter estimation of the sensitivities of the performance metric. The effectiveness of the proposed method is demonstrated by an application of maneuver load alleviation control for a flexible aircraft.
Giugliano, Michele; La Camera, Giancarlo; Fusi, Stefano; Senn, Walter
2008-11-01
The response of a population of neurons to time-varying synaptic inputs can show a rich phenomenology, hardly predictable from the dynamical properties of the membrane's inherent time constants. For example, a network of neurons in a state of spontaneous activity can respond significantly more rapidly than each single neuron taken individually. Under the assumption that the statistics of the synaptic input is the same for a population of similarly behaving neurons (mean field approximation), it is possible to greatly simplify the study of neural circuits, both in the case in which the statistics of the input are stationary (reviewed in La Camera et al. in Biol Cybern, 2008) and in the case in which they are time varying and unevenly distributed over the dendritic tree. Here, we review theoretical and experimental results on the single-neuron properties that are relevant for the dynamical collective behavior of a population of neurons. We focus on the response of integrate-and-fire neurons and real cortical neurons to long-lasting, noisy, in vivo-like stationary inputs and show how the theory can predict the observed rhythmic activity of cultures of neurons. We then show how cortical neurons adapt on multiple time scales in response to input with stationary statistics in vitro. Next, we review how it is possible to study the general response properties of a neural circuit to time-varying inputs by estimating the response of single neurons to noisy sinusoidal currents. Finally, we address the dendrite-soma interactions in cortical neurons leading to gain modulation and spike bursts, and show how these effects can be captured by a two-compartment integrate-and-fire neuron. Most of the experimental results reviewed in this article have been successfully reproduced by simple integrate-and-fire model neurons.
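The integrate-and-fire neuron underlying most of the reviewed results can be sketched in a few lines; the dimensionless units and parameter values below are illustrative assumptions, not taken from the review:

```python
import numpy as np

def lif_spikes(current, dt=1e-4, tau=0.02, v_th=1.0, v_reset=0.0):
    """Leaky integrate-and-fire neuron (dimensionless units):
    tau * dv/dt = -v + I(t); emit a spike and reset when v >= v_th."""
    v, spikes = 0.0, []
    for k, i_k in enumerate(current):
        v += dt * (-v + i_k) / tau
        if v >= v_th:
            spikes.append(k * dt)
            v = v_reset
    return spikes

# Constant suprathreshold drive: the rate should approach the standard
# LIF formula rate = 1 / (tau * ln(I / (I - v_th))).
dt, tau, I0 = 1e-4, 0.02, 1.5
n_steps = 10000                      # 1 s of simulation at dt = 1e-4
spikes = lif_spikes(np.full(n_steps, I0), dt=dt, tau=tau)
rate = len(spikes) / (n_steps * dt)
```

Driving the same model with noisy or sinusoidally modulated currents, as in the reviewed experiments, probes the population's dynamical transfer properties rather than this static rate.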
NASA Astrophysics Data System (ADS)
Hale, R. L.; Grimm, N. B.; Vorosmarty, C. J.
2014-12-01
An ongoing challenge for society is to harness the benefits of phosphorus (P) while minimizing negative effects on downstream ecosystems. To meet this challenge we must understand the controls on the delivery of anthropogenic P from landscapes to downstream ecosystems. We used a model that incorporates P inputs to watersheds, hydrology, and infrastructure (sewers, waste-water treatment plants, and reservoirs) to reconstruct historic P yields for the northeastern U.S. from 1930 to 2002. At the regional scale, increases in P inputs were paralleled by increased fractional retention, thus P loading to the coast did not increase significantly. We found that temporal variation in regional P yield was correlated with P inputs. Spatial patterns of watershed P yields were best predicted by inputs, but the correlation between inputs and yields in space weakened over time, due to infrastructure development. Although the magnitude of infrastructure effect was small, its role changed over time and was important in creating spatial and temporal heterogeneity in input-yield relationships. We then conducted a hierarchical cluster analysis to identify a typology of anthropogenic P cycling, using data on P inputs (fertilizer, livestock feed, and human food), infrastructure (dams, wastewater treatment plants, sewers), and hydrology (runoff coefficient). We identified 6 key types of watersheds that varied significantly in climate, infrastructure, and the types and amounts of P inputs. Annual watershed P yields and retention varied significantly across watershed types. Although land cover varied significantly across typologies, clusters based on land cover alone did not explain P budget patterns, suggesting that this variable is insufficient to understand patterns of P cycling across large spatial scales. Furthermore, clusters varied over time as patterns of climate, P use, and infrastructure changed. 
Our results demonstrate that the drivers of P cycles are spatially and temporally heterogeneous, yet they also suggest that a relatively simple typology of watersheds can be useful for understanding regional P cycles and may help inform P management approaches.
Using global sensitivity analysis of demographic models for ecological impact assessment.
Aiello-Lammens, Matthew E; Akçakaya, H Resit
2017-02-01
Population viability analysis (PVA) is widely used to assess population-level impacts of environmental changes on species. When combined with sensitivity analysis, PVA yields insights into the effects of parameter and model structure uncertainty. This helps researchers prioritize efforts for further data collection so that model improvements are efficient and helps managers prioritize conservation and management actions. Usually, sensitivity is analyzed by varying one input parameter at a time and observing the influence that variation has over model outcomes. This approach does not account for interactions among parameters. Global sensitivity analysis (GSA) overcomes this limitation by varying several model inputs simultaneously. Then, regression techniques allow measuring the importance of input-parameter uncertainties. In many conservation applications, the goal of demographic modeling is to assess how different scenarios of impact or management cause changes in a population. This is challenging because the uncertainty of input-parameter values can be confounded with the effect of impacts and management actions. We developed a GSA method that separates model outcome uncertainty resulting from parameter uncertainty from that resulting from projected ecological impacts or simulated management actions, effectively separating the 2 main questions that sensitivity analysis asks. We applied this method to assess the effects of predicted sea-level rise on Snowy Plover (Charadrius nivosus). A relatively small number of replicate models (approximately 100) resulted in consistent measures of variable importance when not trying to separate the effects of ecological impacts from parameter uncertainty. However, many more replicate models (approximately 500) were required to separate these effects. These differences are important to consider when using demographic models to estimate ecological impacts of management actions. © 2016 Society for Conservation Biology.
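The GSA workflow described above (vary all inputs simultaneously, then measure importance by regression) can be sketched with standardized regression coefficients. The toy demographic response and parameter ranges are assumptions, and the linear-regression measure is only valid for near-linear responses:

```python
import numpy as np

def regression_importance(samples, outputs):
    """Standardized regression coefficients as a simple global
    sensitivity measure: all inputs are varied simultaneously and
    each |coefficient| reflects that input's contribution to output
    variation (illustrative; assumes a near-linear response)."""
    X = (samples - samples.mean(0)) / samples.std(0)
    y = (outputs - outputs.mean()) / outputs.std()
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.abs(coef)

# Toy demographic response: growth depends strongly on adult survival,
# weakly on fecundity, and not at all on a dummy parameter.
rng = np.random.default_rng(2)
n = 1000
survival = rng.uniform(0.7, 0.9, n)
fecundity = rng.uniform(1.0, 1.4, n)
dummy = rng.uniform(0.0, 1.0, n)
growth = 2.0 * survival + 0.3 * fecundity + 0.01 * rng.normal(size=n)
imp = regression_importance(
    np.column_stack([survival, fecundity, dummy]), growth)
```

Separating impact scenarios from parameter uncertainty, as the paper proposes, would amount to adding a scenario indicator to the regression and partitioning the explained variance accordingly.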
Deconvolution of noisy transient signals: a Kalman filtering application
DOE Office of Scientific and Technical Information (OSTI.GOV)
Candy, J.V.; Zicker, J.E.
The deconvolution of transient signals from noisy measurements is a common problem occurring in various tests at Lawrence Livermore National Laboratory. The transient deconvolution problem places atypical constraints on presently available algorithms. The Schmidt-Kalman filter, a time-varying, tunable predictor, is designed using a piecewise-constant model of the transient input signal. A simulation is developed to test the algorithm for various input signal bandwidths and different signal-to-noise ratios for the input and output sequences. The algorithm performance is reasonable.
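A Kalman-filter deconvolution along these lines can be sketched by augmenting the state with the unknown input, modeled as piecewise constant (a random walk). This is a plain augmented-state filter, not the Schmidt-Kalman formulation itself, and the first-order system and noise levels are assumptions:

```python
import numpy as np

def kalman_deconvolve(y, phi, gain, q_u, r):
    """Estimate an unknown input from noisy output measurements by
    augmenting the state with the input, modeled as a random walk:

        x[k+1] = phi * x[k] + gain * u[k]
        u[k+1] = u[k] + w[k],  var(w) = q_u   (piecewise constant)
        y[k]   = x[k] + v[k],  var(v) = r
    """
    F = np.array([[phi, gain], [0.0, 1.0]])
    H = np.array([[1.0, 0.0]])
    Q = np.diag([0.0, q_u])
    s, P = np.zeros(2), np.eye(2)
    u_hat = []
    for yk in y:
        s = F @ s                       # predict
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + r             # update
        K = (P @ H.T) / S
        s = s + K[:, 0] * (yk - s[0])
        P = P - K @ H @ P
        u_hat.append(s[1])
    return np.array(u_hat)

# First-order system driven by a step input; recover the step.
rng = np.random.default_rng(3)
phi, gain, n = 0.9, 0.1, 600
u_true = np.where(np.arange(n) < 300, 0.0, 1.0)
x = np.zeros(n)
for k in range(1, n):
    x[k] = phi * x[k-1] + gain * u_true[k-1]
y = x + 0.01 * rng.normal(size=n)
u_hat = kalman_deconvolve(y, phi, gain, q_u=1e-4, r=1e-4)
```

The process-noise variance q_u is the tuning knob the abstract alludes to: larger values track fast transients at the cost of noisier input estimates.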
Spectral analysis for nonstationary and nonlinear systems: a discrete-time-model-based approach.
He, Fei; Billings, Stephen A; Wei, Hua-Liang; Sarrigiannis, Ptolemaios G; Zhao, Yifan
2013-08-01
A new frequency-domain analysis framework for nonlinear time-varying systems is introduced based on parametric time-varying nonlinear autoregressive with exogenous input models. It is shown how the time-varying effects can be mapped to the generalized frequency response functions (FRFs) to track nonlinear features in frequency, such as intermodulation and energy transfer effects. A new mapping to the nonlinear output FRF is also introduced. A simulated example and the application to intracranial electroencephalogram data are used to illustrate the theoretical results.
ERIC Educational Resources Information Center
Walters, Christopher
2014-01-01
Studies of small-scale "model" early-childhood programs show that high-quality preschool can have transformative effects on human capital and economic outcomes. Evidence on the Head Start program is more mixed. Inputs and practices vary widely across Head Start centers, however, and little is known about variation in effectiveness within…
Simulated impacts of climate on hydrology can vary greatly as a function of the scale of the input data, model assumptions, and model structure. Four models are commonly used to simu...
Identification of differences in health impact modelling of salt reduction
Geleijnse, Johanna M.; van Raaij, Joop M. A.; Cappuccio, Francesco P.; Cobiac, Linda C.; Scarborough, Peter; Nusselder, Wilma J.; Jaccard, Abbygail; Boshuizen, Hendriek C.
2017-01-01
We examined whether specific input data and assumptions explain outcome differences in otherwise comparable health impact assessment models. Seven population health models estimating the impact of salt reduction on morbidity and mortality in western populations were compared on four sets of key features, their underlying assumptions and input data. Next, assumptions and input data were varied one by one in a default approach (the DYNAMO-HIA model) to examine how each influences the estimated health impact. Major differences in outcome were related to the size and shape of the dose-response relations between salt and blood pressure and between blood pressure and disease. Modifying the effect sizes in the salt-to-health association resulted in the largest change in health impact estimates (33% lower), whereas other changes had less influence. Differences in health impact assessment model structure and input data may affect the health impact estimate. Therefore, clearly defined assumptions and transparent reporting for different models are crucial. However, the estimated impact of salt reduction was substantial in all of the models used, emphasizing the need for public health actions. PMID:29182636
NASA Technical Reports Server (NTRS)
Smialek, James L.
2002-01-01
A cyclic oxidation interfacial spalling model has been developed in Part 1. The governing equations have been simplified here by substituting a new algebraic expression for the series (Good-Smialek approximation). This produced a direct relationship between cyclic oxidation weight change and model input parameters. It also allowed for the mathematical derivation of various descriptive parameters as a function of the inputs. It is shown that the maximum in weight change varies directly with the parabolic rate constant and cycle duration and inversely with the spall fraction, all to the 1/2 power. The number of cycles to reach maximum and zero weight change vary inversely with the spall fraction, and the ratio of these cycles is exactly 1:3 for most oxides. By suitably normalizing the weight change and cycle number, it is shown that all cyclic oxidation weight change model curves can be represented by one universal expression for a given oxide scale.
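The qualitative behavior such models capture — oxygen uptake dominates early cycles, interfacial spalling dominates later ones, so the weight-change curve rises to a maximum and then falls through zero — can be reproduced with a bare-bones cycle loop. This is an illustrative toy, not the Good-Smialek approximation itself; the rate constant, spall fraction, and the 0.3 oxygen mass fraction are assumed values:

```python
import math

def cyclic_weight_change(kp, tau, f, cycles):
    # toy interfacial-spalling model: parabolic scale growth during each
    # cycle, then a fraction f of the retained scale spalls on cooling
    retained, gained, spalled, history = 0.0, 0.0, 0.0, []
    for _ in range(cycles):
        grown = math.sqrt(retained**2 + kp * tau)   # parabolic kinetics
        gained += (grown - retained) * 0.3          # ~oxygen fraction of oxide
        loss = f * grown                            # spalled scale mass
        spalled += loss
        retained = grown - loss
        history.append(gained - spalled)            # net specimen weight change
    return history

h = cyclic_weight_change(kp=0.01, tau=1.0, f=0.01, cycles=2000)
print(h.index(max(h)), round(h[-1], 2))
```

Running it shows the characteristic maximum followed by steady weight loss; the analytic relationships quoted in the abstract (square-root dependence on kp, cycle duration, and spall fraction) describe exactly where that maximum lands.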
Development of a Stochastically-driven, Forward Predictive Performance Model for PEMFCs
NASA Astrophysics Data System (ADS)
Harvey, David Benjamin Paul
A one-dimensional multi-scale coupled, transient, and mechanistic performance model for a PEMFC membrane electrode assembly has been developed. The model explicitly includes each of the 5 layers within a membrane electrode assembly and solves for the transport of charge, heat, mass, species, dissolved water, and liquid water. Key features of the model include the use of a multi-step implementation of the HOR reaction on the anode, agglomerate catalyst sub-models for both the anode and cathode catalyst layers, a unique approach that links the composition of the catalyst layer to key properties within the agglomerate model and the implementation of a stochastic input-based approach for component material properties. The model employs a new methodology for validation using statistically varying input parameters and statistically-based experimental performance data; this model represents the first stochastic input driven unit cell performance model. The stochastic input driven performance model was used to identify optimal ionomer content within the cathode catalyst layer, demonstrate the role of material variation in potential low performing MEA materials, provide explanation for the performance of low-Pt loaded MEAs, and investigate the validity of transient-sweep experimental diagnostic methods.
Burkitt, A N
2006-08-01
The integrate-and-fire neuron model describes the state of a neuron in terms of its membrane potential, which is determined by the synaptic inputs and the injected current that the neuron receives. When the membrane potential reaches a threshold, an action potential (spike) is generated. This review considers the model in which the synaptic input varies periodically and is described by an inhomogeneous Poisson process, with both current and conductance synapses. The focus is on the mathematical methods that allow the output spike distribution to be analyzed, including first passage time methods and the Fokker-Planck equation. Recent interest in the response of neurons to periodic input has in part arisen from the study of stochastic resonance, which is the noise-induced enhancement of the signal-to-noise ratio. Networks of integrate-and-fire neurons behave in a wide variety of ways and have been used to model a variety of neural, physiological, and psychological phenomena. The properties of the integrate-and-fire neuron model with synaptic input described as a temporally homogeneous Poisson process are reviewed in an accompanying paper (Burkitt in Biol Cybern, 2006).
Size invariance does not hold for connectionist models: dangers of using a toy model.
Yamaguchi, Makoto
2004-03-01
Connectionist models with backpropagation learning rule are known to have a serious problem called catastrophic interference or forgetting, although there have been several reports showing that the interference can be relatively mild with orthogonal inputs. The present study investigated the extent of interference using orthogonal inputs with varying network sizes. One would naturally assume that results obtained from small networks could be extrapolated for larger networks. Unexpectedly, the use of small networks was shown to worsen performance. This result has important implications for interpreting some data in the literature and cautions against the use of a toy model. Copyright 2004 Lippincott Williams & Wilkins
Analytically-derived sensitivities in one-dimensional models of solute transport in porous media
Knopman, D.S.
1987-01-01
Analytically-derived sensitivities are presented for parameters in one-dimensional models of solute transport in porous media. Sensitivities were derived by direct differentiation of closed-form solutions for each of the models, and by a time integral method for two of the models. Models are based on the advection-dispersion equation and include adsorption and first-order chemical decay. Boundary conditions considered are: a constant step input of solute, constant flux input of solute, and exponentially decaying input of solute at the upstream boundary. A zero flux is assumed at the downstream boundary. Initial conditions include a constant and spatially varying distribution of solute. One model simulates the mixing of solute in an observation well from individual layers in a multilayer aquifer system. Computer programs produce output files compatible with graphics software in which sensitivities are plotted as a function of either time or space. (USGS)
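The direct-differentiation idea can be illustrated on the leading term of the step-input solution of the advection-dispersion equation, with the analytic sensitivity to pore velocity checked against a finite difference. This is a sketch of the principle, not the report's full model set, and the parameter values are arbitrary:

```python
import math

def conc(x, t, v, D, c0=1.0):
    # leading term of the 1-D advection-dispersion step-input solution
    eta = (x - v*t) / (2.0 * math.sqrt(D*t))
    return 0.5 * c0 * math.erfc(eta)

def dconc_dv(x, t, v, D, c0=1.0):
    # analytic sensitivity by direct differentiation:
    # d erfc(eta)/d eta = -2/sqrt(pi) exp(-eta^2), d eta/dv = -t/(2 sqrt(D t))
    eta = (x - v*t) / (2.0 * math.sqrt(D*t))
    return 0.5 * c0 * (2.0/math.sqrt(math.pi)) * math.exp(-eta**2) * t / (2.0*math.sqrt(D*t))

x, t, v, D = 10.0, 5.0, 1.0, 0.5
analytic = dconc_dv(x, t, v, D)
h = 1e-6
numeric = (conc(x, t, v + h, D) - conc(x, t, v - h, D)) / (2*h)
print(analytic, numeric)
```

The analytic and finite-difference values agree to several digits, which is the practical payoff of closed-form sensitivities: exact derivatives at the cost of one function evaluation, with no perturbation step-size tuning.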
The accuracy of direct and indirect resource use and emissions of products as quantified in life cycle models depends in part upon the geographical and technological representativeness of the production models. Production conditions vary not just between nations, but also within ...
How sensitive are estimates of carbon fixation in agricultural models to input data?
2012-01-01
Background Process-based vegetation models are central to understanding the hydrological and carbon cycles. To achieve useful results at regional to global scales, such models require various input data from a wide range of earth observations. Since the geographical extent of these datasets varies from local to global scale, data quality and validity are of major interest when they are chosen for use. It is important to assess how input datasets of differing quality affect model outputs. In this article, we reflect on both the uncertainty in input data and the reliability of model results. For our case study analysis we selected the Marchfeld region in Austria. We used independent meteorological datasets from the Central Institute for Meteorology and Geodynamics and the European Centre for Medium-Range Weather Forecasts (ECMWF). Land cover / land use information was taken from the GLC2000 and the CORINE 2000 products. Results For our case study analysis we selected two different process based models: the Environmental Policy Integrated Climate (EPIC) and the Biosphere Energy Transfer Hydrology (BETHY/DLR) model. Both process models show a congruent pattern to changes in input data. The annual variability of NPP reaches 36% for BETHY/DLR and 39% for EPIC when changing major input datasets. However, EPIC is less sensitive to meteorological input data than BETHY/DLR. The ECMWF maximum temperatures show a systematic pattern: temperatures above 20°C are overestimated, whereas temperatures below 20°C are underestimated, resulting in an overall underestimation of NPP in both models. In addition, BETHY/DLR is sensitive to the choice and accuracy of the land cover product. Discussion This study shows that the impact of input data uncertainty on modelling results needs to be assessed: whenever the models are applied under new conditions, local data should be used for both input and result comparison. PMID:22296931
Spatial uncertainty analysis: Propagation of interpolation errors in spatially distributed models
Phillips, D.L.; Marks, D.G.
1996-01-01
In simulation modelling, it is desirable to quantify model uncertainties and provide not only point estimates for output variables but confidence intervals as well. Spatially distributed physical and ecological process models are becoming widely used, with runs being made over a grid of points that represent the landscape. This requires input values at each grid point, which often have to be interpolated from irregularly scattered measurement sites, e.g., weather stations. Interpolation introduces spatially varying errors which propagate through the model. We extended established uncertainty analysis methods to a spatial domain for quantifying spatial patterns of input variable interpolation errors and how they propagate through a model to affect the uncertainty of the model output. We applied this to a model of potential evapotranspiration (PET) as a demonstration. We modelled PET for three time periods in 1990 as a function of temperature, humidity, and wind on a 10-km grid across the U.S. portion of the Columbia River Basin. Temperature, humidity, and wind speed were interpolated using kriging from 700-1000 supporting data points. Kriging standard deviations (SD) were used to quantify the spatially varying interpolation uncertainties. For each of 5693 grid points, 100 Monte Carlo simulations were done, using the kriged values of temperature, humidity, and wind, plus random error terms determined by the kriging SDs and the correlations of interpolation errors among the three variables. For the spring season example, kriging SDs averaged 2.6 °C for temperature, 8.7% for relative humidity, and 0.38 m s⁻¹ for wind. The resultant PET estimates had coefficients of variation (CVs) ranging from 14% to 27% for the 10-km grid cells. Maps of PET means and CVs showed the spatial patterns of PET with a measure of its uncertainty due to interpolation of the input variables.
This methodology should be applicable to a variety of spatially distributed models using interpolated inputs.
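The per-grid-cell procedure — draw each interpolated input from a normal distribution with its kriging SD, run the model, and summarize the spread — looks roughly like the sketch below. The PET relation here is a made-up stand-in for the study's model, and unlike the study the draws ignore correlations among the interpolation errors:

```python
import math, random

def pet_estimate(temp_c, rh_pct, wind_ms):
    # illustrative stand-in PET relation (not the paper's model):
    # increases with temperature and wind, decreases with humidity
    return max(0.0, 0.05*temp_c + 0.02*wind_ms) * (1.0 - rh_pct/100.0) * 10.0

random.seed(3)
temp, rh, wind = 15.0, 60.0, 3.0       # kriged input values at one grid cell
sd_t, sd_rh, sd_w = 2.6, 8.7, 0.38     # kriging SDs quoted in the abstract
draws = []
for _ in range(100):                   # 100 Monte Carlo realizations per cell
    t = random.gauss(temp, sd_t)
    h = min(100.0, max(0.0, random.gauss(rh, sd_rh)))
    w = max(0.0, random.gauss(wind, sd_w))
    draws.append(pet_estimate(t, h, w))
mean = sum(draws) / len(draws)
sd = math.sqrt(sum((d - mean)**2 for d in draws) / (len(draws) - 1))
print(round(100 * sd / mean), "% CV")
```

Repeating this at every grid cell yields the CV maps described above; including error correlations, as the study did, only changes how the random terms are drawn.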
Relations among passive electrical properties of lumbar alpha-motoneurones of the cat.
Gustafsson, B; Pinter, M J
1984-01-01
The relations among passive membrane properties have been examined in cat motoneurones utilizing exclusively electrophysiological techniques. A significant relation was found to exist between the input resistance and the membrane time constant. The estimated electrotonic length showed no evident tendency to vary with input resistance but did show a tendency to decrease with increasing time constant. Detailed analysis of this trend suggests, however, that a variation in dendritic geometry is likely to exist among cat motoneurones, such that the dendritic trees of motoneurones projecting to fast-twitch muscle units are relatively more expansive than those of motoneurones projecting to slow-twitch units. Utilizing an expression derived from the Rall neurone model, the total capacitance of the equivalent cylinder corresponding to a motoneurone has been estimated. With the assumption of a constant and uniform specific capacitance of 1 µF/cm², the resulting values have been used as estimates of cell surface area. These estimates agree well with morphologically obtained measurements from cat motoneurones reported by others. Both membrane time constant (and thus likely specific membrane resistivity) and electrotonic length showed little tendency to vary with surface area. However, after-hyperpolarization (a.h.p.) duration showed some tendency to vary such that cells with brief a.h.p. duration were, on average, larger than those with longer a.h.p. durations. Apart from motoneurones with the lowest values, axonal conduction velocity was only weakly related to variations in estimated surface area. Input resistance and membrane time constant were found to vary systematically with the a.h.p. duration. Analysis suggested that the major part of the increase in input resistance with a.h.p. duration was related to an increase in membrane resistivity and a variation in dendritic geometry rather than to differences in surface area among the motoneurones.
The possible effects of imperfect electrode seals have been considered. According to an analysis of a passive membrane model, soma leaks caused by impalement injury will result in underestimates of input resistance and time constant and overestimates of electrotonic length and total capacitance. Assuming a non-injured resting potential of -80 mV, a comparison of membrane potentials predicted by various relative leaks (leak conductance/input conductance) with those actually observed suggests that the magnitude of these errors in the present material will not unduly affect the presented results. PMID:6520792
Polynomic nonlinear dynamical systems - A residual sensitivity method for model reduction
NASA Technical Reports Server (NTRS)
Yurkovich, S.; Bugajski, D.; Sain, M.
1985-01-01
The motivation for using polynomic combinations of system states and inputs to model nonlinear dynamical systems is founded upon the classical theories of analysis and function representation. A feature of such representations is the need to make available all possible monomials in these variables, up to the degree specified, so as to provide for the description of widely varying functions within a broad class. For a particular application, however, certain monomials may be quite superfluous. This paper examines the possibility of removing monomials from the model in accordance with the level of sensitivity displayed by the residuals to their absence. Critical in these studies is the effect of system input excitation, and the effect of discarding monomial terms, upon the model parameter set. Therefore, model reduction is approached iteratively, with inputs redesigned at each iteration to ensure sufficient excitation of remaining monomials for parameter approximation. Examples are reported to illustrate the performance of such model reduction approaches.
Modeling rainfall conditions for shallow landsliding in Seattle, Washington
Godt, Jonathan W.; Schulz, William H.; Baum, Rex L.; Savage, William Z.
2008-01-01
We describe the results from an application of a distributed, transient infiltration–slope-stability model for an 18 km2 area of southwestern Seattle, Washington, USA. The model (TRIGRS) combines an infinite slope-stability calculation and an analytic, one-dimensional solution for pore-pressure diffusion in a soil layer of finite depth in response to time-varying rainfall. The transient solution for pore-pressure response can be superposed on any steady-state groundwater-flow field that is consistent with model assumptions. Applied over digital topography, the model computes a factor of safety for each grid cell at any time during a rainstorm. Input variables may vary from cell to cell, and the rainfall rate can vary in both space and time. For Seattle, topographic slope derived from an airborne laser swath mapping (ALSM)–based 3 m digital elevation model (DEM), maps of soil and water-table depths derived from geotechnical borings, and hourly rainfall intensities were used as model inputs. Material strength and hydraulic properties used in the model were determined from field and laboratory measurements, and a tension-saturated initial condition was assumed. Results are given in terms of a destabilizing intensity and duration of rainfall, and they were evaluated by comparing the locations of 212 historical landslides with the area mapped as potentially unstable. Because the equations of groundwater flow are explicitly solved with respect to time, the results from TRIGRS simulations can be portrayed quantitatively to assess the potential landslide hazard based on rainfall conditions.
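The slope-stability half of a TRIGRS-style calculation is the classical infinite-slope factor of safety with a transient pore-pressure term. A sketch with illustrative soil parameters (not the Seattle calibration) shows how rainfall-driven pressure head flips a cell from stable to unstable:

```python
import math

def infinite_slope_fs(slope_deg, depth_m, psi_m,
                      c_kpa=4.0, phi_deg=33.0, gamma=19.0, gamma_w=9.81):
    # classical infinite-slope factor of safety; psi_m is pressure head
    # at the slip depth (negative = suction). Parameter values are
    # illustrative assumptions, not the study's calibrated properties.
    beta = math.radians(slope_deg)
    phi = math.radians(phi_deg)
    u = gamma_w * psi_m                          # pore pressure (kPa)
    shear = gamma * depth_m * math.sin(beta) * math.cos(beta)
    normal = gamma * depth_m * math.cos(beta)**2
    return (c_kpa + (normal - u) * math.tan(phi)) / shear

dry = infinite_slope_fs(40.0, 2.0, psi_m=-0.5)   # tension-saturated start
wet = infinite_slope_fs(40.0, 2.0, psi_m=1.0)    # after rainfall infiltration
print(round(dry, 2), round(wet, 2))
```

In the full model, the pressure head at each depth and time comes from the one-dimensional diffusion solution driven by the rainfall history, so the factor of safety can be mapped per grid cell per hour exactly as the abstract describes.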
NASA Astrophysics Data System (ADS)
Georgiou, K.; Abramoff, R. Z.; Harte, J.; Riley, W. J.; Torn, M. S.
2016-12-01
As global temperatures and atmospheric CO2 concentrations continue to increase, soil microbial activity and decomposition of soil organic matter (SOM) are expected to follow suit, potentially limiting soil carbon storage. Traditional global- and ecosystem-scale models simulate SOM decomposition using linear kinetics, which are inherently unable to reproduce carbon-concentration feedbacks, such as priming of native SOM at elevated CO2 concentrations. Recent studies using nonlinear microbial models of SOM decomposition seek to capture these interactions, and several groups are currently integrating these microbial models into Earth System Models (ESMs). However, despite their widespread ability to exhibit nonlinear responses, these models vary tremendously in complexity and, consequently, dynamics. In this study, we explore, both analytically and numerically, the emergent oscillatory behavior and insensitivity of SOM stocks to carbon inputs that have been deemed 'unrealistic' in recent microbial models. We discuss the sources of instability in four models of varying complexity, by sequentially reducing complexity of a detailed model that includes microbial physiology, a mineral sorption isotherm, and enzyme dynamics. We also present an alternative representation of microbial turnover that limits population sizes and, thus, reduces oscillations. We compare these models to several long-term carbon input manipulations, including the Detritus Input and Removal Treatment (DIRT) experiments, to show that there are clear metrics that can be used to distinguish and validate the inherent dynamics of each model structure. We find that traditional linear and nonlinear models cannot readily capture the range of long-term responses observed across the DIRT experiments as a direct consequence of their model structures, and that modifying microbial turnover results in more realistic predictions.
Finally, we discuss our findings in the context of improving microbial model behavior for inclusion in ESMs.
Comparing fixed and variable-width Gaussian networks.
Kůrková, Věra; Kainen, Paul C
2014-09-01
The role of width of Gaussians in two types of computational models is investigated: Gaussian radial-basis-functions (RBFs) where both widths and centers vary and Gaussian kernel networks which have fixed widths but varying centers. The effect of width on functional equivalence, universal approximation property, and form of norms in reproducing kernel Hilbert spaces (RKHS) is explored. It is proven that if two Gaussian RBF networks have the same input-output functions, then they must have the same numbers of units with the same centers and widths. Further, it is shown that while sets of input-output functions of Gaussian kernel networks with two different widths are disjoint, each such set is large enough to be a universal approximator. Embedding of RKHSs induced by "flatter" Gaussians into RKHSs induced by "sharper" Gaussians is described and growth of the ratios of norms on these spaces with increasing input dimension is estimated. Finally, large sets of argminima of error functionals in sets of input-output functions of Gaussian RBFs are described. Copyright © 2014 Elsevier Ltd. All rights reserved.
Description and availability of the SMARTS spectral model for photovoltaic applications
NASA Astrophysics Data System (ADS)
Myers, Daryl R.; Gueymard, Christian A.
2004-11-01
Limited spectral response range of photovoltaic (PV) devices requires that device performance be characterized with respect to widely varying terrestrial solar spectra. The FORTRAN code "Simple Model for Atmospheric Transmission of Sunshine" (SMARTS) was developed for various clear-sky solar renewable energy applications. The model is partly based on parameterizations of transmittance functions in the MODTRAN/LOWTRAN band model family of radiative transfer codes. SMARTS computes spectra with a resolution of 0.5 nanometers (nm) below 400 nm, 1.0 nm from 400 nm to 1700 nm, and 5 nm from 1700 nm to 4000 nm. Fewer than 20 input parameters are required to compute spectral irradiance distributions including spectral direct beam, total, and diffuse hemispherical radiation, and up to 30 other spectral parameters. A spreadsheet-based graphical user interface can be used to simplify the construction of input files for the model. The model is the basis for new terrestrial reference spectra developed by the American Society for Testing and Materials (ASTM) for photovoltaic and materials degradation applications. We describe the model accuracy, functionality, and the availability of source and executable code. Applications to PV rating and efficiency and the combined effects of spectral selectivity and varying atmospheric conditions are briefly discussed.
On the Freshwater Sensitivity of the Arctic-Atlantic Thermohaline Circulation
NASA Astrophysics Data System (ADS)
Lambert, E.; Eldevik, T.; Haugan, P.
2016-02-01
The North Atlantic thermohaline circulation (THC) carries heat and salt toward the Arctic. This circulation is generally believed to be inhibited by northern freshwater input as indicated by the 'box model' of Stommel (1961). The inferred freshwater sensitivity of the THC, however, varies considerably between studies, both quantitatively and qualitatively. The northernmost branch of the Atlantic THC, which forms a double estuarine circulation in the Arctic Mediterranean, is one example where both strengthening and weakening of the circulation may occur due to increased freshwater input. We have accordingly built on Stommel's original concept to accommodate a THC similar to that in the Arctic Mediterranean. This model consists of three idealized basins, or boxes, connected by two coupled branches of circulation - the double estuary. The net transport of these two branches represents the extension of the Gulf Stream toward the Arctic. Its sensitivity to a change in freshwater forcing depends largely on the distribution of freshwater over the two northern basins. Varying this distribution opens a spectrum of qualitative behaviours ranging from Stommel's original freshwater-inhibited overturning circulation to a freshwater-facilitated estuarine circulation. Between these limiting cases, a Hopf and a cusp bifurcation divide the spectrum into three qualitative regions. In the first region, the circulation behaves similarly to Stommel's circulation, and sufficient freshwater input can induce an abrupt transition into a reversed flow; in the second, a similar transition can be found, although it does not reverse the circulation; in the third, no transition can occur and the circulation is generally facilitated by the northern freshwater input.
Overall, the northern THC appears more stable than what would be inferred based on Stommel's model; it requires a larger amount and more localized freshwater input to 'collapse' it, and a double estuary circulation is less prone to flow reversal.
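Stommel's original two-box model, which the three-basin model above generalizes, already exhibits the freshwater-inhibited regime and its bistability. A nondimensional sketch in the standard textbook form (not the authors' three-basin equations; the forcing value is an arbitrary choice below the bifurcation point):

```python
def simulate(y0, F, dt=0.01, steps=20000):
    # nondimensional Stommel (1961) two-box model: y is the salinity
    # difference, flow strength is |1 - y|, F is the freshwater forcing
    y = y0
    for _ in range(steps):
        y += dt * (F - abs(1.0 - y) * y)
    return y

F = 0.2                       # freshwater input below the saddle-node at 0.25
direct = simulate(0.0, F)     # thermally direct (salinity-weak) branch
reversed_ = simulate(2.0, F)  # salinity-dominated, reversed-flow branch
print(round(direct, 3), round(reversed_, 3))
```

Two initial conditions under identical forcing settle onto two different equilibria; pushing F past the bifurcation removes the direct branch, which is the abrupt 'collapse' transition the abstract refines for the double-estuary case.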
Scott Painter; Ethan Coon; Cathy Wilson; Dylan Harp; Adam Atchley
2016-04-21
This Modeling Archive is in support of an NGEE Arctic publication currently in review [4/2016]. The Advanced Terrestrial Simulator (ATS) was used to simulate thermal hydrological conditions across varied environmental conditions for an ensemble of 1D models of Arctic permafrost. The thickness of organic soil is varied from 2 to 40 cm, snow depth is varied from approximately 0 to 1.2 meters, and water table depth is varied from 51 cm below the soil surface to 31 cm above it. A total of 15,960 ensemble members are included. Data produced include, for each ensemble member, the third and fourth simulation years: active layer thickness, time of deepest thaw depth, temperature of the unfrozen soil, and unfrozen liquid saturation. Input files used to run the ensemble are also included.
NASA Technical Reports Server (NTRS)
Nguyen, Nhan T.; Hashemi, Kelley E.; Yucelen, Tansel; Arabi, Ehsan
2017-01-01
This paper presents a new adaptive control approach that involves a performance optimization objective. The problem is cast as a multi-objective optimal control. The control synthesis involves the design of a performance optimizing controller from a subset of control inputs. The effect of the performance optimizing controller is to introduce an uncertainty into the system that can degrade tracking of the reference model. An adaptive controller from the remaining control inputs is designed to reduce the effect of the uncertainty while maintaining a notion of performance optimization in the adaptive control system.
USDA-ARS's Scientific Manuscript database
Accurate determination of predicted environmental concentrations (PECs) is a continuing and often elusive goal of pesticide risk assessment. PECs are typically derived using simulation models that depend on laboratory generated data for key input parameters (t1/2, Koc, etc.). Model flexibility in ...
USDA-ARS's Scientific Manuscript database
Representation of precipitation is one of the most difficult aspects of modeling post-fire runoff and erosion and also one of the most sensitive input parameters to rainfall-runoff models. The impact of post-fire convective rainstorms, especially in semi-arid watersheds, depends on the overlap betwe...
Neural Network Machine Learning and Dimension Reduction for Data Visualization
NASA Technical Reports Server (NTRS)
Liles, Charles A.
2014-01-01
Neural network machine learning in computer science is a continuously developing field of study. Although neural network models have been developed which can accurately predict a numeric value or nominal classification, a general purpose method for constructing neural network architecture has yet to be developed. Computer scientists are often forced to rely on a trial-and-error process of developing and improving accurate neural network models. In many cases, models are constructed from a large number of input parameters. Understanding which input parameters have the greatest impact on the prediction of the model is often difficult to surmise, especially when the number of input variables is very high. This challenge is often labeled the "curse of dimensionality" in scientific fields. However, techniques exist for reducing the dimensionality of problems to just two dimensions. Once a problem's dimensions have been mapped to two dimensions, it can be easily plotted and understood by humans. The ability to visualize a multi-dimensional dataset can provide a means of identifying which input variables have the highest effect on determining a nominal or numeric output. Identifying these variables can provide a better means of training neural network models; models can be more easily and quickly trained using only input variables which appear to affect the outcome variable. The purpose of this project is to explore varying means of training neural networks and to utilize dimensional reduction for visualizing and understanding complex datasets.
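One standard route from many input dimensions to a plottable embedding is principal component analysis. A dependency-free sketch using power iteration to find the leading principal axis (the synthetic data, with one dominant feature, is an illustrative assumption; full 2-D visualization repeats the step with deflation):

```python
import random

def top_component(data, iters=200):
    # power iteration on the covariance matrix to find the leading
    # principal axis -- a minimal stand-in for full PCA-based
    # dimension reduction used for data visualization
    dim = len(data[0])
    means = [sum(row[j] for row in data) / len(data) for j in range(dim)]
    centered = [[row[j] - means[j] for j in range(dim)] for row in data]
    v = [random.random() for _ in range(dim)]
    for _ in range(iters):
        # w = C v, computed as X^T (X v) without forming C explicitly
        proj = [sum(r[j] * v[j] for j in range(dim)) for r in centered]
        w = [sum(proj[i] * centered[i][j] for i in range(len(data)))
             for j in range(dim)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v, centered

random.seed(0)
# synthetic 5-D data whose variance is dominated by the first feature
data = [[random.gauss(0, 5)] + [random.gauss(0, 1) for _ in range(4)]
        for _ in range(200)]
v, centered = top_component(data)
coords = [sum(r[j] * v[j] for j in range(5)) for r in centered]  # 1-D embedding
print(abs(v[0]))
```

The recovered axis aligns with the dominant input, illustrating how a low-dimensional projection can expose which input variables drive the output before any network is trained.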
Heinmets, F; Leary, R H
1991-06-01
A model system (1) was established to analyze purine and pyrimidine metabolism. This system has been expanded to include macrosimulation of DNA synthesis and the study of its regulation by terminal deoxynucleoside triphosphates (dNTPs) via a complex set of interactions. Computer experiments reveal that our model exhibits adequate and reasonable sensitivity in terms of dNTP pool levels and rates of DNA synthesis when inputs to the system are varied. These simulation experiments reveal that in order to achieve maximum DNA synthesis (in terms of purine metabolism), a proper balance is required in guanine and adenine input into this metabolic system. Excessive inputs will become inhibitory to DNA synthesis. In addition, studies are carried out on rates of DNA synthesis when various parameters are changed quantitatively. The current system is formulated by 110 differential equations.
Feeney, Daniel F; Meyer, François G; Noone, Nicholas; Enoka, Roger M
2017-10-01
Motor neurons appear to be activated with a common input signal that modulates the discharge activity of all neurons in the motor nucleus. It has proven difficult for neurophysiologists to quantify the variability in a common input signal, but characterization of such a signal may improve our understanding of how the activation signal varies across motor tasks. Contemporary methods of quantifying the common input to motor neurons rely on compiling discrete action potentials into continuous time series, assuming the motor pool acts as a linear filter, and requiring signals to be of sufficient duration for frequency analysis. We introduce a state-space model in which the discharge activity of motor neurons is modeled as inhomogeneous Poisson processes and propose a method to quantify an abstract latent trajectory that represents the common input received by motor neurons. The approach also approximates the variation in synaptic noise in the common input signal. The model is validated with four data sets: a simulation of 120 motor units, a pair of integrate-and-fire neurons with a Renshaw cell providing inhibitory feedback, the discharge activity of 10 integrate-and-fire neurons, and the discharge times of concurrently active motor units during an isometric voluntary contraction. The simulations revealed that a latent state-space model is able to quantify the trajectory and variability of the common input signal across all four conditions. When compared with the cumulative spike train method of characterizing common input, the state-space approach was more sensitive to the details of the common input current and was less influenced by the duration of the signal. The state-space approach appears to be capable of detecting rather modest changes in common input signals across conditions.
NEW & NOTEWORTHY We propose a state-space model that explicitly delineates a common input signal sent to motor neurons and the physiological noise inherent in synaptic signal transmission. This is the first application of a deterministic state-space model to represent the discharge characteristics of motor units during voluntary contractions. Copyright © 2017 the American Physiological Society.
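The setting described above can be illustrated with a generic simulation: several motor units firing as inhomogeneous Poisson processes driven by one shared latent input. This is a minimal sketch with hypothetical rates, not the authors' state-space estimator; the common drive is assumed known for simulation and recovered only crudely by pooling smoothed spike counts.

```python
import math, random

random.seed(1)

def common_input(t):
    # Hypothetical slowly varying common drive (arbitrary units).
    return 0.5 * math.sin(2 * math.pi * 0.5 * t)

def simulate_unit(base_rate, T, dt=0.001):
    """Spike times of one motor unit as an inhomogeneous Poisson process
    with rate base_rate * exp(common_input(t)), via time discretization."""
    spikes, t = [], 0.0
    while t < T:
        rate = base_rate * math.exp(common_input(t))
        if random.random() < rate * dt:
            spikes.append(t)
        t += dt
    return spikes

# Pool of units with heterogeneous baseline rates (values illustrative)
units = [simulate_unit(8.0 + i, T=4.0) for i in range(10)]

def pooled_rate(units, T, bin_w=0.1):
    """Crude latent-drive estimate: smoothed pooled spike counts."""
    nbins = int(T / bin_w)
    counts = [0] * nbins
    for spikes in units:
        for s in spikes:
            counts[min(int(s / bin_w), nbins - 1)] += 1
    return [c / (len(units) * bin_w) for c in counts]

est = pooled_rate(units, T=4.0)
```

The pooled-rate estimate plays the role that the latent trajectory plays in the state-space approach, but without the Poisson observation model or noise estimate.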
NASA Astrophysics Data System (ADS)
Fesen, C. G.; Roble, R. G.
1991-02-01
The NCAR thermosphere-ionosphere general circulation model (TIGCM) was used to simulate incoherent scatter radar observations of the lower thermosphere tides during the first Lower Thermosphere Coupling Study (LTCS) campaign, September 21-26, 1987. The TIGCM utilized time-varying histories of the model input fields obtained from the World Data Center for the LTCS period. The model inputs included solar flux, total hemispheric power, solar wind data from which the cross-polar-cap potential was derived, and geomagnetic Kp index. Calculations were made for the semidiurnal ion temperatures and horizontal neutral winds at locations representative of Arecibo, Millstone Hill, and Sondrestrom. Tidal inputs to the TIGCM lower boundary were obtained from the middle atmosphere model of Forbes and Vial (1989). The TIGCM tidal structures are in fair general agreement with the observations. The amplitudes tended to be better simulated than the phases, and the mid- and high-latitude locations are simulated better than the low-latitude thermosphere. The model simulations were used to investigate the daily variability of the tides due to the geomagnetic activity occurring during this period.
Attention enhances contrast appearance via increased input baseline of neural responses
Cutrone, Elizabeth K.; Heeger, David J.; Carrasco, Marisa
2014-01-01
Covert spatial attention increases the perceived contrast of stimuli at attended locations, presumably via enhancement of visual neural responses. However, the relation between perceived contrast and the underlying neural responses has not been characterized. In this study, we systematically varied stimulus contrast, using a two-alternative, forced-choice comparison task to probe the effect of attention on appearance across the contrast range. We modeled performance in the task as a function of underlying neural contrast-response functions. Fitting this model to the observed data revealed that an increased input baseline in the neural responses accounted for the enhancement of apparent contrast with spatial attention. PMID:25549920
European nitrogen policies, nitrate in rivers and the use of the INCA model
NASA Astrophysics Data System (ADS)
Skeffington, R.
This paper is concerned with nitrogen inputs to European catchments, how they are likely to change in future, and the implications for the INCA model. National N budgets show that the fifteen countries currently in the European Union (the EU-15 countries) probably have positive N balances - that is, N inputs exceed outputs. The major sources are atmospheric deposition, fertilisers and animal feed, the relative importance of which varies between countries. The magnitude of the fluxes which determine the transport and retention of N in catchments is also very variable in both space and time. The most important of these fluxes are parameterised directly or indirectly in the INCA Model, though it is doubtful whether the present version of the model is flexible enough to encompass short-term (daily) variations in inputs or longer-term (decadal) changes in soil parameters. As an aid to predicting future changes in deposition, international legislation relating to atmospheric N inputs and nitrate in rivers is reviewed briefly. Atmospheric N deposition and fertiliser use are likely to decrease over the next 10 years, but probably not sufficiently to balance national N budgets.
Analyzing Power Supply and Demand on the ISS
NASA Technical Reports Server (NTRS)
Thomas, Justin; Pham, Tho; Halyard, Raymond; Conwell, Steve
2006-01-01
Station Power and Energy Evaluation Determiner (SPEED) is a Java application program for analyzing the supply and demand aspects of the electrical power system of the International Space Station (ISS). SPEED can be executed on any computer that supports version 1.4 or a subsequent version of the Java Runtime Environment. SPEED includes an analysis module, denoted the Simplified Battery Solar Array Model, which is a simplified engineering model of the ISS primary power system. This simplified model makes it possible to perform analyses quickly. SPEED also includes a user-friendly graphical-interface module, an input file system, a parameter-configuration module, an analysis-configuration-management subsystem, and an output subsystem. SPEED responds to input information on trajectory, shadowing, attitude, and pointing in either a state-of-charge mode or a power-availability mode. In the state-of-charge mode, SPEED calculates battery state-of-charge profiles, given a time-varying power-load profile. In the power-availability mode, SPEED determines the time-varying total available solar array and/or battery power output, given a minimum allowable battery state of charge.
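The state-of-charge mode can be illustrated with a minimal energy-balance sketch. The function and all parameter values below are hypothetical (SPEED's Simplified Battery Solar Array Model is not reproduced here), and charge/discharge efficiencies are omitted for clarity.

```python
def soc_profile(load_w, solar_w, capacity_wh, soc0=1.0, dt_h=1/60):
    """Battery state-of-charge trace, one sample per minute, given
    time-varying load and solar-array power (illustrative model only)."""
    soc, trace = soc0, []
    for load, solar in zip(load_w, solar_w):
        net_w = solar - load            # surplus charges, deficit discharges
        soc += net_w * dt_h / capacity_wh
        soc = max(0.0, min(1.0, soc))   # clamp to physical limits
        trace.append(soc)
    return trace

# A 90-minute orbit: ~60 min insolation, ~30 min eclipse (numbers made up)
solar = [3000.0] * 60 + [0.0] * 30
load = [2000.0] * 90
trace = soc_profile(load, solar, capacity_wh=4000.0)
```

The power-availability mode would invert this logic: hold a minimum allowable state of charge fixed and solve for the supportable load.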
NASA Technical Reports Server (NTRS)
Saha, Dipanjan; Lewandowski, Edward J.
2013-01-01
The steady-state, nearly sinusoidal behavior of the components in a free-piston Stirling engine allows for visualization of the forces in the system using phasor diagrams. Based on Newton's second law, F = ma, any phasor diagrams modeling a given component in a system should close if all of the acting forces have been considered. Since the Advanced Stirling Radioisotope Generator (ASRG), currently being developed for future NASA deep space missions, is made up of such nearly sinusoidally oscillating components, its phasor diagrams would also be expected to close. A graphical user interface (GUI) has been written in MATLAB (MathWorks), which takes user input data, passes it to Sage (Gedeon Associates), a one-dimensional thermodynamic modeling program used to model the Stirling convertor, runs Sage, and then automatically plots the phasor diagrams. Using this software tool, the effect of varying different Sage inputs on the phasor diagrams was determined. The parameters varied were piston amplitude, hot-end temperature, cold-end temperature, operating frequency, and displacer spring constant. These phasor diagrams offer useful insight into convertor operation and performance.
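The closure property the abstract relies on can be checked numerically: in sinusoidal steady state, the spring, damper, inertia, and drive-force phasors of a single oscillating component must sum to zero by Newton's second law. The single-degree-of-freedom system and numbers below are illustrative, not ASRG or Sage values.

```python
import math

# Hypothetical single-piston oscillator (values illustrative)
m, k, c = 1.2, 3.0e4, 15.0      # kg, N/m, N*s/m
f = 80.0                        # operating frequency, Hz
w = 2 * math.pi * f
X = 5e-3                        # displacement phasor amplitude (m, zero phase)

# Drive-force phasor required to sustain x(t) = Re{X * exp(jwt)}
F_drive = (k - m * w**2 + 1j * w * c) * X

# Phasor closure: spring + damper + inertia + drive must sum to zero
spring  = -k * X
damper  = -1j * w * c * X
inertia = m * w**2 * X          # -m*a, with a = -w^2 * x
residual = F_drive + spring + damper + inertia
```

A nonzero residual in a measured or simulated system would indicate a force that has not been accounted for, which is exactly how the phasor diagrams are used diagnostically.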
Parameterized data-driven fuzzy model based optimal control of a semi-batch reactor.
Kamesh, Reddi; Rani, K Yamuna
2016-09-01
A parameterized data-driven fuzzy (PDDF) model structure is proposed for semi-batch processes, and its application for optimal control is illustrated. The orthonormally parameterized input trajectories, initial states and process parameters are the inputs to the model, which predicts the output trajectories in terms of Fourier coefficients. Fuzzy rules are formulated based on the signs of a linear data-driven model, while the defuzzification step incorporates a linear regression model to shift the domain from input to output domain. The fuzzy model is employed to formulate an optimal control problem for single rate as well as multi-rate systems. Simulation study on a multivariable semi-batch reactor system reveals that the proposed PDDF modeling approach is capable of capturing the nonlinear and time-varying behavior inherent in the semi-batch system fairly accurately, and the results of operating trajectory optimization using the proposed model are found to be comparable to the results obtained using the exact first principles model, and are also found to be comparable to or better than parameterized data-driven artificial neural network model based optimization results. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
Aspart, Florian; Ladenbauer, Josef; Obermayer, Klaus
2016-11-01
Transcranial brain stimulation and evidence of ephaptic coupling have recently sparked strong interests in understanding the effects of weak electric fields on the dynamics of brain networks and of coupled populations of neurons. The collective dynamics of large neuronal populations can be efficiently studied using single-compartment (point) model neurons of the integrate-and-fire (IF) type as their elements. These models, however, lack the dendritic morphology required to biophysically describe the effect of an extracellular electric field on the neuronal membrane voltage. Here, we extend the IF point neuron models to accurately reflect morphology dependent electric field effects extracted from a canonical spatial "ball-and-stick" (BS) neuron model. Even in the absence of an extracellular field, neuronal morphology by itself strongly affects the cellular response properties. We, therefore, derive additional components for leaky and nonlinear IF neuron models to reproduce the subthreshold voltage and spiking dynamics of the BS model exposed to both fluctuating somatic and dendritic inputs and an extracellular electric field. We show that an oscillatory electric field causes spike rate resonance, or equivalently, pronounced spike to field coherence. Its resonance frequency depends on the location of the synaptic background inputs. For somatic inputs the resonance appears in the beta and gamma frequency range, whereas for distal dendritic inputs it is shifted to even higher frequencies. Irrespective of an external electric field, the presence of a dendritic cable attenuates the subthreshold response at the soma to slowly-varying somatic inputs while implementing a low-pass filter for distal dendritic inputs. Our point neuron model extension is straightforward to implement and is computationally much more efficient compared to the original BS model. 
It is well suited for studying the dynamics of large populations of neurons with heterogeneous dendritic morphology with (and without) the influence of weak external electric fields.
Karimi, Hamid Reza; Gao, Huijun
2008-07-01
A mixed H2/H∞ output-feedback control design methodology is presented in this paper for second-order neutral linear systems with time-varying state and input delays. Delay-dependent sufficient conditions for the design of a desired control are given in terms of linear matrix inequalities (LMIs). A controller, which guarantees asymptotic stability and a mixed H2/H∞ performance for the closed-loop system of the second-order neutral linear system, is then developed directly instead of coupling the model to a first-order neutral system. A Lyapunov-Krasovskii method underlies the LMI-based mixed H2/H∞ output-feedback control design using some free weighting matrices. The simulation results illustrate the effectiveness of the proposed methodology.
Use of Rare Earth Elements in investigations of aeolian processes
USDA-ARS?s Scientific Manuscript database
The representation of the dust cycle in atmospheric circulation models hinges on an accurate parameterization of the vertical dust flux at emission. However, existing parameterizations of the vertical dust flux vary substantially in their scaling with wind friction velocity, require input parameters...
Spatial variability in denitrification rates in an Oregon tidal salt marsh
Modeling denitrification (DeN) is particularly challenging in tidal systems, which play a vital role in buffering adjacent coastal waters from nitrogen inputs. These systems are hydrologically and biogeochemically complex, varying on fine temporal and spatial scales. As part of a...
Markets, Herding and Response to External Information.
Carro, Adrián; Toral, Raúl; San Miguel, Maxi
2015-01-01
We focus on the influence of external sources of information upon financial markets. In particular, we develop a stochastic agent-based market model characterized by a certain herding behavior as well as allowing traders to be influenced by an external dynamic signal of information. This signal can be interpreted as time-varying advertising, public perception, or rumor, in favor of or against one of two possible trading behaviors, thus breaking the symmetry of the system and acting as a continuously varying exogenous shock. As an illustration, we use a well-known German Indicator of Economic Sentiment as information input and compare our results with Germany's leading stock market index, the DAX, in order to calibrate some of the model parameters. We study the conditions for the ensemble of agents to more accurately follow the information input signal. The response of the system to the external information is maximal for an intermediate range of values of a market parameter, suggesting the existence of three different market regimes: amplification, precise assimilation and undervaluation of incoming information.
Detecting regional lung properties using audio transfer functions of the respiratory system.
Mulligan, K; Adler, A; Goubran, R
2009-01-01
In this study, a novel instrument has been developed for measuring changes in the distribution of lung fluid in the respiratory system. The instrument consists of a speaker that inputs a 0-4 kHz White Gaussian Noise (WGN) signal into a patient's mouth and an array of 4 electronic stethoscopes, linked via a fully adjustable harness, used to recover signals on the chest surface. The software system for processing the data utilizes the principles of adaptive filtering in order to obtain a transfer function that represents the input-output relationship for the signal as the volume of fluid in the lungs is varied. A chest phantom model was constructed to simulate the behavior of fluid-related diseases within the lungs through the injection of varying volumes of water. Tests from the phantom model were compared to healthy subjects. Results show the instrument can obtain similar transfer functions and sound propagation delays between both human and phantom chests.
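The adaptive-filtering step can be sketched with a standard least-mean-squares (LMS) system identifier. This is a generic textbook sketch, not the authors' software, and the 3-tap "channel" standing in for the chest transfer function is synthetic.

```python
import random

random.seed(0)

def lms_identify(x, d, taps=4, mu=0.02):
    """Estimate the FIR transfer function mapping input x to measured
    output d with the LMS adaptive filter (generic sketch)."""
    w = [0.0] * taps
    for n in range(taps - 1, len(x)):
        frame = x[n - taps + 1:n + 1][::-1]   # x[n], x[n-1], ...
        y = sum(wi * xi for wi, xi in zip(w, frame))
        e = d[n] - y                          # error drives adaptation
        w = [wi + mu * e * xi for wi, xi in zip(w, frame)]
    return w

# Synthetic check: WGN input through a known 3-tap channel
h = [0.5, -0.3, 0.2]
x = [random.gauss(0, 1) for _ in range(5000)]
d = [sum(h[k] * x[n - k] for k in range(len(h)) if n - k >= 0)
     for n in range(len(x))]
w_est = lms_identify(x, d, taps=3)
```

With a noiseless synthetic channel the adapted weights converge to the true taps; with real chest recordings the estimated transfer function would instead track the fluid-dependent acoustics.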
Austin, Caitlin M.; Stoy, William; Su, Peter; Harber, Marie C.; Bardill, J. Patrick; Hammer, Brian K.; Forest, Craig R.
2014-01-01
Biosensors exploiting communication within genetically engineered bacteria are becoming increasingly important for monitoring environmental changes. Currently, there are a variety of mathematical models for understanding and predicting how genetically engineered bacteria respond to molecular stimuli in these environments, but as sensors have miniaturized towards microfluidics and are subjected to complex time-varying inputs, the shortcomings of these models have become apparent. The effects of microfluidic environments such as low oxygen concentration, increased biofilm encapsulation, diffusion limited molecular distribution, and higher population densities strongly affect rate constants for gene expression not accounted for in previous models. We report a mathematical model that accurately predicts the biological response of the autoinducer N-acyl homoserine lactone-mediated green fluorescent protein expression in reporter bacteria in microfluidic environments by accommodating these rate constants. This generalized mass action model considers a chain of biomolecular events from input autoinducer chemical to fluorescent protein expression through a series of six chemical species. We have validated this model against experimental data from our own apparatus as well as prior published experimental results. Results indicate accurate prediction of dynamics (e.g., 14% peak time error from a pulse input) and with reduced mean-squared error with pulse or step inputs for a range of concentrations (10 μM–30 μM). This model can help advance the design of genetically engineered bacteria sensors and molecular communication devices. PMID:25379076
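A chain of six chemical species like the one described can be illustrated with a linear mass-action cascade integrated by forward Euler. The rate constants and pulse input below are hypothetical stand-ins for the autoinducer-to-GFP chain, not fitted values from the paper.

```python
def simulate_chain(u, k=1.0, gamma=0.5, n_species=6, dt=0.01, T=40.0):
    """Forward-Euler integration of a linear mass-action chain:
    ds1/dt = u(t) - gamma*s1,  dsi/dt = k*s(i-1) - gamma*si.
    Returns the trajectory of the last species (~ fluorescence)."""
    s = [0.0] * n_species
    t, out = 0.0, []
    while t < T:
        ds = [u(t) - gamma * s[0]]
        for i in range(1, n_species):
            ds.append(k * s[i - 1] - gamma * s[i])
        s = [si + dsi * dt for si, dsi in zip(s, ds)]
        out.append(s[-1])
        t += dt
    return out

# Pulse input: autoinducer present for the first 10 time units only
pulse = lambda t: 1.0 if t < 10.0 else 0.0
trace = simulate_chain(pulse)
```

Even this toy cascade reproduces the qualitative behavior the model is validated against: a delayed fluorescence peak after a pulse input, followed by decay.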
Sparse Polynomial Chaos Surrogate for ACME Land Model via Iterative Bayesian Compressive Sensing
NASA Astrophysics Data System (ADS)
Sargsyan, K.; Ricciuto, D. M.; Safta, C.; Debusschere, B.; Najm, H. N.; Thornton, P. E.
2015-12-01
For computationally expensive climate models, Monte-Carlo approaches of exploring the input parameter space are often prohibitive due to slow convergence with respect to ensemble size. To alleviate this, we build inexpensive surrogates using uncertainty quantification (UQ) methods employing Polynomial Chaos (PC) expansions that approximate the input-output relationships using as few model evaluations as possible. However, when many uncertain input parameters are present, such UQ studies suffer from the curse of dimensionality. In particular, for 50-100 input parameters non-adaptive PC representations have infeasible numbers of basis terms. To this end, we develop and employ Weighted Iterative Bayesian Compressive Sensing to learn the most important input parameter relationships for efficient, sparse PC surrogate construction with posterior uncertainty quantified due to insufficient data. Besides drastic dimensionality reduction, the uncertain surrogate can efficiently replace the model in computationally intensive studies such as forward uncertainty propagation and variance-based sensitivity analysis, as well as design optimization and parameter estimation using observational data. We applied the surrogate construction and variance-based uncertainty decomposition to Accelerated Climate Model for Energy (ACME) Land Model for several output QoIs at nearly 100 FLUXNET sites covering multiple plant functional types and climates, varying 65 input parameters over broad ranges of possible values. This work is supported by the U.S. Department of Energy, Office of Science, Biological and Environmental Research, Accelerated Climate Modeling for Energy (ACME) project. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
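The sparse-recovery idea can be illustrated with plain iterative soft-thresholding (ISTA), a much simpler stand-in for the weighted iterative Bayesian compressive sensing described above: with fewer "model evaluations" than candidate basis terms would normally warrant, only the few active terms are recovered.

```python
import random, math

random.seed(3)

def soft(v, t):
    """Soft-thresholding operator."""
    return math.copysign(max(abs(v) - t, 0.0), v)

def ista(A, y, lam=0.05, eta=0.1, iters=500):
    """Iterative soft-thresholding for sparse linear regression."""
    m, p = len(A), len(A[0])
    w = [0.0] * p
    for _ in range(iters):
        r = [sum(A[i][j] * w[j] for j in range(p)) - y[i] for i in range(m)]
        g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(p)]
        w = [soft(w[j] - eta * g[j], eta * lam) for j in range(p)]
    return w

# Synthetic surrogate problem: 30 candidate basis terms, only 3 active
m, p = 100, 30
A = [[random.gauss(0, 1) / math.sqrt(m) for _ in range(p)] for _ in range(m)]
w_true = [0.0] * p
w_true[2], w_true[7], w_true[19] = 2.0, -1.5, 1.0
y = [sum(A[i][j] * w_true[j] for j in range(p)) for i in range(m)]
w = ista(A, y)
```

In the surrogate-construction setting, the columns of A would be polynomial chaos basis functions evaluated at the sampled input parameters, and the recovered sparse coefficients define the inexpensive surrogate.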
NASA Astrophysics Data System (ADS)
Lovvorn, James R.; Jacob, Ute; North, Christopher A.; Kolts, Jason M.; Grebmeier, Jacqueline M.; Cooper, Lee W.; Cui, Xuehua
2015-03-01
Network models can help generate testable predictions and more accurate projections of food web responses to environmental change. Such models depend on predator-prey interactions throughout the network. When a predator currently consumes all of its prey's production, the prey's biomass may change substantially with loss of the predator or invasion by others. Conversely, if production of deposit-feeding prey is limited by organic matter inputs, system response may be predictable from models of primary production. For sea floor communities of shallow Arctic seas, increased temperature could lead to invasion or loss of predators, while reduced sea ice or change in wind-driven currents could alter organic matter inputs. Based on field data and models for three different sectors of the northern Bering Sea, we found a number of cases where all of a prey's production was consumed but the taxa involved varied among sectors. These differences appeared not to result from numerical responses of predators to abundance of preferred prey. Rather, they appeared driven by stochastic variations in relative biomass among taxa, due largely to abiotic conditions that affect colonization and early post-larval survival. Oscillatory tendencies of top-down versus bottom-up interactions may augment these variations. Required inputs of settling microalgae exceeded existing estimates of annual primary production by 50%; thus, assessing limits to bottom-up control depends on better corrections of satellite estimates to account for production throughout the water column. Our results suggest that in this Arctic system, stochastic abiotic conditions outweigh deterministic species interactions in food web responses to a varying environment.
Nanosecond Plasma Enhanced H2/O2/N2 Premixed Flat Flames
2014-01-01
Simulations are conducted with a one-dimensional, multi-scale, pulsed-discharge model with detailed plasma-combustion kinetics to develop additional insight... model framework. The reduced electric field, E/N, during each pulse varies inversely with number density. A significant portion of the input energy is... dimensional numerical model [4, 12] capable of resolving electric field transients over nanosecond timescales (during each discharge pulse) and radical
Development and weighting of a life cycle assessment screening model
NASA Astrophysics Data System (ADS)
Bates, Wayne E.; O'Shaughnessy, James; Johnson, Sharon A.; Sisson, Richard
2004-02-01
Nearly all life cycle assessment tools available today are high priced, comprehensive and quantitative models requiring a significant amount of data collection and data input. In addition, most of the available software packages require a great deal of training time to learn how to operate the model software. Even after this time investment, results are not guaranteed because of the number of estimations and assumptions often necessary to run the model. As a result, product development, design teams and environmental specialists need a simplified tool that will allow for the qualitative evaluation and "screening" of various design options. This paper presents the development and design of a generic, qualitative life cycle screening model and demonstrates its applicability and ease of use. The model uses qualitative environmental, health and safety factors, based on site or product-specific issues, to sensitize the overall results for a given set of conditions. The paper also evaluates the impact of different population input ranking values on model output. The final analysis is based on site or product-specific variables. The user can then evaluate various design changes and the apparent impact or improvement on the environment, health and safety, compliance cost and overall corporate liability. Major input parameters can be varied, and factors such as materials use, pollution prevention, waste minimization, worker safety, product life, environmental impacts, return of investment, and recycle are evaluated. The flexibility of the model format will be discussed in order to demonstrate the applicability and usefulness within nearly any industry sector. Finally, an example using audience input value scores will be compared to other population input results.
Spike Train Auto-Structure Impacts Post-Synaptic Firing and Timing-Based Plasticity
Scheller, Bertram; Castellano, Marta; Vicente, Raul; Pipa, Gordon
2011-01-01
Cortical neurons are typically driven by several thousand synapses. The precise spatiotemporal pattern formed by these inputs can modulate the response of a post-synaptic cell. In this work, we explore how the temporal structure of pre-synaptic inhibitory and excitatory inputs impact the post-synaptic firing of a conductance-based integrate and fire neuron. Both the excitatory and inhibitory input was modeled by renewal gamma processes with varying shape factors for modeling regular and temporally random Poisson activity. We demonstrate that the temporal structure of mutually independent inputs affects the post-synaptic firing, while the strength of the effect depends on the firing rates of both the excitatory and inhibitory inputs. In a second step, we explore the effect of temporal structure of mutually independent inputs on a simple version of Hebbian learning, i.e., hard bound spike-timing-dependent plasticity. We explore both the equilibrium weight distribution and the speed of the transient weight dynamics for different mutually independent gamma processes. We find that both the equilibrium distribution of the synaptic weights and the speed of synaptic changes are modulated by the temporal structure of the input. Finally, we highlight that the sensitivity of both the post-synaptic firing as well as the spike-timing-dependent plasticity on the auto-structure of the input of a neuron could be used to modulate the learning rate of synaptic modification. PMID:22203800
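Renewal gamma spike trains of the kind used as inputs in this study are straightforward to generate: shape 1 reproduces temporally random Poisson-like firing, while larger shape factors give more regular trains at the same mean rate. Rates and durations below are arbitrary illustrative values.

```python
import random

random.seed(7)

def gamma_spike_train(rate, shape, T):
    """Renewal spike train with gamma-distributed inter-spike intervals;
    mean ISI = 1/rate regardless of shape."""
    spikes, t = [], 0.0
    while True:
        t += random.gammavariate(shape, 1.0 / (rate * shape))
        if t >= T:
            return spikes
        spikes.append(t)

def cv_isi(spikes):
    """Coefficient of variation of the inter-spike intervals."""
    isi = [b - a for a, b in zip(spikes, spikes[1:])]
    mu = sum(isi) / len(isi)
    var = sum((x - mu) ** 2 for x in isi) / len(isi)
    return var ** 0.5 / mu

irregular = gamma_spike_train(rate=20.0, shape=1.0, T=200.0)  # Poisson-like
regular   = gamma_spike_train(rate=20.0, shape=8.0, T=200.0)  # more regular
```

The ISI coefficient of variation for a gamma renewal process is 1/sqrt(shape), which is the knob the study turns to separate "temporal structure" from mean firing rate.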
Synaptic control of the shape of the motoneuron pool input-output function
Heckman, Charles J.
2017-01-01
Although motoneurons have often been considered to be fairly linear transducers of synaptic input, recent evidence suggests that strong persistent inward currents (PICs) in motoneurons allow neuromodulatory and inhibitory synaptic inputs to induce large nonlinearities in the relation between the level of excitatory input and motor output. To try to estimate the possible extent of this nonlinearity, we developed a pool of model motoneurons designed to replicate the characteristics of motoneuron input-output properties measured in medial gastrocnemius motoneurons in the decerebrate cat with voltage-clamp and current-clamp techniques. We drove the model pool with a range of synaptic inputs consisting of various mixtures of excitation, inhibition, and neuromodulation. We then looked at the relation between excitatory drive and total pool output. Our results revealed that the PICs not only enhance gain but also induce a strong nonlinearity in the relation between the average firing rate of the motoneuron pool and the level of excitatory input. The relation between the total simulated force output and input was somewhat more linear because of higher force outputs in later-recruited units. We also found that the nonlinearity can be increased by increasing neuromodulatory input and/or balanced inhibitory input and minimized by a reciprocal, push-pull pattern of inhibition. We consider the possibility that a flexible input-output function may allow motor output to be tuned to match the widely varying demands of the normal motor repertoire. NEW & NOTEWORTHY Motoneuron activity is generally considered to reflect the level of excitatory drive. However, the activation of voltage-dependent intrinsic conductances can distort the relation between excitatory drive and the total output of a pool of motoneurons. 
Using a pool of realistic motoneuron models, we show that pool output can be a highly nonlinear function of synaptic input but linearity can be achieved through adjusting the time course of excitatory and inhibitory synaptic inputs. PMID:28053245
Generating Performance Models for Irregular Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Friese, Ryan D.; Tallent, Nathan R.; Vishnu, Abhinav
2017-05-30
Many applications have irregular behavior --- non-uniform input data, input-dependent solvers, irregular memory accesses, unbiased branches --- that cannot be captured using today's automated performance modeling techniques. We describe new hierarchical critical path analyses for the Palm model generation tool. To create a model's structure, we capture tasks along representative MPI critical paths. We create a histogram of critical tasks with parameterized task arguments and instance counts. To model each task, we identify hot instruction-level sub-paths and model each sub-path based on data flow, instruction scheduling, and data locality. We describe application models that generate accurate predictions for strong scaling when varying CPU speed, cache speed, memory speed, and architecture. We present results for the Sweep3D neutron transport benchmark; Page Rank on multiple graphs; Support Vector Machine with pruning; and PFLOTRAN's reactive flow/transport solver with domain-induced load imbalance.
NASA Astrophysics Data System (ADS)
Dumedah, Gift; Walker, Jeffrey P.
2017-03-01
The sources of uncertainty in land surface models are numerous and varied, from inaccuracies in forcing data to uncertainties in model structure and parameterizations. The majority of these uncertainties are strongly tied to the overall makeup of the model, but the input forcing data set is independent, with its accuracy usually defined by the monitoring or observation system. The impact of input forcing data on model estimation accuracy has been collectively acknowledged to be significant, yet its quantification and the level of uncertainty that is acceptable in the context of the land surface model to obtain a competitive estimation remain mostly unknown. A better understanding is needed of how models respond to input forcing data and what changes in these forcing variables can be accommodated without deteriorating optimal estimation of the model. As a result, this study determines the level of forcing data uncertainty that is acceptable in the Joint UK Land Environment Simulator (JULES) to competitively estimate soil moisture in the Yanco area in south-eastern Australia. The study employs hydro-genomic mapping to examine the temporal evolution of model decision variables from an archive of values obtained from soil moisture data assimilation. The data assimilation (DA) was undertaken using the advanced Evolutionary Data Assimilation. Our findings show that the input forcing data have a significant impact on model output: 35% in root mean square error (RMSE) for soil moisture at 5 cm depth and 15% in RMSE at 15 cm depth. This specific quantification is crucial to illustrate the significance of input forcing data spread. The acceptable uncertainty determined based on the dominant pathway has been validated and shown to be reliable for all forcing variables, so as to provide optimal soil moisture. These findings are crucial for DA in order to account for uncertainties that are meaningful from the model standpoint.
Moreover, our results point to a proper treatment of input forcing data in general land surface and hydrological model estimation.
Dalsgaard, Lise; Astrup, Rasmus; Antón-Fernández, Clara; Borgen, Signe Kynding; Breidenbach, Johannes; Lange, Holger; Lehtonen, Aleksi; Liski, Jari
2016-01-01
Boreal forests contain 30% of the global forest carbon with the majority residing in soils. While challenging to quantify, soil carbon changes comprise a significant, and potentially increasing, part of the terrestrial carbon cycle. Thus, their estimation is important when designing forest-based climate change mitigation strategies and soil carbon change estimates are required for the reporting of greenhouse gas emissions. Organic matter decomposition varies with climate in complex nonlinear ways, rendering data aggregation nontrivial. Here, we explored the effects of temporal and spatial aggregation of climatic and litter input data on regional estimates of soil organic carbon stocks and changes for upland forests. We used the soil carbon and decomposition model Yasso07 with input from the Norwegian National Forest Inventory (11275 plots, 1960-2012). Estimates were produced at three spatial and three temporal scales. Results showed that a national level average soil carbon stock estimate varied by 10% depending on the applied spatial and temporal scale of aggregation. Higher stocks were found when applying plot-level input compared to country-level input and when long-term climate was used as compared to annual or 5-year mean values. A national level estimate for soil carbon change was similar across spatial scales, but was considerably (60-70%) lower when applying annual or 5-year mean climate compared to long-term mean climate reflecting the recent climatic changes in Norway. This was particularly evident for the forest-dominated districts in the southeastern and central parts of Norway and in the far north. We concluded that the sensitivity of model estimates to spatial aggregation will depend on the region of interest. Further, that using long-term climate averages during periods with strong climatic trends results in large differences in soil carbon estimates. 
The largest differences in this study were observed in central and northern regions with strongly increasing temperatures. PMID:26901763
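Because decomposition responds nonlinearly to temperature, applying the rate law to a long-term average climate differs from averaging the rates computed from annual climate (Jensen's inequality), which is one way to see why the temporal scale of climate aggregation shifts carbon estimates. A minimal sketch using a hypothetical Q10-style rate function, not Yasso07's actual parameterization:

```python
import numpy as np

def decomp_rate(temp_c, k_ref=0.1, q10=2.0):
    # Hypothetical Q10-style decomposition rate; Yasso07's actual
    # climate response function differs.
    return k_ref * q10 ** ((temp_c - 10.0) / 10.0)

rng = np.random.default_rng(0)
annual_temps = 4.0 + 2.0 * rng.standard_normal(50)  # synthetic annual means, deg C

# Rate computed from the long-term mean climate vs. mean of annual rates:
rate_from_mean_climate = decomp_rate(annual_temps.mean())
mean_of_annual_rates = float(np.mean(decomp_rate(annual_temps)))
```

Since the rate function is convex in temperature, the mean of annual rates exceeds the rate at the mean temperature, so the two aggregation choices give systematically different decomposition (and hence carbon stock) estimates.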
Noise facilitates transcriptional control under dynamic inputs.
Kellogg, Ryan A; Tay, Savaş
2015-01-29
Cells must respond sensitively to time-varying inputs in complex signaling environments. To understand how signaling networks process dynamic inputs into gene expression outputs and the role of noise in cellular information processing, we studied the immune pathway NF-κB under periodic cytokine inputs using microfluidic single-cell measurements and stochastic modeling. We find that NF-κB dynamics in fibroblasts synchronize with an oscillating TNF signal and become entrained, leading to significantly increased NF-κB oscillation amplitude and mRNA output compared to the non-entrained response. Simulations show that intrinsic biochemical noise in individual cells improves NF-κB oscillation and entrainment, whereas cell-to-cell variability in NF-κB natural frequency creates population robustness, together enabling entrainment over a wider range of dynamic inputs. This wide range is confirmed by experiments in which entrained cells were measured across all input periods. These results indicate that synergy between oscillation and noise allows cells to achieve efficient gene expression in dynamically changing signaling environments. Copyright © 2015 Elsevier Inc. All rights reserved.
Jaffe, B.E.; Rubin, D.M.
1996-01-01
The time-dependent response of sediment suspension to flow velocity was explored by modeling field measurements collected in the surf zone during a large storm. Linear and nonlinear models were created and tested using flow velocity as input and suspended-sediment concentration as output. A sequence of past velocities (velocity history), as well as velocity from the same instant as the suspended-sediment concentration, was used as input; this velocity history length was allowed to vary. The models also allowed for a lag between input (instantaneous velocity or end of velocity sequence) and output (suspended-sediment concentration). Predictions of concentration from instantaneous velocity or instantaneous velocity raised to a power (up to 8) using linear models were poor (correlation coefficients between predicted and observed concentrations were less than 0.10). Allowing a lag between velocity and concentration improved linear models (correlation coefficient of 0.30), with optimum lag time increasing with elevation above the seabed (from 1.5 s at 13 cm to 8.5 s at 60 cm). These lags are largely due to the time for an observed flow event to affect the bed and mix sediment upward. Using a velocity history further improved linear models (correlation coefficient of 0.43). The best linear model used 12.5 s of velocity history (approximately one wave period) to predict concentration. Nonlinear models gave better predictions than linear models, and, as with linear models, nonlinear models using a velocity history performed better than models using only instantaneous velocity as input. Including a lag time between the velocity and concentration also improved the predictions. The best model (correlation coefficient of 0.58) used 3 s (approximately a quarter wave period) of the cross-shore velocity squared, starting at 4.5 s before the observed concentration, to predict concentration.
Using a velocity history increases the performance of the models by specifying a more complete description of the dynamical forcing of the flow (including accelerations and wave phase and shape) responsible for sediment suspension. Incorporating such a velocity history and a lag time into the formulation of the forcing for time-dependent models for sediment suspension in the surf zone will greatly increase our ability to predict suspended-sediment transport.
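The comparison of instantaneous-velocity versus velocity-history predictors can be sketched with ordinary least squares on synthetic data; the window length, lag, and data-generating weights below are illustrative stand-ins, not the surf-zone measurements:

```python
import numpy as np

rng = np.random.default_rng(1)
n, window, lag = 2000, 6, 3            # samples, history length, response lag
u = rng.standard_normal(n)             # synthetic velocity series

# Synthetic "truth": concentration driven by a lagged squared-velocity history
w_true = np.array([0.5, 0.4, 0.3, 0.2, 0.1, 0.05])
rows = range(lag + window, n)
c = np.array([w_true @ (u[t - lag - window:t - lag] ** 2) for t in rows])
c = c + 0.1 * rng.standard_normal(c.size)

def fit_corr(X, y):
    # Least-squares fit with intercept; return corr(predicted, observed)
    A = np.c_[X, np.ones(len(X))]
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return np.corrcoef(A @ coef, y)[0, 1]

X_inst = (u[lag + window:n] ** 2).reshape(-1, 1)                  # instantaneous only
X_hist = np.array([u[t - lag - window:t - lag] ** 2 for t in rows])  # lagged history
r_inst = fit_corr(X_inst, c)
r_hist = fit_corr(X_hist, c)
```

As in the study, the model given the lagged velocity history recovers the forcing far better than one given only the instantaneous velocity.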
Analysis and selection of optimal function implementations in massively parallel computer
Archer, Charles Jens [Rochester, MN]; Peters, Amanda [Rochester, MN]; Ratterman, Joseph D. [Rochester, MN]
2011-05-31
An apparatus, program product and method optimize the operation of a parallel computer system by, in part, collecting performance data for a set of implementations of a function capable of being executed on the parallel computer system based upon the execution of the set of implementations under varying input parameters in a plurality of input dimensions. The collected performance data may be used to generate selection program code that is configured to call selected implementations of the function in response to a call to the function under varying input parameters. The collected performance data may be used to perform more detailed analysis to ascertain the comparative performance of the set of implementations of the function under the varying input parameters.
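The described workflow, benchmarking a set of implementations across input dimensions and then generating selection code that dispatches to the best one, can be sketched as follows; the toy implementations and the single input dimension (list size) are stand-ins for the patent's general mechanism:

```python
import timeit

def sum_loop(xs):
    total = 0
    for x in xs:
        total += x
    return total

def sum_builtin(xs):
    return sum(xs)

IMPLEMENTATIONS = [sum_loop, sum_builtin]

def collect_performance(sizes=(10, 10_000), number=20):
    # Benchmark every implementation across one input dimension (list size)
    # and record which implementation was fastest at each sampled size.
    best = {}
    for size in sizes:
        data = list(range(size))
        times = [min(timeit.repeat(lambda f=f: f(data), number=number, repeat=3))
                 for f in IMPLEMENTATIONS]
        best[size] = IMPLEMENTATIONS[times.index(min(times))]
    return best

SELECTION_TABLE = collect_performance()

def dispatch(xs):
    # Generated "selection code": call the implementation that benchmarked
    # fastest for the nearest sampled input size.
    size = min(SELECTION_TABLE, key=lambda s: abs(s - len(xs)))
    return SELECTION_TABLE[size](xs)

result = dispatch(list(range(100)))
```

In the patent's setting the table would span several input dimensions and the selection code would be generated for a parallel function library rather than a Python dict lookup.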
USDA-ARS?s Scientific Manuscript database
This paper aims to investigate how surface soil moisture data assimilation affects each hydrologic process and how spatially varying inputs affect the potential capability of surface soil moisture assimilation at the watershed scale. The Ensemble Kalman Filter (EnKF) is coupled with a watershed scal...
Improved parameterization for the vertical flux of dust aerosols emitted by an eroding soil
USDA-ARS?s Scientific Manuscript database
The representation of the dust cycle in atmospheric circulation models hinges on an accurate parameterization of the vertical dust flux at emission. However, existing parameterizations of the vertical dust flux vary substantially in their scaling with wind friction velocity, require input parameters...
Numerical simulations of flares on M dwarf stars. I - Hydrodynamics and coronal X-ray emission
NASA Technical Reports Server (NTRS)
Cheng, Chung-Chieh; Pallavicini, Roberto
1991-01-01
Flare-loop models are utilized to simulate the time evolution and physical characteristics of stellar X-ray flares by varying the values of flare-energy input and loop parameters. The hydrodynamic evolution is studied in terms of changes in the parameters of the mass, energy, and momentum equations within an area bounded by the chromosphere and the corona. The zone supports a magnetically confined loop for which processes are described including the expansion of heated coronal gas, chromospheric evaporation, and plasma compression at loop footpoints. The intensities, time profiles, and average coronal temperatures of X-ray flares are derived from the simulations and compared to observational evidence. Because the amount of evaporated material does not vary linearly with flare-energy input, large loops are required to produce the energy measured from stellar flares.
Maria Theresa I. Cabaraban; Charles N. Kroll; Satoshi Hirabayashi; David J. Nowak
2013-01-01
A distributed adaptation of i-Tree Eco was used to simulate dry deposition in an urban area. This investigation focused on the effects of varying temperature, LAI, and NO2 concentration inputs on estimated NO2 dry deposition to trees in Baltimore, MD. A coupled modeling system is described, wherein WRF provided temperature...
Schuwirth, Nele; Reichert, Peter
2013-02-01
For the first time, we combine concepts of theoretical food web modeling, the metabolic theory of ecology, and ecological stoichiometry with the use of functional trait databases to predict the coexistence of invertebrate taxa in streams. We developed a mechanistic model that describes growth, death, and respiration of different taxa dependent on various environmental influence factors to estimate survival or extinction. Parameter and input uncertainty is propagated to model results. Such a model is needed to test our current quantitative understanding of ecosystem structure and function and to predict effects of anthropogenic impacts and restoration efforts. The model was tested using macroinvertebrate monitoring data from a catchment of the Swiss Plateau. Even without fitting model parameters, the model is able to represent key patterns of the coexistence structure of invertebrates at sites varying in external conditions (litter input, shading, water quality). This confirms the suitability of the model concept. More comprehensive testing and resulting model adaptations will further increase the predictive accuracy of the model.
A Parametric Approach to Numerical Modeling of TKR Contact Forces
Lundberg, Hannah J.; Foucher, Kharma C.; Wimmer, Markus A.
2009-01-01
In vivo knee contact forces are difficult to determine using numerical methods because there are more unknown forces than equilibrium equations available. We developed parametric methods for computing contact forces across the knee joint during the stance phase of level walking. Three-dimensional contact forces were calculated at two points of contact between the tibia and the femur, one on the lateral aspect of the tibial plateau, and one on the medial side. Muscle activations were parametrically varied over their physiologic range resulting in a solution space of contact forces. The obtained solution space was reasonably small and the resulting force pattern compared well to a previous model from the literature for kinematics and external kinetics from the same patient. Peak forces of the parametric model and the previous model were similar for the first half of the stance phase, but differed for the second half. The previous model did not take into account the transverse external moment about the knee and could not calculate muscle activation levels. Ultimately, the parametric model will result in more accurate contact force inputs for total knee simulators, as current inputs are not generally based on kinematics and kinetics inputs from TKR patients. PMID:19155015
Slow feature analysis: unsupervised learning of invariances.
Wiskott, Laurenz; Sejnowski, Terrence J
2002-04-01
Invariant features of temporally varying signals are useful for analysis and classification. Slow feature analysis (SFA) is a new method for learning invariant or slowly varying features from a vectorial input signal. It is based on a nonlinear expansion of the input signal and application of principal component analysis to this expanded signal and its time derivative. It is guaranteed to find the optimal solution within a family of functions directly and can learn to extract a large number of decorrelated features, which are ordered by their degree of invariance. SFA can be applied hierarchically to process high-dimensional input signals and extract complex features. SFA is applied first to complex cell tuning properties based on simple cell output, including disparity and motion. Then more complicated input-output functions are learned by repeated application of SFA. Finally, a hierarchical network of SFA modules is presented as a simple model of the visual system. The same unstructured network can learn translation, size, rotation, contrast, or, to a lesser degree, illumination invariance for one-dimensional objects, depending on only the training stimulus. Surprisingly, only a few training objects suffice to achieve good generalization to new objects. The generated representation is suitable for object recognition. Performance degrades if the network is trained to learn multiple invariances simultaneously.
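The core SFA recipe, center, whiten, then find the direction whose time derivative has minimal variance, can be sketched in a few lines. This linear version omits the nonlinear expansion the method applies first, and the mixed sine-plus-noise demo signal is illustrative:

```python
import numpy as np

def slowest_feature(x):
    # Minimal linear SFA: whiten the centered signal, then take the
    # whitened direction whose discrete time derivative has the
    # smallest variance (eigh returns eigenvalues in ascending order).
    xc = x - x.mean(axis=0)
    d, E = np.linalg.eigh(np.cov(xc, rowvar=False))
    z = xc @ (E / np.sqrt(d))                  # whitened: unit covariance
    _, V = np.linalg.eigh(np.cov(np.diff(z, axis=0), rowvar=False))
    return z @ V[:, 0]                         # the slowest output feature

# Demo: a slow sine and fast noise, linearly mixed into two channels
t = np.linspace(0, 4 * np.pi, 2000)
slow = np.sin(t)
fast = np.random.default_rng(2).standard_normal(t.size)
x = np.c_[slow + 0.5 * fast, 0.5 * slow - fast]

y = slowest_feature(x)
corr = abs(np.corrcoef(y, slow)[0, 1])         # sign is arbitrary, so use |corr|
```

The extracted feature recovers the slowly varying sine from the mixture, which is the invariance-learning behavior the abstract describes.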
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, S; Rimner, A; Hayes, S
Purpose: To use dual-input tracer kinetic modeling of the lung for mapping spatial heterogeneity of various kinetic parameters in malignant MPM. Methods: Six MPM patients received DCE-MRI as part of their radiation therapy simulation scan. Five patients had the epithelioid subtype of MPM, while one was biphasic. A 3D fast-field echo sequence with TR/TE/flip angle of 3.62 ms/1.69 ms/15° was used for DCE-MRI acquisition. The scan was collected for 5 minutes with a temporal resolution of 5-9 seconds depending on the spatial extent of the tumor. A principal component analysis-based groupwise deformable registration was used to co-register all the DCE-MRI series for motion compensation. All the images were analyzed using five different dual-input tracer kinetic models implemented in analog continuous-time formalism: the Tofts-Kety (TK), extended TK (ETK), two-compartment exchange (2CX), adiabatic approximation to the tissue homogeneity (AATH), and distributed parameter (DP) models. The following parameters were computed for each model: total blood flow (BF), pulmonary flow fraction (γ), pulmonary blood flow (BF-pa), systemic blood flow (BF-a), blood volume (BV), mean transit time (MTT), permeability-surface area product (PS), fractional interstitial volume (vi), extraction fraction (E), volume transfer constant (Ktrans), and efflux rate constant (kep). Results: Although the majority of patients had epithelioid histologies, kinetic parameter values varied across different models. One patient showed a higher total BF value in all models among the epithelioid histologies, although the γ value varied among these different models. In one tumor with a large area of necrosis, the TK and ETK models showed higher E, Ktrans, and kep values and lower interstitial volume as compared to the AATH, DP, and 2CX models. Kinetic parameters such as BF-pa, BF-a, PS, and Ktrans values were higher in the surviving group compared to the non-surviving group across most models.
Conclusion: Dual-input tracer kinetic modeling is feasible in determining micro-vascular characteristics of MPM. This project was supported by Cycle for Survival and MSK Imaging and Radiation Science (IMRAS) grants.
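As a sketch of what a dual-input Tofts-Kety fit operates on: the tissue curve is a convolution of a weighted pulmonary-plus-systemic input with an exponential residue function. The bolus shapes and parameter values below are synthetic illustrations, not data from the study:

```python
import numpy as np

def dual_input_tofts(t, c_pa, c_a, ktrans, kep, gamma):
    # Tofts-Kety tissue curve with a dual (pulmonary + systemic) input:
    #   Ct(t) = Ktrans * int_0^t Cin(u) * exp(-kep*(t-u)) du,
    #   Cin   = gamma*C_pa + (1-gamma)*C_a   (gamma = pulmonary flow fraction)
    # approximated by a discrete causal convolution.
    cin = gamma * c_pa + (1.0 - gamma) * c_a
    dt = t[1] - t[0]
    kernel = np.exp(-kep * t)
    return ktrans * dt * np.convolve(cin, kernel)[: t.size]

t = np.linspace(0.0, 5.0, 500)                # minutes
c_pa = np.exp(-((t - 0.5) ** 2) / 0.05)       # synthetic pulmonary input bolus
c_a = np.exp(-((t - 0.8) ** 2) / 0.05)        # synthetic systemic input bolus
ct = dual_input_tofts(t, c_pa, c_a, ktrans=0.2, kep=0.5, gamma=0.7)
```

Fitting Ktrans, kep, and γ voxel-by-voxel to measured curves is what produces the parameter maps the abstract compares across models.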
NASA Technical Reports Server (NTRS)
Saha, Dipanjan; Lewandowski, Edward J.
2013-01-01
The steady state, nearly sinusoidal behavior of the components in a Free Piston Stirling Engine allows for visualization of the forces in the system using phasor diagrams. Based on Newton's second law, F = ma, any phasor diagram modeling a given component in a system should close if all of the acting forces have been considered. Since the Advanced Stirling Radioisotope Generator (ASRG), currently being developed for future NASA deep space missions, is made up of such nearly sinusoidally oscillating components, its phasor diagrams would also be expected to close. A graphical user interface (GUI) written in MATLAB takes user input data, passes it to Sage (a 1-D thermodynamic modeling program used to model the Stirling convertor), runs Sage, and then automatically plots the phasor diagrams. Using this software tool, the effect of varying different Sage inputs on the phasor diagrams was determined. The parameters varied were piston amplitude, hot end temperature, cold end temperature, operating frequency, and displacer spring constant. By using these phasor diagrams, better insight can be gained into why the convertor operates the way it does.
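The closure idea can be illustrated numerically: representing each sinusoidal force as a complex phasor, Newton's second law requires the phasors acting on a component to sum to m·a, so the unknown gas force is whatever closes the diagram. All numbers below are hypothetical, not ASRG values:

```python
import numpy as np

# Hypothetical piston: x(t) = Re[X e^{jwt}], so the acceleration phasor
# is -w^2 X and the net force phasor must equal m * (-w^2 X).
omega = 2 * np.pi * 80.0          # operating frequency, rad/s (illustrative)
m = 0.5                           # piston mass, kg (illustrative)
X = 0.005 + 0.0j                  # 5 mm displacement amplitude, zero phase

k = 2.0e5                         # spring constant, N/m (illustrative)
c = 40.0                          # damping coefficient, N*s/m (illustrative)

f_spring = -k * X                        # spring force phasor
f_damper = -c * (1j * omega * X)         # damper force phasor (90 deg lead)
f_inertia = m * (-omega**2 * X)          # required net force, m*a

# The gas pressure force is whatever closes the diagram:
f_pressure = f_inertia - (f_spring + f_damper)

# Check closure: all forces minus m*a should vanish
closure = f_spring + f_damper + f_pressure - f_inertia
```

Plotting these four complex numbers head-to-tail gives exactly the closed polygon the abstract describes; if a force were omitted, the residual `closure` would show up as the gap in the diagram.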
Translating landfill methane generation parameters among first-order decay models.
Krause, Max J; Chickering, Giles W; Townsend, Timothy G
2016-11-01
Landfill gas (LFG) generation is predicted by a first-order decay (FOD) equation that incorporates two parameters: a methane generation potential (L0) and a methane generation rate (k). Because non-hazardous waste landfills may accept many types of waste streams, multiphase models have been developed in an attempt to more accurately predict methane generation from heterogeneous waste streams. The ability of a single-phase FOD model to predict methane generation using weighted-average methane generation parameters and tonnages translated from multiphase models was assessed in two exercises. In the first exercise, waste composition from four Danish landfills represented by low-biodegradable waste streams was modeled in the Afvalzorg Multiphase Model and methane generation was compared to the single-phase Intergovernmental Panel on Climate Change (IPCC) Waste Model and LandGEM. In the second exercise, waste composition represented by IPCC waste components was modeled in the multiphase IPCC model and compared to single-phase LandGEM and Australia's Solid Waste Calculator (SWC). In both cases, weighted averaging of methane generation parameters from waste composition data in single-phase models was effective, predicting cumulative methane generation to within -7% to +6% of the multiphase models. The results underscore the understanding that multiphase models will not necessarily improve LFG generation prediction because the uncertainty of the method rests largely within the input parameters. A unique method of calculating the methane generation rate constant by mass of anaerobically degradable carbon was presented (kc) and compared to existing methods, providing a better fit in 3 of 8 scenarios. Generally, single-phase models with weighted-average inputs can accurately predict methane generation from multiple waste streams with varied characteristics; weighted averages should therefore be used instead of regional default values when comparing models.
Translating multiphase first-order decay model input parameters by weighted average shows that single-phase models can predict cumulative methane generation within the level of uncertainty of many of the input parameters as defined by the Intergovernmental Panel on Climate Change (IPCC), which indicates that decreasing the uncertainty of the input parameters will make the model more accurate rather than adding multiple phases or input parameters.
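A single-phase FOD calculation with weighted-average parameters, the approach the study evaluates, can be sketched as follows; the per-stream L0 and k values and tonnages are illustrative, not the study's datasets:

```python
import math

def methane_first_order(masses_by_year, L0=100.0, k=0.05, horizon=50):
    # Single-phase first-order decay (LandGEM-style) methane generation.
    # masses_by_year: waste tonnage accepted in years 0, 1, 2, ...
    # Returns annual CH4 generation for each year t up to `horizon`:
    #   Q(t) = sum_i k * L0 * M_i * exp(-k * (t - t_i))  for t_i <= t
    return [sum(k * L0 * m * math.exp(-k * (t - ti))
                for ti, m in enumerate(masses_by_year) if ti <= t)
            for t in range(horizon)]

# Weighted-average parameters for a hypothetical two-stream waste mix
# (the translation the abstract describes), with illustrative values:
m_food, m_wood = 600.0, 400.0                       # tonnes/yr per stream
L0_avg = (m_food * 120.0 + m_wood * 60.0) / (m_food + m_wood)
k_avg = (m_food * 0.10 + m_wood * 0.02) / (m_food + m_wood)

# Ten years of combined acceptance, modeled as a single phase:
q = methane_first_order([m_food + m_wood] * 10, L0=L0_avg, k=k_avg)
```

Generation rises while waste is being accepted and decays first-order after closure; the study's finding is that this single-phase curve with mass-weighted L0 and k tracks the multiphase sum within the parameters' own uncertainty.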
Balancing the stochastic description of uncertainties as a function of hydrologic model complexity
NASA Astrophysics Data System (ADS)
Del Giudice, D.; Reichert, P.; Albert, C.; Kalcic, M.; Logsdon Muenich, R.; Scavia, D.; Bosch, N. S.; Michalak, A. M.
2016-12-01
Uncertainty analysis is becoming an important component of forecasting water and pollutant fluxes in urban and rural environments. Properly accounting for errors in the modeling process can help to robustly assess the uncertainties associated with the inputs (e.g. precipitation) and outputs (e.g. runoff) of hydrological models. In recent years we have investigated several Bayesian methods to infer the parameters of a mechanistic hydrological model along with those of the stochastic error component. The latter describes the uncertainties of model outputs and possibly inputs. We have adapted our framework to a variety of applications, ranging from predicting floods in small stormwater systems to nutrient loads in large agricultural watersheds. Given practical constraints, we discuss how in general the number of quantities to infer probabilistically varies inversely with the complexity of the mechanistic model. Most often, when evaluating a hydrological model of intermediate complexity, we can infer the parameters of the model as well as of the output error model. Describing the output errors as a first order autoregressive process can realistically capture the "downstream" effect of inaccurate inputs and structure. With simpler runoff models we can additionally quantify input uncertainty by using a stochastic rainfall process. For complex hydrologic transport models, instead, we show that keeping model parameters fixed and just estimating time-dependent output uncertainties could be a viable option. The common goal across all these applications is to create time-dependent prediction intervals which are both reliable (cover the nominal amount of validation data) and precise (are as narrow as possible). In conclusion, we recommend focusing both on the choice of the hydrological model and of the probabilistic error description. 
The latter can include output uncertainty only, if the model is computationally-expensive, or, with simpler models, it can separately account for different sources of errors like in the inputs and the structure of the model.
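Describing output errors as a first-order autoregressive process yields prediction intervals whose half-width grows from the last observed error toward the stationary error standard deviation. A sketch under Gaussian AR(1) assumptions; the z = 1.96 level and the parameter values are illustrative:

```python
import numpy as np

def ar1_interval(y_model, phi, sigma_eta, last_error, z=1.96):
    # Prediction intervals when output errors follow an AR(1) process
    #   E_t = phi * E_{t-1} + eta_t,   eta_t ~ N(0, sigma_eta^2).
    # h steps ahead of the last observed error:
    #   mean  = last_error * phi^h
    #   var   = sigma_eta^2 * (1 - phi^{2h}) / (1 - phi^2)
    # so the band widens toward the stationary std sigma_eta/sqrt(1-phi^2).
    h = np.arange(1, len(y_model) + 1)
    center = np.asarray(y_model) + last_error * phi ** h
    half = z * np.sqrt(sigma_eta**2 * (1 - phi ** (2 * h)) / (1 - phi**2))
    return center - half, center + half

# Flat model output, last residual of 1.0 (all values illustrative):
lo, hi = ar1_interval(np.zeros(100), phi=0.8, sigma_eta=0.5, last_error=1.0)
```

This is the "downstream" behavior noted above: the autocorrelated error model remembers the last mismatch between model and data, then relaxes to a stationary band whose width reflects input and structural error lumped at the output.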
A Measure Approximation for Distributionally Robust PDE-Constrained Optimization Problems
Kouri, Drew Philip
2017-12-19
In numerous applications, scientists and engineers acquire varied forms of data that partially characterize the inputs to an underlying physical system. This data is then used to inform decisions such as controls and designs. Consequently, it is critical that the resulting control or design is robust to the inherent uncertainties associated with the unknown probabilistic characterization of the model inputs. In this work, we consider optimal control and design problems constrained by partial differential equations with uncertain inputs. We do not assume a known probabilistic model for the inputs, but rather we formulate the problem as a distributionally robust optimization problem where the outer minimization problem determines the control or design, while the inner maximization problem determines the worst-case probability measure that matches desired characteristics of the data. We analyze the inner maximization problem in the space of measures and introduce a novel measure approximation technique, based on the approximation of continuous functions, to discretize the unknown probability measure. Finally, we prove consistency of our approximated min-max problem and conclude with numerical results.
Visual Predictive Check in Models with Time-Varying Input Function.
Largajolli, Anna; Bertoldo, Alessandra; Campioni, Marco; Cobelli, Claudio
2015-11-01
Nonlinear mixed effects models are commonly used modeling techniques in pharmaceutical research, as they enable the characterization of individual profiles together with the population to which the individuals belong. To ensure their correct use, it is fundamental to provide powerful diagnostic tools that can evaluate the predictive performance of the models. The visual predictive check (VPC) is a commonly used tool that lets the user check by visual inspection whether the model is able to reproduce the variability and the main trend of the observed data. However, simulation from the model is not always trivial, for example, when using models with a time-varying input function (IF). In this class of models, there is a potential mismatch between each set of simulated parameters and the associated individual IF, which can cause an incorrect profile simulation. We introduce a refinement of the VPC by taking into consideration a correlation term (the Mahalanobis or normalized Euclidean distance) that helps associate the correct IF with the individual set of simulated parameters. We investigate and compare its performance with the standard VPC in models of the glucose and insulin system applied to real and simulated data and in a simulated pharmacokinetic/pharmacodynamic (PK/PD) example. The newly proposed VPC performs better than the standard VPC, especially for models with large variability in the IF, where the probability of simulating incorrect profiles is higher.
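The proposed refinement, pairing each simulated parameter set with the individual IF closest in normalized Euclidean distance, can be sketched as a nearest-neighbor match; the dimensions and the synthetic parameter values below are illustrative:

```python
import numpy as np

def match_input_function(sim_params, indiv_params):
    # For each simulated parameter vector, return the index of the
    # individual whose estimated parameters are closest in normalized
    # Euclidean distance; that individual's input function would then
    # be paired with the simulated parameters when simulating profiles.
    sd = indiv_params.std(axis=0)                        # per-parameter scale
    diff = (sim_params[:, None, :] - indiv_params[None, :, :]) / sd
    d = np.sqrt((diff ** 2).sum(axis=-1))                # (n_sim, n_indiv)
    return d.argmin(axis=1)

rng = np.random.default_rng(3)
indiv = rng.normal(size=(20, 3))                     # 20 individuals, 3 parameters
sims = indiv + 0.01 * rng.normal(size=indiv.shape)   # simulations near each individual
idx = match_input_function(sims, indiv)
```

With simulated parameter sets close to their generating individuals, the match recovers the correct IF pairing, which is the failure mode of the standard VPC that the refinement addresses.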
Programmable electronic synthesized capacitance
NASA Technical Reports Server (NTRS)
Kleinberg, Leonard L. (Inventor)
1987-01-01
A predetermined and variable synthesized capacitance which may be incorporated into the resonant portion of an electronic oscillator for the purpose of tuning the oscillator comprises a programmable operational amplifier circuit. The operational amplifier circuit has its output connected to its inverting input, in a follower configuration, by a network which is low impedance at the operational frequency of the circuit. The output of the operational amplifier is also connected to the noninverting input by a capacitor. The noninverting input appears as a synthesized capacitance which may be varied with a variation in gain-bandwidth product of the operational amplifier circuit. The gain-bandwidth product may, in turn, be varied with a variation in input set current with a digital to analog converter whose output is varied with a command word. The output impedance of the circuit may also be varied by the output set current. This circuit may provide very small ranges in oscillator frequency with relatively large control voltages unaffected by noise.
Nonlinear control of linear parameter varying systems with applications to hypersonic vehicles
NASA Astrophysics Data System (ADS)
Wilcox, Zachary Donald
The focus of this dissertation is to design a controller for linear parameter varying (LPV) systems, apply it specifically to air-breathing hypersonic vehicles, and examine the interplay between control performance and structural dynamics design. Specifically, a Lyapunov-based continuous robust controller is developed that yields exponential tracking of a reference model despite the presence of bounded, nonvanishing disturbances. The hypersonic vehicle has time-varying parameters, specifically temperature profiles, and its dynamics can be reduced to an LPV system with additive disturbances. Since the HSV can be modeled as an LPV system, the proposed control design is directly applicable. The control performance is directly examined through simulations. A wide variety of applications exist that can be effectively modeled as LPV systems. In particular, flight systems have historically been modeled as LPV systems, and associated control tools have been applied such as gain scheduling, linear matrix inequalities (LMIs), linear fractional transformations (LFTs), and μ-synthesis. However, as flight environments and trajectories become more demanding, traditional LPV controllers may no longer be sufficient. In particular, hypersonic flight vehicles (HSVs) present an inherently difficult problem because of the nonlinear aerothermoelastic coupling effects in the dynamics. HSV flight conditions produce temperature variations that can alter both the structural dynamics and flight dynamics. Starting with the full nonlinear dynamics, the aerothermoelastic effects are modeled by a temperature-dependent, parameter-varying state-space representation with added disturbances. The model includes an uncertain parameter-varying state matrix, an uncertain parameter-varying non-square (column-deficient) input matrix, and an additive bounded disturbance. In this dissertation, a robust dynamic controller is formulated for an uncertain and disturbed LPV system.
The developed controller is then applied to an HSV model, and a Lyapunov analysis is used to prove global exponential reference model tracking in the presence of uncertainty in the state and input matrices and exogenous disturbances. Simulations with a spectrum of gains and temperature profiles on the full nonlinear dynamic model of the HSV are used to illustrate the performance and robustness of the developed controller. In addition, this work considers how the performance of the developed controller varies over a wide variety of control gains and temperature profiles and how it can be optimized with respect to different performance metrics. Specifically, various temperature profile models and related nonlinear temperature-dependent disturbances are used to characterize the relative control performance and effort for each model. Examining such metrics as a function of temperature provides a potential inroad to examining the interplay between structural/thermal protection design and control development and has application for future HSV design and control implementation.
We use a simple nitrogen budget model to analyze concentrations of total nitrogen (TN) in estuaries for which both nitrogen inputs and water residence time are correlated with freshwater inflow rates. While the nitrogen concentration of an estuary varies linearly with TN loading ...
How one models the input and output data for a life cycle assessment can greatly affect the results. Although much attention has been paid to allocation methodology by researchers in the field, general guidance is still lacking. Current research investigated the effect of applyin...
Estimating fire behavior with FIRECAST: user's manual
Jack D. Cohen
1986-01-01
FIRECAST is a computer program that estimates fire behavior in terms of six fire parameters. Required inputs vary depending on the outputs desired by the fire manager. Fuel model options available to users are these: Northern Forest Fire Laboratory (NFFL), National Fire Danger Rating System (NFDRS), and southern California brushland (SCAL). The program has been...
Markets, Herding and Response to External Information
Carro, Adrián; Toral, Raúl; San Miguel, Maxi
2015-01-01
We focus on the influence of external sources of information upon financial markets. In particular, we develop a stochastic agent-based market model characterized by a certain herding behavior as well as allowing traders to be influenced by an external dynamic signal of information. This signal can be interpreted as a time-varying advertising, public perception or rumor, in favor or against one of two possible trading behaviors, thus breaking the symmetry of the system and acting as a continuously varying exogenous shock. As an illustration, we use a well-known German Indicator of Economic Sentiment as information input and compare our results with Germany’s leading stock market index, the DAX, in order to calibrate some of the model parameters. We study the conditions for the ensemble of agents to more accurately follow the information input signal. The response of the system to the external information is maximal for an intermediate range of values of a market parameter, suggesting the existence of three different market regimes: amplification, precise assimilation and undervaluation of incoming information. PMID:26204451
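A Kirman-style herding model driven by an external signal, in the spirit of (though not identical to) the model described above, can be simulated in a few lines; all rates, the coupling constant, and the sinusoidal signal are illustrative assumptions:

```python
import numpy as np

def simulate_market(n_agents=200, steps=4000, a=0.01, h=1.0, c=0.3, seed=4):
    # Kirman-type herding with an external information signal (a sketch;
    # the functional form and parameters are illustrative). Each step one
    # randomly chosen agent may switch trading state with probability
    #   p = a + h * (fraction holding the opposite state) +/- c * signal,
    # clipped to [0, 1]; the signal biases switches toward "buy" when > 0.
    rng = np.random.default_rng(seed)
    state = rng.integers(0, 2, n_agents)                # 0 = sell, 1 = buy
    signal = np.sin(np.linspace(0, 4 * np.pi, steps))   # slow exogenous input
    frac_buy = np.empty(steps)
    for t in range(steps):
        i = rng.integers(n_agents)
        n1 = state.sum()
        if state[i] == 1:   # a buyer considers switching to sell
            p = a + h * (n_agents - n1) / n_agents - c * signal[t]
        else:               # a seller considers switching to buy
            p = a + h * n1 / n_agents + c * signal[t]
        if rng.random() < np.clip(p, 0.0, 1.0):
            state[i] = 1 - state[i]
        frac_buy[t] = n1 / n_agents
    return frac_buy, signal

frac_buy, signal = simulate_market()
corr = np.corrcoef(frac_buy, signal)[0, 1]
```

The fraction of buyers tracks the slowly varying external signal with some lag, the "assimilation" regime; weakening the coupling `c` or strengthening the idiosyncratic rate `a` pushes the system toward the undervaluation regime the abstract describes.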
Modeling polar cap F-region patches using time varying convection
NASA Technical Reports Server (NTRS)
Sojka, J. J.; Bowline, M. D.; Schunk, R. W.; Decker, D. T.; Valladares, C. E.; Sheehan, R.; Anderson, D. N.; Heelis, R. A.
1993-01-01
Creation of polar cap F-region patches is simulated for the first time using two independent physical models of the high-latitude ionosphere. The patch formation is achieved by temporally varying the magnetospheric electric field (ionospheric convection) input to the models. The imposed convection variations are comparable to changes in the convection that result from changes in the B(y) IMF component for southward IMF. Solar maximum-winter simulations show that simple changes in the convection pattern lead to significant changes in the polar cap plasma structuring. Specifically, in winter, as enhanced dayside plasma convects into the polar cap to form the classic tongue-of-ionization, the convection changes produce density structures that are indistinguishable from the observed patches.
Modeling Streamflow and Water Temperature in the North Santiam and Santiam Rivers, Oregon, 2001-02
Sullivan, Annett B.; Rounds, Stewart A.
2004-01-01
To support the development of a total maximum daily load (TMDL) for water temperature in the Willamette Basin, the laterally averaged, two-dimensional model CE-QUAL-W2 was used to construct a water temperature and streamflow model of the Santiam and North Santiam Rivers. The rivers were simulated from downstream of Detroit and Big Cliff dams to the confluence with the Willamette River. Inputs to the model included bathymetric data, flow and temperature from dam releases, tributary flow and temperature, and meteorologic data. The model was calibrated for the period July 1 through November 21, 2001, and confirmed with data from April 1 through October 31, 2002. Flow calibration made use of data from two streamflow gages and travel-time and river-width data. Temperature calibration used data from 16 temperature monitoring locations in 2001 and 5 locations in 2002. A sensitivity analysis was completed by independently varying input parameters, including point-source flow, air temperature, flow and water temperature from dam releases, and riparian shading. Scenario analyses considered hypothetical river conditions without anthropogenic heat inputs, with restored riparian vegetation, with minimum streamflow from the dams, and with a more-natural seasonal water temperature regime from dam releases.
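The one-at-a-time sensitivity approach described above can be sketched as follows; the toy stream-temperature function and its coefficients are invented stand-ins for CE-QUAL-W2, used only to show the perturb-and-compare mechanics.

```python
# Hypothetical sketch of one-at-a-time sensitivity analysis: each input is
# perturbed independently and the change in simulated water temperature is
# recorded. The "model" here is a toy stand-in, not CE-QUAL-W2.

def simulate_temp(air_temp_c, release_temp_c, shade_frac):
    """Toy steady-state stream temperature: a blend of dam-release and air
    temperature, with atmospheric influence damped by riparian shading."""
    exchange = 0.4 * (1.0 - shade_frac)   # invented exchange coefficient
    return release_temp_c + exchange * (air_temp_c - release_temp_c)

def sensitivity(base, perturbations):
    """Vary one input at a time; return the output change per input."""
    t0 = simulate_temp(**base)
    deltas = {}
    for name, new_value in perturbations.items():
        run = dict(base)
        run[name] = new_value
        deltas[name] = simulate_temp(**run) - t0
    return deltas

base = {"air_temp_c": 25.0, "release_temp_c": 10.0, "shade_frac": 0.3}
print(sensitivity(base, {"air_temp_c": 27.0, "shade_frac": 0.6}))
```

Increasing shading cools the simulated reach while warmer air heats it, which is the kind of signed, per-input response a sensitivity table reports.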
The 2014 United States National Seismic Hazard Model
Petersen, Mark D.; Moschetti, Morgan P.; Powers, Peter; Mueller, Charles; Haller, Kathleen; Frankel, Arthur; Zeng, Yuehua; Rezaeian, Sanaz; Harmsen, Stephen; Boyd, Oliver; Field, Edward; Chen, Rui; Rukstales, Kenneth S.; Luco, Nicolas; Wheeler, Russell; Williams, Robert; Olsen, Anna H.
2015-01-01
New seismic hazard maps have been developed for the conterminous United States using the latest data, models, and methods available for assessing earthquake hazard. The hazard models incorporate new information on earthquake rupture behavior observed in recent earthquakes; fault studies that use both geologic and geodetic strain rate data; earthquake catalogs through 2012 that include new assessments of locations and magnitudes; earthquake adaptive smoothing models that more fully account for the spatial clustering of earthquakes; and 22 ground motion models, some of which consider more than double the shaking data applied previously. Alternative input models account for larger earthquakes, more complicated ruptures, and more varied ground shaking estimates than assumed in earlier models. The ground motions, for levels applied in building codes, differ from the previous version by less than ±10% over 60% of the country, but can differ by ±50% in localized areas. The models are incorporated in insurance rates, risk assessments, and as input into the U.S. building code provisions for earthquake ground shaking.
Investigation of a Macromechanical Approach to Analyzing Triaxially-Braided Polymer Composites
NASA Technical Reports Server (NTRS)
Goldberg, Robert K.; Blinzler, Brina J.; Binienda, Wieslaw K.
2010-01-01
A macro level finite element-based model has been developed to simulate the mechanical and impact response of triaxially-braided polymer matrix composites. In the analytical model, the triaxial braid architecture is simulated by using four parallel shell elements, each of which is modeled as a laminated composite. The commercial transient dynamic finite element code LS-DYNA is used to conduct the simulations, and a continuum damage mechanics model internal to LS-DYNA is used as the material constitutive model. The material stiffness and strength values required for the constitutive model are determined based on coupon level tests on the braided composite. Simulations of quasi-static coupon tests of a representative braided composite are conducted. Varying the strength values that are input to the material model is found to have a significant influence on the effective material response predicted by the finite element analysis, sometimes in ways that at first glance appear non-intuitive. A parametric study involving the input strength parameters provides guidance on how the analysis model can be improved.
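A hypothetical toy illustration of why input strength values can shape the predicted effective response in non-obvious ways: four parallel brittle "plies" (echoing the four-shell idealization, but not the LS-DYNA damage model) redistribute load when one fails, so weakening a single strength input changes the predicted peak load.

```python
# Toy parametric-strength study: parallel brittle plies under a shared
# displacement. Stiffness/strength values are invented for illustration.

def peak_load(stiffness, strength, u_max=2.0, steps=2000):
    """Each ply carries k*u until that exceeds its strength, then drops to
    zero. Returns the peak total load over the displacement sweep."""
    peak = 0.0
    for i in range(steps + 1):
        u = u_max * i / steps
        total = sum(k * u for k, s in zip(stiffness, strength)
                    if k * u <= s + 1e-9)   # tolerance for float round-off
        peak = max(peak, total)
    return peak

k = [10.0, 10.0, 5.0, 5.0]
baseline = peak_load(k, [8.0, 8.0, 8.0, 8.0])
weakened = peak_load(k, [4.0, 8.0, 8.0, 8.0])
print(baseline, weakened)   # weakening one strength input lowers the peak
```

Because failed plies shed load to the survivors, the peak does not scale simply with the changed input, mirroring the "non-intuitive" responses noted in the abstract.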
Feedforward Inhibition Allows Input Summation to Vary in Recurrent Cortical Networks
2018-01-01
Abstract Brain computations depend on how neurons transform inputs to spike outputs. Here, to understand input-output transformations in cortical networks, we recorded spiking responses from visual cortex (V1) of awake mice of either sex while pairing sensory stimuli with optogenetic perturbation of excitatory and parvalbumin-positive inhibitory neurons. We found that V1 neurons’ average responses were primarily additive (linear). We used a recurrent cortical network model to determine whether these data, as well as past observations of nonlinearity, could be described by a common circuit architecture. Simulations showed that cortical input-output transformations can be changed from linear to sublinear with moderate (∼20%) strengthening of connections between inhibitory neurons, but this change away from linear scaling depends on the presence of feedforward inhibition. Simulating a variety of recurrent connection strengths showed that, compared with when input arrives only to excitatory neurons, networks produce a wider range of output spiking responses in the presence of feedforward inhibition. PMID:29682603
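The linear regime described above can be sketched with a two-population rate model in which the input drives both excitatory and inhibitory cells (feedforward inhibition). The weights below are illustrative choices, not fits to the V1 data, and the full model would be needed to reproduce the sublinear regime.

```python
# Reduced E/I rate-model sketch (illustrative weights, not the paper's fit).
# With these weights the network sits in its linear regime: doubling the
# input doubles the steady-state excitatory rate.

def steady_rates(inp, w_ee=0.5, w_ei=1.0, w_ie=1.0, w_ii=1.0,
                 ff_inh=True, dt=0.1, steps=2000):
    """Euler-integrate rectified-linear rate dynamics to steady state."""
    re, ri = 0.0, 0.0
    for _ in range(steps):
        drive_i = inp if ff_inh else 0.0   # feedforward input to inhibition
        re += dt * (-re + max(0.0, w_ee * re - w_ei * ri + inp))
        ri += dt * (-ri + max(0.0, w_ie * re - w_ii * ri + drive_i))
    return re, ri

re1, _ = steady_rates(1.0)
re2, _ = steady_rates(2.0)
print(round(re2 / re1, 3))   # ~2.0: additive (linear) input summation
```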
NASA Astrophysics Data System (ADS)
Koo, Min-Sung; Choi, Ho-Lim
2018-01-01
In this paper, we consider a control problem for a class of uncertain nonlinear systems in which there exists an unknown time-varying delay in the input and lower triangular nonlinearities. Usually, in the existing results, input delays have been coupled with feedforward (or upper triangular) nonlinearities; in other words, the combination of lower triangular nonlinearities and input delay has been rare. Motivated by the existing controller for an input-delayed chain of integrators with nonlinearity, we show that the control of input-delayed nonlinear systems with two particular types of lower triangular nonlinearities can be achieved. As a control solution, we propose a newly designed feedback controller whose main features are its dynamic gain and non-predictor approach. Three examples are given for illustration.
NASA Astrophysics Data System (ADS)
Cao, Jinde; Wang, Yanyan
2010-05-01
In this paper, the bi-periodicity issue is discussed for Cohen-Grossberg-type (CG-type) bidirectional associative memory (BAM) neural networks (NNs) with time-varying delays and standard activation functions. It is shown that the model considered in this paper has two periodic orbits located in saturation regions and they are locally exponentially stable. Meanwhile, some conditions are derived to ensure that, in any designated region, the model has a locally exponentially stable or globally exponentially attractive periodic orbit located in it. As a special case of bi-periodicity, some results are also presented for the system with constant external inputs. Finally, four examples are given to illustrate the effectiveness of the obtained results.
Zajac, Zuzanna; Stith, Bradley M.; Bowling, Andrea C.; Langtimm, Catherine A.; Swain, Eric D.
2015-01-01
Habitat suitability index (HSI) models are commonly used to predict habitat quality and species distributions and are used to develop biological surveys, assess reserve and management priorities, and anticipate possible change under different management or climate change scenarios. Important management decisions may be based on model results, often without a clear understanding of the level of uncertainty associated with model outputs. We present an integrated methodology to assess the propagation of uncertainty from both inputs and structure of the HSI models on model outputs (uncertainty analysis: UA) and relative importance of uncertain model inputs and their interactions on the model output uncertainty (global sensitivity analysis: GSA). We illustrate the GSA/UA framework using simulated hydrology input data from a hydrodynamic model representing sea level changes and HSI models for two species of submerged aquatic vegetation (SAV) in southwest Everglades National Park: Vallisneria americana (tape grass) and Halodule wrightii (shoal grass). We found considerable spatial variation in uncertainty for both species, but distributions of HSI scores still allowed discrimination of sites with good versus poor conditions. Ranking of input parameter sensitivities also varied spatially for both species, with high habitat quality sites showing higher sensitivity to different parameters than low-quality sites. HSI models may be especially useful when species distribution data are unavailable, providing means of exploiting widely available environmental datasets to model past, current, and future habitat conditions. The GSA/UA approach provides a general method for better understanding HSI model dynamics, the spatial and temporal variation in uncertainties, and the parameters that contribute most to model uncertainty. 
Including an uncertainty and sensitivity analysis in modeling efforts as part of the decision-making framework will result in better-informed, more robust decisions.
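The UA/GSA workflow can be sketched in miniature: propagate sampled input uncertainty through a made-up HSI curve (UA), then attribute output variance to each input by fixing it at its nominal value (a crude stand-in for a full variance-based GSA). The suitability curves and input ranges are invented, not the Everglades models.

```python
import random
import statistics

# Toy UA/GSA: invented suitability curves and uniform input uncertainty.

def hsi(salinity_psu, depth_m):
    """Made-up habitat suitability index in [0, 1]."""
    s_suit = max(0.0, 1.0 - abs(salinity_psu - 15.0) / 15.0)
    d_suit = max(0.0, 1.0 - abs(depth_m - 1.0) / 1.0)
    return (s_suit * d_suit) ** 0.5   # geometric-mean combination

def sample(fix=None, n=20000, seed=1):
    """Monte Carlo HSI sample; optionally freeze one input at its nominal."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        sal = rng.uniform(5.0, 25.0) if fix != "salinity" else 15.0
        dep = rng.uniform(0.5, 1.5) if fix != "depth" else 1.0
        out.append(hsi(sal, dep))
    return out

total_var = statistics.pvariance(sample())          # uncertainty analysis
for name in ("salinity", "depth"):                  # crude sensitivity
    var_fixed = statistics.pvariance(sample(fix=name))
    print(name, "variance share ~", round(1.0 - var_fixed / total_var, 2))
```

The "share" is the fraction of output variance removed by fixing one input, a rough proxy for the ranking a formal GSA would produce.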
A Flight Dynamics Model for a Small Glider in Ambient Winds
NASA Technical Reports Server (NTRS)
Beeler, Scott C.; Moerder, Daniel D.; Cox, David E.
2003-01-01
In this paper we describe the equations of motion developed for a point-mass zero-thrust (gliding) aircraft model operating in an environment of spatially varying atmospheric winds. The wind effects are included as an integral part of the flight dynamics equations, and the model is controlled through the three aerodynamic control angles. Formulas for the aerodynamic coefficients for this model are constructed to include the effects of several different aspects contributing to the aerodynamic performance of the vehicle. Characteristic parameter values of the model are compared with those found in a different set of small glider simulations. We execute a set of example problems which solve the glider dynamics equations to find the aircraft trajectory given specified control inputs. The ambient wind conditions and glider characteristics are varied to compare the simulation results under these different circumstances.
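A planar point-mass sketch of the kind of equations described above: airspeed and flight-path-angle dynamics under lift and drag, with a uniform ambient wind added in the ground-frame kinematics. All coefficients are illustrative, not the paper's values.

```python
import math

# Planar point-mass glider with a uniform horizontal wind (toy numbers).
G = 9.81

def glide(wind_x=0.0, v0=12.0, gamma0=-0.1, t_end=10.0, dt=0.01,
          m=2.0, s=0.5, rho=1.225, cl=0.8, cd=0.06):
    """Euler-integrate airspeed v and flight-path angle gamma; the ambient
    wind enters only the ground-frame position update."""
    v, gamma, x, h = v0, gamma0, 0.0, 100.0
    for _ in range(int(t_end / dt)):
        q = 0.5 * rho * v * v * s            # dynamic pressure * area
        v += dt * (-(q * cd) / m - G * math.sin(gamma))
        gamma += dt * ((q * cl) / (m * v) - G * math.cos(gamma) / v)
        x += dt * (v * math.cos(gamma) + wind_x)   # ground-frame range
        h += dt * (v * math.sin(gamma))            # altitude
        if h <= 0.0:
            break
    return x, h

print(glide())             # still air
print(glide(wind_x=-3.0))  # steady headwind shortens ground range
```

Because the wind appears only in the kinematic (ground-frame) equations, the air-relative dynamics are identical in both runs; only the ground track changes.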
Dasgupta, Purnendu K
2008-12-05
Resolution of overlapped chromatographic peaks is generally accomplished by modeling the peaks as Gaussian or modified Gaussian functions. It is possible, even preferable, to use actual single analyte input responses for this purpose and a nonlinear least squares minimization routine such as that provided by Microsoft Excel Solver can then provide the resolution. In practice, the quality of the results obtained varies greatly due to small shifts in retention time. I show here that such deconvolution can be considerably improved if one or more of the response arrays are iteratively shifted in time.
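The approach can be sketched without Excel: use single-analyte responses as least-squares basis functions and search over small time shifts of one reference, keeping the shift with the lowest residual. Gaussian profiles stand in here for measured chromatograms.

```python
import numpy as np

# Toy deconvolution of an overlapped two-analyte signal: the single-analyte
# references are synthetic Gaussians, and analyte B's retention time has
# drifted by 3 points relative to its reference run.

t = np.arange(200)
ref_a = np.exp(-0.5 * ((t - 80) / 8.0) ** 2)
ref_b = np.exp(-0.5 * ((t - 95) / 8.0) ** 2)
mixture = 2.0 * ref_a + 1.5 * np.roll(ref_b, 3)

def fit_with_shift(mix, ra, rb, max_shift=10):
    """Least-squares amounts for each candidate shift of rb; return the
    (residual, shift, coefficients) with the smallest residual."""
    best = None
    for shift in range(-max_shift, max_shift + 1):
        basis = np.column_stack([ra, np.roll(rb, shift)])
        coef, *_ = np.linalg.lstsq(basis, mix, rcond=None)
        resid = np.linalg.norm(basis @ coef - mix)
        if best is None or resid < best[0]:
            best = (resid, shift, coef)
    return best

resid, shift, amounts = fit_with_shift(mixture, ref_a, ref_b)
print(shift, amounts.round(3))
```

Without the shift search the fit at shift 0 would misallocate area between the two analytes, which is exactly the retention-time sensitivity the abstract describes.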
Neural control of muscle force: indications from a simulation model
Luca, Carlo J. De
2013-01-01
We developed a model to investigate the influence of the muscle force twitch on the simulated firing behavior of motoneurons and muscle force production during voluntary isometric contractions. The input consists of an excitatory signal common to all the motor units in the pool of a muscle, consistent with the “common drive” property. Motor units respond with a hierarchically structured firing behavior wherein, at any time and force, firing rates are inversely proportional to recruitment threshold, as described by the “onion skin” property. Time- and force-dependent changes in muscle force production are introduced by varying the motor unit force twitches as a function of time or by varying the number of active motor units. A force feedback adjusts the input excitation, maintaining the simulated force at a target level. The simulations replicate motor unit behavior characteristics similar to those reported in previous empirical studies of sustained contractions: 1) the initial decrease and subsequent increase of firing rates, 2) the derecruitment and recruitment of motor units throughout sustained contractions, and 3) the continual increase in force fluctuation caused by the progressive recruitment of larger motor units. The model cautions against using motor unit behavior at recruitment and derecruitment without considering changes in the muscle's force generation capacity. It describes an alternative mechanism for the reserve capacity of motor units to generate extraordinary force. It supports the hypothesis that the control of motoneurons remains invariant during force-varying and sustained isometric contractions. PMID:23236008
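The "common drive" and "onion skin" input organization can be rendered in a few lines; the thresholds, gain, and rates below are illustrative numbers only: at any common excitation level, earlier-recruited (lower-threshold) units fire faster.

```python
# Toy onion-skin firing scheme: one excitation signal common to all units.

def firing_rates(excitation, thresholds, min_rate=8.0, gain=40.0):
    """Rates for one common excitation level: units above their recruitment
    threshold are silent; active units fire faster the lower their threshold."""
    return [min_rate + gain * (excitation - thr) if excitation >= thr else 0.0
            for thr in thresholds]

recruitment_thresholds = [0.1, 0.3, 0.5, 0.7]   # invented, ordered pool
print(firing_rates(0.6, recruitment_thresholds))
```

Raising the common drive recruits the next unit and raises all active rates together, which is the qualitative behavior the model's input stage encodes.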
Signaling mechanisms underlying the robustness and tunability of the plant immune network
Kim, Yungil; Tsuda, Kenichi; Igarashi, Daisuke; Hillmer, Rachel A.; Sakakibara, Hitoshi; Myers, Chad L.; Katagiri, Fumiaki
2014-01-01
Summary How does robust and tunable behavior emerge in a complex biological network? We sought to understand this for the signaling network controlling pattern-triggered immunity (PTI) in Arabidopsis. A dynamic network model containing four major signaling sectors, the jasmonate, ethylene, PAD4, and salicylate sectors, which together explain up to 80% of the PTI level, was built using data for dynamic sector activities and PTI levels under exhaustive combinatorial sector perturbations. Our regularized multiple regression model had a high level of predictive power and captured known and unexpected signal flows in the network. The sole inhibitory sector in the model, the ethylene sector, was central to the network robustness via its inhibition of the jasmonate sector. The model's multiple input sites linked specific signal input patterns varying in strength and timing to different network response patterns, indicating a mechanism enabling tunability. PMID:24439900
Preprocessor and postprocessor computer programs for a radial-flow finite-element model
Pucci, A.A.; Pope, D.A.
1987-01-01
Preprocessing and postprocessing computer programs that enhance the utility of the U.S. Geological Survey radial-flow model have been developed. The preprocessor program: (1) generates a triangular finite-element mesh from minimal data input, (2) produces graphical displays and tabulations of data for the mesh, and (3) prepares an input data file for use with the radial-flow model. The postprocessor program is a version of the radial-flow model, which was modified to (1) produce graphical output for simulation and field results, (2) generate a statistic for comparing the simulation results with observed data, and (3) allow hydrologic properties to vary in the simulated region. Examples of the use of the processor programs for a hypothetical aquifer test are presented. Instructions for the data files, format instructions, and a listing of the preprocessor and postprocessor source codes are given in the appendixes. (Author's abstract)
NASA Technical Reports Server (NTRS)
Sullivan, Michael J.
2005-01-01
This thesis develops a state estimation algorithm for the Centrifuge Rotor (CR) system where only relative measurements are available, with limited knowledge of both rotor imbalance disturbances and International Space Station (ISS) thruster disturbances. A Kalman filter is applied to a plant model augmented with sinusoidal disturbance states used to model the effects of both the rotor imbalance and the ISS thrusters on the CR relative motion measurement. The sinusoidal disturbance states compensate for the lack of availability of plant inputs for use in the Kalman filter. Testing confirms that complete disturbance modeling is necessary to ensure reliable estimation. Further testing shows that increased estimator operational bandwidth can be achieved through expansion of the disturbance model within the filter dynamics. In addition, Monte Carlo analysis shows the varying levels of robustness against defined plant/filter uncertainty variations.
Impact of Spatial Soil and Climate Input Data Aggregation on Regional Yield Simulations
Hoffmann, Holger; Zhao, Gang; Asseng, Senthold; Bindi, Marco; Biernath, Christian; Constantin, Julie; Coucheney, Elsa; Dechow, Rene; Doro, Luca; Eckersten, Henrik; Gaiser, Thomas; Grosz, Balázs; Heinlein, Florian; Kassie, Belay T.; Kersebaum, Kurt-Christian; Klein, Christian; Kuhnert, Matthias; Lewan, Elisabet; Moriondo, Marco; Nendel, Claas; Priesack, Eckart; Raynal, Helene; Roggero, Pier P.; Rötter, Reimund P.; Siebert, Stefan; Specka, Xenia; Tao, Fulu; Teixeira, Edmar; Trombi, Giacomo; Wallach, Daniel; Weihermüller, Lutz; Yeluripati, Jagadeesh; Ewert, Frank
2016-01-01
We show the error in water-limited yields simulated by crop models that is associated with spatially aggregated soil and climate input data. Crop simulations at large scales (regional, national, continental) frequently use input data of low resolution. Therefore, climate and soil data are often generated via averaging and sampling by area majority. This may bias simulated yields at large scales, with the bias varying widely across models. Thus, we evaluated the error associated with spatially aggregated soil and climate data for 14 crop models. Yields of winter wheat and silage maize were simulated under water-limited production conditions. We calculated this error from crop yields simulated at spatial resolutions from 1 to 100 km for the state of North Rhine-Westphalia, Germany. Most models showed yields biased by <15% when aggregating only soil data. The relative mean absolute error (rMAE) of most models using aggregated soil data was in the range of, or larger than, the inter-annual or inter-model variability in yields. This error increased further when both climate and soil data were aggregated. Distinct error patterns indicate that the rMAE may be estimated from a few soil variables. Illustrating the range of these aggregation effects across models, this study is a first step towards an ex-ante assessment of aggregation errors in large-scale simulations. PMID:27055028
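The aggregation effect and the rMAE metric can be illustrated with a toy concave yield response: because the response is nonlinear, the yield simulated from averaged soil data differs from the average of yields simulated at high resolution. All numbers below are invented.

```python
# Toy aggregation-error demo: nonlinear yield response to soil water capacity.

def yield_model(water_capacity_mm):
    """Invented concave response: yield saturates as capacity rises."""
    return 10.0 * water_capacity_mm / (50.0 + water_capacity_mm)

cells = [40.0, 60.0, 120.0, 200.0]             # per-cell soil water capacity
fine = [yield_model(c) for c in cells]          # high-resolution run
coarse = yield_model(sum(cells) / len(cells))   # one aggregated "pixel"

mean_fine = sum(fine) / len(fine)
rmae = sum(abs(coarse - y) for y in fine) / len(fine) / mean_fine
print(round(rmae, 3))   # relative mean absolute error of the aggregated run
```

With a linear response the two runs would agree exactly; the concavity is what makes the aggregated pixel overestimate the mean yield here.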
Modelling of polymer photodegradation for solar cell modules
NASA Technical Reports Server (NTRS)
Somersall, A. C.; Guillet, J. E.
1981-01-01
A computer program is evaluated that models and calculates, by numerical integration, the varying concentrations of chemical species formed during photooxidation of a polymeric material over time, using as input data a chosen set of elementary reactions, the corresponding rate constants, and a convenient set of starting conditions. Attempts were made to validate the proposed mechanism by experimentally monitoring the photooxidation products of small liquid alkanes, which are useful starting models for the ethylene segments of polymers like EVA. The model system proved inappropriate for the intended purposes. Another validation model is recommended.
Rispoli, Matthew; Holt, Janet K.
2017-01-01
Purpose: This follow-up study examined whether a parent intervention that increased the diversity of lexical noun phrase subjects in parent input and accelerated children's sentence diversity (Hadley et al., 2017) had indirect benefits on tense/agreement (T/A) morphemes in parent input and children's spontaneous speech. Method: Differences in input variables related to T/A marking were compared for parents who received toy talk instruction and a quasi-control group: input informativeness and full "is" declaratives. Growth in tense/agreement productivity (TAP) was modeled for 38 children from language samples obtained at 21, 24, 27, and 30 months. Parent input properties following instruction and children's growth in lexical diversity and sentence diversity were examined as predictors of TAP growth. Results: Instruction increased parent use of full "is" declaratives (ηp² ≥ .25) but not input informativeness. Children's sentence diversity was also a significant time-varying predictor of TAP growth. Two input variables, lexical noun phrase subject diversity and full "is" declaratives, were also significant predictors, even after controlling for children's sentence diversity. Conclusions: These findings establish a link between children's sentence diversity and the development of T/A morphemes and provide evidence about characteristics of input that facilitate growth in this grammatical system. PMID:28892819
Similarity-transformed dyson mapping and SDG-interacting boson hamiltonian
NASA Astrophysics Data System (ADS)
Navrátil, P.; Dobeš, J.
1991-10-01
The sdg-interacting boson hamiltonian is constructed from the fermion shell-model input. The seniority boson mapping as given by the similarity-transformed Dyson boson mapping is used. The s, d, and g collective boson amplitudes are determined consistently from the mapped hamiltonian. Influence of the starting shell-model parameters is discussed. Calculations for the Sm isotopic chain and for the 148Sm, 150Nd, and 196Pt nuclei are presented. Calculated energy levels as well as E2 and E4 properties agree rather well with experimental ones. To obtain such agreement, the input shell-model parameters cannot be fixed at a constant set for several nuclei but have to be somewhat varied, especially in the deformed region. Possible reasons for this variation are discussed. Effects of the explicit g-boson consideration are shown.
Cost-effective computational method for radiation heat transfer in semi-crystalline polymers
NASA Astrophysics Data System (ADS)
Boztepe, Sinan; Gilblas, Rémi; de Almeida, Olivier; Le Maoult, Yannick; Schmidt, Fabrice
2018-05-01
This paper introduces a cost-effective numerical model for infrared (IR) heating of semi-crystalline polymers. For the numerical and experimental studies presented here, semi-crystalline polyethylene (PE) was used. The optical properties of PE were experimentally analyzed under varying temperature, and the obtained results were used as input in the numerical studies. The model was built on an optically homogeneous medium assumption, while the strong variation in the thermo-optical properties of semi-crystalline PE under heating was taken into account. Thus, the change in the amount of radiative energy absorbed by the PE medium, induced by its temperature-dependent thermo-optical properties, was introduced in the model. The computational study was carried out as an iterative closed-loop computation, in which the absorbed radiation was computed using an in-house developed radiation heat transfer algorithm (RAYHEAT) and the computed results were transferred into the commercial software COMSOL Multiphysics to solve the transient heat transfer problem and predict the temperature field. The predicted temperature field was used to iterate the thermo-optical properties of PE, which vary under heating. To analyze the accuracy of the numerical model, experimental analyses were carried out by performing IR-thermographic measurements during the heating of the PE plate. The applicability of the model in terms of computational cost, number of numerical inputs, and accuracy was highlighted.
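The iterative closed-loop coupling can be sketched as a fixed-point loop: a temperature-dependent absorptivity sets the absorbed power, the resulting temperature updates the absorptivity, and the loop repeats until convergence. The property curve and lumped thermal response below are invented placeholders, not the RAYHEAT/COMSOL models.

```python
# Toy closed-loop radiation/thermal iteration with invented coefficients.

def absorptivity(temp_c):
    """Hypothetical: crystallinity falls on heating, so absorption rises,
    capped at an amorphous-melt plateau."""
    return min(0.95, 0.6 + 0.002 * (temp_c - 20.0))

def heated_temp(absorbed_w):
    """Toy lumped steady thermal response to absorbed radiative power."""
    return 20.0 + 0.08 * absorbed_w

incident_w = 1000.0
temp = 20.0
for iteration in range(50):
    absorbed = absorptivity(temp) * incident_w   # radiation step
    new_temp = heated_temp(absorbed)             # thermal step
    if abs(new_temp - temp) < 1e-6:              # loop convergence check
        break
    temp = new_temp
print(round(temp, 2), iteration)
```

The loop converges geometrically here because the feedback gain is small; the real model iterates the same way between the radiative and transient thermal solvers.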
NASA Astrophysics Data System (ADS)
Beecham, J. A.; Engelhard, G. H.
2007-10-01
An ecological economic model of trawling is presented to demonstrate the effect of trawling location choice strategy on net input (the rate of economic gain from fish caught per time spent, less costs). Fishing location choice is considered to be a dynamic process whereby trawlers choose from among a repertoire of plastic strategies that they modify if their gains fall below a fixed proportion of the mean gains of the fleet as a whole. The distribution of fishing across different areas of a fishery follows an approximate ideal free distribution (IFD) with varying noise due to uncertainty. The least-productive areas are not utilised because initial net input never reaches the mean yield of better areas subject to competitive exploitation. In cases where there is a weak temporal autocorrelation between fish stocks in a specific location, a plastic strategy of local translocation between trawls, mixed with longer-range translocation, increases realised input. The trawler can change its translocation strategy in the light of information about recent trawling success compared to its long-term average but, in contrast to predictions of the Marginal Value Theorem (MVT) model, does not know for certain what it will find by moving, so may need to sample new patches. The combination of the two types of translocation mirrored beam-trawling strategies used by the Dutch fleet, and the resultant distribution of trawling effort is confirmed by analysis of the historical effort distribution of British otter trawling fleets in the North Sea. Fisheries exploitation represents an area where dynamic agent-based adaptive models may be a better representation of the economic dynamics of a fleet than classically inspired optimisation models.
Regan, R.S.; Schaffranek, R.W.; Baltzer, R.A.
1996-01-01
A system of functional utilities and computer routines, collectively identified as the Time-Dependent Data System (TDDS), has been developed and documented by the U.S. Geological Survey. The TDDS is designed for processing time sequences of discrete, fixed-interval, time-varying geophysical data--in particular, hydrologic data. Such data include various dependent variables and related parameters typically needed as input for execution of one-, two-, and three-dimensional hydrodynamic/transport and associated water-quality simulation models. Such data can also include time sequences of results generated by numerical simulation models. Specifically, TDDS provides the functional capabilities to process, store, retrieve, and compile data in a Time-Dependent Data Base (TDDB) in response to interactive user commands or pre-programmed directives. Thus, the TDDS, in conjunction with a companion TDDB, provides a ready means for processing, preparation, and assembly of time sequences of data for input to models; collection, categorization, and storage of simulation results from models; and intercomparison of field data and simulation results. The TDDS can be used to edit and verify prototype, time-dependent data to affirm that selected sequences of data are accurate, contiguous, and appropriate for numerical simulation modeling. It can be used to prepare time-varying data in a variety of formats, such as tabular lists, sequential files, arrays, and graphical displays, as well as line-printer plots of single- or multiparameter data sets. The TDDB is organized and maintained as a direct-access data base by the TDDS, thus providing simple yet efficient data management and access. A single, easily used program interface that provides all access to and from a particular TDDB is available for use directly within models, other user-provided programs, and other data systems.
This interface, together with each major functional utility of the TDDS, is described and documented in this report.
Structure, functioning, and cumulative stressors of Mediterranean deep-sea ecosystems
NASA Astrophysics Data System (ADS)
Tecchio, Samuele; Coll, Marta; Sardà, Francisco
2015-06-01
Environmental stressors, such as climate fluctuations, and anthropogenic stressors, such as fishing, are of major concern for the management of deep-sea ecosystems. Deep-water habitats are limited by primary productivity and are mainly dependent on the vertical input of organic matter from the surface. Global change over the latest decades is imparting variations in primary productivity levels across oceans, and thus it has an impact on the amount of organic matter landing on the deep seafloor. In addition, anthropogenic impacts are now reaching the deep ocean. The Mediterranean Sea, the largest enclosed basin on the planet, is not an exception. However, ecosystem-level studies of response to varying food input and anthropogenic stressors on deep-sea ecosystems are still scant. We present here a comparative ecological network analysis of three food webs of the deep Mediterranean Sea, with contrasting trophic structure. After modelling the flows of these food webs with the Ecopath with Ecosim approach, we compared indicators of network structure and functioning. We then developed temporal dynamic simulations varying the organic matter input to evaluate its potential effect. Results show that, following the west-to-east gradient in the Mediterranean Sea of marine snow input, organic matter recycling increases, net production decreases to negative values and trophic organisation is overall reduced. The levels of food-web activity followed the gradient of organic matter availability at the seafloor, confirming that deep-water ecosystems directly depend on marine snow and are therefore influenced by variations of energy input, such as climate-driven changes. 
In addition, simulations of varying marine snow arrival at the seafloor, combined with the hypothesis of a possible fishery expansion on the lower continental slope in the western basin, evidence that the trawling fishery may pose an impact which could be an order of magnitude stronger than a climate-driven reduction of marine snow.
Field mapping for heat capacity mapping determinations: Ground support for airborne thermal surveys
NASA Technical Reports Server (NTRS)
Lyon, R. J. P.
1976-01-01
Thermal models independently derived by Watson, Outcalt, and Rosema were compared using similar input data and found to yield very different results. Each model has a varying degree of sensitivity to any specified parameter. Data collected at Pisgah Crater-Lavic Lake were re-examined, revealing serious discrepancies in the thermal inertia results from Jet Propulsion Laboratory calculations made using the same original data sets.
NASA Astrophysics Data System (ADS)
Yan, Weijin; Mayorga, Emilio; Li, Xinyan; Seitzinger, Sybil P.; Bouwman, A. F.
2010-12-01
In this paper, we estimate the inputs of nitrogen (N) and exports of dissolved inorganic nitrogen (DIN) from the Changjiang River to the estuary for the period 1970-2003, using the global NEWS-DIN model. Modeled DIN yields increased from 260 kg N km⁻² yr⁻¹ in 1970 to 895 kg N km⁻² yr⁻¹ in 2003. The study demonstrated a varied contribution of different N inputs to river DIN yields during the period 1970-2003. Chemical fertilizer and manure together contributed about half of the river DIN yields, while atmospheric N deposition contributed an average of 21% of DIN yields over the period. Biological N fixation contributed 40% of DIN yields in 1970, but this share substantially decreased to 13% by 2003. The contribution of point sewage N input to DIN yields also showed a decreasing trend, averaging 8% over the whole period. We also discuss possible future trajectories of DIN export based on the Global NEWS implementation of the Millennium Ecosystem Assessment scenarios. Our results indicate that anthropogenically enhanced N inputs dominate and will continue to dominate river DIN yields under changing human pressures in the basin. Therefore, nitrogen pollution is and will continue to be a great challenge for China.
Baum, Rex L.; Savage, William Z.; Godt, Jonathan W.
2008-01-01
The Transient Rainfall Infiltration and Grid-Based Regional Slope-Stability Model (TRIGRS) is a Fortran program designed for modeling the timing and distribution of shallow, rainfall-induced landslides. The program computes transient pore-pressure changes, and attendant changes in the factor of safety, due to rainfall infiltration. The program models rainfall infiltration, resulting from storms that have durations ranging from hours to a few days, using analytical solutions for partial differential equations that represent one-dimensional, vertical flow in isotropic, homogeneous materials for either saturated or unsaturated conditions. Use of step-function series allows the program to represent variable rainfall input, and a simple runoff routing model allows the user to divert excess water from impervious areas onto more permeable downslope areas. The TRIGRS program uses a simple infinite-slope model to compute factor of safety on a cell-by-cell basis. An approximate formula for effective stress in unsaturated materials aids computation of the factor of safety in unsaturated soils. Horizontal heterogeneity is accounted for by allowing material properties, rainfall, and other input values to vary from cell to cell. This command-line program is used in conjunction with geographic information system (GIS) software to prepare input grids and visualize model results.
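The per-cell stability calculation described above can be sketched with the standard infinite-slope factor-of-safety formula with pore pressure; the variable names and example values below are assumptions for illustration, not TRIGRS's actual code or defaults.

```python
import math

def factor_of_safety(c, phi_deg, slope_deg, depth, psi,
                     gamma_s=22.0, gamma_w=9.81):
    """Infinite-slope factor of safety with transient pore pressure.

    c         -- effective cohesion (kPa)
    phi_deg   -- effective friction angle (degrees)
    slope_deg -- slope angle (degrees)
    depth     -- depth of the potential slip surface (m)
    psi       -- pressure head at that depth (m of water)
    gamma_s   -- soil unit weight (kN/m^3)
    gamma_w   -- unit weight of water (kN/m^3)
    """
    beta = math.radians(slope_deg)
    phi = math.radians(phi_deg)
    # Driving shear stress on the slope-parallel slip surface
    tau = gamma_s * depth * math.sin(beta) * math.cos(beta)
    # Effective normal stress, reduced by the pore pressure psi*gamma_w
    sigma_eff = gamma_s * depth * math.cos(beta) ** 2 - psi * gamma_w
    return (c + sigma_eff * math.tan(phi)) / tau

# Rising pore pressure during a storm drives FS downward toward failure (FS < 1)
dry = factor_of_safety(c=4.0, phi_deg=30.0, slope_deg=35.0, depth=2.0, psi=0.0)
wet = factor_of_safety(c=4.0, phi_deg=30.0, slope_deg=35.0, depth=2.0, psi=1.5)
```

Running this over every cell of a pore-pressure grid at each output time is, in essence, what produces the program's time-varying stability maps.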
GUI for Computational Simulation of a Propellant Mixer
NASA Technical Reports Server (NTRS)
Figueroa, Fernando; Richter, Hanz; Barbieri, Enrique; Granger, Jamie
2005-01-01
Control Panel is a computer program that generates a graphical user interface (GUI) for computational simulation of a rocket-test-stand propellant mixer in which gaseous hydrogen (GH2) is injected into flowing liquid hydrogen (LH2) to obtain a combined flow having desired thermodynamic properties. The GUI is used in conjunction with software that models the mixer as a system having three inputs (the positions of the GH2 and LH2 inlet valves and an outlet valve) and three outputs (the pressure inside the mixer and the outlet flow temperature and flow rate). The user can specify valve characteristics and thermodynamic properties of the input fluids via user-friendly dialog boxes. The user can enter temporally varying input values or temporally varying desired output values. The GUI provides (1) a set-point calculator function for determining fixed valve positions that yield desired output values and (2) simulation functions that predict the response of the mixer to variations in the properties of the LH2 and GH2 and manual- or feedback-control variations in valve positions. The GUI enables scheduling of a sequence of operations that includes switching from manual to feedback control when a certain event occurs.
Development of a hydraulic model of the human systemic circulation
NASA Technical Reports Server (NTRS)
Sharp, M. K.; Dharmalingham, R. K.
1999-01-01
Physical and numeric models of the human circulation are constructed for a number of objectives, including studies and training in physiologic control, interpretation of clinical observations, and testing of prosthetic cardiovascular devices. For many of these purposes it is important to quantitatively validate the dynamic response of the models in terms of the input impedance (Z = oscillatory pressure/oscillatory flow). To address this need, the authors developed an improved physical model. Using a computer study, the authors first identified the configuration of lumped parameter elements in a model of the systemic circulation; the result was a good match with human aortic input impedance with a minimum number of elements. Design, construction, and testing of a hydraulic model analogous to the computer model followed. Numeric results showed that a three element model with two resistors and one compliance produced reasonable matching without undue complication. The subsequent analogous hydraulic model included adjustable resistors incorporating a sliding plate to vary the flow area through a porous material and an adjustable compliance consisting of a variable-volume air chamber. The response of the hydraulic model compared favorably with other circulation models.
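The three-element configuration found above (two resistors and one compliance) is the classic Windkessel arrangement; its input impedance can be sketched directly, with parameter values that are illustrative rather than the paper's fitted ones.

```python
import math

def windkessel_impedance(freq_hz, Rc, Rp, C):
    """Input impedance Z(f) of a three-element Windkessel: characteristic
    resistance Rc in series with peripheral resistance Rp shunted by
    compliance C. Units are arbitrary but must be consistent."""
    omega = 2 * math.pi * freq_hz
    return Rc + Rp / (1 + 1j * omega * Rp * C)

# At zero frequency the compliance carries no flow, so Z = Rc + Rp;
# at high frequency the compliance shorts out Rp, so Z -> Rc.
z_low = windkessel_impedance(0.0, Rc=0.05, Rp=1.0, C=1.5)
z_high = windkessel_impedance(1e6, Rc=0.05, Rp=1.0, C=1.5)
```

Matching the measured aortic impedance spectrum amounts to tuning Rc, Rp, and C until |Z(f)| and its phase track the human data across the frequencies of interest.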
Summary of the key features of seven biomathematical models of human fatigue and performance.
Mallis, Melissa M; Mejdal, Sig; Nguyen, Tammy T; Dinges, David F
2004-03-01
Biomathematical models that quantify the effects of circadian and sleep/wake processes on the regulation of alertness and performance have been developed in an effort to predict the magnitude and timing of fatigue-related responses in a variety of contexts (e.g., transmeridian travel, sustained operations, shift work). This paper summarizes key features of seven biomathematical models reviewed as part of the Fatigue and Performance Modeling Workshop held in Seattle, WA, on June 13-14, 2002. The Workshop was jointly sponsored by the National Aeronautics and Space Administration, U.S. Department of Defense, U.S. Army Medical Research and Materiel Command, Office of Naval Research, Air Force Office of Scientific Research, and U.S. Department of Transportation. An invitation was sent to developers of seven biomathematical models that were commonly cited in scientific literature and/or supported by government funding. On acceptance of the invitation to attend the Workshop, developers were asked to complete a survey of the goals, capabilities, inputs, and outputs of their biomathematical models of alertness and performance. Data from the completed surveys were summarized and juxtaposed to provide a framework for comparing features of the seven models. Survey responses revealed that models varied greatly relative to their reported goals and capabilities. While all modelers reported that circadian factors were key components of their capabilities, they differed markedly with regard to the roles of sleep and work times as input factors for prediction: four of the seven models had work time as their sole input variable(s), while the other three models relied on various aspects of sleep timing for model input. Models also differed relative to outputs: five sought to predict results from laboratory experiments, field, and operational data, while two models were developed without regard to predicting laboratory experimental results. 
All modelers provided published papers describing their models, with three of the models being proprietary. Although all models appear to have been fundamentally influenced by the two-process model of sleep regulation by Borbély, there is considerable diversity among them in the number and type of input and output variables, and their stated goals and capabilities.
Summary of the key features of seven biomathematical models of human fatigue and performance
NASA Technical Reports Server (NTRS)
Mallis, Melissa M.; Mejdal, Sig; Nguyen, Tammy T.; Dinges, David F.
2004-01-01
BACKGROUND: Biomathematical models that quantify the effects of circadian and sleep/wake processes on the regulation of alertness and performance have been developed in an effort to predict the magnitude and timing of fatigue-related responses in a variety of contexts (e.g., transmeridian travel, sustained operations, shift work). This paper summarizes key features of seven biomathematical models reviewed as part of the Fatigue and Performance Modeling Workshop held in Seattle, WA, on June 13-14, 2002. The Workshop was jointly sponsored by the National Aeronautics and Space Administration, U.S. Department of Defense, U.S. Army Medical Research and Materiel Command, Office of Naval Research, Air Force Office of Scientific Research, and U.S. Department of Transportation. METHODS: An invitation was sent to developers of seven biomathematical models that were commonly cited in scientific literature and/or supported by government funding. On acceptance of the invitation to attend the Workshop, developers were asked to complete a survey of the goals, capabilities, inputs, and outputs of their biomathematical models of alertness and performance. Data from the completed surveys were summarized and juxtaposed to provide a framework for comparing features of the seven models. RESULTS: Survey responses revealed that models varied greatly relative to their reported goals and capabilities. While all modelers reported that circadian factors were key components of their capabilities, they differed markedly with regard to the roles of sleep and work times as input factors for prediction: four of the seven models had work time as their sole input variable(s), while the other three models relied on various aspects of sleep timing for model input. Models also differed relative to outputs: five sought to predict results from laboratory experiments, field, and operational data, while two models were developed without regard to predicting laboratory experimental results. 
All modelers provided published papers describing their models, with three of the models being proprietary. CONCLUSIONS: Although all models appear to have been fundamentally influenced by the two-process model of sleep regulation by Borbély, there is considerable diversity among them in the number and type of input and output variables, and their stated goals and capabilities.
Automatic Adaptation to Fast Input Changes in a Time-Invariant Neural Circuit
Bharioke, Arjun; Chklovskii, Dmitri B.
2015-01-01
Neurons must faithfully encode signals that can vary over many orders of magnitude despite having only limited dynamic ranges. For a correlated signal, this dynamic range constraint can be relieved by subtracting away components of the signal that can be predicted from the past, a strategy known as predictive coding, which relies on learning the input statistics. However, the statistics of natural input signals can also vary over very short time scales, e.g., following saccades across a visual scene. To maintain a reduced transmission cost for signals with rapidly varying statistics, neuronal circuits implementing predictive coding must also rapidly adapt their properties. Experimentally, in different sensory modalities, sensory neurons have shown such adaptations within 100 ms of an input change. Here, we show first that linear neurons connected in a feedback inhibitory circuit can implement predictive coding. We then show that adding a rectification nonlinearity to such a feedback inhibitory circuit allows it to automatically adapt and approximate the performance of an optimal linear predictive coding network, over a wide range of inputs, while keeping its underlying temporal and synaptic properties unchanged. We demonstrate that the resulting changes to the linearized temporal filters of this nonlinear network match the fast adaptations observed experimentally in different sensory modalities, in different vertebrate species. Therefore, the nonlinear feedback inhibitory network can provide automatic adaptation to fast varying signals, maintaining the dynamic range necessary for accurate neuronal transmission of natural inputs. PMID:26247884
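The payoff of predictive coding, transmitting only the residual left after subtracting a prediction from the past, can be sketched with a one-tap linear predictor; the AR(1) input and its coefficient are illustrative assumptions, not the paper's circuit model.

```python
import random

def residual_variance(x, a):
    """Variance of the residual after subtracting the one-step linear
    prediction a*x[t-1] -- what a feedback-inhibitory loop would transmit."""
    r = [x[t] - a * x[t - 1] for t in range(1, len(x))]
    m = sum(r) / len(r)
    return sum((v - m) ** 2 for v in r) / len(r)

random.seed(0)
# Strongly correlated AR(1) input: x[t] = 0.9*x[t-1] + unit-variance noise
x = [0.0]
for _ in range(5000):
    x.append(0.9 * x[-1] + random.gauss(0.0, 1.0))

raw = residual_variance(x, 0.0)    # transmit the signal itself
coded = residual_variance(x, 0.9)  # transmit the prediction residual
```

The coded variance is far smaller than the raw variance, which is exactly the dynamic-range relief the abstract describes; a mismatched coefficient (stale statistics) erodes that advantage, motivating the fast adaptation studied in the paper.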
Testing the robustness of management decisions to uncertainty: Everglades restoration scenarios.
Fuller, Michael M; Gross, Louis J; Duke-Sylvester, Scott M; Palmer, Mark
2008-04-01
To effectively manage large natural reserves, resource managers must prepare for future contingencies while balancing the often conflicting priorities of different stakeholders. To deal with these issues, managers routinely employ models to project the response of ecosystems to different scenarios that represent alternative management plans or environmental forecasts. Scenario analysis is often used to rank such alternatives to aid the decision making process. However, model projections are subject to uncertainty in assumptions about model structure, parameter values, environmental inputs, and subcomponent interactions. We introduce an approach for testing the robustness of model-based management decisions to the uncertainty inherent in complex ecological models and their inputs. We use relative assessment to quantify the relative impacts of uncertainty on scenario ranking. To illustrate our approach we consider uncertainty in parameter values and uncertainty in input data, with specific examples drawn from the Florida Everglades restoration project. Our examples focus on two alternative 30-year hydrologic management plans that were ranked according to their overall impacts on wildlife habitat potential. We tested the assumption that varying the parameter settings and inputs of habitat index models does not change the rank order of the hydrologic plans. We compared the average projected index of habitat potential for four endemic species and two wading-bird guilds to rank the plans, accounting for variations in parameter settings and water level inputs associated with hypothetical future climates. Indices of habitat potential were based on projections from spatially explicit models that are closely tied to hydrology. For the American alligator, the rank order of the hydrologic plans was unaffected by substantial variation in model parameters. 
By contrast, simulated major shifts in water levels led to reversals in the ranks of the hydrologic plans in 24.1-30.6% of the projections for the wading bird guilds and several individual species. By exposing the differential effects of uncertainty, relative assessment can help resource managers assess the robustness of scenario choice in model-based policy decisions.
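For a single pair of plans, the relative-assessment test above reduces to counting how often the baseline ranking reverses across sampled projections; the habitat indices below are invented for illustration, not Everglades results.

```python
def rank_reversal_fraction(scores_a, scores_b):
    """Fraction of paired projections in which plan B outscores plan A,
    i.e., the baseline ranking (A over B) reverses."""
    reversals = sum(1 for a, b in zip(scores_a, scores_b) if b > a)
    return reversals / len(scores_a)

# Hypothetical habitat-potential indices for two plans across 8 input scenarios
plan_a = [0.71, 0.68, 0.74, 0.66, 0.70, 0.69, 0.73, 0.65]
plan_b = [0.69, 0.70, 0.71, 0.68, 0.66, 0.67, 0.70, 0.64]
frac = rank_reversal_fraction(plan_a, plan_b)
```

A fraction near zero indicates a decision robust to the sampled uncertainty; the study's 24.1-30.6% reversal rates for wading birds signal a rank order that uncertainty can plausibly overturn.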
NASA Astrophysics Data System (ADS)
Nossent, Jiri; Pereira, Fernando; Bauwens, Willy
2015-04-01
Precipitation is one of the key inputs for hydrological models. As long as the values of the hydrological model parameters are fixed, a variation of the rainfall input is expected to induce a change in the model output. Given the increased awareness of uncertainty in rainfall records, it becomes more important to understand the impact of this input-output dynamic. Yet, modellers often still have the intention to mimic the observed flow, whatever the deviation of the employed records from the actual rainfall might be, by recklessly adapting the model parameter values. But is it actually possible to vary the model parameter values in such a way that a certain (observed) model output can be generated based on inaccurate rainfall inputs? Thus, how important is the rainfall uncertainty for the model output with respect to the model parameter importance? To address this question, we apply the Sobol' sensitivity analysis method to assess and compare the importance of the rainfall uncertainty and the model parameters for the output of the hydrological model. In order to be able to treat the regular model parameters and input uncertainty in the same way, and to allow a comparison of their influence, a possible approach is to represent the rainfall uncertainty by a parameter. To this end, we apply so-called rainfall multipliers on hydrologically independent storm events, as a probabilistic parameter representation of the possible rainfall variation. As available rainfall records are very often point measurements at a discrete time step (hourly, daily, monthly, …), they contain uncertainty due to a latent lack of spatial and temporal variability. The influence of the latter variability can also differ between hydrological models with different spatial and temporal scales. Therefore, we perform the sensitivity analyses on a semi-distributed model (SWAT) and a lumped model (NAM).
The assessment and comparison of the importance of the rainfall uncertainty and the model parameters is achieved by considering different scenarios for the included parameters and the state of the models.
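One crude way to compare a rainfall multiplier against a regular model parameter is a binning estimate of the first-order variance-based index Var(E[Y|X])/Var(Y). Note this is a simplification of the Sobol' method (which uses paired sample matrices), and the toy "hydrological model" below is an assumption for illustration only.

```python
import random

def first_order_index(xs, ys, n_bins=20):
    """Crude first-order sensitivity index: Var(E[Y|X]) / Var(Y),
    estimated by sorting samples on X and averaging Y within equal bins."""
    pairs = sorted(zip(xs, ys))
    n = len(pairs)
    size = n // n_bins
    bin_means = []
    for b in range(n_bins):
        chunk = [y for _, y in pairs[b * size:(b + 1) * size]]
        bin_means.append(sum(chunk) / len(chunk))
    mean_y = sum(ys) / n
    var_y = sum((y - mean_y) ** 2 for y in ys) / n
    var_cond = sum((m - mean_y) ** 2 for m in bin_means) / n_bins
    return var_cond / var_y

random.seed(1)
# Toy model whose output is dominated by the rainfall multiplier m
m = [random.uniform(0.7, 1.3) for _ in range(4000)]  # rainfall multiplier
k = [random.uniform(0.0, 1.0) for _ in range(4000)]  # a model parameter
y = [5.0 * mi + 0.5 * ki for mi, ki in zip(m, k)]
s_rain = first_order_index(m, y)
s_param = first_order_index(k, y)
```

Here s_rain dwarfs s_param, mirroring the situation the abstract probes: whether rainfall uncertainty or parameter choice controls the simulated flow.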
Spiking Models for Level-Invariant Encoding
Brette, Romain
2012-01-01
Levels of ecological sounds vary over several orders of magnitude, but the firing rate and membrane potential of a neuron are much more limited in range. In binaural neurons of the barn owl, tuning to interaural delays is independent of level differences. Yet a monaural neuron with a fixed threshold should fire earlier in response to louder sounds, which would disrupt the tuning of these neurons. How could spike timing be independent of input level? Here I derive theoretical conditions for a spiking model to be insensitive to input level. The key property is a dynamic change in spike threshold. I then show how level invariance can be physiologically implemented, with specific ionic channel properties. It appears that these ingredients are indeed present in monaural neurons of the sound localization pathway of birds and mammals. PMID:22291634
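The key property named above, a spike threshold that tracks input level, can be illustrated with a toy ramp stimulus; the functional forms and constants below are assumptions chosen only to make the contrast visible.

```python
def first_crossing_time(level, theta_fn, dt=1e-4, t_max=0.1):
    """Time at which the level-scaled ramp input level*t first exceeds
    the threshold theta_fn(level). Returns None if never crossed."""
    t = 0.0
    while t < t_max:
        if level * t >= theta_fn(level):
            return t
        t += dt
    return None

def fixed(level):
    return 0.02          # static threshold: louder input crosses earlier

def scaled(level):
    return 0.004 * level  # threshold proportional to input level

t_soft_fixed = first_crossing_time(1.0, fixed)
t_loud_fixed = first_crossing_time(10.0, fixed)
t_soft_scaled = first_crossing_time(1.0, scaled)
t_loud_scaled = first_crossing_time(10.0, scaled)
```

With the fixed threshold, a tenfold louder input fires much earlier, disrupting timing; with the level-scaled threshold, the first-spike time is identical across levels, the invariance the paper derives conditions for.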
NASA Technical Reports Server (NTRS)
Jian, L. K.; MacNeice, P. J.; Mays, M. L.; Taktakishvili, A.; Odstrcil, D.; Jackson, B.; Yu, H.-S.; Riley, P.; Sokolov, I. V.
2016-01-01
The prediction of the background global solar wind is a necessary part of space weather forecasting. Several coronal and heliospheric models have been installed and/or recently upgraded at the Community Coordinated Modeling Center (CCMC), including the Wang-Sheeley-Arge (WSA)-Enlil model, MHD-Around-a-Sphere (MAS)-Enlil model, Space Weather Modeling Framework (SWMF), and heliospheric tomography using interplanetary scintillation data. Ulysses recorded the last fast latitudinal scan from southern to northern poles in 2007. By comparing the modeling results with Ulysses observations over seven Carrington rotations, we have extended our third-party validation from the previous near-Earth solar wind to middle to high latitudes, in the same late declining phase of solar cycle 23. Besides visual comparison, we have quantitatively assessed the models' capabilities in reproducing the time series, statistics, and latitudinal variations of solar wind parameters for a specific range of model parameter settings, inputs, and grid configurations available at CCMC. The WSA-Enlil model results vary with three different magnetogram inputs. The MAS-Enlil model captures the solar wind parameters well, despite its underestimation of the speed at middle to high latitudes. The new version of SWMF misses many solar wind variations probably because it uses lower grid resolution than other models. The interplanetary scintillation-tomography cannot capture the latitudinal variations of solar wind well yet. Because the model performance varies with parameter settings which are optimized for different epochs or flow states, the performance metric study provided here can serve as a template that researchers can use to validate the models for the time periods and conditions of interest to them.
Development of a Linear Stirling Model with Varying Heat Inputs
NASA Technical Reports Server (NTRS)
Regan, Timothy F.; Lewandowski, Edward J.
2007-01-01
The linear model of the Stirling system developed by NASA Glenn Research Center (GRC) has been extended to include a user-specified heat input. Previously developed linear models were limited to the Stirling convertor and electrical load. They represented the thermodynamic cycle with pressure factors that remained constant. The numerical values of the pressure factors were generated by linearizing GRC's nonlinear System Dynamic Model (SDM) of the convertor at a chosen operating point. The pressure factors were fixed for that operating point, thus, the model lost accuracy if a transition to a different operating point were simulated. Although the previous linear model was used in developing controllers that manipulated current, voltage, and piston position, it could not be used in the development of control algorithms that regulated hot-end temperature. This basic model was extended to include the thermal dynamics associated with a hot-end temperature that varies over time in response to external changes as well as to changes in the Stirling cycle. The linear model described herein includes not only dynamics of the piston, displacer, gas, and electrical circuit, but also the transient effects of the heater head thermal inertia. The linear version algebraically couples two separate linear dynamic models, one model of the Stirling convertor and one model of the thermal system, through the pressure factors. The thermal system model includes heat flow of heat transfer fluid, insulation loss, and temperature drops from the heat source to the Stirling convertor expansion space. The linear model was compared to a nonlinear model, and performance was very similar. The resulting linear model can be implemented in a variety of computing environments, and is suitable for analysis with classical and state space controls analysis techniques.
Development of a Linear Stirling System Model with Varying Heat Inputs
NASA Technical Reports Server (NTRS)
Regan, Timothy F.; Lewandowski, Edward J.
2007-01-01
The linear model of the Stirling system developed by NASA Glenn Research Center (GRC) has been extended to include a user-specified heat input. Previously developed linear models were limited to the Stirling convertor and electrical load. They represented the thermodynamic cycle with pressure factors that remained constant. The numerical values of the pressure factors were generated by linearizing GRC's nonlinear System Dynamic Model (SDM) of the convertor at a chosen operating point. The pressure factors were fixed for that operating point, thus, the model lost accuracy if a transition to a different operating point were simulated. Although the previous linear model was used in developing controllers that manipulated current, voltage, and piston position, it could not be used in the development of control algorithms that regulated hot-end temperature. This basic model was extended to include the thermal dynamics associated with a hot-end temperature that varies over time in response to external changes as well as to changes in the Stirling cycle. The linear model described herein includes not only dynamics of the piston, displacer, gas, and electrical circuit, but also the transient effects of the heater head thermal inertia. The linear version algebraically couples two separate linear dynamic models, one model of the Stirling convertor and one model of the thermal system, through the pressure factors. The thermal system model includes heat flow of heat transfer fluid, insulation loss, and temperature drops from the heat source to the Stirling convertor expansion space. The linear model was compared to a nonlinear model, and performance was very similar. The resulting linear model can be implemented in a variety of computing environments, and is suitable for analysis with classical and state space controls analysis techniques.
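The coupled model described above reduces to a linear state-space system of the form x' = Ax + Bu, which any environment can integrate with a simple scheme; the two-state example below is illustrative, not the actual convertor/thermal model.

```python
def simulate_linear(A, B, u, x0, dt, steps):
    """Forward-Euler integration of the linear state-space system
    x' = A x + B u(t), returning the state after `steps` steps."""
    x = list(x0)
    n = len(x0)
    for k in range(steps):
        uk = u(k * dt)
        dx = [sum(A[i][j] * x[j] for j in range(n)) + B[i] * uk
              for i in range(n)]
        x = [x[i] + dt * dx[i] for i in range(n)]
    return x

# Illustrative damped two-state mode driven by a constant "heat input" u = 1.
# Steady state solves A x = -B u, giving x = [0.25, 0.0].
A = [[0.0, 1.0],
     [-4.0, -2.0]]
B = [0.0, 1.0]
x_final = simulate_linear(A, B, u=lambda t: 1.0, x0=[0.0, 0.0],
                          dt=0.001, steps=10000)
```

Because the system is linear, the same A and B matrices also feed directly into classical and state-space control analysis, which is the portability the abstract highlights.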
NASA Astrophysics Data System (ADS)
Jian, L. K.; MacNeice, P. J.; Mays, M. L.; Taktakishvili, A.; Odstrcil, D.; Jackson, B.; Yu, H.-S.; Riley, P.; Sokolov, I. V.
2016-08-01
The prediction of the background global solar wind is a necessary part of space weather forecasting. Several coronal and heliospheric models have been installed and/or recently upgraded at the Community Coordinated Modeling Center (CCMC), including the Wang-Sheeley-Arge (WSA)-Enlil model, MHD-Around-a-Sphere (MAS)-Enlil model, Space Weather Modeling Framework (SWMF), and heliospheric tomography using interplanetary scintillation data. Ulysses recorded the last fast latitudinal scan from southern to northern poles in 2007. By comparing the modeling results with Ulysses observations over seven Carrington rotations, we have extended our third-party validation from the previous near-Earth solar wind to middle to high latitudes, in the same late declining phase of solar cycle 23. Besides visual comparison, we have quantitatively assessed the models' capabilities in reproducing the time series, statistics, and latitudinal variations of solar wind parameters for a specific range of model parameter settings, inputs, and grid configurations available at CCMC. The WSA-Enlil model results vary with three different magnetogram inputs. The MAS-Enlil model captures the solar wind parameters well, despite its underestimation of the speed at middle to high latitudes. The new version of SWMF misses many solar wind variations probably because it uses lower grid resolution than other models. The interplanetary scintillation-tomography cannot capture the latitudinal variations of solar wind well yet. Because the model performance varies with parameter settings which are optimized for different epochs or flow states, the performance metric study provided here can serve as a template that researchers can use to validate the models for the time periods and conditions of interest to them.
A modified Monte Carlo model for the ionospheric heating rates
NASA Technical Reports Server (NTRS)
Mayr, H. G.; Fontheim, E. G.; Robertson, S. C.
1972-01-01
A Monte Carlo method is adopted as a basis for the derivation of the photoelectron heat input into the ionospheric plasma. This approach is modified in an attempt to minimize the computation time. The heat input distributions are computed for arbitrarily small source elements that are spaced at distances corresponding to the photoelectron dissipation range. By means of a nonlinear interpolation procedure, their individual heating rate distributions are utilized to produce synthetic ones that fill the gaps between the Monte Carlo generated distributions. By varying these gaps and the corresponding number of Monte Carlo runs, the accuracy of the results is tested to verify the validity of this procedure. It is concluded that this model can reduce the computation time by more than a factor of three, thus improving the feasibility of including Monte Carlo calculations in self-consistent ionosphere models.
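The gap-filling step, synthesizing heating-rate distributions between sparsely spaced Monte Carlo source elements, can be sketched with a linear blend; the paper's interpolation is nonlinear, and the profiles below are invented numbers.

```python
def interpolate_profile(sparse_alts, sparse_profiles, alt):
    """Synthesize a heating-rate profile for a source altitude lying
    between two Monte Carlo-computed source altitudes (linear blend)."""
    for i in range(len(sparse_alts) - 1):
        a0, a1 = sparse_alts[i], sparse_alts[i + 1]
        if a0 <= alt <= a1:
            w = (alt - a0) / (a1 - a0)
            p0, p1 = sparse_profiles[i], sparse_profiles[i + 1]
            return [(1 - w) * v0 + w * v1 for v0, v1 in zip(p0, p1)]
    raise ValueError("altitude outside sparse grid")

# Two expensive Monte Carlo profiles 100 km apart; synthesize one at 250 km
alts = [200.0, 300.0]
profiles = [[1.0, 0.5, 0.1], [2.0, 1.2, 0.4]]
mid = interpolate_profile(alts, profiles, 250.0)
```

Each synthetic profile replaces a full Monte Carlo run, which is where the roughly threefold reduction in computation time comes from.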
Latin hypercube approach to estimate uncertainty in ground water vulnerability
Gurdak, J.J.; McCray, J.E.; Thyne, G.; Qi, S.L.
2007-01-01
A methodology is proposed to quantify prediction uncertainty associated with ground water vulnerability models that were developed through an approach that coupled multivariate logistic regression with a geographic information system (GIS). This method uses Latin hypercube sampling (LHS) to illustrate the propagation of input error and estimate uncertainty associated with the logistic regression predictions of ground water vulnerability. Central to the proposed method is the assumption that prediction uncertainty in ground water vulnerability models is a function of input error propagation from uncertainty in the estimated logistic regression model coefficients (model error) and the values of explanatory variables represented in the GIS (data error). Input probability distributions that represent both model and data error sources of uncertainty were simultaneously sampled using a Latin hypercube approach with logistic regression calculations of probability of elevated nonpoint source contaminants in ground water. The resulting probability distribution represents the prediction intervals and associated uncertainty of the ground water vulnerability predictions. The method is illustrated through a ground water vulnerability assessment of the High Plains regional aquifer. Results of the LHS simulations reveal significant prediction uncertainties that vary spatially across the regional aquifer. Additionally, the proposed method enables a spatial deconstruction of the prediction uncertainty that can lead to improved prediction of ground water vulnerability. © 2007 National Ground Water Association.
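The LHS step itself, one sample per equal-probability stratum in each dimension with strata randomly paired across dimensions, can be sketched generically over the unit hypercube (mapping strata onto the study's actual model- and data-error distributions is omitted here).

```python
import random

def latin_hypercube(n_samples, n_dims, rng=random):
    """Latin hypercube sample on [0,1)^n_dims: exactly one point falls in
    each of n_samples equal-width strata per dimension, with strata
    randomly permuted so the pairing across dimensions is random."""
    sample = [[0.0] * n_dims for _ in range(n_samples)]
    for d in range(n_dims):
        strata = list(range(n_samples))
        rng.shuffle(strata)
        for i, s in enumerate(strata):
            sample[i][d] = (s + rng.random()) / n_samples
    return sample

random.seed(42)
pts = latin_hypercube(10, 2)
```

Compared with plain Monte Carlo, this stratification covers each marginal distribution evenly, so far fewer model runs are needed to trace how input error propagates into the vulnerability predictions.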
Cox, P G; Fagan, M J; Rayfield, E J; Jeffery, N
2011-12-01
Rodents are defined by a uniquely specialized dentition and a highly complex arrangement of jaw-closing muscles. Finite element analysis (FEA) is an ideal technique to investigate the biomechanical implications of these specializations, but it is essential to understand fully the degree of influence of the different input parameters of the FE model to have confidence in the model's predictions. This study evaluates the sensitivity of FE models of rodent crania to elastic properties of the materials, loading direction, and the location and orientation of the models' constraints. Three FE models were constructed of squirrel, guinea pig and rat skulls. Each was loaded to simulate biting on the incisors, and the first and the third molars, with the angle of the incisal bite varied over a range of 45°. The Young's moduli of the bone and teeth components were varied between limits defined by findings from our own and previously published tests of material properties. Geometric morphometrics (GMM) was used to analyse the resulting skull deformations. Bone stiffness was found to have the strongest influence on the results in all three rodents, followed by bite position, and then bite angle and muscle orientation. Tooth material properties were shown to have little effect on the deformation of the skull. The effect of bite position varied between species, with the mesiodistal position of the biting tooth being most important in squirrels and guinea pigs, whereas bilateral vs. unilateral biting had the greatest influence in rats. A GMM analysis of isolated incisor deformations showed that, for all rodents, bite angle is the most important parameter, followed by elastic properties of the tooth. The results here elucidate which input parameters are most important when defining the FE models, but also provide interesting glimpses of the biomechanical differences between the three skulls, which will be fully explored in future publications. © 2011 The Authors. 
Journal of Anatomy © 2011 Anatomical Society of Great Britain and Ireland.
Nonlinear Dynamic Models in Advanced Life Support
NASA Technical Reports Server (NTRS)
Jones, Harry
2002-01-01
To facilitate analysis, advanced life support (ALS) systems are often assumed to be linear and time invariant, but they usually have important nonlinear and dynamic aspects. Nonlinear dynamic behavior can be caused by time-varying inputs, changes in system parameters, nonlinear system functions, closed-loop feedback delays, and limits on buffer storage or processing rates. Dynamic models are usually cataloged according to the number of state variables. The simplest dynamic models are linear, using only integration, multiplication, addition, and subtraction of the state variables. A general linear model with only two state variables can produce all the possible dynamic behavior of linear systems with many state variables, including stability, oscillation, or exponential growth and decay. Linear systems can be described using mathematical analysis. Nonlinear dynamics can be fully explored only by computer simulations of models. Unexpected behavior is produced by simple models having only two or three state variables with simple mathematical relations between them. Closed-loop feedback delays are a major source of system instability. Exceeding limits on buffer storage or processing rates forces systems to change operating mode. Different equilibrium points may be reached from different initial conditions. Instead of one stable equilibrium point, the system may have several equilibrium points, oscillate at different frequencies, or even behave chaotically, depending on the system inputs and initial conditions. The frequency spectrum of an output oscillation may contain harmonics and the sums and differences of input frequencies, but it may also contain a stable limit cycle oscillation not related to input frequencies. We must investigate the nonlinear dynamic aspects of advanced life support systems to understand and counter undesirable behavior.
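As a concrete instance of the two-state nonlinear behavior described above, the van der Pol oscillator (a classic textbook case, not an example from this paper) has a stable limit cycle that trajectories reach from very different initial conditions.

```python
def van_der_pol(mu, x0, v0, dt, steps):
    """Forward-Euler integration of the van der Pol oscillator,
    a two-state nonlinear model with a single stable limit cycle."""
    x, v = x0, v0
    for _ in range(steps):
        dx = v
        dv = mu * (1 - x * x) * v - x
        x, v = x + dt * dx, v + dt * dv
    return x, v

# Trajectories starting near the unstable equilibrium and far outside it
# both settle onto the same limit cycle (amplitude about 2 for mu = 1).
a = van_der_pol(1.0, 0.01, 0.0, 1e-3, 200000)
b = van_der_pol(1.0, 4.0, 0.0, 1e-3, 200000)
```

No linear two-state model can produce this: a linear oscillation either decays, grows, or persists at an amplitude set by its initial condition, whereas the limit cycle's amplitude is a property of the system itself.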
Changes in Chesapeake Bay Hypoxia over the Past Century
NASA Astrophysics Data System (ADS)
Friedrichs, M. A.; Kaufman, D. E.; Najjar, R.; Tian, H.; Zhang, B.; Yao, Y.
2016-02-01
The Chesapeake Bay, one of the world's largest estuaries, is among the many coastal systems where hypoxia is a major concern and where dissolved oxygen thus represents a critical factor in determining the health of the Bay's ecosystem. Over the past century, the population of the Chesapeake Bay region has almost quadrupled, greatly modifying land cover and management practices within the watershed. Simultaneously, the Chesapeake Bay has been experiencing a high degree of climate change, including increases in temperature, precipitation, and precipitation intensity. Together, these changes have resulted in significantly increased riverine nutrient inputs to the Bay. In order to examine how interdecadal changes in riverine nitrogen input affect biogeochemical cycling and dissolved oxygen concentrations in Chesapeake Bay, a land-estuarine-ocean biogeochemical modeling system has been developed for this region. Riverine inputs of nitrogen to the Bay are computed from a terrestrial ecosystem model (the Dynamic Land Ecosystem Model; DLEM) that resolves riverine discharge variability on scales of days to years. This temporally varying discharge is then used as input to the estuarine-carbon-biogeochemical model embedded in the Regional Ocean Modeling System (ROMS), which provides estimates of the oxygen concentrations and nitrogen fluxes within the Bay as well as advective exports from the Bay to the adjacent Mid-Atlantic Bight shelf. Simulation results from this linked modeling system for the present (early 2000s) have been extensively evaluated with in situ and remotely sensed data. Longer-term simulations are used to isolate the effect of increased riverine nitrogen loading on dissolved oxygen concentrations and biogeochemical cycling within the Chesapeake Bay.
Neuronal Spike Timing Adaptation Described with a Fractional Leaky Integrate-and-Fire Model
Teka, Wondimu; Marinov, Toma M.; Santamaria, Fidel
2014-01-01
The voltage trace of neuronal activities can follow multiple timescale dynamics that arise from correlated membrane conductances. Such processes can result in power-law behavior in which the membrane voltage cannot be characterized with a single time constant. The emergent effect of these membrane correlations is a non-Markovian process that can be modeled with a fractional derivative. A fractional derivative is a non-local process in which the value of the variable is determined by integrating a temporal weighted voltage trace, also called the memory trace. Here we developed and analyzed a fractional leaky integrate-and-fire model in which the exponent of the fractional derivative can vary from 0 to 1, with 1 representing the normal derivative. As the exponent of the fractional derivative decreases, the weights of the voltage trace increase. Thus, the value of the voltage is increasingly correlated with the trajectory of the voltage in the past. By varying only the fractional exponent, our model can reproduce upward and downward spike adaptations found experimentally in neocortical pyramidal cells and tectal neurons in vitro. The model also produces spikes with longer first-spike latency and high inter-spike variability with power-law distribution. We further analyze spike adaptation and the responses to noisy and oscillatory input. The fractional model generates reliable spike patterns in response to noisy input. Overall, the spiking activity of the fractional leaky integrate-and-fire model deviates from the spiking activity of the Markovian model and reflects the temporal accumulated intrinsic membrane dynamics that affect the response of the neuron to external stimulation. PMID:24675903
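The memory trace can be made concrete with the Grünwald–Letnikov discretization, in which the fractional derivative becomes a weighted sum over the entire voltage history and the weights decay more slowly as the exponent decreases. The sketch below is a minimal illustration in arbitrary units with a simplified spike reset, not the authors' implementation:

```python
def gl_weights(alpha, n):
    """Grunwald-Letnikov weights; w[1] = -alpha, and |w[k]| decays
    like k**(-alpha - 1), so smaller alpha means a longer memory."""
    w = [1.0]
    for k in range(1, n + 1):
        w.append(w[-1] * (k - 1 - alpha) / k)
    return w

def fractional_lif(alpha, current=1.5, dt=0.1, steps=1000, v_th=1.0):
    """Toy fractional leaky integrate-and-fire neuron. For alpha = 1
    the update reduces to the ordinary (Markovian) Euler scheme."""
    w = gl_weights(alpha, steps)
    v_hist = [0.0]
    spikes = 0
    for n in range(1, steps):
        # memory trace: weighted sum over the whole voltage history
        memory = sum(w[k] * v_hist[n - k] for k in range(1, n + 1))
        v = dt ** alpha * (-v_hist[-1] + current) - memory
        if v >= v_th:          # simplified reset (history kept as-is)
            v, spikes = 0.0, spikes + 1
        v_hist.append(v)
    return spikes
```

For alpha = 1 the weights vanish beyond the first term and the familiar memoryless leaky integrator is recovered; for alpha < 1 the long tail of weights couples the present voltage to its past trajectory.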
NASA Lewis steady-state heat pipe code users manual
NASA Technical Reports Server (NTRS)
Tower, Leonard K.; Baker, Karl W.; Marks, Timothy S.
1992-01-01
The NASA Lewis heat pipe code was developed to predict the performance of heat pipes in the steady state. The code can be used as a design tool on a personal computer or with a suitable calling routine, as a subroutine for a mainframe radiator code. A variety of wick structures, including a user input option, can be used. Heat pipes with multiple evaporators, condensers, and adiabatic sections in series and with wick structures that differ among sections can be modeled. Several working fluids can be chosen, including potassium, sodium, and lithium, for which monomer-dimer equilibrium is considered. The code incorporates a vapor flow algorithm that treats compressibility and axially varying heat input. This code facilitates the determination of heat pipe operating temperatures and heat pipe limits that may be encountered at the specified heat input and environment temperature. Data are input to the computer through a user-interactive input subroutine. Output, such as liquid and vapor pressures and temperatures, is printed at equally spaced axial positions along the pipe as determined by the user.
Luo, Mei; Wang, Hao; Lyu, Zhi
2017-12-01
Species distribution models (SDMs) are widely used by researchers and conservationists. Predictions from different models can vary significantly, which makes model selection difficult for users. In this study, we evaluated the performance of two commonly used SDMs, Biomod2 and Maximum Entropy (MaxEnt), with real presence/absence data for the giant panda, and used three indicators, i.e., area under the ROC curve (AUC), true skill statistic (TSS), and Cohen's Kappa, to evaluate the accuracy of the two models' predictions. The results showed that both models could produce accurate predictions given adequate occurrence inputs and simulation repeats. Compared to MaxEnt, Biomod2 made more accurate predictions, especially when occurrence inputs were few. However, Biomod2 was more difficult to apply, required longer running times, and had less data processing capability. To choose the right model, users should consider the error requirements of their objectives: MaxEnt should be considered if the error requirement is clear and both models can meet it; otherwise, we recommend using Biomod2 whenever possible.
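For reference, the three accuracy indicators are simple functions of the confusion matrix (TSS and Kappa) or of rank order (AUC). A minimal pure-Python sketch, assuming presence/absence coded as 1/0 (the example data are invented):

```python
def confusion(y_true, y_pred):
    """Counts (TP, TN, FP, FN) for binary presence/absence labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp, tn, fp, fn

def tss(y_true, y_pred):
    """True skill statistic: sensitivity + specificity - 1."""
    tp, tn, fp, fn = confusion(y_true, y_pred)
    return tp / (tp + fn) + tn / (tn + fp) - 1

def kappa(y_true, y_pred):
    """Cohen's Kappa: observed agreement corrected for chance."""
    tp, tn, fp, fn = confusion(y_true, y_pred)
    n = tp + tn + fp + fn
    po = (tp + tn) / n
    pe = ((tp + fp) * (tp + fn) + (tn + fn) * (tn + fp)) / n ** 2
    return (po - pe) / (1 - pe)

def auc(y_true, scores):
    """Rank-based AUC: probability a random presence outranks a random absence."""
    pos = [s for t, s in zip(y_true, scores) if t == 1]
    neg = [s for t, s in zip(y_true, scores) if t == 0]
    wins = sum((p > q) + 0.5 * (p == q) for p in pos for q in neg)
    return wins / (len(pos) * len(neg))

y_true = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.6, 0.3, 0.2]
y_pred = [1 if s >= 0.5 else 0 for s in scores]   # threshold at 0.5
```

Note that AUC is threshold-free, while TSS and Kappa depend on the chosen presence threshold, which is one reason the indicators can rank models differently.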
Relationships between net primary productivity and forest stand age in U.S. forests
Liming He; Jing M. Chen; Yude Pan; Richard Birdsey; Jens Kattge
2012-01-01
Net primary productivity (NPP) is a key flux in the terrestrial ecosystem carbon balance, as it summarizes the autotrophic input into the system. Forest NPP varies predictably with stand age, and quantitative information on the NPP-age relationship for different regions and forest types is therefore fundamentally important for forest carbon cycle modeling. We used four...
Marvuglia, Antonino; Kanevski, Mikhail; Benetto, Enrico
2015-10-01
Toxicity characterization of chemical emissions in Life Cycle Assessment (LCA) is a complex task which usually proceeds via multimedia (fate, exposure and effect) models coupled to dose-response models to assess the effects on a target. Different models and approaches exist, but all require a vast amount of data on the properties of the chemical compounds being assessed, which are hard to collect and rarely publicly available (especially for thousands of less common or newly developed chemicals), hampering the assessment in LCA practice. An example is USEtox, a consensus model for the characterization of human toxicity and freshwater ecotoxicity. This paper belongs to a line of research aimed at providing a methodology to reduce the number of input parameters necessary to run multimedia fate models, focusing in particular on the USEtox toxicity model. Two main goals are pursued: 1) performing an extensive exploratory analysis (using dimensionality reduction techniques) of the input space constituted by the substance-specific properties, with the aim of detecting particular patterns in the data manifold and estimating the dimension of the subspace in which the data manifold actually lies; and 2) exploring the application of a set of linear models, based on partial least squares (PLS) regression, as well as a nonlinear model (general regression neural network, GRNN), in search of an automatic selection strategy for the most informative variables according to the modelled output (USEtox factor). After extensive analysis, the intrinsic dimension of the input manifold has been identified as between three and four.
The variables selected as most informative may vary according to the output modelled and the model used, but for the toxicity factors modelled in this paper the input variables selected as most informative are coherent with prior expectations based on scientific knowledge of toxicity factors modelling. Thus the outcomes of the analysis are promising for the future application of the approach to other portions of the model, affected by important data gaps, e.g., to the calculation of human health effect factors. Copyright © 2015. Published by Elsevier Ltd.
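As a much simpler stand-in for the PLS- and GRNN-based selection used in the paper, ranking candidate inputs by their absolute correlation with the modelled output illustrates the idea of automatic variable selection (the data below are hypothetical; each row of X is one candidate variable):

```python
def rank_variables(X, y):
    """Rank input variables by |Pearson correlation| with the output —
    a deliberately simple stand-in for PLS/GRNN-based selection."""
    def pearson(a, b):
        n = len(a)
        ma, mb = sum(a) / n, sum(b) / n
        cov = sum((u - ma) * (v - mb) for u, v in zip(a, b))
        sa = sum((u - ma) ** 2 for u in a) ** 0.5
        sb = sum((v - mb) ** 2 for v in b) ** 0.5
        return cov / (sa * sb)
    scores = [abs(pearson(col, y)) for col in X]
    # indices of variables, most informative first
    return sorted(range(len(X)), key=lambda i: -scores[i])

X = [[1, 2, 3, 4, 5],          # informative variable
     [5, 1, 4, 2, 3]]          # uninformative variable
y = [1.2, 2.1, 2.8, 4.3, 4.9]
order = rank_variables(X, y)
```

Unlike PLS, this univariate ranking ignores interactions among inputs, which is precisely what the paper's multivariate approach is designed to capture.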
NASA Astrophysics Data System (ADS)
Zhou, BeiBei; Wang, QuanJiu
2017-09-01
Studies on solute transport under different pore water velocities and solute input methods in undisturbed soil can provide guidance for crop production. Based on laboratory experiments, the effects of solute input method (small pulse versus large pulse input) and four pore water velocities on chloride transport were studied in undisturbed soil columns obtained from the Loess Plateau under controlled conditions. Chloride breakthrough curves (BTCs) were generated using the miscible displacement method under water-saturated, steady flow conditions. Using a 0.15 mol L⁻¹ CaCl₂ solution as a tracer, a small pulse (0.1 pore volumes) was first applied, and then, after all the solution had been washed out, a large pulse (0.5 pore volumes) was applied. The convection-dispersion equation (CDE) and the two-region model (T-R) were used to describe the BTCs, and their prediction accuracies and fitted parameters were compared. The BTCs obtained for the different input methods and the four pore water velocities were all smooth. However, the shapes of the BTCs varied greatly; small pulse inputs resulted in more rapid attainment of peak values that appeared earlier with increases in pore water velocity, whereas large pulse inputs showed the opposite trend. Both models could fit the experimental data well, but the prediction accuracy of the T-R model was better. The values of the dispersivity, λ, calculated from the dispersion coefficient obtained from the CDE were about one order of magnitude larger than those calculated from the dispersion coefficient given by the T-R model, whereas the calculated Peclet number, Pe, was lower. The mobile-immobile partition coefficient, β, decreased, while the mass exchange coefficient increased with increasing pore water velocity.
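The CDE with these boundary conditions has the classical Ogata-Banks closed-form solution, and the response to a finite pulse follows by superposing two step inputs. The sketch below uses illustrative parameter values in consistent units, not those fitted in the study, and reproduces the trend that a higher pore water velocity shifts the breakthrough peak earlier:

```python
import math

def ogata_banks(x, t, v, D):
    """Ogata-Banks solution of the 1-D convection-dispersion equation
    for a continuous step input: relative concentration C/C0 at
    distance x, time t, velocity v, dispersion coefficient D."""
    if t <= 0:
        return 0.0
    a = (x - v * t) / (2 * math.sqrt(D * t))
    b = (x + v * t) / (2 * math.sqrt(D * t))
    # second term is small; for very large v*x/D a scaled-erfc form is safer
    return 0.5 * (math.erfc(a) + math.exp(v * x / D) * math.erfc(b))

def pulse_btc(x, t, v, D, t0):
    """Breakthrough curve for a finite pulse of duration t0
    (superposition of two step inputs)."""
    return ogata_banks(x, t, v, D) - ogata_banks(x, t - t0, v, D)

def peak_time(x, v, D, t0, t_max, dt):
    """Time of the breakthrough-curve maximum, by coarse scanning."""
    best_t, best_c = 0.0, -1.0
    t = dt
    while t < t_max:
        c = pulse_btc(x, t, v, D, t0)
        if c > best_c:
            best_t, best_c = t, c
        t += dt
    return best_t
```

Doubling the pore water velocity roughly halves the arrival time of the peak, consistent with the velocity dependence reported above.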
NASA Astrophysics Data System (ADS)
Algrain, Marcelo C.; Powers, Richard M.
1997-05-01
A case study, written in a tutorial manner, is presented where a comprehensive computer simulation is developed to determine the driving factors contributing to spacecraft pointing accuracy and stability. Models for major system components are described. Among them are spacecraft bus, attitude controller, reaction wheel assembly, star-tracker unit, inertial reference unit, and gyro drift estimators (Kalman filter). The predicted spacecraft performance is analyzed for a variety of input commands and system disturbances. The primary deterministic inputs are the desired attitude angles and rate set points. The stochastic inputs include random torque disturbances acting on the spacecraft, random gyro bias noise, gyro random walk, and star-tracker noise. These inputs are varied over a wide range to determine their effects on pointing accuracy and stability. The results are presented in the form of trade-off curves designed to facilitate the proper selection of subsystems so that overall spacecraft pointing accuracy and stability requirements are met.
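The role of the gyro-drift Kalman filter in such a simulation can be sketched with a single-axis toy model: integrating the gyro alone accumulates bias error, while a two-state (attitude, gyro bias) Kalman filter blending gyro and star-tracker data keeps the pointing error bounded. All noise magnitudes below are assumed for illustration and are not taken from the paper:

```python
import random

def simulate_pointing(kalman=True, T=100.0, dt=0.1, seed=1):
    """Single-axis attitude estimation: gyro dead reckoning versus a
    2-state (attitude, gyro-bias) Kalman filter with star-tracker updates.
    Returns the final absolute pointing error (rad). Illustrative values."""
    rng = random.Random(seed)
    bias = 0.01            # rad/s constant gyro bias (assumed)
    gyro_sd = 0.001        # rad/s gyro white noise (assumed)
    st_sd = 0.001          # rad star-tracker noise (assumed)
    theta = 0.0            # true attitude; true rate = 0 (inertial hold)
    est, b_est = 0.0, 0.0
    p00, p01, p11 = 1.0, 0.0, 1.0              # covariance of [theta, bias]
    q_theta, q_bias, r_meas = (gyro_sd * dt) ** 2, 1e-10, st_sd ** 2
    for _ in range(int(T / dt)):
        omega_meas = bias + rng.gauss(0.0, gyro_sd)
        est += dt * (omega_meas - b_est)       # bias-corrected propagation
        # covariance propagation for F = [[1, -dt], [0, 1]]
        p00 += dt * dt * p11 - 2 * dt * p01 + q_theta
        p01 -= dt * p11
        p11 += q_bias
        if kalman:
            z = theta + rng.gauss(0.0, st_sd)  # star-tracker measurement
            s = p00 + r_meas
            k0, k1 = p00 / s, p01 / s          # Kalman gains
            innov = z - est
            est += k0 * innov
            b_est += k1 * innov
            p00, p01, p11 = (1 - k0) * p00, (1 - k0) * p01, p11 - k1 * p01
    return abs(est - theta)
```

Without the filter, the constant gyro bias integrates into a pointing error of roughly bias × T; with it, the bias is estimated and the error stays near the star-tracker noise floor.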
Functional Data Analysis for Dynamical System Identification of Behavioral Processes
Trail, Jessica B.; Collins, Linda M.; Rivera, Daniel E.; Li, Runze; Piper, Megan E.; Baker, Timothy B.
2014-01-01
Efficient new technology has made it straightforward for behavioral scientists to collect anywhere from several dozen to several thousand dense, repeated measurements on one or more time-varying variables. These intensive longitudinal data (ILD) are ideal for examining complex change over time, but present new challenges that illustrate the need for more advanced analytic methods. For example, in ILD the temporal spacing of observations may be irregular, and individuals may be sampled at different times. Also, it is important to assess both how the outcome changes over time and the variation between participants' time-varying processes to make inferences about a particular intervention's effectiveness within the population of interest. The methods presented in this article integrate two innovative ILD analytic techniques: functional data analysis and dynamical systems modeling. An empirical application is presented using data from a smoking cessation clinical trial. Study participants provided 42 daily assessments of pre-quit and post-quit withdrawal symptoms. Regression splines were used to approximate smooth functions of craving and negative affect and to estimate the variables' derivatives for each participant. We then modeled the dynamics of nicotine craving using standard input-output dynamical systems models. These models provide a more detailed characterization of the post-quit craving process than do traditional longitudinal models, including information regarding the type, magnitude, and speed of the response to an input. The results, in conjunction with standard engineering control theory techniques, could potentially be used by tobacco researchers to develop a more effective smoking intervention. PMID:24079929
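The core idea — estimating derivatives from densely sampled data and then fitting an input-output dynamical model — can be sketched in a few lines. Here a first-order model dx/dt = a·x + b·u is fitted by least squares using central-difference derivatives (a crude stand-in for the regression-spline derivative estimates described in the article), on synthetic data with known a = -0.5 and b = 1:

```python
import math

def fit_first_order(x, u, dt):
    """Least-squares fit of dx/dt = a*x + b*u from sampled series,
    using central differences to approximate the derivative."""
    dxdt = [(x[i + 1] - x[i - 1]) / (2 * dt) for i in range(1, len(x) - 1)]
    xs, us = x[1:-1], u[1:-1]
    # 2x2 normal equations for the regression dxdt ~ a*x + b*u
    sxx = sum(v * v for v in xs)
    sxu = sum(v * w for v, w in zip(xs, us))
    suu = sum(w * w for w in us)
    sxd = sum(v * d for v, d in zip(xs, dxdt))
    sud = sum(w * d for w, d in zip(us, dxdt))
    det = sxx * suu - sxu * sxu
    return ((sxd * suu - sud * sxu) / det,   # a
            (sxx * sud - sxu * sxd) / det)   # b

# synthetic ILD: dx/dt = -0.5*x + sin(t), sampled every 0.1 time units
dt_sample, substeps = 0.1, 100
h = dt_sample / substeps
x_series, u_series = [], []
x_val, t = 0.0, 0.0
for _ in range(200):
    x_series.append(x_val)
    u_series.append(math.sin(t))
    for _ in range(substeps):                # fine-grained simulation
        x_val += h * (-0.5 * x_val + math.sin(t))
        t += h
a_hat, b_hat = fit_first_order(x_series, u_series, dt_sample)
```

In the smoking-cessation application the input u would be an intervention or event signal and x a withdrawal-symptom trajectory; the fitted coefficients then characterize the speed and magnitude of the response, as described above.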
NASA Astrophysics Data System (ADS)
Lewis, Elizabeth; Kilsby, Chris; Fowler, Hayley
2014-05-01
The impact of climate change on hydrological systems requires further quantification in order to inform water management. This study intends to conduct such analysis using hydrological models. Such models are of varying forms, of which conceptual, lumped parameter models and physically-based models are two important types. The majority of hydrological studies use conceptual models calibrated against measured river flow time series in order to represent catchment behaviour. This method often shows impressive results for specific problems in gauged catchments. However, the results may not be robust under non-stationary conditions such as climate change, as physical processes and relationships subject to change are not accounted for explicitly. Moreover, conceptual models are less readily applicable to ungauged catchments, in which hydrological predictions are also required. As such, the physically based, spatially distributed model SHETRAN is used in this study to develop a robust and reliable framework for modelling historic and future behaviour of gauged and ungauged catchments across the whole of Great Britain. In order to achieve this, a large array of data completely covering Great Britain for the period 1960-2006 has been collated and efficiently stored ready for model input. The data processed include a DEM, rainfall, PE and maps of geology, soil and land cover. A desire to make the modelling system easy for others to work with led to the development of a user-friendly graphical interface. This allows non-experts to set up and run a catchment model in a few seconds, a process that can normally take weeks or months. The quality and reliability of the extensive dataset for modelling hydrological processes has also been evaluated. One aspect of this has been an assessment of error and uncertainty in rainfall input data, as well as the effects of temporal resolution in precipitation inputs on model calibration.
SHETRAN has been updated to accept gridded rainfall inputs, and UKCP09 gridded daily rainfall data has been disaggregated using hourly records to analyse the implications of using realistic sub-daily variability. Furthermore, the development of a comprehensive dataset and computationally efficient means of setting up and running catchment models has allowed for examination of how a robust parameter scheme may be derived. This analysis has been based on collective parameterisation of multiple catchments in contrasting hydrological settings and subject to varied processes. 350 gauged catchments all over the UK have been simulated, and a robust set of parameters is being sought by examining the full range of hydrological processes and calibrating to a highly diverse flow data series. The modelling system will be used to generate flow time series based on historical input data and also downscaled Regional Climate Model (RCM) forecasts using the UKCP09 Weather Generator. This will allow for analysis of flow frequency and associated future changes, which cannot be determined from the instrumental record or from lumped parameter model outputs calibrated only to historical catchment behaviour. This work will be based on the existing and functional modelling system described following some further improvements to calibration, particularly regarding simulation of groundwater-dominated catchments.
Input dependent cell assembly dynamics in a model of the striatal medium spiny neuron network.
Ponzi, Adam; Wickens, Jeff
2012-01-01
The striatal medium spiny neuron (MSN) network is sparsely connected with fairly weak GABAergic collaterals receiving an excitatory glutamatergic cortical projection. Peri-stimulus time histograms (PSTHs) of the MSN population response investigated in various experimental studies display strong firing rate modulations distributed throughout behavioral task epochs. In previous work we have shown by numerical simulation that sparse random networks of inhibitory spiking neurons with characteristics appropriate for UP state MSNs form cell assemblies which fire together coherently in sequences on long behaviorally relevant timescales when the network receives a fixed pattern of constant input excitation. Here we first extend that model to the case where cortical excitation is composed of many independent noisy Poisson processes and demonstrate that cell assembly dynamics are still observed when the input is sufficiently weak. However, if cortical excitation strength is increased, more regularly firing cells and completely quiescent cells are found, depending on the cortical stimulation. Subsequently we further extend previous work to consider what happens when the excitatory input varies as it would when the animal is engaged in behavior. We investigate how sudden switches in excitation interact with network-generated patterned activity. We show that sequences of cell assembly activations can be locked to the excitatory input sequence and outline the range of parameters where this behavior is shown. Model cell population PSTHs display both stimulus and temporal specificity, with large population firing rate modulations locked to elapsed time from task events. Thus the random network can generate a large diversity of temporally evolving stimulus-dependent responses even though the input is fixed between switches. We suggest the MSN network is well suited to the generation of such slow, coherent, task-dependent responses, which could be utilized by the animal in behavior.
Hess, Lisa M; Rajan, Narayan; Winfree, Katherine; Davey, Peter; Ball, Mark; Knox, Hediyyih; Graham, Christopher
2015-12-01
Health technology assessment is not required for regulatory submission or approval in either the United States (US) or Japan. This study was designed as a cross-country evaluation of cost analyses conducted in the US and Japan based on the PRONOUNCE phase III lung cancer trial, which compared pemetrexed plus carboplatin followed by pemetrexed (PemC) versus paclitaxel plus carboplatin plus bevacizumab followed by bevacizumab (PCB). Two cost analyses were conducted in accordance with International Society For Pharmacoeconomics and Outcomes Research good research practice standards. Costs were obtained based on local pricing structures; outcomes were considered equivalent based on the PRONOUNCE trial results. Other inputs were included from the trial data (e.g., toxicity rates) or from local practice sources (e.g., toxicity management). The models were compared across key input and transferability factors. Despite differences in local input data, both models demonstrated a similar direction, with the cost of PemC being consistently lower than the cost of PCB. The variation in individual input parameters did affect some of the specific categories, such as toxicity, and impacted sensitivity analyses, with the cost differential between comparators being greater in Japan than in the US. When economic models are based on clinical trial data, many inputs and outcomes are held consistent. The alterable inputs were not in and of themselves large enough to significantly impact the results between countries, which were directionally consistent with greater variation seen in sensitivity analyses. The factors that vary across jurisdictions, even when minor, can have an impact on trial-based economic analyses. Eli Lilly and Company.
Jackson, Rachel W; Collins, Steven H
2015-09-01
Techniques proposed for assisting locomotion with exoskeletons have often included a combination of active work input and passive torque support, but the physiological effects of different assistance techniques remain unclear. We performed an experiment to study the independent effects of net exoskeleton work and average exoskeleton torque on human locomotion. Subjects wore a unilateral ankle exoskeleton and walked on a treadmill at 1.25 m·s⁻¹ while net exoskeleton work rate was systematically varied from −0.054 to 0.25 J·kg⁻¹·s⁻¹, with constant (0.12 N·m·kg⁻¹) average exoskeleton torque, and while average exoskeleton torque was systematically varied from approximately zero to 0.18 N·m·kg⁻¹, with approximately zero net exoskeleton work. We measured metabolic rate, center-of-mass mechanics, joint mechanics, and muscle activity. Both techniques reduced effort-related measures at the assisted ankle, but this form of work input reduced metabolic cost (−17% with maximum net work input) while this form of torque support increased metabolic cost (+13% with maximum average torque). Disparate effects on metabolic rate seem to be due to cascading effects on whole body coordination, particularly related to assisted ankle muscle dynamics and the effects of trailing ankle behavior on leading leg mechanics during double support. It would be difficult to predict these results using simple walking models without muscles or musculoskeletal models that assume fixed kinematics or kinetics. Data from this experiment can be used to improve predictive models of human neuromuscular adaptation and guide the design of assistive devices. Copyright © 2015 the American Physiological Society.
Phase transformations at interfaces: Observations from atomistic modeling
Frolov, T.; Asta, M.; Mishin, Y.
2016-10-01
Here, we review recent progress in theoretical understanding and atomistic computer simulations of phase transformations in materials interfaces, focusing on grain boundaries (GBs) in metallic systems. Recently developed simulation approaches enable the search and structural characterization of GB phases in single-component metals and binary alloys, calculation of thermodynamic properties of individual GB phases, and modeling of the effect of the GB phase transformations on GB kinetics. Atomistic simulations demonstrate that the GB transformations can be induced by varying the temperature, loading the GB with point defects, or varying the amount of solute segregation. The atomic-level understanding obtained from such simulations can provide input for further development of thermodynamic theories and continuum models of interface phase transformations while simultaneously serving as a testing ground for validation of theories and models. They can also help interpret and guide experimental work in this field.
NASA Astrophysics Data System (ADS)
Lumentut, M. F.; Howard, I. M.
2013-03-01
Power harvesters that extract energy from vibrating systems via piezoelectric transduction show strong potential for powering smart wireless sensor devices in applications of health condition monitoring of rotating machinery and structures. This paper presents an analytical method for modelling an electromechanical piezoelectric bimorph beam with tip mass under two input base transverse and longitudinal excitations. The Euler-Bernoulli beam equations were used to model the piezoelectric bimorph beam. The polarity-electric field of the piezoelectric element is excited by the strain field caused by base input excitation, resulting in electrical charge. The governing electromechanical dynamic equations were derived analytically using the weak form of the Hamiltonian principle to obtain the constitutive equations. Three constitutive electromechanical dynamic equations based on independent coefficients of virtual displacement vectors were formulated and then further modelled using the normalised Ritz eigenfunction series. The electromechanical formulations include both the series and parallel connections of the piezoelectric bimorph. The multi-mode frequency response functions (FRFs) under varying electrical load resistance were formulated using Laplace transformation for the multi-input mechanical vibrations to provide the multi-output dynamic displacement, velocity, voltage, current and power. The experimental and theoretical validations reduced for the single mode system were shown to provide reasonable predictions. The model results from polar base excitation for off-axis input motions were validated with experimental results showing the change to the electrical power frequency response amplitude as a function of excitation angle, with relevance for practical implementation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kamp, F.; Brueningk, S.C.; Wilkens, J.J.
Purpose: In particle therapy, treatment planning and evaluation are frequently based on biological models to estimate the relative biological effectiveness (RBE) or the equivalent dose in 2 Gy fractions (EQD2). In the context of the linear-quadratic model, these quantities depend on biological parameters (α, β) for ions as well as for the reference radiation, and on the dose per fraction. The needed biological parameters, as well as their dependency on ion species and ion energy, are typically subject to large (relative) uncertainties of up to 20–40% or even more. Therefore it is necessary to estimate the resulting uncertainties in, e.g., RBE or EQD2 caused by the uncertainties of the relevant input parameters. Methods: We use a variance-based sensitivity analysis (SA) approach, in which uncertainties in input parameters are modeled by random number distributions. The evaluated function is executed 10^4 to 10^6 times, each run with a different set of input parameters, randomly varied according to their assigned distribution. The sensitivity S is a variance-based ranking (from S = 0, no impact, to S = 1, only influential part) of the impact of input uncertainties. The SA approach is implemented for carbon ion treatment plans on 3D patient data, providing information about variations (and their origin) in RBE and EQD2. Results: The quantification enables 3D sensitivity maps, showing dependencies of RBE and EQD2 on different input uncertainties. The high number of runs allows displaying the interplay between different input uncertainties. The SA identifies input parameter combinations that result in extreme deviations of the result, as well as the input parameters for which an uncertainty reduction is most rewarding. Conclusion: The presented variance-based SA provides advantageous properties in terms of visualization and quantification of (biological) uncertainties and their impact. The method is very flexible, model independent, and enables a broad assessment of uncertainties. Supported by DFG grant WI 3745/1-1 and DFG cluster of excellence: Munich-Centre for Advanced Photonics.
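The double-loop Monte Carlo behind such variance-based indices is compact. The toy below applies it to a linear-quadratic effect E = α·d + β·d², with an assumed large relative uncertainty on α and a small one on β (all distributions are invented for illustration); the first-order index S_i = Var(E[Y|x_i])/Var(Y) then correctly attributes most of the output variance to α:

```python
import random

def lq_effect(alpha, beta, dose=2.0):
    """Linear-quadratic effect alpha*d + beta*d^2 (toy stand-in for RBE/EQD2)."""
    return alpha * dose + beta * dose ** 2

def variance(vals):
    m = sum(vals) / len(vals)
    return sum((v - m) ** 2 for v in vals) / len(vals)

def first_order_index(model, samplers, fixed, n_outer=300, n_inner=300):
    """Crude double-loop estimate of S_i = Var(E[Y | x_i]) / Var(Y):
    fix input `fixed` in the outer loop, average over the rest inside."""
    means, all_y = [], []
    for _ in range(n_outer):
        xi = samplers[fixed]()
        ys = []
        for _ in range(n_inner):
            x = [s() for s in samplers]
            x[fixed] = xi
            ys.append(model(*x))
        means.append(sum(ys) / n_inner)
        all_y.extend(ys)
    return variance(means) / variance(all_y)

random.seed(0)
samplers = [lambda: random.uniform(0.10, 0.50),   # alpha: large relative spread
            lambda: random.uniform(0.04, 0.06)]   # beta: small relative spread
s_alpha = first_order_index(lq_effect, samplers, 0)
s_beta = first_order_index(lq_effect, samplers, 1)
```

Production codes use more efficient estimators (e.g., Sobol/Saltelli sampling) to avoid the quadratic cost of the double loop, but the ranking logic is the same.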
PSO-MISMO modeling strategy for multistep-ahead time series prediction.
Bao, Yukun; Xiong, Tao; Hu, Zhongyi
2014-05-01
Multistep-ahead time series prediction is one of the most challenging research topics in the field of time series modeling and prediction and remains under active research. Recently, the multiple-input several multiple-outputs (MISMO) modeling strategy has been proposed as a promising alternative for multistep-ahead time series prediction, exhibiting advantages over the two currently dominant strategies, the iterated and the direct strategies. Building on the established MISMO strategy, this paper proposes a particle swarm optimization (PSO) based MISMO modeling strategy, which is capable of determining the number of sub-models in a self-adaptive mode, with varying prediction horizons. Rather than deriving crisp divides with equal-sized prediction horizons as in the established MISMO, the proposed PSO-MISMO strategy, implemented with neural networks, employs a heuristic to create flexible divides with varying sizes of prediction horizons and to generate the corresponding sub-models, providing considerable flexibility in model construction. The approach has been validated with simulated and real datasets.
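The MISMO idea of splitting the prediction horizon into variably sized chunks, each handled by its own multi-output sub-model, can be illustrated with a minimal sketch. Ordinary least-squares sub-models stand in for the paper's PSO-tuned neural networks, and the chunk partition [2, 4] is fixed by hand rather than searched by PSO:

```python
import numpy as np

def embed(series, n_in, horizon):
    """Build (input window -> next `horizon` values) training pairs."""
    X, Y = [], []
    for t in range(len(series) - n_in - horizon + 1):
        X.append(series[t:t + n_in])
        Y.append(series[t + n_in:t + n_in + horizon])
    return np.array(X), np.array(Y)

series = np.sin(np.linspace(0, 20 * np.pi, 1000))
n_in, horizon = 12, 6
X, Y = embed(series, n_in, horizon)

# MISMO: partition the 6-step horizon into sub-models with *varying*
# chunk sizes. Each chunk gets its own multi-output model; plain least
# squares stands in for the paper's neural networks.
chunks, start, preds = [2, 4], 0, []
Xb = np.hstack([X, np.ones((len(X), 1))])       # add a bias column
for size in chunks:
    W, *_ = np.linalg.lstsq(Xb, Y[:, start:start + size], rcond=None)
    preds.append(Xb @ W)
    start += size
Y_hat = np.hstack(preds)

rmse = np.sqrt(np.mean((Y_hat - Y) ** 2))
print(rmse)   # near zero: a sinusoid is linearly predictable from its lags
```

The iterated strategy would use one one-step model fed back on itself, and the direct strategy one model per step; MISMO sits between those extremes, with the chunk sizes as the design variable PSO-MISMO optimizes.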
NASA Technical Reports Server (NTRS)
Martino, J. P.; Lenz, R. C., Jr.; Chen, K. L.
1979-01-01
A cross impact model of the U.S. telecommunications system was developed. For this model, it was necessary to prepare forecasts of the major segments of the telecommunications system, such as satellites, telephone, TV, CATV, radio broadcasting, etc. In addition, forecasts were prepared of the traffic generated by a variety of new or expanded services, such as electronic check clearing and point of sale electronic funds transfer. Finally, the interactions among the forecasts were estimated (the cross impacts). Both the forecasts and the cross impacts were used as inputs to the cross impact model, which could then be used to simulate the future growth of the entire U.S. telecommunications system. By varying the inputs, technology changes or policy decisions with regard to any segment of the system could be evaluated in the context of the remainder of the system. To illustrate the operation of the model, a specific study was made of the deployment of fiber optics throughout the telecommunications system.
Modeling maintenance-strategies with rainbow nets
NASA Astrophysics Data System (ADS)
Johnson, Allen M., Jr.; Schoenfelder, Michael A.; Lebold, David
The Rainbow net (RN) modeling technique offers a promising alternative to traditional reliability modeling techniques. RNs are evaluated through discrete event simulation. Using specialized tokens to represent systems and faults, an RN models the fault-handling behavior of an inventory of systems produced over time. In addition, a portion of the RN represents system repair and the vendor's spare part production. Various dependability parameters are measured and used to calculate the impact of four variations of maintenance strategies. Input variables are chosen to demonstrate the technique. The number of inputs allowed to vary is intentionally constrained to limit the volume of data presented and to avoid overloading the reader with complexity. If only availability data were reviewed, one might conclude that the strategies are roughly equivalent and therefore choose the strategy that is cheaper from the vendor's perspective. The richer set of metrics provided by the RN simulation gives greater insight into the problem, which leads to better decisions. By using RNs, the impact of several different variables is integrated.
Two Unipolar Terminal-Attractor-Based Associative Memories
NASA Technical Reports Server (NTRS)
Liu, Hua-Kuang; Wu, Chwan-Hwa
1995-01-01
Two unipolar mathematical models of an electronic neural network functioning as a terminal-attractor-based associative memory (TABAM) were developed. The models comprise sets of equations describing interactions between the time-varying inputs and outputs of the neural-network memory, regarded as a dynamical system. They simplify the design and operation of an optoelectronic processor implementing a TABAM that performs associative recall of images. The TABAM concept is described in "Optoelectronic Terminal-Attractor-Based Associative Memory" (NPO-18790). An experimental optoelectronic apparatus that performed associative recall of binary images is described in "Optoelectronic Inner-Product Neural Associative Memory" (NPO-18491).
The AURIC-M Atmospheric Transmission and Radiance Model
1993-01-01
ZAER, ZNEW, and ZNEWV, and used locally to set the array ZMDL for use elsewhere in the program. 3) The user input data layers for Model 7, which... including the layering. For this reason, the original calculation layer altitudes were kept in place (array ZAER), and new ones were added in a separate... variable (ZAUR), used only when the AURIC mode is on. The ZAER altitudes vary in 1 km steps from 0 to 25 km, in 5 km steps up through 50 km, with
Salciarini, D.; Godt, J.W.; Savage, W.Z.; Conversini, P.; Baum, R.L.; Michael, J.A.
2006-01-01
We model the rainfall-induced initiation of shallow landslides over a broad region using a deterministic approach, the Transient Rainfall Infiltration and Grid-based Slope-stability (TRIGRS) model that couples an infinite-slope stability analysis with a one-dimensional analytical solution for transient pore pressure response to rainfall infiltration. This model permits the evaluation of regional shallow landslide susceptibility in a Geographic Information System framework, and we use it to analyze susceptibility to shallow landslides in an area in the eastern Umbria Region of central Italy. As shown on a landslide inventory map produced by the Italian National Research Council, the area has been affected in the past by shallow landslides, many of which have transformed into debris flows. Input data for the TRIGRS model include time-varying rainfall, topographic slope, colluvial thickness, initial water table depth, and material strength and hydraulic properties. Because of a paucity of input data, we focus on parametric analyses to calibrate and test the model and show the effect of variation in material properties and initial water table conditions on the distribution of simulated instability in the study area in response to realistic rainfall. Comparing the results with the shallow landslide inventory map, we find more than 80% agreement between predicted shallow landslide susceptibility and the inventory, despite the paucity of input data.
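The infinite-slope half of the TRIGRS coupling can be sketched as a standard factor-of-safety computation. The pressure head ψ, which TRIGRS obtains from its transient infiltration solution, is treated here as a plain input, and all parameter values are illustrative assumptions:

```python
import math

def factor_of_safety(slope_deg, z, c, phi_deg, psi,
                     gamma_s=20e3, gamma_w=9.81e3):
    """Infinite-slope factor of safety at depth z (m).

    slope_deg : slope angle (degrees)
    c         : effective cohesion (Pa)
    phi_deg   : effective friction angle (degrees)
    psi       : pressure head at depth z (m); in TRIGRS this comes from
                the transient infiltration solution, here it is an input.
    gamma_s   : soil unit weight (N/m^3), gamma_w: water unit weight.
    """
    d = math.radians(slope_deg)
    phi = math.radians(phi_deg)
    frictional = math.tan(phi) / math.tan(d)
    cohesive = (c - psi * gamma_w * math.tan(phi)) / (
        gamma_s * z * math.sin(d) * math.cos(d))
    return frictional + cohesive

# Rising pore pressure (larger psi) drives FS down toward failure (FS < 1).
dry = factor_of_safety(35.0, 2.0, 4e3, 30.0, psi=0.0)
wet = factor_of_safety(35.0, 2.0, 4e3, 30.0, psi=1.5)
print(dry, wet)
```

Mapping this cell-by-cell over gridded slope, colluvial thickness, and strength layers, with ψ(Z, t) from the infiltration solution, is the essence of the regional susceptibility computation.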
NASA Astrophysics Data System (ADS)
Shelomentsev, A. G.; Medvedev, M. A.; Berg, D. B.; Lapshina, S. N.; Taubayev, A. A.; Davletbaev, R. H.; Savina, D. V.
2017-12-01
The present study is devoted to the development of a mathematical model of the competition life cycle in a closed business community with limited resources. The growth of each agent is determined by the balance of input and output resource flows: the input (cash) flow W covers the variable costs V, the constant costs C, and the growth dA/dt of the agent's assets A. The value of V is proportional to the assets A, which allows us to write down a first-order non-stationary differential equation for agent growth. The model includes one such equation for each agent. The amount of resources available to the agents varies in time, and the balances of their input and output flows change correspondingly over the different stages of the competition life cycle. According to the theory of systems, the most complete description of any object or process is the model of its life cycle, which describes all stages of its development: from appearance ("birth") through development ("growth") to extinction ("death"). A model of the evolution of an individual firm that does not contradict the economic meaning of events actually observed in the market is the desired result of modern AVMs for applied use. With a correct description of the market, rules for participants' actions, restrictions, and forecasts can be obtained that modern mathematics and economics cannot otherwise provide.
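A minimal numerical sketch of the agent-growth balance described above, with an illustrative resource-sharing rule (each agent's inflow proportional to its assets) and made-up parameter values:

```python
import numpy as np

# Euler integration of dA/dt = W - v*A - C for competing agents sharing
# a fixed total input flow in proportion to their assets. All parameter
# values are illustrative, not taken from the study.
dt, steps = 0.01, 5000
v, C, W_total = 0.5, 1.0, 30.0
A = np.array([1.0, 2.0, 4.0])          # initial assets of three agents

for _ in range(steps):
    W = W_total * A / A.sum()          # larger agents capture more inflow
    A = A + dt * (W - v * A - C)
    A = np.maximum(A, 1e-9)            # an agent with no assets is "dead"

# The proportional-share rule gives positive feedback: the initially
# largest agent grows its share while smaller agents decline, and a sole
# survivor approaches roughly (W_total - C) / v.
print(A)
```

The life-cycle stages fall out of the same balance: while several agents remain, inflow shares shift among them; once competitors vanish, the survivor settles to its steady state.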
Evaluation of globally available precipitation data products as input for water balance models
NASA Astrophysics Data System (ADS)
Lebrenz, H.; Bárdossy, A.
2009-04-01
The subject of this study is the evaluation of globally available precipitation data products intended for use as input variables for water balance models in ungauged basins. The selected data sources are (a) the Global Precipitation Climatology Centre (GPCC), (b) the Global Precipitation Climatology Project (GPCP), and (c) the Climatic Research Unit (CRU), resulting in twelve globally available data products. The products are based on different underlying data, different derivation routines, and varying resolutions in time and space. For validation purposes, the ground data from South Africa were screened for homogeneity and consistency by various tests, and outlier detection using multi-linear regression was performed. External Drift Kriging was subsequently applied to the ground data, and the resulting precipitation arrays were compared to the different products with respect to quantity and variance.
VORTAB - A data-tablet method of developing input data for the VORLAX program
NASA Technical Reports Server (NTRS)
Denn, F. M.
1979-01-01
A method of developing an input data file for use in the aerodynamic analysis of a complete airplane with the VORLAX computer program is described. The hardware consists of an interactive graphics terminal equipped with a graphics tablet. Software includes graphics routines from the Tektronix PLOT 10 package as well as the VORTAB program described. The user determines the size and location of each of the major panels for the aircraft before using the program. Data are entered both from the terminal keyboard and the graphics tablet. The size of the resulting data file depends on the complexity of the model and can vary from ten to several hundred card images. After the data are entered, two programs, READB and PLOTB, are executed which plot the configuration, allowing visual inspection of the model.
Evans, Alistair R.; McHenry, Colin R.
2015-01-01
The reliability of finite element analysis (FEA) in biomechanical investigations depends upon understanding the influence of model assumptions. In producing finite element models, surface mesh resolution is influenced by the resolution of input geometry, and influences the resolution of the ensuing solid mesh used for numerical analysis. Despite a large number of studies incorporating sensitivity studies of the effects of solid mesh resolution, there has not yet been any investigation into the effect of surface mesh resolution upon results in a comparative context. Here we use a dataset of crocodile crania to examine the effects of surface resolution on FEA results in a comparative context. Seven high-resolution surface meshes were each down-sampled to varying degrees while keeping the resulting number of solid elements constant. These models were then subjected to bite and shake load cases using finite element analysis. The results show that incremental decreases in surface resolution can result in fluctuations in strain magnitudes, but that it is possible to obtain stable results using lower-resolution surface meshes in a comparative FEA study. As surface mesh resolution links input geometry with the resulting solid mesh, the implication of these results is that low-resolution input geometry and solid meshes may provide valid results in a comparative context. PMID:26056620
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pelletier, Jon D.; Broxton, Patrick D.; Hazenberg, Pieter
Earth’s terrestrial near-subsurface environment can be divided into relatively porous layers of soil, intact regolith, and sedimentary deposits above unweathered bedrock. Variations in the thicknesses of these layers control the hydrologic and biogeochemical responses of landscapes. Currently, Earth System Models approximate the thickness of these relatively permeable layers above bedrock as uniform globally, despite the fact that their thicknesses vary systematically with topography, climate, and geology. To meet the need for more realistic input data for models, we developed a high-resolution gridded global data set of the average thicknesses of soil, intact regolith, and sedimentary deposits within each 30 arcsec (~1 km) pixel using the best available data for topography, climate, and geology as input. Our data set partitions the global land surface into upland hillslope, upland valley bottom, and lowland landscape components and uses models optimized for each landform type to estimate the thicknesses of each subsurface layer. On hillslopes, the data set is calibrated and validated using independent data sets of measured soil thicknesses from the U.S. and Europe and on lowlands using depth to bedrock observations from groundwater wells in the U.S. As a result, we anticipate that the data set will prove useful as an input to regional and global hydrological and ecosystems models.
Meter circuit for tuning RF amplifiers
NASA Technical Reports Server (NTRS)
Longthorne, J. E.
1973-01-01
Circuit computes and indicates efficiency of RF amplifier as inputs and other parameters are varied. Voltage drop across internal resistance of ammeter is amplified by operational amplifier and applied to one multiplier input. Other input is obtained through two resistors from positive terminal of power supply.
NASA Astrophysics Data System (ADS)
Chowdhury, S.; Sharma, A.
2005-12-01
Hydrological model inputs are often derived from measurements at point locations taken at discrete time steps. The nature of uncertainty associated with such inputs is thus a function of the quality and number of measurements available in time. A change in these characteristics (such as a change in the number of rain-gauge inputs used to derive spatially averaged rainfall) results in inhomogeneity in the associated distributional profile. Ignoring such uncertainty can lead to models that aim to simulate based on the observed input variable instead of the true measurement, resulting in a biased representation of the underlying system dynamics as well as an increase in both bias and the predictive uncertainty in simulations. This is especially true of cases where the nature of uncertainty likely in the future is significantly different to that in the past. Possible examples include situations where the accuracy of the catchment averaged rainfall has increased substantially due to an increase in the rain-gauge density, or accuracy of climatic observations (such as sea surface temperatures) increased due to the use of more accurate remote sensing technologies. We introduce here a method to ascertain the true value of parameters in the presence of additive uncertainty in model inputs. This method, known as SIMulation EXtrapolation (SIMEX, [Cook, 1994]) operates on the basis of an empirical relationship between parameters and the level of additive input noise (or uncertainty). The method starts with generating a series of alternate realisations of model inputs by artificially adding white noise in increasing multiples of the known error variance. The alternate realisations lead to alternate sets of parameters that are increasingly biased with respect to the truth due to the increased variability in the inputs. 
Once several such realisations have been drawn, one is able to formulate an empirical relationship between the parameter values and the level of additive noise present. SIMEX is based on the theory that the trend in alternate parameters can be extrapolated back to the notional error-free zone. We illustrate the utility of SIMEX in a synthetic rainfall-runoff modelling scenario and in an application studying the dependence of uncertain distributed sea surface temperature anomalies on an indicator of the El Nino Southern Oscillation, the Southern Oscillation Index (SOI). The errors in rainfall data and their effect are explored using the Sacramento rainfall-runoff model. The rainfall uncertainty is assumed to be multiplicative and temporally invariant. The model used to relate the sea surface temperature anomalies (SSTA) to the SOI is assumed to be of a linear form. The nature of uncertainty in the SSTA is additive and varies with time. The SIMEX framework allows assessment of the relationship between the error-free inputs and response. Cook, J.R., Stefanski, L. A., Simulation-Extrapolation Estimation in Parametric Measurement Error Models, Journal of the American Statistical Association, 89 (428), 1314-1328, 1994.
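A minimal SIMEX sketch on a synthetic errors-in-variables regression (all parameter values are illustrative): noise is added in increasing multiples λ of the known error variance, the attenuated slope is tracked, and a quadratic in λ is extrapolated back to λ = -1, the notional error-free case:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic linear model y = a*x + noise, with x observed subject to
# additive measurement error of known variance (illustrative values).
n, a_true, err_var = 20_000, 2.0, 0.5
x_true = rng.normal(0.0, 1.0, n)
y = a_true * x_true + rng.normal(0.0, 0.2, n)
x_obs = x_true + rng.normal(0.0, np.sqrt(err_var), n)

def ols_slope(x, y):
    return np.cov(x, y)[0, 1] / np.var(x)

naive = ols_slope(x_obs, y)            # attenuated toward zero

# SIMEX: add extra noise in increasing multiples lambda of the known
# error variance, average the slope over noise realisations, then
# extrapolate the trend back to lambda = -1 with a quadratic fit.
lambdas = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
slopes = []
for lam in lambdas:
    est = [ols_slope(x_obs + rng.normal(0.0, np.sqrt(lam * err_var), n), y)
           for _ in range(20)]
    slopes.append(np.mean(est))
coef = np.polyfit(lambdas, slopes, 2)
simex = np.polyval(coef, -1.0)

print(naive, simex)   # simex lies much closer to a_true = 2.0
```

Here the naive slope is attenuated by the factor Var(x_true)/(Var(x_true) + err_var); the quadratic extrapolation recovers most of that loss, which is exactly the behaviour the abstract exploits for rainfall and SSTA inputs.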
Cabaraban, Maria Theresa I; Kroll, Charles N; Hirabayashi, Satoshi; Nowak, David J
2013-05-01
A distributed adaptation of i-Tree Eco was used to simulate dry deposition in an urban area. This investigation focused on the effects of varying temperature, LAI, and NO2 concentration inputs on estimated NO2 dry deposition to trees in Baltimore, MD. A coupled modeling system is described, wherein WRF provided temperature and LAI fields, and CMAQ provided NO2 concentrations. A base case simulation was conducted using built-in distributed i-Tree Eco tools, and simulations using different inputs were compared against this base case. Differences in land cover classification and tree cover between the distributed i-Tree Eco and WRF resulted in changes in estimated LAI, which in turn resulted in variations in simulated NO2 dry deposition. Estimated NO2 removal decreased when CMAQ-derived concentration was applied to the distributed i-Tree Eco simulation. Discrepancies in temperature inputs did little to affect estimates of NO2 removal by dry deposition to trees in Baltimore.
Measured Polarized Spectral Responsivity of JPSS J1 VIIRS Using the NIST T-SIRCUS
NASA Technical Reports Server (NTRS)
McIntire, Jeff; Young, James B.; Moyer, David; Waluschka, Eugene; Xiong, Xiaoxiong
2015-01-01
Recent pre-launch measurements performed on the Joint Polar Satellite System (JPSS) J1 Visible Infrared Imaging Radiometer Suite (VIIRS) using the National Institute of Standards and Technology (NIST) Traveling Spectral Irradiance and Radiance Responsivity Calibrations Using Uniform Sources (T-SIRCUS) monochromatic source have provided wavelength dependent polarization sensitivity for select spectral bands and viewing conditions. Measurements were made at a number of input linear polarization states (twelve in total) and initially at thirteen wavelengths across the bandpass (later expanded to seventeen for some cases). Using the source radiance information collected by an external monitor, a spectral responsivity function was constructed for each input linear polarization state. Additionally, an unpolarized spectral responsivity function was derived from these polarized measurements. An investigation of how the centroid, bandwidth, and detector responsivity vary with polarization state was weighted by two model input spectra to simulate both ground measurements as well as expected on-orbit conditions. These measurements will enhance our understanding of VIIRS polarization sensitivity, improve the design for future flight models, and provide valuable data to enhance product quality in the post-launch phase.
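The spectrum-weighted band summary mentioned above (the centroid of a responsivity function under different input spectra) can be sketched as follows; the Gaussian band and the two input spectra are made-up shapes, not VIIRS measurements:

```python
import numpy as np

# Band-weighted centroid of a hypothetical Gaussian spectral responsivity
# R(lambda), weighted by an assumed input spectrum L(lambda) -- the kind
# of bookkeeping used to compare ground and on-orbit conditions.
wl = np.linspace(600.0, 700.0, 1001)                  # wavelength (nm)
R = np.exp(-0.5 * ((wl - 650.0) / 10.0) ** 2)         # responsivity
L_flat = np.ones_like(wl)                             # flat input spectrum
L_sloped = wl / wl.mean()                             # red-sloped spectrum

def centroid(wl, R, L):
    w = R * L                     # effective (spectrum-weighted) response
    return (wl * w).sum() / w.sum()

c_flat = centroid(wl, R, L_flat)      # 650 nm by symmetry
c_sloped = centroid(wl, R, L_sloped)  # pulled toward longer wavelengths
print(c_flat, c_sloped)
```

The same weighted-moment machinery gives an effective bandwidth (second moment), which is how the band-average quantities shift between a laboratory source spectrum and an expected on-orbit scene spectrum.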
Expendable vs reusable propulsion systems cost sensitivity
NASA Technical Reports Server (NTRS)
Hamaker, Joseph W.; Dodd, Glenn R.
1989-01-01
One of the key trade studies that must be considered when studying any new space transportation hardware is whether to go reusable or expendable. An analysis is presented here for such a trade relative to a proposed Liquid Rocket Booster which is being studied at MSFC. The assumptions or inputs to the trade were developed and integrated into a model that compares the Life-Cycle Costs of both a reusable LRB and an expendable LRB. Sensitivities were run by varying the input variables to see their effect on total cost. In addition a Monte-Carlo simulation was run to determine the amount of cost risk that may be involved in a decision to reuse or expend.
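A Monte Carlo cost-risk comparison of the sort described can be sketched as below; every distribution and dollar figure is hypothetical, chosen only to illustrate the mechanics of the trade:

```python
import numpy as np

rng = np.random.default_rng(1)

# Monte Carlo life-cycle-cost sketch for an expendable vs reusable
# booster trade. All distributions and $M figures are made up.
n, flights = 100_000, 40

# Expendable: buy a new unit for every flight.
unit_cost = rng.triangular(40, 50, 70, n)             # $M per unit
lcc_expend = flights * unit_cost

# Reusable: larger development cost, a small fleet, plus refurbishment.
dev = rng.triangular(800, 1000, 1500, n)              # $M, one time
fleet = 4 * rng.triangular(60, 80, 120, n)            # $M, 4 units
refurb = flights * rng.triangular(5, 8, 15, n)        # $M per flight
lcc_reuse = dev + fleet + refurb

# Cost risk: the probability one option beats the other, given the
# assumed input uncertainties.
p_reuse_cheaper = np.mean(lcc_reuse < lcc_expend)
print(p_reuse_cheaper)
```

Sweeping an input (say, the flight count) through a range and re-running the simulation gives the sensitivity curves the abstract refers to, with the Monte Carlo spread supplying the risk band around each curve.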
Homeostasis, singularities, and networks.
Golubitsky, Martin; Stewart, Ian
2017-01-01
Homeostasis occurs in a biological or chemical system when some output variable remains approximately constant as an input parameter varies over some interval. We discuss two main aspects of homeostasis, both related to the effect of coordinate changes on the input-output map. The first is a reformulation of homeostasis in the context of singularity theory, achieved by replacing 'approximately constant over an interval' by 'zero derivative of the output with respect to the input at a point'. Unfolding theory then classifies all small perturbations of the input-output function. In particular, the 'chair' singularity, which is especially important in applications, is discussed in detail. Its normal form and universal unfolding are derived and the region of approximate homeostasis is deduced. The results are motivated by data on thermoregulation in two species of opossum and the spiny rat. We give a formula for finding chair points in mathematical models by implicit differentiation and apply it to a model of lateral inhibition. The second asks when homeostasis is invariant under appropriate coordinate changes. This is false in general, but for network dynamics there is a natural class of coordinate changes: those that preserve the network structure. We characterize those nodes of a given network for which homeostasis is invariant under such changes. This characterization is determined combinatorially by the network topology.
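The implicit-differentiation recipe for locating chair points (dx/dI = 0 together with d²x/dI² = 0, with a nonzero third derivative) can be sketched symbolically; the implicit function below is a made-up example, not a model from the paper:

```python
import sympy as sp

# Chair point of an input-output map x(I) defined implicitly by
# f(x, I) = 0: homeostasis requires dx/dI = 0, and a chair point
# additionally requires d^2x/dI^2 = 0.
x, I = sp.symbols('x I')
f = x - (I - 1)**3 - 2          # implicitly, x(I) = (I - 1)^3 + 2

dx_dI = sp.idiff(f, x, I)       # implicit first derivative dx/dI
d2x_dI2 = sp.idiff(f, x, I, 2)  # implicit second derivative

sol = sp.solve([dx_dI, d2x_dI2, f], [I, x], dict=True)
print(sol)   # chair point at I = 1, x = 2
```

Near the chair point the output behaves like a cubic in the input, which is exactly the flat-shouldered 'chair' profile seen in the opossum thermoregulation data the paper cites.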
Impact of input data (in)accuracy on overestimation of visible area in digital viewshed models
Lagner, Ondřej; Klouček, Tomáš; Šímová, Petra
2018-01-01
Viewshed analysis is a GIS tool in standard use for more than two decades to perform numerous scientific and practical tasks. The reliability of the resulting viewshed model depends on the computational algorithm and the quality of the input digital surface model (DSM). Although many studies have dealt with improving viewshed algorithms, only a few studies have focused on the effect of the spatial accuracy of input data. Here, we compare simple binary viewshed models based on DSMs having varying levels of detail with viewshed models created using LiDAR DSM. The compared DSMs were calculated as the sums of digital terrain models (DTMs) and layers of forests and buildings with expertly assigned heights. Both elevation data and the visibility obstacle layers were prepared using digital vector maps differing in scale (1:5,000, 1:25,000, and 1:500,000) as well as using a combination of a LiDAR DTM with objects vectorized on an orthophotomap. All analyses were performed for 104 sample locations of 5 km2, covering areas from lowlands to mountains and including farmlands as well as afforested landscapes. We worked with two observer point heights, the first (1.8 m) simulating observation by a person standing on the ground and the second (80 m) as observation from high structures such as wind turbines, and with five estimates of forest heights (15, 20, 25, 30, and 35 m). At all height estimations, all of the vector-based DSMs used resulted in overestimations of visible areas considerably greater than those from the LiDAR DSM. In comparison to the effect from input data scale, the effect from object height estimation was shown to be secondary. PMID:29844982
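Along a single transect, the binary viewshed idea reduces to comparing sight-line slopes: a cell is visible if the slope from the observer's eye to it exceeds the slope to every nearer cell. A toy sketch (not the GIS implementation used in the study):

```python
import numpy as np

def transect_viewshed(profile, observer_height, cell_size=1.0):
    """Binary visibility along a 1D elevation transect.

    The observer stands on the first cell; a cell is visible when the
    sight-line slope to it exceeds the maximum slope to all nearer cells.
    """
    eye = profile[0] + observer_height
    visible = np.zeros(len(profile), dtype=bool)
    visible[0] = True
    max_slope = -np.inf
    for j in range(1, len(profile)):
        slope = (profile[j] - eye) / (j * cell_size)
        if slope > max_slope:
            visible[j] = True
            max_slope = slope
    return visible

# A 5 m wall at cell 1 hides the flat ground behind it from a 1.8 m
# observer, but not from an 80 m observation point (e.g. a wind turbine).
profile = np.array([0.0, 5.0, 0.0, 0.0, 0.0])
print(transect_viewshed(profile, 1.8))
print(transect_viewshed(profile, 80.0))
```

The study's central point shows up even in this sketch: the visible area depends directly on the obstacle heights in the surface model, so a DSM built from coarse vector layers with expertly guessed forest heights will misplace exactly these wall-like occlusions.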
Segmentation, dynamic storage, and variable loading on CDC equipment
NASA Technical Reports Server (NTRS)
Tiffany, S. H.
1980-01-01
Techniques for varying the segmented load structure of a program and for varying the dynamic storage allocation, depending upon whether a batch type or interactive type run is desired, are explained and demonstrated. All changes are based on a single data input to the program. The techniques involve: code within the program to suppress scratch pad input/output (I/O) for a batch run or translate the in-core data storage area from blank common to the end-of-code+1 address of a particular segment for an interactive run; automatic editing of the segload directives prior to loading, based upon data input to the program, to vary the structure of the load for interactive and batch runs; and automatic editing of the load map to determine the initial addresses for in core data storage for an interactive run.
NASA Technical Reports Server (NTRS)
Hall, A. Daniel (Inventor); Davies, Francis J. (Inventor)
2007-01-01
Method and system are disclosed for determining individual string resistance in a network of strings when the current through a parallel connected string is unknown and when the voltage across a series connected string is unknown. The method/system of the invention involves connecting one or more frequency-varying impedance components with known electrical characteristics to each string and applying a frequency-varying input signal to the network of strings. The frequency-varying impedance components may be one or more capacitors, inductors, or both, and are selected so that each string is uniquely identifiable in the output signal resulting from the frequency-varying input signal. Numerical methods, such as non-linear regression, may then be used to resolve the resistance associated with each string.
NASA Astrophysics Data System (ADS)
Pelletier, Jon D.; Broxton, Patrick D.; Hazenberg, Pieter; Zeng, Xubin; Troch, Peter A.; Niu, Guo-Yue; Williams, Zachary; Brunke, Michael A.; Gochis, David
2016-03-01
Earth's terrestrial near-subsurface environment can be divided into relatively porous layers of soil, intact regolith, and sedimentary deposits above unweathered bedrock. Variations in the thicknesses of these layers control the hydrologic and biogeochemical responses of landscapes. Currently, Earth System Models approximate the thickness of these relatively permeable layers above bedrock as uniform globally, despite the fact that their thicknesses vary systematically with topography, climate, and geology. To meet the need for more realistic input data for models, we developed a high-resolution gridded global data set of the average thicknesses of soil, intact regolith, and sedimentary deposits within each 30 arcsec (˜1 km) pixel using the best available data for topography, climate, and geology as input. Our data set partitions the global land surface into upland hillslope, upland valley bottom, and lowland landscape components and uses models optimized for each landform type to estimate the thicknesses of each subsurface layer. On hillslopes, the data set is calibrated and validated using independent data sets of measured soil thicknesses from the U.S. and Europe and on lowlands using depth to bedrock observations from groundwater wells in the U.S. We anticipate that the data set will prove useful as an input to regional and global hydrological and ecosystems models. This article was corrected on 2 FEB 2016. See the end of the full text for details.
Fechter, Dominik; Storch, Ilse
2014-01-01
Due to legislative protection, many species, including large carnivores, are currently recolonizing Europe. To address the impending human-wildlife conflicts in advance, predictive habitat models can be used to determine potentially suitable habitat and areas likely to be recolonized. As field data are often limited, quantitative rule-based models or the extrapolation of results from other studies are often the techniques of choice. Using the wolf (Canis lupus) in Germany as a model for habitat generalists, we developed a habitat model based on the location and extent of twelve existing wolf home ranges in Eastern Germany, current knowledge on wolf biology, different habitat modeling techniques and various input data to analyze ten different input parameter sets and address the following questions: (1) How do a priori assumptions and different input data or habitat modeling techniques affect the abundance and distribution of potentially suitable wolf habitat and the number of wolf packs in Germany? (2) In a synthesis across input parameter sets, what areas are predicted to be most suitable? (3) Are existing wolf pack home ranges in Eastern Germany consistent with current knowledge on wolf biology and habitat relationships? Our results indicate that the amount of potentially suitable habitat estimated varies greatly depending on which assumptions about habitat relationships are applied in the model and which modeling techniques are chosen. Depending on a priori assumptions, Germany could accommodate between 154 and 1769 wolf packs. The locations of the existing wolf pack home ranges in Eastern Germany indicate that wolves are able to adapt to areas densely populated by humans, but are limited to areas with low road densities. Our analysis suggests that predictive habitat maps in general should be interpreted with caution, and illustrates the risk for habitat modelers of relying on only one selection of habitat factors or a single modeling technique. PMID:25029506
Project Management Using Modern Guidance, Navigation and Control Theory
NASA Technical Reports Server (NTRS)
Hill, Terry
2010-01-01
The idea of applying control theory to project management is not new; however, literature on the topic and its real-world applications is neither readily available nor comprehensive in how all the principles of Guidance, Navigation and Control (GN&C) apply. This paper addresses how the fundamental principles of modern GN&C theory have been applied to NASA's Constellation Space Suit project and the resulting ability to manage the project within cost and schedule. As with physical systems, a project can be modeled and managed with the same guiding principles of GN&C as if it were a complex vehicle, system, or software with time-varying processes, at times non-linear responses, multiple data inputs of varying accuracy, and a range of operating points. For such systems the classic approach can be applied to small and well-defined projects; larger, multi-year projects involving multiple organizational structures, external influences, and a multitude of diverse resources require modern control theory to model and control the project. The fundamental principles of GN&C state that a system comprises these core concepts: state, behavior, control system, navigation system, guidance and planning logic, and feedback systems. The state of a system defines the aspects of the system's dynamics that can change, such as position, velocity, acceleration, coordinate-based attitude, and temperature. The behavior of the system describes what changes are possible, as opposed to the current values of what can change, which are captured in the state. The behavior of a system is captured in the system model and, if properly done, will aid accurate prediction of future system performance. The control system uses the state and behavior of the system, together with feedback, to adjust the control inputs into the system.
The navigation system takes the multiple data inputs and, based upon a priori knowledge of each input, develops a statistically based weighting of the inputs to determine where the system currently is. The guidance and planning logic, given an understanding of where the system is (provided by the navigation system), in turn determines where it needs to be and how to get there. Lastly, the feedback system is the right arm of the control system, allowing it to effect change in the overall system; it is therefore critical to correctly identify not only the feedback inputs but also the system's response to them. As with any systems project, it is critical that the objective of the system be clearly defined, not only for planning but also to measure performance and to aid in the guidance of the system or project.
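The control-loop analogy above can be made concrete with a toy discrete-time sketch: a planned completion ramp plays the role of guidance, the gap between plan and progress is the navigation estimate, and staffing is the control input. Every number here is invented for illustration; this is not the Constellation project's actual model.

```python
import numpy as np

# Hypothetical illustration: treat fraction of project completed (0..1) as
# the "state", a planned linear ramp as the "guidance" trajectory, and
# staffing level as the control input adjusted by proportional feedback.
plan = np.linspace(0.0, 1.0, 13)        # guidance: planned fraction complete per month
state = 0.0
productivity = 0.07                      # work completed per month per unit staffing (assumed)
k_p = 2.0                                # proportional feedback gain (assumed)
history = []
for month in range(12):
    error = plan[month + 1] - state      # navigation: where we are vs. where we should be
    staffing = 1.0 + k_p * error         # control: adjust resources based on feedback
    state += productivity * staffing     # behavior: state propagates under the control input
    history.append(state)

final_gap = abs(1.0 - state)             # how far the project ends from the plan
```

With feedback on, the state tracks the ramp and ends close to completion; setting `k_p = 0` leaves a larger steady lag behind the plan.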
Mortensen, Stig B; Klim, Søren; Dammann, Bernd; Kristensen, Niels R; Madsen, Henrik; Overgaard, Rune V
2007-10-01
The non-linear mixed-effects model based on stochastic differential equations (SDEs) provides an attractive residual error model that is able to handle serially correlated residuals typically arising from structural misspecification of the true underlying model. The use of SDEs also opens up new tools for model development and easily allows for tracking of unknown inputs and parameters over time. An algorithm for maximum likelihood estimation of the model was proposed earlier, and the present paper presents its first general implementation. The implementation is done in Matlab and also demonstrates the use of parallel computing for improved estimation times. The use of the implementation is illustrated by two applications, which focus on the ability of the model to estimate unknown inputs, facilitated by the extension to SDEs. The first application is a deconvolution-type estimation of the insulin secretion rate based on a linear two-compartment model for C-peptide measurements. In the second application the model is extended to also give an estimate of the time-varying liver extraction based on both C-peptide and insulin measurements.
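As a minimal sketch of the state equation underlying such models, the following Euler-Maruyama simulation propagates a one-compartment SDE with an input and additive system noise. The rate constant, noise level, and step input are illustrative; the paper's actual contribution, maximum likelihood estimation of the unknown input, is not reproduced here.

```python
import numpy as np

# Euler-Maruyama sketch of an SDE state equation of the kind used in
# SDE-based mixed-effects models: dx = (-k*x + u(t)) dt + sigma * dW.
# k, sigma, and the input profile u(t) are illustrative values only.
rng = np.random.default_rng(0)
k, sigma, dt, n = 0.5, 0.05, 0.01, 1000
x = np.zeros(n + 1)
for i in range(n):
    t = i * dt
    u = 1.0 if t < 5.0 else 0.0          # "unknown" input: here a known step, for illustration
    dW = rng.normal(0.0, np.sqrt(dt))    # Wiener increment
    x[i + 1] = x[i] + (-k * x[i] + u) * dt + sigma * dW

x_mid = x[500]                           # state at t = 5, near quasi-steady state u/k = 2
```

In the real setting, `u(t)` would be unknown and reconstructed from noisy observations of `x`, which is exactly the deconvolution-type problem the paper solves for insulin secretion.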
EzGal: A Flexible Interface for Stellar Population Synthesis Models
NASA Astrophysics Data System (ADS)
Mancone, Conor L.; Gonzalez, Anthony H.
2012-06-01
We present EzGal, a flexible Python program designed to easily generate observable parameters (magnitudes, colors, and mass-to-light ratios) for arbitrary input stellar population synthesis (SPS) models. As has been demonstrated by various authors, for many applications the choice of input SPS models can be a significant source of systematic uncertainty. A key strength of EzGal is that it enables simple, direct comparison of different model sets so that the uncertainty introduced by the choice of model set can be quantified. Its ability to work with new models will allow EzGal to remain useful as SPS modeling evolves to keep up with the latest research (such as varying IMFs). EzGal is also capable of generating composite stellar population models (CSPs) for arbitrary input star-formation histories and reddening laws, and it can be used to interpolate between metallicities for a given model set. To facilitate use, we have created an online interface to run EzGal and quickly generate magnitude and mass-to-light ratio predictions for a variety of star-formation histories and model sets. We make many commonly used SPS models available from the online interface, including the canonical Bruzual & Charlot models, an updated version of these models, the Maraston models, the BaSTI models, and the Flexible Stellar Population Synthesis (FSPS) models. We use EzGal to compare magnitude predictions for the model sets as a function of wavelength, age, metallicity, and star-formation history. From this comparison we quickly recover the well-known result that the models agree best in the optical for old, solar-metallicity models, where the differences between model sets are smallest. Similarly, the most problematic regime for SPS modeling is for young ages (≲2 Gyr) and long wavelengths (λ ≳ 7500 Å), where thermally pulsing AGB stars are important and scatter between models can vary from 0.3 mag (Sloan i) to 0.7 mag (Ks).
We find that these differences are not caused by one discrepant model set and should therefore be interpreted as general uncertainties in SPS modeling. Finally, we connect our results to a more physically motivated example by generating CSPs with a star-formation history matching the global star-formation history of the universe. We demonstrate that the wavelength and age dependence of SPS model uncertainty translates into a redshift-dependent model uncertainty, highlighting the importance of a quantitative understanding of model differences when comparing observations with models as a function of redshift.
Vero, S E; Ibrahim, T G; Creamer, R E; Grant, J; Healy, M G; Henry, T; Kramers, G; Richards, K G; Fenton, O
2014-12-01
The true efficacy of a programme of agricultural mitigation measures within a catchment to improve water quality can be determined only after a certain hydrologic time lag period (subsequent to implementation) has elapsed. As the biophysical response to policy is not synchronous, accurate estimates of total time lag (unsaturated and saturated) become critical to manage the expectations of policy makers. The estimation of the vertical unsaturated zone component of time lag is vital, as it indicates early trends (initial breakthrough), bulk (centre of mass) and total (exit) travel times. Typically, estimation of time lag through the unsaturated zone is poor, owing to the lack of site-specific soil physical data or to the assumption of saturated conditions. Numerical models (e.g. Hydrus 1D) enable estimates of time lag with varied levels of input data. The current study examines the consequences of varied soil hydraulic and meteorological complexity on unsaturated zone time lag estimates using simulated and actual soil profiles. Results indicated that: greater temporal resolution (from daily to hourly) of meteorological data was more critical as the saturated hydraulic conductivity of the soil decreased; high clay content soils failed to converge, reflecting the prevalence of the lateral component as a contaminant pathway; elucidation of soil hydraulic properties was influenced by the complexity of soil physical data employed (textural menu, ROSETTA, full and partial soil water characteristic curves), which consequently affected time lag ranges; and as the importance of the unsaturated zone increases with respect to total travel times, the requirements for high complexity/resolution input data become greater. The methodology presented herein demonstrates that decisions made regarding input data and landscape position will have consequences for the estimated range of vertical travel times.
Insufficiencies or inaccuracies regarding such input data can therefore mislead policy makers regarding the achievability of water quality targets.
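The simplest point of comparison for such simulations is a piston-flow estimate of the vertical travel time, which needs only the unsaturated-zone depth, an average water content, and the recharge rate. The values below are illustrative, not from the study.

```python
# Back-of-envelope piston-flow estimate of unsaturated-zone travel time,
# a far simpler calculation than the Hydrus-1D simulations in the study.
# All values are illustrative.
depth_m = 5.0            # thickness of the unsaturated zone (m)
theta = 0.30             # average volumetric water content (m3/m3)
recharge_m_yr = 0.25     # effective recharge (m/yr)

velocity_m_yr = recharge_m_yr / theta      # pore-water velocity under piston flow
lag_years = depth_m / velocity_m_yr        # centre-of-mass travel time estimate
```

This kind of estimate gives only the bulk (centre of mass) lag; resolving initial breakthrough and exit times is precisely why the variably saturated numerical models discussed above are needed.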
Zhang, Z. Fred; White, Signe K.; Bonneville, Alain; ...
2014-12-31
Numerical simulations have been used for estimating CO2 injectivity, CO2 plume extent, pressure distribution, and Area of Review (AoR), and for the design of CO2 injection operations and a monitoring network for the FutureGen project. The simulation results are affected by uncertainties associated with numerous input parameters, the conceptual model, initial and boundary conditions, and factors related to injection operations. Furthermore, the uncertainties in the simulation results also vary in space and time. The key need is to identify those uncertainties that critically impact the simulation results and quantify their impacts. We introduce an approach to determine the local sensitivity coefficient (LSC), defined as the percent response of the output, to rank the importance of model inputs on outputs. The uncertainty of an input with higher sensitivity has larger impacts on the output. The LSC is scalable by the error of an input parameter. The composite sensitivity of an output to a subset of inputs can be calculated by summing the individual LSC values. We propose a local sensitivity coefficient method and apply it to the FutureGen 2.0 Site in Morgan County, Illinois, USA, to investigate the sensitivity of input parameters and initial conditions. The conceptual model for the site consists of 31 layers, each of which has a unique set of input parameters. The sensitivity of 11 parameters for each layer and 7 inputs as initial conditions is then investigated. For CO2 injectivity and plume size, about half of the uncertainty is due to only 4 or 5 of the 348 inputs, and three-quarters of the uncertainty is due to about 15 of the inputs. The initial conditions and the properties of the injection layer and its neighbour layers contribute most of the sensitivity. Overall, the simulation outputs are very sensitive to only a small fraction of the inputs.
However, the parameters that are important for controlling CO2 injectivity are not the same as those controlling the plume size. The three most sensitive inputs for injectivity were the horizontal permeability of Mt Simon 11 (the injection layer), the initial fracture-pressure gradient, and the residual aqueous saturation of Mt Simon 11, while those for the plume area were the initial salt concentration, the initial pressure, and the initial fracture-pressure gradient. The advantages of requiring only a single set of simulation results, scalability to the proper parameter errors, and easy calculation of composite sensitivities make this approach well suited for estimating AoR uncertainty and for guiding cost-effective site characterization, injection well design, and monitoring network design for CO2 storage projects.
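A minimal sketch of the LSC idea, assuming a stand-in analytic model rather than the FutureGen simulator: perturb one input at a time, express the output response in percent, and sum over a subset of inputs for a composite sensitivity.

```python
import numpy as np

# Sketch of a local sensitivity coefficient (LSC) calculation in the spirit
# described above. The "model" is a hypothetical stand-in function (e.g. an
# injectivity-like output), not the FutureGen simulator.
def model(perm, porosity, pressure):
    return perm ** 0.8 * pressure / porosity     # illustrative output

base = dict(perm=1e-13, porosity=0.12, pressure=2.0e7)
y0 = model(**base)

lsc = {}
for name in base:
    pert = dict(base)
    pert[name] *= 1.01                             # +1% one-at-a-time perturbation
    lsc[name] = 100.0 * (model(**pert) - y0) / y0  # percent response of the output

# Composite sensitivity for a subset of inputs: sum of individual LSC magnitudes.
composite = sum(abs(v) for v in lsc.values())
```

Because each LSC is a percent response, it can later be rescaled by the actual error of each input, which is the scalability property the abstract highlights.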
Using the power balance model to simulate cross-country skiing on varying terrain.
Moxnes, John F; Sandbakk, Oyvind; Hausken, Kjell
2014-01-01
The current study adapts the power balance model to simulate cross-country skiing on varying terrain. We assumed that the skier's locomotive power at a self-chosen pace is a function of speed, which is impacted by friction, incline, air drag, and mass. An elite male skier's position along the track during ski skating was simulated and compared with his experimental data. As input values in the model, air drag and friction were estimated from the literature based on the skier's mass, snow conditions, and speed. We regard the fit as good, since the difference in racing time between simulations and measurements was 2 seconds of the 815-second racing time, with acceptable fit in both uphill and downhill terrain. Using this model, we estimated the influence of changes in various factors such as air drag, friction, and body mass on performance. In conclusion, the power balance model with locomotive power as a function of speed was found to be a valid tool for analyzing performance in cross-country skiing.
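A minimal power-balance integration along a constant incline illustrates the approach: locomotive power is balanced against friction, gravity, and air drag, and speed and position are integrated forward in time. All parameter values (mass, friction coefficient, drag area, locomotive power) are assumed for illustration rather than taken from the paper.

```python
# Minimal power-balance integration on a constant gentle incline.
# Parameter values are illustrative, not the paper's.
m, g = 75.0, 9.81            # skier + equipment mass (kg), gravity (m/s2)
mu, incline = 0.04, 0.02     # snow friction coefficient, slope (small-angle approx.)
cda, rho = 0.55, 1.2         # drag area (m2), air density (kg/m3)
power = 250.0                # locomotive power (W), assumed constant here

dt, v, x = 0.1, 2.0, 0.0
for _ in range(3000):        # 300 s of simulated skiing
    drive = power / max(v, 0.5)                  # propulsive force from power balance
    resist = mu * m * g + m * g * incline + 0.5 * rho * cda * v ** 2
    v += (drive - resist) / m * dt               # Newton's second law, explicit Euler
    x += v * dt                                  # position along the track
```

Varying `mu`, `cda`, or `m` and re-running gives exactly the kind of what-if performance estimate described in the abstract, here with a toy single-segment track instead of measured terrain.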
A Time-dependent Heliospheric Model Driven by Empirical Boundary Conditions
NASA Astrophysics Data System (ADS)
Kim, T. K.; Arge, C. N.; Pogorelov, N. V.
2017-12-01
Consisting of charged particles originating from the Sun, the solar wind carries the Sun's energy and magnetic field outward through interplanetary space. The solar wind is the predominant source of space weather events, and modeling the solar wind propagation to Earth is a critical component of space weather research. Solar wind models are typically separated into coronal and heliospheric parts to account for the different physical processes and scales characterizing each region. Coronal models are often coupled with heliospheric models to propagate the solar wind out to Earth's orbit and beyond. The Wang-Sheeley-Arge (WSA) model is a semi-empirical coronal model consisting of a potential field source surface model and a current sheet model that takes synoptic magnetograms as input to estimate the magnetic field and solar wind speed at any distance above the coronal region. The current version of the WSA model takes the Air Force Data Assimilative Photospheric Flux Transport (ADAPT) model as input to provide improved time-varying solutions for the ambient solar wind structure. When heliospheric MHD models are coupled with the WSA model, density and temperature at the inner boundary are treated as free parameters that are tuned to optimal values. For example, the WSA-ENLIL model prescribes density and temperature assuming momentum flux and thermal pressure balance across the inner boundary of the ENLIL heliospheric MHD model. We consider an alternative approach of prescribing density and temperature using empirical correlations derived from Ulysses and OMNI data. We use our own modeling software (Multi-scale Fluid-kinetic Simulation Suite) to drive a heliospheric MHD model with ADAPT-WSA input. The modeling results using the two different approaches of density and temperature prescription suggest that the use of empirical correlations may be a more straightforward, consistent method.
NASA Astrophysics Data System (ADS)
Dimitriadis, Panayiotis; Tegos, Aristoteles; Oikonomou, Athanasios; Pagana, Vassiliki; Koukouvinos, Antonios; Mamassis, Nikos; Koutsoyiannis, Demetris; Efstratiadis, Andreas
2016-03-01
One-dimensional and quasi-two-dimensional hydraulic freeware models (HEC-RAS, LISFLOOD-FP and FLO-2d) are widely used for flood inundation mapping. These models are tested on a benchmark test with a mixed rectangular-triangular channel cross section. Using a Monte Carlo approach, we employ extended sensitivity analysis by simultaneously varying the input discharge, longitudinal and lateral gradients and roughness coefficients, as well as the grid cell size. Based on statistical analysis of three output variables of interest, i.e. water depths at the inflow and outflow locations and total flood volume, we investigate the uncertainty inherent in different model configurations and flow conditions, without the influence of errors and other assumptions on topography, channel geometry and boundary conditions. Moreover, we estimate the uncertainty associated with each input variable and compare it to the overall uncertainty. The outcomes of the benchmark analysis are further highlighted by applying the three models to real-world flood propagation problems, in the context of two challenging case studies in Greece.
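The Monte Carlo strategy described above can be sketched with a stand-in hydraulic relation (normal depth in a wide rectangular channel from Manning's equation) instead of the full 1D/2D models; the sampling ranges and channel width are illustrative.

```python
import numpy as np

# Monte Carlo sketch of the sensitivity experiment described above, with
# Manning's equation standing in for the hydraulic models.
rng = np.random.default_rng(42)
N = 10000
Q = rng.uniform(50.0, 150.0, N)        # discharge (m3/s), illustrative range
n = rng.uniform(0.02, 0.05, N)         # Manning roughness
S = rng.uniform(0.0005, 0.002, N)      # longitudinal slope
b = 20.0                               # channel width (m), held fixed

# Normal depth for a wide rectangular channel: h = (Q*n / (b*sqrt(S)))**(3/5)
h = (Q * n / (b * np.sqrt(S))) ** 0.6

total_var = h.var()
# Crude one-at-a-time check: output variance with roughness fixed at its mean
h_fixed_n = (Q * 0.035 / (b * np.sqrt(S))) ** 0.6
var_without_n = h_fixed_n.var()
```

Comparing `total_var` with the variance obtained when each input in turn is frozen gives a rough apportionment of output uncertainty to individual inputs, the comparison the abstract describes at full scale.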
MRAC Control with Prior Model Knowledge for Asymmetric Damaged Aircraft
Zhang, Jing
2015-01-01
This paper develops a novel state-tracking multivariable model reference adaptive control (MRAC) technique utilizing prior knowledge of plant models to recover control performance of an aircraft with asymmetric structural damage. A modification of the linear model representation is given. With prior knowledge of the structural damage, a polytope linear parameter-varying (LPV) model is derived to cover all damage conditions of concern. An MRAC method is developed for the polytope model, for which stability and asymptotic error convergence are theoretically proved. The proposed technique reduces the number of parameters to be adapted, thus decreasing computational cost and requiring less input information. The method is validated by simulations on the NASA generic transport model (GTM) with damage. PMID:26180839
Processing Oscillatory Signals by Incoherent Feedforward Loops
Zhang, Carolyn; You, Lingchong
2016-01-01
From the timing of amoeba development to the maintenance of stem cell pluripotency, many biological signaling pathways exhibit the ability to differentiate between pulsatile and sustained signals in the regulation of downstream gene expression. While the networks underlying this signal decoding are diverse, many are built around a common motif, the incoherent feedforward loop (IFFL), where an input simultaneously activates an output and an inhibitor of the output. With appropriate parameters, this motif can exhibit temporal adaptation, where the system is desensitized to a sustained input. This property serves as the foundation for distinguishing input signals with varying temporal profiles. Here, we use quantitative modeling to examine another property of IFFLs—the ability to process oscillatory signals. Our results indicate that the system’s ability to translate pulsatile dynamics is limited by two constraints. The kinetics of the IFFL components dictate the input range for which the network is able to decode pulsatile dynamics. In addition, a match between the network parameters and input signal characteristics is required for optimal “counting”. We elucidate one potential mechanism by which information processing occurs in natural networks, and our work has implications in the design of synthetic gene circuits for this purpose. PMID:27623175
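A minimal IFFL simulation illustrates the motif's temporal adaptation: a sustained input produces a transient pulse in the output, which then settles to a low adapted level. The equations and parameters below are illustrative, not fitted to any natural circuit.

```python
import numpy as np

# Minimal incoherent feedforward loop (IFFL): input X activates both the
# output Z and an inhibitor Y; Y represses Z. Under sustained X the output
# pulses and then adapts. Parameters are illustrative.
dt, T = 0.01, 40.0
n = int(T / dt)
Y = Z = 0.0
z_trace = np.empty(n)
for i in range(n):
    X = 1.0                                      # sustained input
    dY = X - 0.5 * Y                             # inhibitor activated by input
    dZ = X / (1.0 + (Y / 0.2) ** 2) - 0.5 * Z    # output activated by X, repressed by Y
    Y += dY * dt
    Z += dZ * dt
    z_trace[i] = Z

peak = z_trace.max()          # transient response to the onset of X
final = z_trace[-1]           # adapted level, well below the transient peak
```

Feeding the same equations a train of short pulses instead of a sustained `X` is the "counting" regime the abstract analyzes: each pulse can re-trigger a response if the inhibitor decays between pulses.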
Automated forward mechanical modeling of wrinkle ridges on Mars
NASA Astrophysics Data System (ADS)
Nahm, Amanda; Peterson, Samuel
2016-04-01
One of the main goals of the InSight mission to Mars is to understand the internal structure of Mars [1], in part through passive seismology. Understanding the shallow surface structure of the landing site is critical to the robust interpretation of recorded seismic signals. Faults, such as the wrinkle ridges abundant in the proposed landing site in Elysium Planitia, can be used to determine the subsurface structure of the regions they deform. Here, we test a new automated method for modeling the topography of a wrinkle ridge (WR) in Elysium Planitia, allowing for faster and more robust determination of fault geometry for interpretation of the local subsurface structure. We perform forward mechanical modeling of fault-related topography [e.g., 2, 3], utilizing the modeling program Coulomb [4, 5] to model surface displacements induced by blind thrust faulting. Fault lengths are difficult to determine for WR; we initially assume a fault length of 30 km, but also test the effects of different fault lengths on model results. At present, we model the wrinkle ridge as a single blind thrust fault with a constant fault dip, though WR are likely to have more complicated fault geometry [e.g., 6-8]. Typically, the modeling is performed using the Coulomb GUI. This approach can be time-consuming, requiring user inputs to change model parameters and to calculate the associated displacements for each model, which limits the number of models and the parameter space that can be tested. To reduce active user computation time, we have developed a method in which the Coulomb GUI is bypassed. The general modeling procedure remains unchanged, and a set of input files is generated before modeling with ranges of pre-defined parameter values. The displacement calculations are divided into two suites.
For Suite 1, a total of 3770 input files were generated in which the fault displacement (D), dip angle (δ), depth to upper fault tip (t), and depth to lower fault tip (B) were varied. A second set of input files was created (Suite 2) after the best-fit model from Suite 1 was determined, in which fault parameters were varied with a smaller range and incremental changes, resulting in a total of 28,080 input files. RMS values were calculated for each Coulomb model. RMS values for Suite 1 models were calculated over the entire profile and for a restricted x range; the latter shows a reduced RMS misfit by 1.2 m. The minimum RMS value for Suite 2 models decreases again by 0.2 m, resulting in an overall reduction of the RMS value of ~1.4 m (18%). Models with different fault lengths (15, 30, and 60 km) are visually indistinguishable. Values for δ, t, B, and RMS misfit are either the same or very similar for each best fit model. These results indicate that the subsurface structure can be reliably determined from forward mechanical modeling even with uncertainty in fault length. Future work will test this method with the more realistic WR fault geometry. References: [1] Banerdt et al. (2013), 44th LPSC, #1915. [2] Cohen (1999), Adv. Geophys., 41, 133-231. [3] Schultz and Lin (2001), JGR, 106, 16549-16566. [4] Lin and Stein (2004), JGR, 109, B02303, doi:10.1029/2003JB002607. [5] Toda et al. (2005), JGR, 103, 24543-24565. [6] Okubo and Schultz (2004), GSAB, 116, 597-605. [7] Watters (2004), Icarus, 171, 284-294. [8] Schultz (2000), JGR, 105, 12035-12052.
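The batch strategy above can be sketched as a plain grid search: enumerate parameter combinations, run a forward model for each, and keep the lowest RMS misfit. The forward model below is a toy stand-in for Coulomb's elastic dislocation solution, and the synthetic profile plays the role of the observed topography.

```python
import numpy as np
from itertools import product

# Grid-search sketch in the spirit of the automated Coulomb workflow above.
# forward() is a made-up displacement profile, NOT the elastic solution.
def forward(x, displacement, dip_deg):
    dip = np.radians(dip_deg)
    return displacement * np.sin(dip) * np.exp(-(x / 10.0) ** 2)

x = np.linspace(-30.0, 30.0, 121)                 # distance across the ridge (km)
observed = forward(x, 120.0, 30.0)                # pretend this is the measured ridge

best = None
for d, dip in product([80.0, 100.0, 120.0, 140.0], [20.0, 25.0, 30.0, 35.0]):
    rms = np.sqrt(np.mean((forward(x, d, dip) - observed) ** 2))
    if best is None or rms < best[0]:
        best = (rms, d, dip)                      # keep the lowest-misfit model
```

Refining the grid around `best` and re-running corresponds to the Suite 1 / Suite 2 two-pass refinement described in the abstract.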
Whitbeck, David E.
2006-01-01
The Lamoreux Potential Evapotranspiration (LXPET) Program computes potential evapotranspiration (PET) using four different meteorological inputs: temperature, dewpoint, wind speed, and solar radiation. PET and the same four meteorological inputs are used with precipitation data in the Hydrological Simulation Program-Fortran (HSPF) to simulate streamflow in the Salt Creek watershed, DuPage County, Illinois. Streamflows from HSPF are routed with the Full Equations (FEQ) model to determine water-surface elevations. Consequently, variations in meteorological inputs have the potential to propagate through many calculations. Sensitivity of PET to variation was simulated by increasing the meteorological input values by 20, 40, and 60 percent and evaluating the change in the calculated PET. Increases in temperature produced the greatest percent changes, followed by increases in solar radiation, dewpoint, and then wind speed. Additional sensitivity of PET was considered for shifts in input temperatures and dewpoints by absolute differences of ±10, ±20, and ±30 degrees Fahrenheit (°F). Again, changes in input temperatures produced the greatest differences in PET. Sensitivity of streamflow simulated by HSPF was evaluated for 20-percent increases in meteorological inputs. These simulations showed that increases in temperature produced the greatest change in flow. Finally, peak water-surface elevations for nine storm events were compared among unmodified meteorological inputs and inputs with values predicted 6, 24, and 48 hours preceding the simulated peak. Results of this study can be applied to determine how errors specific to a hydrologic system will affect computations of system streamflow and water-surface elevations.
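The perturbation experiment can be sketched with a simple temperature-driven surrogate for PET (saturation vapour pressure via the Tetens formula) in place of the LXPET algorithm. It reproduces the qualitative result above: because saturation vapour pressure grows roughly exponentially with temperature, a 20 percent temperature increase produces more than a 20 percent PET response.

```python
import math

# Perturbation sketch using a temperature-only PET surrogate.
# The surrogate (Tetens saturation vapour pressure) stands in for LXPET.
def svp_kpa(temp_c):
    # Tetens formula for saturation vapour pressure over water (kPa)
    return 0.6108 * math.exp(17.27 * temp_c / (temp_c + 237.3))

base_t = 20.0                      # baseline air temperature (deg C), illustrative
pet_base = svp_kpa(base_t)         # stand-in PET, proportional to e_s(T)
changes = {}
for pct in (20, 40, 60):
    pet = svp_kpa(base_t * (1 + pct / 100.0))
    changes[pct] = 100.0 * (pet - pet_base) / pet_base   # percent change in surrogate PET
```

The same loop applied to wind speed or radiation in a fuller PET formula (e.g. Penman-type) would produce the ranking experiment the study performed across all four inputs.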
NASA Astrophysics Data System (ADS)
Savage, James; Pianosi, Francesca; Bates, Paul; Freer, Jim; Wagener, Thorsten
2015-04-01
Predicting flood inundation extents using hydraulic models is subject to a number of critical uncertainties. For a specific event, these uncertainties are known to have a large influence on model outputs and any subsequent analyses made by risk managers. Hydraulic modellers often approach such problems by applying uncertainty analysis techniques such as the Generalised Likelihood Uncertainty Estimation (GLUE) methodology. However, these methods do not allow one to attribute which source of uncertainty has the most influence on the various model outputs that inform flood risk decision making. Another issue facing modellers is the amount of computational resource available to spend on modelling flood inundations that are 'fit for purpose' to the modelling objectives. A balance therefore needs to be struck between computation time, realism and spatial resolution, and effectively characterising the uncertainty spread of predictions (for example from boundary conditions and model parameterisations). However, it is not fully understood how much of an impact each factor has on model performance, for example how much influence changing the spatial resolution of a model has on inundation predictions in comparison to other uncertainties inherent in the modelling process. Furthermore, when resampling fine-scale topographic data in the form of a Digital Elevation Model (DEM) to coarser resolutions, there are a number of possible coarser DEMs that can be produced. Deciding which DEM is chosen to represent the surface elevations in the model could also influence model performance. In this study we model a flood event using the hydraulic model LISFLOOD-FP and apply Sobol' sensitivity analysis to estimate which input factors (uncertainty in model boundary conditions, uncertain model parameters, the spatial resolution of the DEM, and the choice of resampled DEM) have the most influence on a range of model outputs.
These outputs include whole domain maximum inundation indicators and flood wave travel time in addition to temporally and spatially variable indicators. This enables us to assess whether the sensitivity of the model to various input factors is stationary in both time and space. Furthermore, competing models are assessed against observations of water depths from a historical flood event. Consequently we are able to determine which of the input factors has the most influence on model performance. Initial findings suggest the sensitivity of the model to different input factors varies depending on the type of model output assessed and at what stage during the flood hydrograph the model output is assessed. We have also found that initial decisions regarding the characterisation of the input factors, for example defining the upper and lower bounds of the parameter sample space, can be significant in influencing the implied sensitivities.
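First-order Sobol' indices of the kind used above can be estimated by conditioning the output on bins of one input at a time. The sketch below applies this to a toy additive model standing in for LISFLOOD-FP, where the exact indices are known analytically (16/17 and 1/17).

```python
import numpy as np

# First-order Sobol index estimate by conditional-expectation binning:
# S_i = Var(E[Y | X_i]) / Var(Y). The "model" is a toy additive function.
rng = np.random.default_rng(7)
N = 200000
x1 = rng.uniform(0.0, 1.0, N)      # e.g. boundary inflow scaling (illustrative)
x2 = rng.uniform(0.0, 1.0, N)      # e.g. roughness scaling (illustrative)
y = 4.0 * x1 + 1.0 * x2            # toy model output

def first_order(x, y, bins=50):
    edges = np.linspace(0.0, 1.0, bins + 1)
    idx = np.digitize(x, edges[1:-1])                      # bin index 0..bins-1
    cond_means = np.array([y[idx == b].mean() for b in range(bins)])
    return cond_means.var() / y.var()                      # Var(E[Y|X]) / Var(Y)

s1 = first_order(x1, y)            # analytic value 16/17 ~= 0.94
s2 = first_order(x2, y)            # analytic value 1/17 ~= 0.059
```

Production studies typically use dedicated sampling designs (e.g. Saltelli's scheme) rather than binning, but the conditional-variance definition being estimated is the same.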
NASA Astrophysics Data System (ADS)
Fitton, N.; Datta, A.; Hastings, A.; Kuhnert, M.; Topp, C. F. E.; Cloy, J. M.; Rees, R. M.; Cardenas, L. M.; Williams, J. R.; Smith, K.; Chadwick, D.; Smith, P.
2014-09-01
The United Kingdom currently reports nitrous oxide emissions from agriculture using the IPCC default Tier 1 methodology. However, Tier 1 estimates have a large degree of uncertainty, as they do not account for spatial variations in emissions. Therefore biogeochemical models such as DailyDayCent (DDC) are increasingly being used to provide a spatially disaggregated assessment of annual emissions. Prior to use, an assessment of the ability of the model to predict annual emissions should be undertaken, coupled with an analysis of how model inputs influence model outputs, and whether the modelled estimates are more robust than those derived from the Tier 1 methodology. The aims of the study were (a) to evaluate whether the DailyDayCent model can accurately estimate annual N2O emissions across nine different experimental sites, (b) to examine its sensitivity to different soil and climate inputs across a number of experimental sites and (c) to examine the influence of uncertainty in the measured inputs on modelled N2O emissions. DailyDayCent performed well across the range of cropland and grassland sites, particularly for fertilized fields, indicating that it is robust for UK conditions. The sensitivity of the model varied across the sites and also between fertilizer/manure treatments. Overall, our results showed that modelled N2O emissions were more sensitive to changes in soil pH and clay content than to the remaining input parameters used in this study. The lower the initial site values for soil pH and clay content, the more sensitive DDC was to changes from their initial value. When we compared modelled estimates with Tier 1 estimates for each site, we found that DailyDayCent provided a more accurate representation of the rate of annual emissions.
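For reference, the Tier 1 baseline mentioned above is essentially a one-line calculation: the IPCC (2006) default direct emission factor EF1 = 0.01 kg N2O-N per kg N applied, followed by a 44/28 mass conversion from N2O-N to N2O. The application rate below is illustrative.

```python
# IPCC Tier 1 direct N2O emission estimate for fertiliser N input.
# EF1 = 0.01 is the IPCC (2006) default; the N rate is illustrative.
n_applied_kg = 120.0                   # fertiliser N applied per hectare (kg N/ha)
ef1 = 0.01                             # kg N2O-N emitted per kg N applied
n2o_n = n_applied_kg * ef1             # direct emission as N2O-N (kg/ha)
n2o = n2o_n * 44.0 / 28.0              # convert N2O-N to N2O mass (kg/ha)
```

The contrast with DailyDayCent is clear from the formula: Tier 1 responds only to the N rate, with no sensitivity to soil pH, clay content, or climate, which is exactly the spatial variation the process model adds.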
NASA Technical Reports Server (NTRS)
Moore, D. G. (Principal Investigator); Heilman, J.; Tunheim, J. A.; Baumberger, V.
1978-01-01
The author has identified the following significant results. To investigate the general relationship between surface temperature and soil moisture profiles, a series of model calculations were carried out. Soil temperature profiles were calculated during a complete diurnal cycle for a variety of moisture profiles. Preliminary results indicate the surface temperature difference between two sites measured at about 1400 hours is related to the difference in soil moisture within the diurnal damping depth (about 50 cm). The model shows this temperature difference to vary considerably throughout the diurnal cycle.
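The diurnal behaviour described above follows the classic damped heat-conduction solution, in which a periodic surface temperature wave decays with depth as exp(-z/d), with damping depth d = sqrt(2*kappa/omega). The thermal diffusivity below is an assumed value, used to check that diurnal influence is largely confined to the upper half metre.

```python
import math

# Analytic damping-depth check for a diurnal surface temperature wave:
# amplitude at depth z is the surface amplitude times exp(-z/d),
# d = sqrt(2*kappa/omega). kappa is an assumed moist-soil value.
kappa = 0.5e-6                          # soil thermal diffusivity (m2/s), illustrative
omega = 2.0 * math.pi / 86400.0         # diurnal angular frequency (1/s)
d = math.sqrt(2.0 * kappa / omega)      # e-folding damping depth (m)
amp_ratio_50cm = math.exp(-0.50 / d)    # fraction of surface amplitude left at 50 cm
```

With these values the e-folding depth is roughly 10-15 cm, so by 50 cm only a percent or so of the surface wave survives, consistent with about 50 cm being the practical limit of diurnal influence quoted above.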
Functional data analysis for dynamical system identification of behavioral processes.
Trail, Jessica B; Collins, Linda M; Rivera, Daniel E; Li, Runze; Piper, Megan E; Baker, Timothy B
2014-06-01
Efficient new technology has made it straightforward for behavioral scientists to collect anywhere from several dozen to several thousand dense, repeated measurements on one or more time-varying variables. These intensive longitudinal data (ILD) are ideal for examining complex change over time but present new challenges that illustrate the need for more advanced analytic methods. For example, in ILD the temporal spacing of observations may be irregular, and individuals may be sampled at different times. Also, it is important to assess both how the outcome changes over time and the variation between participants' time-varying processes to make inferences about a particular intervention's effectiveness within the population of interest. The methods presented in this article integrate 2 innovative ILD analytic techniques: functional data analysis and dynamical systems modeling. An empirical application is presented using data from a smoking cessation clinical trial. Study participants provided 42 daily assessments of pre-quit and post-quit withdrawal symptoms. Regression splines were used to approximate smooth functions of craving and negative affect and to estimate the variables' derivatives for each participant. We then modeled the dynamics of nicotine craving using standard input-output dynamical systems models. These models provide a more detailed characterization of the post-quit craving process than do traditional longitudinal models, including information regarding the type, magnitude, and speed of the response to an input. The results, in conjunction with standard engineering control theory techniques, could potentially be used by tobacco researchers to develop a more effective smoking intervention.
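The two-stage idea, smoothing each series and then differentiating the fit, can be sketched with SciPy's smoothing spline on a synthetic 42-day series; the decay curve, noise level, and smoothing factor are all illustrative stand-ins for the trial's craving data.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

# Sketch of the functional-data step described above: smooth a noisy daily
# series with a spline, then take the fitted function's derivative for use
# in an input-output dynamical systems model. Data are synthetic.
days = np.arange(42.0)
true = 5.0 * np.exp(-days / 10.0) + 1.0            # toy post-quit craving decay
rng = np.random.default_rng(3)
obs = true + rng.normal(0.0, 0.2, days.size)       # noisy daily assessments

spline = UnivariateSpline(days, obs, k=3, s=days.size * 0.04)  # smoothing matched to noise
fit = spline(days)                                  # smooth craving function
slope = spline.derivative()(days)                   # estimated rate of change
```

The fitted values and derivatives per participant are exactly the quantities that feed a dynamical model such as `dy/dt = a*y + b*u(t)`, where `u(t)` would be the intervention input.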
NASA Astrophysics Data System (ADS)
Staudt, K.; Leifeld, J.; Bretscher, D.; Fuhrer, J.
2012-04-01
The Swiss inventory submission under the United Nations Framework Convention on Climate Change (UNFCCC) reports changes in soil organic carbon stocks under different land uses and land-use changes. The approach currently employed for cropland and grassland soils combines Tier 1 and Tier 2 methods and is considered overly simplistic. As the UNFCCC encourages countries to develop Tier 3 methods for national greenhouse gas reporting, we aim to build a model-based inventory of soil organic carbon in agricultural soils in Switzerland. We conducted a literature review of currently employed higher-tier methods using process-based models in four countries: Denmark, Sweden, Finland, and the USA. The applied models stem from two major groups differing in complexity: general ecosystem models that include a plant-growth submodel, e.g. Century, and models that simulate soil organic matter turnover but not plant growth, e.g. ICBM. For the latter group, carbon inputs to the soil from plant residues and roots have to be determined separately. We will present some aspects of the development of a model-based inventory of soil organic carbon in agricultural soils in Switzerland. Criteria for model evaluation are, among others, the modeled land-use classes and land-use changes, spatial and temporal resolution, and coverage of relevant processes. For model parameterization and model evaluation at the field scale, data from several long-term agricultural experiments and monitoring sites in Switzerland are available. A subsequent regional application of a model requires the preparation of regional input data for the whole country, among others spatio-temporal meteorological data and agricultural and soil data. Following the evaluation of possible models and of available data, preference for application in the Swiss inventory will be given to simpler model structures, i.e. models without a plant-growth module.
Thus, we compared different allometric relations for the estimation of plant carbon inputs to the soil from yield data that are usually provided with the models. Calculated above- and below-ground carbon inputs vary substantially between methods and exhibit different sensitivities to yield data. As a benchmark, inputs to the soil from above- and below-ground crop residues are calculated with the IPCC default method. Furthermore, the suitability of these estimation methods for Swiss conditions is tested.
Norman, Laura
2004-01-01
We have prepared a digital map of soil parameters for the international Ambos Nogales watershed to use as input for selected soils-erosion models. The Ambos Nogales watershed in southern Arizona and northern Sonora, Mexico, contains the Nogales wash, a tributary of the Upper Santa Cruz River. The watershed covers an area of 235 km2, just under half of which is in Mexico. Preliminary investigations of potential erosion revealed a discrepancy in soils data and mapping across the United States-Mexican border due to issues including different mapping resolutions, incompatible formatting, and varying nomenclature and classification systems. To prepare a digital soils map appropriate for input to a soils-erosion model, the historical analog soils maps for Nogales, Ariz., were scanned and merged with the larger-scale digital soils data available for Nogales, Sonora, Mexico, using a geographic information system.
NASA Astrophysics Data System (ADS)
Jia, Xin-Hong
2006-12-01
A theoretical model of gain-clamped semiconductor optical amplifiers (GC-SOAs) based on compensating light has been constructed. Using this model, the effects of the insertion position and peak reflectivity of the fiber Bragg grating (FBG) on the gain-clamping and noise figure (NF) characteristics of the GC-SOA are analyzed. The results show that the FBG insertion position has only a slight effect on gain clamping, but a lower NF can be obtained for the input-FBG-type GC-SOA; when the FBG peak wavelength is designed to be close to the signal wavelength, better gain-clamping and NF characteristics can be achieved. Further study shows that, as the peak reflectivity of the FBG increases, the critical input power range broadens and the gain varies more slowly; a larger bias current helps to raise the gain and decrease the noise figure but degrades gain flatness.
El-Houjeiri, Hassan M; Brandt, Adam R; Duffy, James E
2013-06-04
Existing transportation fuel cycle emissions models are either general and calculate nonspecific values of greenhouse gas (GHG) emissions from crude oil production, or are not available for public review and auditing. We have developed the Oil Production Greenhouse Gas Emissions Estimator (OPGEE) to provide open-source, transparent, rigorous GHG assessments for use in scientific assessment, regulatory processes, and analysis of GHG mitigation options by producers. OPGEE uses petroleum engineering fundamentals to model emissions from oil and gas production operations. We introduce OPGEE and explain the methods and assumptions used in its construction. We run OPGEE on a small set of fictional oil fields and explore model sensitivity to selected input parameters. Results show that upstream emissions from petroleum production operations can vary from 3 gCO2/MJ to over 30 gCO2/MJ using realistic ranges of input parameters. Significant drivers of emissions variation are steam injection rates, water handling requirements, and rates of flaring of associated gas.
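The kind of input-sensitivity exploration described above can be sketched with a simple parameter sweep. The emissions function here is a made-up linear surrogate standing in for the full OPGEE calculation, and the parameter ranges are illustrative, not the paper's; only the overall pattern (emissions spread driven by steam injection, water handling, and flaring) mirrors the abstract.

```python
import random

def upstream_intensity(steam_oil_ratio, water_oil_ratio, flaring_scf_bbl):
    # illustrative linear surrogate for upstream emissions, gCO2 per MJ of crude
    return 3.0 + 4.0 * steam_oil_ratio + 0.8 * water_oil_ratio + 0.002 * flaring_scf_bbl

rng = random.Random(42)
samples = [
    upstream_intensity(rng.uniform(0.0, 5.0),    # steam-to-oil ratio
                       rng.uniform(0.0, 10.0),   # water-to-oil ratio
                       rng.uniform(0.0, 500.0))  # flaring, scf per bbl
    for _ in range(10_000)
]
lo, hi = min(samples), max(samples)
print(f"range: {lo:.1f} to {hi:.1f} gCO2/MJ")
```

Even this toy surrogate shows how realistic input ranges produce an order-of-magnitude spread in emissions intensity.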
NASA Technical Reports Server (NTRS)
Martino, J. P.; Lenz, R. C., Jr.; Chen, K. L.; Kahut, P.; Sekely, R.; Weiler, J.
1979-01-01
A cross impact model of the U.S. telecommunications system was developed. It was necessary to prepare forecasts of the major segments of the telecommunications system, such as satellites, telephone, TV, CATV, radio broadcasting, etc. In addition, forecasts were prepared of the traffic generated by a variety of new or expanded services, such as electronic check clearing and point of sale electronic funds transfer. Finally, the interactions among the forecasts were estimated (the cross impact). Both the forecasts and the cross impacts were used as inputs to the cross impact model, which could then be used to simulate the future growth of the entire U.S. telecommunications system. By varying the inputs, technology changes or policy decisions with regard to any segment of the system could be evaluated in the context of the remainder of the system. To illustrate the operation of the model, a specific study was made of the deployment of fiber optics throughout the telecommunications system.
RTE: A computer code for Rocket Thermal Evaluation
NASA Technical Reports Server (NTRS)
Naraghi, Mohammad H. N.
1995-01-01
The numerical model for a rocket thermal analysis code (RTE) is discussed. RTE is a comprehensive code for thermal analysis of regeneratively cooled rocket engines. The input to the code consists of the composition of the fuel/oxidant mixture and flow rates, chamber pressure, coolant temperature and pressure, dimensions of the engine, materials, and the number of nodes in different parts of the engine. The code allows for temperature variation in the axial, radial, and circumferential directions. By implementing an iterative scheme, it provides nodal temperature distribution, rates of heat transfer, and hot gas and coolant thermal and transport properties. The fuel/oxidant mixture ratio can be varied along the thrust chamber. This feature allows the user to incorporate a non-equilibrium model or an energy release model for the hot-gas-side. The user has the option of bypassing the hot-gas-side calculations and directly inputting the gas-side fluxes. This feature is used to link RTE to a boundary layer module for the hot-gas-side heat flux calculations.
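The iterative nodal scheme mentioned above can be sketched in miniature: a Gauss-Seidel relaxation solving for steady wall temperatures between hot gas and coolant. The geometry, heat-transfer coefficients, and node count below are illustrative stand-ins, not RTE's actual model.

```python
n = 5                            # wall nodes through the chamber liner
t_gas, t_cool = 3200.0, 120.0    # K, hot-gas and coolant temperatures
h_gas, h_cool = 2.0e4, 5.0e4     # W/m^2-K, convective coefficients
k_over_dx = 1.0e5                # W/m^2-K, conductance between adjacent nodes

temps = [t_cool] * n
for _ in range(2000):            # sweep until nodal temperatures stop changing
    # hot-gas-side node: convection from gas balances conduction inward
    temps[0] = (h_gas * t_gas + k_over_dx * temps[1]) / (h_gas + k_over_dx)
    for i in range(1, n - 1):    # interior nodes: pure conduction balance
        temps[i] = 0.5 * (temps[i - 1] + temps[i + 1])
    # coolant-side node: conduction balances convection to coolant
    temps[-1] = (h_cool * t_cool + k_over_dx * temps[-2]) / (h_cool + k_over_dx)

q = h_gas * (t_gas - temps[0])   # hot-gas-side heat flux, W/m^2
```

At convergence the flux entering from the gas equals the flux leaving into the coolant, which is the self-consistency RTE's iteration enforces across the whole engine.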
NASA Technical Reports Server (NTRS)
Schaefer, Jacob; Hanson, Curt; Johnson, Marcus A.; Nguyen, Nhan
2011-01-01
Three model reference adaptive controllers (MRAC) with varying levels of complexity were evaluated on a high performance jet aircraft and compared along with a baseline nonlinear dynamic inversion controller. The handling qualities and performance of the controllers were examined during failure conditions that induce coupling between the pitch and roll axes. Results from flight tests showed that, with a roll-to-pitch input coupling failure, the handling qualities went from Level 2 with the baseline controller to Level 1 with the most complex MRAC tested. A failure scenario with the left stabilator frozen also showed improvement with the MRAC. Improvement in performance and handling qualities was generally seen as complexity was incrementally added; however, added complexity usually corresponds to increased verification and validation effort required for certification. The tradeoff between complexity and performance is thus important to a controls system designer when implementing an adaptive controller on an aircraft. This paper investigates this relation through flight testing of several controllers of varying complexity.
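The core idea of model reference adaptive control can be shown on a scalar plant: adapt feedback and feedforward gains so the plant tracks a reference model despite unknown plant parameters. This is a textbook Lyapunov-based MRAC sketch, far simpler than the flight controllers above; all plant numbers and gains are invented for illustration.

```python
dt = 0.001
a_plant, b_plant = -1.0, 2.0   # plant x' = a x + b u (unknown to the controller)
a_ref, b_ref = -4.0, 4.0       # reference model x_m' = a_ref x_m + b_ref r
gamma = 2.0                    # adaptation gain

x = x_m = 0.0
kx = kr = 0.0                  # adaptive feedback and feedforward gains
for _ in range(20_000):        # 20 s of simulated flight time
    r = 1.0                    # step command
    u = kx * x + kr * r
    e = x - x_m                # tracking error w.r.t. the reference model
    # Lyapunov-based gradient adaptation (assumes sign of b is known positive);
    # certified MRACs add robustness modifications on top of this
    kx -= gamma * e * x * dt
    kr -= gamma * e * r * dt
    x += (a_plant * x + b_plant * u) * dt
    x_m += (a_ref * x_m + b_ref * r) * dt

print(abs(x - x_m))            # tracking error shrinks as the gains adapt
```

Each added layer of complexity in a flight MRAC (robustness terms, structured uncertainty models) builds on this same error-driven gain update, which is why complexity trades against verification effort.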
Control of Turing patterns and their usage as sensors, memory arrays, and logic gates
NASA Astrophysics Data System (ADS)
Muzika, František; Schreiber, Igor
2013-10-01
We study a model system of three diffusively coupled reaction cells arranged in a linear array that display Turing patterns with special focus on the case of equal coupling strength for all components. As a suitable model reaction we consider a two-variable core model of glycolysis. Using numerical continuation and bifurcation techniques we analyze the dependence of the system's steady states on varying rate coefficient of the recycling step while the coupling coefficients of the inhibitor and activator are fixed and set at the ratios 100:1, 1:1, and 4:5. We show that stable Turing patterns occur at all three ratios but, as expected, spontaneous transition from the spatially uniform steady state to the spatially nonuniform Turing patterns occurs only in the first case. The other two cases possess multiple Turing patterns, which are stabilized by secondary bifurcations and coexist with stable uniform periodic oscillations. For the 1:1 ratio we examine modular spatiotemporal perturbations, which allow for controllable switching between the uniform oscillations and various Turing patterns. Such modular perturbations are then used to construct chemical computing devices utilizing the multiple Turing patterns. By classifying various responses we propose: (a) a single-input resettable sensor capable of reading certain value of concentration, (b) two-input and three-input memory arrays capable of storing logic information, (c) three-input, three-output logic gates performing combinations of logical functions OR, XOR, AND, and NAND.
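The central claim above, that a spontaneous Turing instability needs unequal coupling of inhibitor and activator, can be checked by linear stability analysis of three coupled cells. The two-variable Selkov glycolysis model below is a stand-in for the paper's core model, and all parameters and the coupling strength are chosen for illustration, not taken from the paper.

```python
import numpy as np

# Selkov-type glycolysis model: x' = -x + a*y + x^2*y, y' = b - a*y - x^2*y
a, b = 0.2, 0.6
x0, y0 = b, b / (a + b**2)                     # uniform steady state
J = np.array([[-1 + 2 * x0 * y0, a + x0**2],   # single-cell Jacobian at (x0, y0)
              [-2 * x0 * y0, -(a + x0**2)]])

# graph Laplacian of a 3-cell linear chain (no-flux ends)
L = np.array([[1, -1, 0], [-1, 2, -1], [0, -1, 1]])

def growth_rates(d_activator, d_inhibitor, c=5.0):
    """Real parts of the eigenvalues of the full 6x6 linearization."""
    D = np.diag([d_activator, d_inhibitor])
    A = np.kron(np.eye(3), J) - c * np.kron(L, D)
    return np.linalg.eigvals(A).real

uneven = growth_rates(0.01, 1.0)   # inhibitor coupled 100x faster than activator
even = growth_rates(1.0, 1.0)      # equal coupling strength for both species

print(uneven.max(), even.max())
```

A positive growth rate in the uneven case means the uniform state destabilizes spontaneously into a Turing pattern, while equal coupling leaves it linearly stable, matching the abstract's distinction between the 100:1 and 1:1 ratios.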
Methods and systems for determining angular orientation of a drill string
Cobern, Martin E.
2010-03-23
Preferred methods and systems generate a control input based on a periodically-varying characteristic associated with the rotation of a drill string. The periodically varying characteristic can be correlated with the magnetic tool face and gravity tool face of a rotating component of the drill string, so that the control input can be used to initiate a response in the rotating component as a function of gravity tool face.
Pesavento, Michael J; Pinto, David J
2012-11-01
Rapidly changing environments require rapid processing from sensory inputs. Varying deflection velocities of a rodent's primary facial vibrissa cause varying temporal neuronal activity profiles within the ventral posteromedial thalamic nucleus. Local neuron populations in a single somatosensory layer 4 barrel transform sparsely coded input into a spike count based on the input's temporal profile. We investigate this transformation by creating a barrel-like hybrid network with whole cell recordings of in vitro neurons from a cortical slice preparation, embedding the biological neuron in the simulated network by presenting virtual synaptic conductances via a conductance clamp. Utilizing the hybrid network, we examine the reciprocal network properties (local excitatory and inhibitory synaptic convergence) and neuronal membrane properties (input resistance) by altering the barrel population response to diverse thalamic input. In the presence of local network input, neurons are more selective to thalamic input timing; this arises from strong feedforward inhibition. Strongly inhibitory (damping) network regimes are more selective to timing and less selective to the magnitude of input but require stronger initial input. Input selectivity relies heavily on the different membrane properties of excitatory and inhibitory neurons. When inhibitory and excitatory neurons had identical membrane properties, the sensitivity of in vitro neurons to temporal vs. magnitude features of input was substantially reduced. Increasing the mean leak conductance of the inhibitory cells decreased the network's temporal sensitivity, whereas increasing excitatory leak conductance enhanced magnitude sensitivity. Local network synapses are essential in shaping thalamic input, and differing membrane properties of functional classes reciprocally modulate this effect.
Lu, Ting; Wade, Kirstie; Sanchez, Jason Tait
2017-01-01
We have previously shown that late-developing avian nucleus magnocellularis (NM) neurons (embryonic [E] days 19–21) fire action potentials (APs) in a manner that resembles a band-pass filter in response to sinusoidal current injections of varying frequencies. NM neurons located in the mid- to high-frequency regions of the nucleus fire preferentially at 75 Hz, but only fire a single onset AP to frequency inputs greater than 200 Hz. Surprisingly, NM neurons do not fire APs to sinusoidal inputs less than 20 Hz regardless of the strength of the current injection. In the present study we evaluated intrinsic mechanisms that prevent AP generation to low frequency inputs. We constructed a computational model to simulate the frequency-firing patterns of NM neurons based on experimental data at both room and near physiologic temperatures. The results from our model confirm that the interaction among low- and high-voltage activated potassium channels (KLVA and KHVA, respectively) and voltage dependent sodium channels (NaV) gives rise to the frequency-firing patterns observed in vitro. In particular, we evaluated the regulatory role of KLVA during low frequency sinusoidal stimulation. The model shows that, in response to low frequency stimuli, activation of a large KLVA current counterbalances the slow-depolarizing current injection, likely permitting NaV closed-state inactivation and preventing the generation of APs. When the KLVA current density was reduced, the model neuron fired multiple APs per sinusoidal cycle, indicating that KLVA channels regulate low frequency AP firing of NM neurons. This intrinsic property of NM neurons may assist in optimizing the response to different rates of synaptic inputs.
Abdelgaied, Abdellatif; Brockett, Claire L; Liu, Feng; Jennings, Louise M; Fisher, John; Jin, Zhongmin
2013-01-01
Polyethylene wear is a great concern in total joint replacement. It is now considered a major limiting factor to the long life of such prostheses. Cross-linking has been introduced to reduce the wear of ultra-high-molecular-weight polyethylene (UHMWPE). Computational models have been used extensively for wear prediction and optimization of artificial knee designs. However, in order to be independent and have general applicability and predictability, computational wear models should be based on inputs from independent, experimentally determined wear parameters (wear factors or wear coefficients). The objective of this study was to investigate moderately cross-linked UHMWPE, using a multidirectional pin-on-plate wear test machine, under a wide range of applied nominal contact pressure (from 1 to 11 MPa) and under five different kinematic inputs, varying from a purely linear track to a maximum rotation of +/- 55 degrees. A computational model, based on a direct simulation of the multidirectional pin-on-plate wear tester, was developed to quantify the degree of cross-shear (CS) of the polyethylene pins articulating against the metallic plates. The moderately cross-linked UHMWPE showed wear factors less than half of those reported in the literature for conventional UHMWPE under the same loading and kinematic inputs. In addition, under high applied nominal contact stress, the moderately cross-linked UHMWPE wear showed lower dependence on the degree of CS than under low applied nominal contact stress. The calculated wear coefficients were found to be independent of the applied nominal contact stress, in contrast to the wear factors, which were shown to be highly pressure dependent. This study provided independent wear data as inputs for computational models of moderately cross-linked polyethylene and supported the application of wear coefficient-based computational wear models.
NASA Astrophysics Data System (ADS)
Cardinael, Rémi; Guenet, Bertrand; Chevallier, Tiphaine; Dupraz, Christian; Cozzi, Thomas; Chenu, Claire
2018-01-01
Agroforestry is an increasingly popular farming system enabling agricultural diversification and providing several ecosystem services. In agroforestry systems, soil organic carbon (SOC) stocks are generally increased, but it is difficult to disentangle the different factors responsible for this storage. Organic carbon (OC) inputs to the soil may be larger, but SOC decomposition rates may be modified owing to microclimate, physical protection, or priming effect from roots, especially at depth. We used an 18-year-old silvoarable system associating hybrid walnut trees (Juglans regia × nigra) and durum wheat (Triticum turgidum L. subsp. durum) and an adjacent agricultural control plot to quantify all OC inputs to the soil - leaf litter, tree fine root senescence, crop residues, and tree row herbaceous vegetation - and measured SOC stocks down to 2 m of depth at varying distances from the trees. We then proposed a model that simulates SOC dynamics in agroforestry accounting for both the whole soil profile and the lateral spatial heterogeneity. The model was calibrated to the control plot only. Measured OC inputs to soil were increased by about 40 % (+ 1.11 t C ha-1 yr-1) down to 2 m of depth in the agroforestry plot compared to the control, resulting in an additional SOC stock of 6.3 t C ha-1 down to 1 m of depth. However, most of the SOC storage occurred in the first 30 cm of soil and in the tree rows. The model was successfully validated, properly describing the measured SOC stocks and distribution with depth in agroforestry tree rows and alleys. It showed that the increased inputs of fresh biomass to soil explained the observed additional SOC storage in the agroforestry plot. Moreover, only a priming effect variant of the model was able to capture the depth distribution of SOC stocks, suggesting the priming effect as a possible mechanism driving deep SOC dynamics. This result questions the potential of soils to store large amounts of carbon, especially at depth.
Deep-rooted trees modify OC inputs to soil, a process that deserves further study given its potential effects on SOC dynamics.
Line-of-sight pointing accuracy/stability analysis and computer simulation for small spacecraft
NASA Astrophysics Data System (ADS)
Algrain, Marcelo C.; Powers, Richard M.
1996-06-01
This paper presents a case study where a comprehensive computer simulation is developed to determine the driving factors contributing to spacecraft pointing accuracy and stability. The simulation is implemented using XMATH/SystemBuild software from Integrated Systems, Inc. The paper is written in a tutorial manner and models for major system components are described. Among them are the spacecraft bus, attitude controller, reaction wheel assembly, star-tracker unit, inertial reference unit, and gyro drift estimators (Kalman filter). The predicted spacecraft performance is analyzed for a variety of input commands and system disturbances. The primary deterministic inputs are desired attitude angles and rate setpoints. The stochastic inputs include random torque disturbances acting on the spacecraft, random gyro bias noise, gyro random walk, and star-tracker noise. These inputs are varied over a wide range to determine their effects on pointing accuracy and stability. The results are presented in the form of trade-off curves designed to facilitate the proper selection of subsystems so that overall spacecraft pointing accuracy and stability requirements are met.
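The gyro drift estimator mentioned above is typically a small Kalman filter that blends gyro propagation with star-tracker fixes. A one-axis sketch follows, with two states (attitude error and gyro bias); all noise intensities, rates, and initial covariances are illustrative, not from the paper's spacecraft.

```python
import numpy as np

rng = np.random.default_rng(1)
dt = 0.1                           # s, filter update interval
r_st = (1e-4) ** 2                 # rad^2, star-tracker measurement variance
true_bias = 5e-4                   # rad/s, constant gyro drift to be estimated
theta_true = 0.0                   # spacecraft held inertially fixed

x = np.zeros(2)                    # estimate of [attitude, gyro bias]
P = np.diag([1e-4, 1e-6])          # initial covariance
F = np.array([[1.0, -dt], [0.0, 1.0]])
Q = np.diag([1e-6 * dt, 1e-9 * dt])  # angle and rate random-walk noise
H = np.array([[1.0, 0.0]])         # star tracker observes attitude only

for _ in range(5000):
    omega_meas = true_bias + rng.normal(0.0, 1e-4)   # gyro reads drift + noise
    x = F @ x + np.array([omega_meas * dt, 0.0])     # propagate with gyro
    P = F @ P @ F.T + Q
    z = theta_true + rng.normal(0.0, np.sqrt(r_st))  # star-tracker fix
    s = (H @ P @ H.T).item() + r_st                  # innovation variance
    K = (P @ H.T / s).ravel()                        # Kalman gain
    x = x + K * (z - x[0])
    P = (np.eye(2) - np.outer(K, H)) @ P

print(x[1])   # converges toward the true 5e-4 rad/s bias
```

Varying the gyro and star-tracker noise intensities here is exactly the kind of stochastic-input sweep the paper's trade-off curves summarize.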
Variable input observer for state estimation of high-rate dynamics
NASA Astrophysics Data System (ADS)
Hong, Jonathan; Cao, Liang; Laflamme, Simon; Dodson, Jacob
2017-04-01
High-rate systems operating in the 10 μs to 10 ms timescale are likely to experience damaging effects due to rapid environmental changes (e.g., turbulence, ballistic impact). Some of these systems could benefit from real-time state estimation to enable their full potential. Examples of such systems include blast mitigation strategies, automotive airbag technologies, and hypersonic vehicles. Particular challenges in high-rate state estimation include: 1) complex time-varying nonlinearities of the system (e.g., noise, uncertainty, and disturbance); 2) rapid environmental changes; and 3) the requirement of a high convergence rate. Here, we propose using a Variable Input Observer (VIO) concept to vary the input space as the event unfolds. When systems experience high-rate dynamics, rapid changes in the system occur. To investigate the VIO's potential, a VIO-based neuro-observer is constructed and studied using experimental data collected from a laboratory impact test. Results demonstrate that the input space is unique to different impact conditions, and that adjusting the input space throughout the dynamic event produces better estimations than using a traditional fixed input space strategy.
NASA Astrophysics Data System (ADS)
Thomas, R. Q.; Zaehle, S.; Templer, P. H.; Goodale, C. L.
2011-12-01
Predictions of climate change depend on accurately modeling the feedbacks among the carbon cycle, nitrogen cycle, and climate system. Several global land surface models have shown that nitrogen limitation determines how land carbon fluxes respond to rising CO2, nitrogen deposition, and climate change, thereby influencing predictions of climate change. However, the magnitude of the carbon-nitrogen-climate feedbacks varies considerably by model, leading to critical and timely questions of why they differ and how they compare to field observations. To address these questions, we initiated a model inter-comparison of spatial patterns and drivers of nitrogen limitation. The experiment assessed the regional consequences of sustained nitrogen additions in a set of 25-year global nitrogen fertilization simulations. The model experiments were designed to cover effects from small changes in nitrogen inputs associated with plausible increases in nitrogen deposition to large changes associated with field-based nitrogen fertilization experiments. The analyses of model simulations included assessing the geographically varying degree of nitrogen limitation on plant and soil carbon cycling and the mechanisms underlying model differences. Here, we present results from two global land-surface models (CLM-CN and O-CN) with differing approaches to modeling carbon-nitrogen interactions. The predictions from each model were compared to a set of globally distributed observational data that includes nitrogen fertilization experiments, 15N tracer studies, small catchment nitrogen input-output studies, and syntheses across nitrogen deposition gradients. Together these datasets test many aspects of carbon-nitrogen coupling and are able to differentiate between the two models. Overall, this study is the first to explicitly benchmark carbon and nitrogen interactions in Earth System Models using a range of observations and is a foundation for future inter-comparisons.
NASA Astrophysics Data System (ADS)
Žabkar, Rahela; Koračin, Darko; Rakovec, Jože
2013-10-01
A high ozone (O3) concentration episode during a heat wave event in the Northeastern Mediterranean was investigated using the WRF/Chem model. To understand the major model uncertainties and errors as well as the impacts of model inputs on the model accuracy, an ensemble modelling experiment was conducted. The 51-member ensemble was designed by varying model physics parameterization options (PBL schemes with different surface layer and land-surface modules, and radiation schemes); chemical initial and boundary conditions; anthropogenic and biogenic emission inputs; and model domain setup and resolution. The main impacts of the geographical and emission characteristics of three distinct regions (suburban Mediterranean, continental urban, and continental rural) on the model accuracy and O3 predictions were investigated. In spite of the large ensemble set size, the model generally failed to simulate the extremes; however, as expected from probabilistic forecasting, the ensemble spread improved results with respect to extremes compared to the reference run. Noticeable model nighttime overestimations at the Mediterranean and some urban and rural sites can be explained by too strong simulated winds, which reduce the impact of dry deposition and O3 titration in the near surface layers during the nighttime. Another possible explanation could be inaccuracies in the chemical mechanisms, which are suggested also by model insensitivity to variations in the nitrogen oxides (NOx) and volatile organic compounds (VOC) emissions. Major impact factors for underestimations of the daytime O3 maxima at the Mediterranean and some rural sites include overestimation of the PBL depths, a lack of information on forest fires, too strong surface winds, and also possible inaccuracies in biogenic emissions. This numerical experiment with the ensemble runs also provided guidance on an optimum model setup and input data.
Contribution of Anal Sex to HIV Prevalence Among Heterosexuals: A Modeling Analysis.
O'Leary, Ann; DiNenno, Elizabeth; Honeycutt, Amanda; Allaire, Benjamin; Neuwahl, Simon; Hicks, Katherine; Sansom, Stephanie
2017-10-01
Anal intercourse is reported by many heterosexuals, and evidence suggests that its practice may be increasing. We estimated the proportion of the HIV burden attributable to anal sex in 2015 among heterosexual women and men in the United States. The HIV Optimization and Prevention Economics model was developed using parameter inputs from the literature for the sexually active U.S. population aged 13-64. The model uses differential equations to represent the progression of the population between compartments defined by HIV disease status and continuum-of-care stages from 2007 to 2015. For heterosexual women of all ages (who do not inject drugs), almost 28% of infections were associated with anal sex, whereas for women aged 18-34, nearly 40% of HIV infections were associated with anal sex. For heterosexual men, 20% of HIV infections were associated with insertive anal sex with women. Sensitivity analyses showed that varying any of 63 inputs by ±20% resulted in no more than a 13% change in the projected number of heterosexual infections in 2015, including those attributed to anal sex. Despite uncertainties in model inputs, a substantial portion of the HIV burden among heterosexuals appears to be attributable to anal sex. Providing information about the relative risk of anal sex compared with vaginal sex may help reduce HIV incidence in heterosexuals.
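A back-of-envelope version of the attribution question above: given per-act transmission probabilities and act counts, what share of a woman's acquisition risk comes from receptive anal versus vaginal sex? The per-act probabilities and act counts below are assumptions for illustration only, not the paper's model inputs, and this Bernoulli-process calculation is far simpler than the paper's compartmental model.

```python
def acquisition_risk(p_vaginal, n_vaginal, p_anal, n_anal):
    """1 - probability of escaping infection across all acts with an HIV+ partner."""
    return 1.0 - (1.0 - p_vaginal) ** n_vaginal * (1.0 - p_anal) ** n_anal

p_v, p_a = 0.0008, 0.0138      # assumed per-act probabilities, receptive vaginal/anal
n_v, n_a = 100, 10             # assumed acts per year

total = acquisition_risk(p_v, n_v, p_a, n_a)
anal_only = acquisition_risk(0.0, 0, p_a, n_a)
vaginal_only = acquisition_risk(p_v, n_v, 0.0, 0)
share = anal_only / (anal_only + vaginal_only)
print(f"~{share:.0%} of per-partner risk from anal sex")
```

Even with anal sex at a tenth of the act frequency, the much higher per-act probability makes it the dominant risk contribution, which is the intuition behind the paper's attribution result.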
Simulating the Gradually Deteriorating Performance of an RTG
NASA Technical Reports Server (NTRS)
Wood, Eric G.; Ewell, Richard C.; Patel, Jagdish; Hanks, David R.; Lozano, Juan A.; Snyder, G. Jeffrey; Noon, Larry
2008-01-01
Degra (now in version 3) is a computer program that simulates the performance of a radioisotope thermoelectric generator (RTG) over its lifetime. Degra is provided with a graphical user interface that is used to edit input parameters that describe the initial state of the RTG and the time-varying loads and environment to which it will be exposed. Performance is computed by modeling the flows of heat from the radioactive source and through the thermocouples, also allowing for losses, to determine the temperature drop across the thermocouples. This temperature drop is used to determine the open-circuit voltage, electrical resistance, and thermal conductance of the thermocouples. Output power can then be computed by relating the open-circuit voltage and the electrical resistance of the thermocouples to a specified time-varying load voltage. Degra accounts for the gradual deterioration of performance attributable primarily to decay of the radioactive source and secondarily to gradual deterioration of the thermoelectric material. To provide guidance to an RTG designer, given a minimum of input, Degra computes the dimensions, masses, and thermal conductances of important internal structures as well as the overall external dimensions and total mass.
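The performance chain Degra computes (source heat, temperature drop, open-circuit voltage, power into a load) can be sketched as follows. All device parameters are illustrative, and this sketch models only radioactive decay, not the thermoelectric-material degradation Degra also tracks.

```python
import math

S = 0.05        # V/K, effective Seebeck coefficient of the thermocouple string
R_int = 2.0     # ohm, thermocouple string electrical resistance
G_th = 8.0      # W/K, thermal conductance through the couples
half_life = 87.7 * 365.25 * 86400   # s, Pu-238 half-life

def rtg_power(q0_watts, t_seconds, v_load):
    q = q0_watts * math.exp(-math.log(2) * t_seconds / half_life)  # decayed heat
    dT = q / G_th                 # temperature drop across the couples
    v_oc = S * dT                 # open-circuit voltage
    i = (v_oc - v_load) / R_int   # current into the specified load voltage
    return v_load * i             # electrical output power, W

year = 365.25 * 86400
p_bol = rtg_power(2000.0, 0.0, 5.0)       # beginning of life
p_eol = rtg_power(2000.0, 14 * year, 5.0) # after 14 years of decay
print(p_bol, p_eol)
```

The load-voltage dependence is why Degra takes a time-varying load profile as input: the same decayed heat yields different output power at different operating points.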
Shock Initiation Behavior of PBXN-9 Determined by Gas Gun Experiments
NASA Astrophysics Data System (ADS)
Sanchez, N. J.; Gustavsen, R. L.; Hooks, D. E.
2009-12-01
The shock to detonation transition was evaluated in the HMX based explosive PBXN-9 by a series of light-gas gun experiments. PBXN-9 consists of 92 wt% HMX, 2 wt% Hycar 4054, and 6 wt% dioctyl adipate, with a density of 1.75 g/cm3 and 0.8% voids. The experiments were designed to understand the specifics of wave evolution and the run distance to detonation as a function of input shock pressure. These experiments were conducted on gas guns in order to vary the input shock pressure accurately. The primary diagnostics were embedded magnetic gauges, which are based on Faraday's law of induction, and Photon Doppler Velocimetry (PDV). The run distance to detonation vs. shock pressure, or "Pop plot," was redefined as log(X) = 2.14 - 1.82 log(P), which is substantially different from previous data. The Hugoniot was refined as Us = 2.32 + 2.211 Up. This data will be useful for the development of predictive models for the safety and performance of PBXN-9 along with providing increased understanding of HMX based explosives in varying formulations.
Shock initiation behavior of PBXN-9 determined by gas gun experiments
NASA Astrophysics Data System (ADS)
Sanchez, Nathaniel; Gustavsen, Richard; Hooks, Daniel
2009-06-01
The shock to detonation transition was evaluated in the HMX based explosive PBXN-9 by a series of light-gas gun experiments. PBXN-9 consists of 92 wt% HMX, 2 wt% Hycar 4054, and 6 wt% dioctyl adipate, with a density of 1.75 g/cm^3 and 0.8% voids. The experiments were designed to understand the specifics of wave evolution and the run distance to detonation as a function of input shock pressure. These experiments were conducted on gas guns in order to vary the input shock pressure accurately. The primary diagnostics are embedded magnetic gauges, which are based on Faraday's law of induction, along with photon Doppler velocimetry (PDV). The run distance to detonation vs. shock pressure, or "Pop plot," was redefined as log(X*) = 2.14 - 1.82 log(P), which is substantially different than previous data. The Hugoniot was refined as Us = 2.32 + 2.21 Up. This data will be useful for the development of predictive models for the safety and performance of PBXN-9 in addition to providing an increased understanding of HMX based explosives in varying formulations.
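The Pop-plot fit quoted in the two abstracts above, log(X) = 2.14 - 1.82 log(P), turns directly into a run-distance calculator. The abstracts do not state units; the sketch below assumes the usual convention of mm for run distance and GPa for shock pressure, which should be checked against the full papers before use.

```python
import math

def run_to_detonation_mm(p_gpa):
    """Pop-plot fit from the abstracts; mm and GPa units are assumed."""
    return 10 ** (2.14 - 1.82 * math.log10(p_gpa))

# higher input shock pressure -> shorter run distance to detonation
x_low = run_to_detonation_mm(3.0)
x_high = run_to_detonation_mm(6.0)
print(x_low, x_high)
```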
Three-dimensional hysteresis compensation enhances accuracy of robotic artificial muscles
NASA Astrophysics Data System (ADS)
Zhang, Jun; Simeonov, Anthony; Yip, Michael C.
2018-03-01
Robotic artificial muscles are compliant and can generate straight contractions. They are increasingly popular as driving mechanisms for robotic systems. However, their strain and tension force often vary simultaneously under varying loads and inputs, resulting in three-dimensional hysteretic relationships. The three-dimensional hysteresis in robotic artificial muscles poses difficulties in estimating how they work and how to make them perform designed motions. This study proposes an approach to driving robotic artificial muscles to generate designed motions and forces by modeling and compensating for their three-dimensional hysteresis. The proposed scheme captures the nonlinearity by embedding two hysteresis models. The effectiveness of the model is confirmed by testing three popular robotic artificial muscles. Inverting the proposed model allows us to compensate for the hysteresis among temperature surrogate, contraction length, and tension force of a shape memory alloy (SMA) actuator. Feedforward control of an SMA-actuated robotic bicep is demonstrated. This study can be generalized to other robotic artificial muscles, thus enabling muscle-powered machines to generate desired motions.
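The benefit of inverting a hysteresis model in feedforward, as proposed above, can be shown with a single play (backlash) operator. This one-element operator is a stand-in for the paper's richer three-dimensional hysteresis model; the half-width and command sequence are invented for illustration.

```python
def play(u, y_prev, r):
    """Play (backlash) operator with half-width r: output lags input by a deadband."""
    return min(max(y_prev, u - r), u + r)

def inverse_play(d, y_state, r):
    """Input that makes the play operator's next output exactly d."""
    if d > y_state:
        return d + r
    if d < y_state:
        return d - r
    return d

r = 0.2
desired = [0.0, 0.3, 0.6, 0.9, 0.6, 0.3, 0.0, -0.3]

y, naive = 0.0, []
for d in desired:                      # drive the hysteretic plant directly
    y = play(d, y, r)
    naive.append(y)                    # output lags the command

y, comp = 0.0, []
for d in desired:                      # pre-distort the command first
    y = play(inverse_play(d, y, r), y, r)
    comp.append(y)                     # output matches the command
```

The compensated output tracks the desired sequence exactly, while the naive drive lags by the deadband; the paper's scheme does the analogous inversion over the full strain-force-input surface.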
NASA Astrophysics Data System (ADS)
Uijlenhoet, R.; Brauer, C.; Overeem, A.; Sassi, M.; Rios Gaona, M. F.
2014-12-01
Several rainfall measurement techniques are available for hydrological applications, each with its own spatial and temporal resolution. We investigated the effect of these spatiotemporal resolutions on discharge simulations in lowland catchments by forcing a rainfall-runoff model with rainfall data from gauges, radars and microwave links. The hydrological model used for this analysis is the recently developed Wageningen Lowland Runoff Simulator (WALRUS), a rainfall-runoff model accounting for hydrological processes relevant to areas with shallow groundwater (e.g. groundwater-surface water feedback). Here, we used WALRUS for case studies in a freely draining lowland catchment and a polder with controlled water levels. We used rain gauge networks with automatic gauges (hourly resolution but low spatial density) and manual gauges (high spatial density but daily resolution). Operational (real-time) and climatological (gauge-adjusted) C-band radar products and country-wide rainfall maps derived from microwave link data from a cellular telecommunication network were also used. Discharges simulated with these different inputs were compared to observations. We also investigated the effect of spatiotemporal resolution with a high-resolution X-band radar data set for catchments of different sizes. Uncertainty in rainfall forcing is a major source of uncertainty in discharge predictions, both with lumped and with distributed models. For lumped rainfall-runoff models, the main source of input uncertainty is associated with the way in which (effective) catchment-average rainfall is estimated. When catchments are divided into sub-catchments, rainfall spatial variability can become more important, especially during convective rainfall events, leading to spatially varying catchment wetness and spatially varying contributions of quick flow routes.
Improving rainfall measurements and their spatiotemporal resolution can improve the performance of rainfall-runoff models, indicating their potential for reducing flood damage through real-time control.
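For a lumped model, catchment-average rainfall is typically an area-weighted mean of the gauge or pixel values. A minimal sketch, assuming Thiessen-polygon-style area weights (illustrative only, not the exact procedure used in the study):

```python
def catchment_average(rain_by_gauge, area_by_gauge):
    """Area-weighted catchment-average rainfall: each gauge's value is
    weighted by the catchment area it represents (e.g. its Thiessen polygon)."""
    total_area = sum(area_by_gauge)
    return sum(r * a for r, a in zip(rain_by_gauge, area_by_gauge)) / total_area
```

With two gauges reading 10 and 20 mm over areas of 1 and 3 km², the catchment average is 17.5 mm, closer to the gauge covering more area.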
Model parameter uncertainty analysis for an annual field-scale P loss model
NASA Astrophysics Data System (ADS)
Bolster, Carl H.; Vadas, Peter A.; Boykin, Debbie
2016-08-01
Phosphorus (P) fate and transport models are important tools for developing and evaluating conservation practices aimed at reducing P losses from agricultural fields. Because all models are simplifications of complex systems, an inherent amount of uncertainty is associated with their predictions. It is therefore important that efforts be directed at identifying, quantifying, and communicating the different sources of model uncertainty. In this study, we conducted an uncertainty analysis with the Annual P Loss Estimator (APLE) model. Our analysis included calculating parameter uncertainties and confidence and prediction intervals for five internal regression equations in APLE. We also estimated uncertainties of the model input variables based on values reported in the literature. We then predicted P loss for a suite of fields under different management and climatic conditions while accounting for uncertainties in the model parameters and inputs, and compared the relative contributions of these two sources of uncertainty to the overall uncertainty associated with predictions of P loss. Both the overall magnitude of the prediction uncertainties and the relative contributions of the two sources of uncertainty varied depending on management practices and field characteristics. This was due to differences in the number of model input variables and the uncertainties in the regression equations associated with each P loss pathway. Inspection of the uncertainties in the five regression equations brought attention to a previously unrecognized limitation with the equation used to partition surface-applied fertilizer P between leaching and runoff losses. As a result, an alternate equation was identified that provided similar predictions with much less uncertainty. Our results demonstrate how a thorough uncertainty and model residual analysis can be used to identify limitations with a model.
Such insight can then be used to guide future data collection and model development and evaluation efforts.
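Propagating parameter uncertainty through a regression equation can be sketched with a simple Monte Carlo loop. The example below uses a hypothetical linear equation y = a + b·x with normally distributed parameters; APLE's actual equations and parameter distributions are not reproduced here:

```python
import random

def predict_with_uncertainty(x, a_mean, a_sd, b_mean, b_sd, n=10000, seed=0):
    """Monte Carlo propagation of regression-parameter uncertainty through
    y = a + b*x. Returns the mean prediction and an approximate 95%
    prediction interval from the sampled parameter values."""
    rng = random.Random(seed)
    ys = sorted(rng.gauss(a_mean, a_sd) + rng.gauss(b_mean, b_sd) * x
                for _ in range(n))
    mean_y = sum(ys) / n
    return mean_y, (ys[int(0.025 * n)], ys[int(0.975 * n)])
```

Comparing interval widths computed this way for parameter uncertainty versus input uncertainty is one simple route to the source comparison the study performs.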
Providing pressure inputs to multizone building models
Herring, Steven J.; Batchelor, Simon; Bieringer, Paul E.; ...
2016-02-13
A study to assess how the fidelity of wind pressure inputs and indoor model complexity affect the predicted air change rate for a study building is presented. The purpose of the work is to support the development of a combined indoor-outdoor hazard prediction tool, which links the CONTAM multizone building simulation tool with outdoor dispersion models. The study building, representing a large office block of a simple rectangular geometry under natural ventilation, was based on a real building used in the Joint Urban 2003 experiment. A total of 1600 indoor model flow simulations were made, driven by 100 meteorological conditions which provided a wide range of building surface pressures. These pressures were applied at four levels of resolution to four different building configurations with varying numbers of internal zones and indoor and outdoor flow paths. Analysis of the results suggests that surface pressures and flow paths across the envelope should be specified at a resolution consistent with the dimensions of the smallest volume of interest, to ensure that appropriate outputs are obtained.
NASA Astrophysics Data System (ADS)
Wolfs, Vincent; Willems, Patrick
2013-10-01
Many applications in support of water management decisions require hydrodynamic models with limited calculation time, including real time control of river flooding, uncertainty and sensitivity analyses by Monte-Carlo simulations, and long term simulations in support of the statistical analysis of the model simulation results (e.g. flood frequency analysis). Several computationally efficient hydrodynamic models exist, but little attention is given to the modelling of floodplains. This paper presents a methodology that can emulate output from a full hydrodynamic model by predicting one or several levels in a floodplain, together with the flow rate between river and floodplain. The overtopping of the embankment is modelled as an overflow at a weir. Adaptive neuro fuzzy inference systems (ANFIS) are exploited to cope with the varying factors affecting the flow. Different input sets and identification methods are considered in model construction. Because of the dual use of simplified physically based equations and data-driven techniques, the ANFIS consist of very few rules with a low number of input variables. A second calculation scheme can be followed for exceptionally large floods. The obtained nominal emulation model was tested for four floodplains along the river Dender in Belgium. Results show that the obtained models are accurate with low computational cost.
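The embankment overtopping described above is modeled as flow over a weir, which in its simplest rectangular form is Q = c_d · B · H^{3/2}, with H the head above the crest. A minimal sketch in SI units; the coefficient value is an illustrative assumption, not the paper's calibrated equation:

```python
def weir_overflow(h_river, h_crest, width, c_d=1.7):
    """Discharge (m^3/s) over an embankment treated as a rectangular weir:
    Q = c_d * width * H**1.5, with H = max(0, h_river - h_crest).
    c_d ~ 1.7 m^0.5/s is a typical broad-crested weir coefficient."""
    head = max(0.0, h_river - h_crest)
    return c_d * width * head ** 1.5
```

No flow occurs until the river level exceeds the crest, which is what makes the river-floodplain exchange strongly nonlinear and motivates the ANFIS emulation.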
Prediction of AL and Dst Indices from ACE Measurements Using Hybrid Physics/Black-Box Techniques
NASA Astrophysics Data System (ADS)
Spencer, E.; Rao, A.; Horton, W.; Mays, L.
2008-12-01
ACE measurements of the solar wind velocity, IMF, and proton density are used to drive a hybrid physics/black-box model of the nightside magnetosphere. The core physics is contained in a low-order nonlinear dynamical model of the nightside magnetosphere called WINDMI. The model is augmented by wavelet-based nonlinear mappings between the solar wind quantities and the input into the physics model, followed by further wavelet-based mappings of the model output field-aligned currents onto the ground-based magnetometer measurements of the AL index and Dst index. The black-box mappings are introduced at the input stage to account for uncertainties in the way the solar wind quantities are transported from the ACE spacecraft at L1 to the magnetopause. Similar mappings are introduced at the output stage to account for a spatially and temporally varying westward auroral electrojet geometry. The parameters of the model are tuned using a genetic algorithm and trained on the large geomagnetic storm dataset of October 3-7, 2000. Its predictive performance is then evaluated on subsequent storm datasets, in particular the April 15-24, 2002 storm. This work is supported by grant NSF 7020201.
NASA Astrophysics Data System (ADS)
Luscz, E.; Kendall, A. D.; Martin, S. L.; Hyndman, D. W.
2011-12-01
Watershed nutrient loading models are important tools used to address issues including eutrophication, harmful algal blooms, and decreases in aquatic species diversity. Such approaches have been developed to assess the level and source of nutrient loading across a wide range of scales, yet there is typically a tradeoff between the scale of the model and the level of detail regarding the individual sources of nutrients. To avoid this tradeoff, we developed a detailed source nutrient loading model for every watershed in Michigan's lower peninsula. Sources considered include atmospheric deposition, septic tanks, waste water treatment plants, combined sewer overflows, animal waste from confined animal feeding operations and pastured animals, as well as fertilizer from agricultural, residential, and commercial sources and industrial effluents. Each source is related to readily available GIS inputs that may vary through time. This loading model was used to assess the importance of sources and landscape factors in nutrient loading rates to watersheds, and how these have changed in recent decades. The results showed the value of detailed source inputs, revealing regional trends while still providing insight into variability at smaller scales.
Water and solute mass balance of five small, relatively undisturbed watersheds in the U.S.
Peters, N.E.; Shanley, J.B.; Aulenbach, Brent T.; Webb, R.M.; Campbell, D.H.; Hunt, R.; Larsen, M.C.; Stallard, R.F.; Troester, J.; Walker, J.F.
2006-01-01
Geochemical mass balances were computed for water years 1992-1997 (October 1991 through September 1997) for the five watersheds of the U.S. Geological Survey Water, Energy, and Biogeochemical Budgets (WEBB) Program to determine the primary regional controls on yields of the major dissolved inorganic solutes. The sites, which vary markedly with respect to climate, geology, physiography, and ecology, are: Allequash Creek, Wisconsin (low-relief, humid continental forest); Andrews Creek, Colorado (cold alpine, taiga/tundra, and subalpine boreal forest); Río Icacos, Puerto Rico (lower montane, wet tropical forest); Panola Mountain, Georgia (humid subtropical piedmont forest); and Sleepers River, Vermont (humid northern hardwood forest). Streamwater output fluxes were determined by constructing empirical multivariate concentration models including discharge and seasonal components. Input fluxes were computed from weekly wet-only or bulk precipitation sampling. Despite uncertainties in input fluxes arising from poorly defined elevation gradients, lack of dry-deposition and occult-deposition measurements, and uncertain sea-salt contributions, the following was concluded: (1) for solutes derived primarily from rock weathering (Ca, Mg, Na, K, and H4SiO4), net fluxes (outputs in streamflow minus inputs in deposition) varied by two orders of magnitude, which is attributed to a large gradient in rock weathering rates controlled by climate and geologic parent material; (2) the net flux of atmospherically derived solutes (NH4, NO3, SO4, and Cl) was similar among sites, with SO4 being the most variable and NH4 and NO3 generally retained (except for NO3 at Andrews); and (3) relations among monthly solute fluxes and differences among solute concentration model parameters yielded additional insights into comparative biogeochemical processes at the sites. © 2005 Elsevier B.V. All rights reserved.
Dem Local Accuracy Patterns in Land-Use/Land-Cover Classification
NASA Astrophysics Data System (ADS)
Katerji, Wassim; Farjas Abadia, Mercedes; Morillo Balsera, Maria del Carmen
2016-01-01
Global and nation-wide DEMs do not preserve the same height accuracy throughout the area of study. Instead of assuming a single RMSE value for the whole area, this study proposes a vario-model that divides the area into sub-regions depending on the land-use/land-cover (LULC) classification and assigns a local accuracy to each zone, as these areas share similar terrain formation and roughness and tend to have similar DEM accuracies. A pilot study over Lebanon using the SRTM and ASTER DEMs, combined with a set of 1,105 randomly distributed ground control points (GCPs), showed that even though the input DEMs have different spatial and temporal resolution and were collected using different techniques, their accuracy varied similarly when changing over different LULC classes. Furthermore, validating the generated vario-models proved that they provide a closer representation of the accuracy to the validating GCPs than the conventional RMSE, by 94% and 86% for the SRTM and ASTER respectively. Geostatistical analysis of the input and output datasets showed that the results have a normal distribution, which supports the generalization of the proven hypothesis, making this finding applicable to other input datasets anywhere around the world.
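Computing a local accuracy per LULC class amounts to grouping the DEM-minus-GCP elevation errors by class and taking the RMSE of each group. A minimal sketch of that step, with illustrative function and variable names:

```python
import math
from collections import defaultdict

def rmse_by_class(dem_heights, gcp_heights, lulc_classes):
    """Local DEM accuracy per land-use/land-cover class: RMSE of the
    DEM-minus-GCP elevation errors, grouped by class label."""
    sq_errs = defaultdict(list)
    for d, g, c in zip(dem_heights, gcp_heights, lulc_classes):
        sq_errs[c].append((d - g) ** 2)
    return {c: math.sqrt(sum(e) / len(e)) for c, e in sq_errs.items()}
```

Each class's RMSE then becomes the local accuracy assigned to its zones in the vario-model, in place of a single area-wide value.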
Potential Flow Theory and Operation Guide for the Panel Code PMARC. Version 14
NASA Technical Reports Server (NTRS)
Ashby, Dale L.
1999-01-01
The theoretical basis for PMARC, a low-order panel code for modeling complex three-dimensional bodies in potential flow, is outlined. PMARC can be run on a wide variety of computer platforms, including desktop machines, workstations, and supercomputers. Execution times for PMARC vary tremendously depending on the computer resources used, but typically range from several minutes for simple or moderately complex cases to several hours for very large complex cases. Several of the advanced features currently included in the code, such as internal flow modeling, boundary layer analysis, and time-dependent flow analysis, including problems involving relative motion, are discussed in some detail. The code is written in Fortran77, using adjustable-size arrays so that it can be easily redimensioned to match problem requirements and computer hardware constraints. An overview of the program input is presented. A detailed description of the input parameters is provided in the appendices. PMARC results for several test cases are presented along with analytic or experimental data, where available. The input files for these test cases are given in the appendices. PMARC currently supports plotfile output formats for several commercially available graphics packages. The supported graphics packages are Plot3D, Tecplot, and PmarcViewer.
LaBeau, Meredith B.; Mayer, Alex S.; Griffis, Veronica; Watkins, David Jr.; Robertson, Dale M.; Gyawali, Rabi
2015-01-01
In this work, we hypothesize that phosphorus (P) concentrations in streams vary seasonally and with streamflow and that it is important to incorporate this variation when predicting changes in P loading associated with climate change. Our study area includes 14 watersheds with a range of land uses throughout the U.S. Great Lakes Basin. We develop annual seasonal load-discharge regression models for each watershed and apply these models with simulated discharges generated for future climate scenarios to simulate future P loading patterns for two periods: 2046–2065 and 2081–2100. We utilize output from the Coupled Model Intercomparison Project phase 3 downscaled climate change projections that are input into the Large Basin Runoff Model to generate future discharge scenarios, which are in turn used as inputs to the seasonal P load regression models. In almost all cases, the seasonal load-discharge models match observed loads better than the annual models. Results using the seasonal models show that the concurrence of nonlinearity in the load-discharge model and changes in high discharges in the spring months leads to the most significant changes in P loading for selected tributaries under future climate projections. These results emphasize the importance of using seasonal models to understand the effects of future climate change on nutrient loads.
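Load-discharge regression models of the kind described here commonly take the power-law form L = aQ^b, fitted by ordinary least squares in log space; the study's seasonal models fit separate coefficients per season. A generic sketch of the annual form, not the study's exact model:

```python
import math

def fit_load_discharge(discharges, loads):
    """Fit the rating-curve form L = a * Q**b by least squares on
    log-transformed data; returns (a, b)."""
    xs = [math.log(q) for q in discharges]
    ys = [math.log(l) for l in loads]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = math.exp(my - b * mx)
    return a, b
```

When b > 1, loads grow faster than linearly with discharge, which is why changes in high spring flows dominate the projected P loading changes.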
A simulation of cross-country skiing on varying terrain by using a mathematical power balance model
Moxnes, John F; Sandbakk, Øyvind; Hausken, Kjell
2013-01-01
The current study simulated cross-country skiing on varying terrain by using a power balance model. By applying the hypothetical inductive deductive method, we compared the simulated position along the track with actual skiing on snow, and calculated the theoretical effect of friction and air drag on skiing performance. As input values in the model, air drag and friction were estimated from the literature, whereas the model included relationships between heart rate, metabolic rate, and work rate based on the treadmill roller-ski testing of an elite cross-country skier. We verified this procedure by testing four models of metabolic rate against experimental data on the treadmill. The experimental data corresponded well with the simulations, with the best fit when work rate was increased on uphill and decreased on downhill terrain. The simulations predicted that skiing time increases by 3%–4% when either friction or air drag increases by 10%. In conclusion, the power balance model was found to be a useful tool for predicting how various factors influence racing performance in cross-country skiing. PMID:24379718
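A power balance model of this kind equates the skier's propulsive power to the power lost to gravity, friction, and air drag at the current speed. A minimal sketch with illustrative parameter values (assumptions for demonstration, not the paper's calibrated inputs):

```python
import math

def propulsive_power(v, slope_rad, mass=75.0, mu=0.037, cda=0.55, rho=1.2, g=9.81):
    """Power (W) a skier must supply at speed v (m/s) on a slope:
    (gravity component + ski friction + air drag) * v.
    mu, cda (drag area), rho, mass are illustrative values."""
    gravity = mass * g * math.sin(slope_rad)          # along-slope weight
    friction = mu * mass * g * math.cos(slope_rad)    # ski-snow friction
    drag = 0.5 * rho * cda * v ** 2                   # aerodynamic drag
    return (gravity + friction + drag) * v
```

Because the friction and drag terms enter linearly in this balance, a 10% increase in either raises the required power by only a few percent at race speeds, consistent with the simulated 3%-4% change in skiing time.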
Modeling the effects of vegetation heterogeneity on wildland fire behavior
NASA Astrophysics Data System (ADS)
Atchley, A. L.; Linn, R.; Sieg, C.; Middleton, R. S.
2017-12-01
Vegetation structure and density are known to drive fire-spread rate and burn severity. Many fire-spread models incorporate an average, homogeneous fuel density in the model domain to drive fire behavior. However, vegetation communities are rarely homogeneous and instead present significant heterogeneous structure and fuel densities in the fire's path. This results in observed patches of varied burn severities and mosaics of disturbed conditions that affect ecological recovery and hydrologic response. Consequently, to understand the interactions of fire and ecosystem functions, representations of spatially heterogeneous conditions need to be incorporated into fire models. Mechanistic models of fire disturbance offer insight into how fuel load characterization and distribution result in varied fire behavior. Here we use a physically based 3D combustion model, FIRETEC, that solves conservation of mass, momentum, energy, and chemical species to compare fire behavior on homogeneous representations against a heterogeneous vegetation distribution. Results demonstrate the impact vegetation heterogeneity has on the spread rate, intensity, and extent of simulated wildfires, thus providing valuable insight into predicted wildland fire evolution and enhancing the ability to estimate wildland fire inputs into regional and global climate models.
Theta frequency background tunes transmission but not summation of spiking responses.
Parameshwaran, Dhanya; Bhalla, Upinder S
2013-01-01
Hippocampal neurons are known to fire as a function of frequency and phase of spontaneous network rhythms, associated with the animal's behaviour. This dependence is believed to give rise to precise rate and temporal codes. However, it is not well understood how these periodic membrane potential fluctuations affect the integration of synaptic inputs. Here we used sinusoidal current injection to the soma of CA1 pyramidal neurons in the rat brain slice to simulate background oscillations in the physiologically relevant theta and gamma frequency range. We used a detailed compartmental model to show that somatic current injection gave comparable results to more physiological synaptically driven theta rhythms incorporating excitatory input in the dendrites, and inhibitory input near the soma. We systematically varied the phase of synaptic inputs with respect to this background, and recorded changes in response and summation properties of CA1 neurons using whole-cell patch recordings. The response of the cell was dependent on both the phase of synaptic inputs and the frequency of the background input. The probability of the cell spiking for a given synaptic input was up to 40% greater during the depolarized phases between 30 and 135 degrees of theta frequency current injection. Summation gain, on the other hand, was not affected either by the background frequency or the phasic afferent inputs. This flat summation gain, coupled with the enhanced spiking probability during depolarized phases of the theta cycle, resulted in enhanced transmission of summed inputs during the same phase window of 30 to 135 degrees. Overall, our study suggests that although oscillations provide windows of opportunity to selectively boost transmission and EPSP size, summation of synaptic inputs remains unaffected during membrane oscillations.
Grustam, Andrija S; Vrijhoef, Hubertus J M; Koymans, Ron; Hukal, Philipp; Severens, Johan L
2017-10-11
The purpose of this study is to assess the Business-to-Consumer (B2C) model for telemonitoring patients with Chronic Heart Failure (CHF) by analysing the value it creates, both for organizations or ventures that provide telemonitoring services based on it, and for society. The business model assessment was based on the following categories: caveats, venture type, six-factor alignment, strategic market assessment, financial viability, valuation analysis, sustainability, societal impact, and technology assessment. The venture valuation was performed for three jurisdictions (countries), Singapore, the Netherlands, and the United States, in order to show the opportunities in a small, medium-sized, and large country (by population). The business model assessment revealed that B2C telemonitoring is viable and profitable in the Innovating in Healthcare Framework. Analysis of the ecosystem revealed an average-to-excellent fit with the six factors. The structure and financing fit was average, public policy and technology alignment was good, while consumer alignment and accountability fit was deemed excellent. The financial prognosis revealed that the venture is viable and profitable in Singapore and the Netherlands but not in the United States, due to relatively high salary inputs. The B2C model in telemonitoring CHF potentially creates value for patients, shareholders of the service provider, and society. However, the validity of the results could be improved, for instance by using a peer-reviewed framework, a systematic literature search, case-based cost/efficiency inputs, and varied scenario inputs.
NASA Astrophysics Data System (ADS)
Dai, Heng; Chen, Xingyuan; Ye, Ming; Song, Xuehang; Zachara, John M.
2017-05-01
Sensitivity analysis is an important tool for development and improvement of mathematical models, especially for complex systems with a high dimension of spatially correlated parameters. Variance-based global sensitivity analysis has gained popularity because it can quantify the relative contribution of uncertainty from different sources. However, its computational cost increases dramatically with the complexity of the considered model and the dimension of model parameters. In this study, we developed a new sensitivity analysis method that integrates the concept of variance-based method with a hierarchical uncertainty quantification framework. Different uncertain inputs are grouped and organized into a multilayer framework based on their characteristics and dependency relationships to reduce the dimensionality of the sensitivity analysis. A set of new sensitivity indices are defined for the grouped inputs using the variance decomposition method. Using this methodology, we identified the most important uncertainty source for a dynamic groundwater flow and solute transport model at the Department of Energy (DOE) Hanford site. The results indicate that boundary conditions and permeability field contribute the most uncertainty to the simulated head field and tracer plume, respectively. The relative contribution from each source varied spatially and temporally. By using a geostatistical approach to reduce the number of realizations needed for the sensitivity analysis, the computational cost of implementing the developed method was reduced to a practically manageable level. The developed sensitivity analysis method is generally applicable to a wide range of hydrologic and environmental problems that deal with high-dimensional spatially distributed input variables.
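Variance-based first-order sensitivity indices can be estimated with a pick-and-freeze Monte Carlo scheme. Below is a minimal two-input sketch; the paper's hierarchical grouped indices build on the same variance decomposition, but this is not their implementation:

```python
import random

def first_order_index(model, n=20000, seed=1):
    """Pick-and-freeze estimate of the first-order Sobol index of input 1
    for a two-input model with independent standard-normal inputs:
    S1 ~= Cov(y, y_frozen) / Var(y), where y_frozen reuses x1 but
    resamples x2."""
    rng = random.Random(seed)
    x1 = [rng.gauss(0, 1) for _ in range(n)]
    x2 = [rng.gauss(0, 1) for _ in range(n)]
    x2b = [rng.gauss(0, 1) for _ in range(n)]   # resampled second input
    y = [model(a, b) for a, b in zip(x1, x2)]
    yb = [model(a, b) for a, b in zip(x1, x2b)]
    my = sum(y) / n
    var = sum((v - my) ** 2 for v in y) / n
    cov = sum((u - my) * (v - my) for u, v in zip(y, yb)) / n
    return cov / var
```

For y = 3·x1 + x2 the analytic index is 9/10 = 0.9; grouping correlated inputs before computing such indices is what keeps the cost manageable for high-dimensional fields.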
Missing pulse detector for a variable frequency source
Ingram, Charles B.; Lawhorn, John H.
1979-01-01
A missing pulse detector is provided which has the capability of monitoring a varying frequency pulse source to detect the loss of a single pulse or total loss of signal from the source. A frequency-to-current converter is used to program the output pulse width of a variable period retriggerable one-shot to maintain a pulse width slightly longer than one-half the present monitored pulse period. The retriggerable one-shot is triggered at twice the input pulse rate by employing a frequency doubler circuit connected between the one-shot input and the variable frequency source being monitored. The one-shot remains in the triggered or unstable state under normal conditions even though the source period is varying. A loss of an input pulse or single period of a fluctuating signal input will cause the one-shot to revert to its stable state, changing the output signal level to indicate a missing pulse or signal.
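The detector's behavior, tracking the pulse period and flagging a gap longer than the expected interval, can be mimicked in software. This is a simplified behavioral sketch, not a model of the actual one-shot and frequency-doubler circuit:

```python
def detect_missing_pulse(pulse_times, margin=1.05):
    """Flag a missing pulse when the gap between consecutive pulses exceeds
    the previous period by a small margin, analogous to a retriggerable
    one-shot whose output width tracks slightly more than the expected
    interval. Returns the index where the gap was detected, or None."""
    for i in range(2, len(pulse_times)):
        period = pulse_times[i - 1] - pulse_times[i - 2]
        if pulse_times[i] - pulse_times[i - 1] > margin * period:
            return i
    return None
```

Because the reference period updates every pulse, a slowly varying source frequency does not trip the detector, mirroring the circuit's tolerance of a fluctuating input.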
Next-Generation Lightweight Mirror Modeling Software
NASA Technical Reports Server (NTRS)
Arnold, William R., Sr.; Fitzgerald, Mathew; Rosa, Rubin Jaca; Stahl, Phil
2013-01-01
The advances in manufacturing techniques for lightweight mirrors, such as EXELSIS deep core low temperature fusion, Corning's continued improvements in the Frit bonding process and the ability to cast large complex designs, combined with water-jet and conventional diamond machining of glasses and ceramics, have created the need for more efficient means of generating finite element models of these structures. Traditional methods of assembling 400,000+ element models can take weeks of effort, severely limiting the range of possible optimization variables. This paper will introduce model generation software developed under NASA sponsorship for the design of both terrestrial and space based mirrors. The software deals with any current mirror manufacturing technique, single substrates, multiple arrays of substrates, as well as the ability to merge submodels into a single large model. The modeler generates both mirror and suspension system elements; suspensions can be created either for each individual petal or the whole mirror. A typical model generation of 250,000 nodes and 450,000 elements only takes 5-10 minutes, much of that time being variable input time. The program can create input decks for ANSYS, ABAQUS and NASTRAN. An archive/retrieval system permits creation of complete trade studies, varying cell size, depth, petal size, and suspension geometry, with the ability to recall a particular set of parameters and make small or large changes with ease. The input decks created by the modeler are text files which can be modified by any editor; all the key shell thickness parameters are accessible, and comments in the deck identify which groups of elements are associated with these parameters. This again makes optimization easier. With ANSYS decks, the nodes representing support attachments are grouped into components; in ABAQUS these are SETS and in NASTRAN GRIDPOINT SETS; this makes integration of these models into large telescope or satellite models possible.
Next Generation Lightweight Mirror Modeling Software
NASA Technical Reports Server (NTRS)
Arnold, William R., Sr.; Fitzgerald, Mathew; Rosa, Rubin Jaca; Stahl, H. Philip
2013-01-01
The advances in manufacturing techniques for lightweight mirrors, such as EXELSIS deep core low temperature fusion, Corning's continued improvements in the Frit bonding process, and the ability to cast large complex designs, combined with water-jet and conventional diamond machining of glasses and ceramics, have created the need for more efficient means of generating finite element models of these structures. Traditional methods of assembling models of 400,000+ elements can take weeks of effort, severely limiting the range of possible optimization variables. This paper introduces model generation software developed under NASA sponsorship for the design of both terrestrial and space-based mirrors. The software accommodates any current mirror manufacturing technique, single substrates, and multiple arrays of substrates, and can merge submodels into a single large model. The modeler generates both mirror and suspension system elements; suspensions can be created either for each individual petal or for the whole mirror. A typical model generation of 250,000 nodes and 450,000 elements takes only 5-10 minutes, much of that time being variable input time. The program can create input decks for ANSYS, ABAQUS, and NASTRAN. An archive/retrieval system permits creation of complete trade studies, varying cell size, depth, petal size, and suspension geometry, with the ability to recall a particular set of parameters and make small or large changes with ease. The input decks created by the modeler are text files that can be modified by any editor; all the key shell thickness parameters are accessible, and comments in the deck identify which groups of elements are associated with these parameters. This again makes optimization easier. With ANSYS decks, the nodes representing support attachments are grouped into components; in ABAQUS these are SETS and in NASTRAN GRIDPOINT SETS, which makes integration of these models into larger telescope or satellite models easier.
Roles for Coincidence Detection in Coding Amplitude-Modulated Sounds
Ashida, Go; Kretzberg, Jutta; Tollin, Daniel J.
2016-01-01
Many sensory neurons encode temporal information by detecting coincident arrivals of synaptic inputs. In the mammalian auditory brainstem, binaural neurons of the medial superior olive (MSO) are known to act as coincidence detectors, whereas in the lateral superior olive (LSO) roles of coincidence detection have remained unclear. LSO neurons receive excitatory and inhibitory inputs driven by ipsilateral and contralateral acoustic stimuli, respectively, and vary their output spike rates according to interaural level differences. In addition, LSO neurons are also sensitive to binaural phase differences of low-frequency tones and envelopes of amplitude-modulated (AM) sounds. Previous physiological recordings in vivo found considerable variations in monaural AM-tuning across neurons. To investigate the underlying mechanisms of the observed temporal tuning properties of LSO and their sources of variability, we used a simple coincidence counting model and examined how specific parameters of coincidence detection affect monaural and binaural AM coding. Spike rates and phase-locking of evoked excitatory and spontaneous inhibitory inputs had only minor effects on LSO output to monaural AM inputs. In contrast, the coincidence threshold of the model neuron affected both the overall spike rates and the half-peak positions of the AM-tuning curve, whereas the width of the coincidence window merely influenced the output spike rates. The duration of the refractory period affected only the low-frequency portion of the monaural AM-tuning curve. Unlike monaural AM coding, temporal factors, such as the coincidence window and the effective duration of inhibition, played a major role in determining the trough positions of simulated binaural phase-response curves. In addition, empirically-observed level-dependence of binaural phase-coding was reproduced in the framework of our minimalistic coincidence counting model. 
These modeling results suggest that coincidence detection of excitatory and inhibitory synaptic inputs is essential for LSO neurons to encode both monaural and binaural AM sounds. PMID:27322612
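The coincidence counting idea can be sketched in a few lines. The toy model below is a generic, much-simplified version (the function name, window, threshold, and refractory values are illustrative, not the paper's parameters): the model neuron fires when the net count of excitatory minus inhibitory inputs inside a short window reaches a threshold, and is then refractory for a fixed period.

```python
def coincidence_counter(exc_times, inh_times, window=0.8, threshold=3, refractory=1.6):
    """Spike times (ms) of a toy coincidence-detector neuron.

    The neuron fires when the net count of excitatory minus inhibitory
    inputs within a sliding `window` reaches `threshold`, and is then
    silent for `refractory`. All parameter values are illustrative.
    """
    events = sorted([(t, +1) for t in exc_times] + [(t, -1) for t in inh_times])
    out = []
    for t, _ in events:
        if out and t - out[-1] < refractory:
            continue  # still refractory from the last output spike
        net = sum(w for u, w in events if t - window <= u <= t)
        if net >= threshold:
            out.append(t)
    return out

print(coincidence_counter([1.0, 1.2, 1.3], []))       # -> [1.3]
print(coincidence_counter([1.0, 3.0, 5.0], []))       # -> [] (no coincidence)
print(coincidence_counter([1.0, 1.2, 1.3], [1.25]))   # -> [] (inhibition blocks)
```

Raising the threshold or narrowing the window in this toy shifts how readily near-coincident inputs drive output, mirroring the parameter sensitivities the study examines.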
Estimating Fluctuating Pressures From Distorted Measurements
NASA Technical Reports Server (NTRS)
Whitmore, Stephen A.; Leondes, Cornelius T.
1994-01-01
Two algorithms extract estimates of time-dependent input (upstream) pressures from outputs of pressure sensors located at downstream ends of pneumatic tubes. The algorithms effect deconvolutions that account for distorting effects of tube upon pressure signal. Distortion of pressure measurements by pneumatic tubes also discussed in "Distortion of Pressure Signals in Pneumatic Tubes" (ARC-12868). Varying input pressure estimated from measured time-varying output pressure by one of two deconvolution algorithms that take account of measurement noise. Algorithms based on minimum-covariance (Kalman-filtering) theory.
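The minimum-covariance deconvolution can be illustrated with a small sketch, assuming the tube acts as a first-order lag. The state model, noise variances, and function name below are illustrative and are not the ARC algorithms themselves: the unknown upstream pressure is treated as a random-walk state, and a Kalman filter recovers it from noisy downstream measurements.

```python
def deconvolve_pressure(z, dt=0.01, tau=0.05, q=25.0, r=0.01):
    """Kalman-filter estimate of upstream pressure from downstream data.

    State x = [p_in, p_out]: the unknown upstream pressure p_in is modelled
    as a random walk (process variance q*dt), and the measured downstream
    pressure p_out as a first-order pneumatic lag with time constant tau.
    Measurement z_k = p_out + noise (variance r). All constants illustrative.
    """
    a = dt / tau
    F = [[1.0, 0.0], [a, 1.0 - a]]         # state transition
    x = [z[0], z[0]]                       # start from apparent steady state
    P = [[1.0, 0.0], [0.0, 1.0]]           # state covariance
    est = []
    for zk in z:
        # predict: x <- F x, P <- F P F^T + Q
        x = [x[0], a * x[0] + (1.0 - a) * x[1]]
        FP = [[sum(F[i][k] * P[k][j] for k in range(2)) for j in range(2)]
              for i in range(2)]
        P = [[sum(FP[i][k] * F[j][k] for k in range(2)) for j in range(2)]
             for i in range(2)]
        P[0][0] += q * dt
        # update with measurement of p_out (H = [0, 1])
        S = P[1][1] + r
        K = [P[0][1] / S, P[1][1] / S]
        innov = zk - x[1]
        x = [x[0] + K[0] * innov, x[1] + K[1] * innov]
        P = [[P[i][j] - K[i] * P[1][j] for j in range(2)] for i in range(2)]
        est.append(x[0])
    return est

# A unit step in upstream pressure, seen through the lag: the filter
# recovers the input value from the distorted downstream output.
p_out, z = 0.0, []
for _ in range(200):
    p_out += (0.01 / 0.05) * (1.0 - p_out)
    z.append(p_out)
print(round(deconvolve_pressure(z)[-1], 2))
```

The process-noise variance q trades responsiveness against noise amplification, which is the essential design choice in any deconvolution of this kind.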
NASA Astrophysics Data System (ADS)
Cai, Xiushan; Meng, Lingxin; Zhang, Wei; Liu, Leipo
2018-03-01
We establish robustness of the predictor feedback control law to perturbations appearing at the system input for affine nonlinear systems with time-varying input delay and additive disturbances. Furthermore, the control law is shown to be inverse optimal with respect to a differential game problem. All of the stability and inverse optimality proofs are based on the infinite-dimensional backstepping transformation and an appropriate Lyapunov functional. A single-link manipulator subject to input delays and disturbances is used to illustrate the validity of the proposed method.
Neuronal networks with NMDARs and lateral inhibition implement winner-takes-all
Shoemaker, Patrick A.
2015-01-01
A neural circuit that relies on the electrical properties of NMDA synaptic receptors is shown by numerical and theoretical analysis to be capable of realizing the winner-takes-all function, a powerful computational primitive that is often attributed to biological nervous systems. This biophysically-plausible model employs global lateral inhibition in a simple feedback arrangement. As its inputs increase, high-gain and then bi- or multi-stable equilibrium states may be assumed in which there is significant depolarization of a single neuron and hyperpolarization or very weak depolarization of other neurons in the network. The state of the winning neuron conveys analog information about its input. The winner-takes-all characteristic depends on the nonmonotonic current-voltage relation of NMDA receptor ion channels, as well as neural thresholding, and the gain and nature of the inhibitory feedback. Dynamical regimes vary with input strength. Fixed points may become unstable as the network enters a winner-takes-all regime, which can lead to entrained oscillations. Under some conditions, oscillatory behavior can be interpreted as winner-takes-all in nature. Stable winner-takes-all behavior is typically recovered as inputs increase further, but with still larger inputs, the winner-takes-all characteristic is ultimately lost. Network stability may be enhanced by biologically plausible mechanisms. PMID:25741276
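A much-simplified, rate-based sketch of winner-takes-all via global lateral inhibition is shown below. It deliberately omits the NMDA current-voltage nonlinearity that the paper's mechanism depends on, and all parameter values are illustrative: with strong enough inhibition, only the unit with the largest input keeps a nonzero rate, and that rate still carries analog information about its input.

```python
def winner_takes_all(inputs, w_inh=2.0, gain=4.0, theta=0.5, steps=300, dt=0.05):
    """Relax a rate network with global lateral inhibition to equilibrium.

    Each unit's target rate is relu(gain * (input - w_inh * (sum of the
    other units' rates) - theta)); Euler relaxation drives the rates toward
    their targets. All constants are illustrative.
    """
    r = [0.0] * len(inputs)
    for _ in range(steps):
        total = sum(r)  # stale within the step: effectively synchronous update
        for i, x in enumerate(inputs):
            target = max(0.0, gain * (x - w_inh * (total - r[i]) - theta))
            r[i] += dt * (target - r[i])
    return r

rates = winner_takes_all([0.9, 1.0, 0.8])
# the unit with the largest input (index 1) remains active; the others are suppressed
print([round(v, 3) for v in rates])
```

In this simplified form the network always settles; the oscillatory and multistable regimes discussed in the abstract arise from the nonmonotonic NMDA conductance, which this sketch does not model.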
Preserving information in neural transmission.
Sincich, Lawrence C; Horton, Jonathan C; Sharpee, Tatyana O
2009-05-13
Along most neural pathways, the spike trains transmitted from one neuron to the next are altered. In the process, neurons can either achieve a more efficient stimulus representation, or extract some biologically important stimulus parameter, or succeed at both. We recorded the inputs from single retinal ganglion cells and the outputs from connected lateral geniculate neurons in the macaque to examine how visual signals are relayed from retina to cortex. We found that geniculate neurons re-encoded multiple temporal stimulus features to yield output spikes that carried more information about stimuli than was available in each input spike. The coding transformation of some relay neurons occurred with no decrement in information rate, despite output spike rates that averaged half the input spike rates. This preservation of transmitted information was achieved by the short-term summation of inputs that geniculate neurons require to spike. A reduced model of the retinal and geniculate visual responses, based on two stimulus features and their associated nonlinearities, could account for >85% of the total information available in the spike trains and the preserved information transmission. These results apply to neurons operating on a single time-varying input, suggesting that synaptic temporal integration can alter the temporal receptive field properties to create a more efficient representation of visual signals in the thalamus than the retina.
Wind effect on salt transport variability in the Bay of Bengal
NASA Astrophysics Data System (ADS)
Sandeep, K. K.; Pant, V.
2017-12-01
The Bay of Bengal (BoB) exhibits large spatial variability in the sea surface salinity (SSS) pattern caused by its unique hydrological, meteorological, and oceanographic characteristics. This SSS variability is largely controlled by the seasonally reversing monsoon winds and the associated currents. Further, the BoB receives substantial freshwater inputs through excess precipitation over evaporation and river discharge. Rivers such as the Ganges, Brahmaputra, Mahanadi, Krishna, Godavari, and Irrawaddy annually discharge a freshwater volume of between 1.5 × 10¹² and 1.83 × 10¹³ m³ into the bay. A major volume of this freshwater input to the bay occurs during the southwest monsoon (June-September) period. In the present study, the relative role of winds in the SSS variability in the bay is investigated using an eddy-resolving three-dimensional Regional Ocean Modeling System (ROMS) numerical model. The model is configured with realistic bathymetry and coastline of the study region and forced with a daily climatology of atmospheric variables. River discharges from the major rivers are distributed at the model grid points representing their respective geographic locations. Salt transport estimates from the model simulation for the realistic case are compared with standard reference datasets. Further, different experiments were carried out with idealized surface wind forcing representing normal, low, high, and very high wind speed conditions in the bay, while retaining the realistic daily varying directions for all cases. The experimental simulations exhibit distinct dispersal patterns of the freshwater plume and SSS in response to the idealized winds. Comparison of the meridional and zonal surface salt transport estimated for each experiment showed strong seasonality with varying magnitude in the bay, with maximum spatial and temporal variability in the western and northern parts of the BoB.
Prediction of problematic wine fermentations using artificial neural networks.
Román, R César; Hernández, O Gonzalo; Urtubia, U Alejandra
2011-11-01
Artificial neural networks (ANNs) have been used for the recognition of non-linear patterns, a characteristic of bioprocesses like wine production. In this work, ANNs were tested to predict problems of wine fermentation. A database of about 20,000 data from industrial fermentations of Cabernet Sauvignon and 33 variables was used. Two different ways of inputting data into the model were studied, by points and by fermentation. Additionally, different sub-cases were studied by varying the predictor variables (total sugar, alcohol, glycerol, density, organic acids and nitrogen compounds) and the time of fermentation (72, 96 and 256 h). The input of data by fermentations gave better results than the input of data by points. In fact, it was possible to predict 100% of normal and problematic fermentations using three predictor variables: sugars, density and alcohol at 72 h (3 days). Overall, ANNs were capable of obtaining 80% of prediction using only one predictor variable at 72 h; however, it is recommended to add more fermentations to confirm this promising result.
Laszlo, Sarah; Federmeier, Kara D.
2010-01-01
Linking print with meaning tends to be divided into subprocesses, such as recognition of an input's lexical entry and subsequent access of semantics. However, recent results suggest that the set of semantic features activated by an input is broader than implied by a view wherein access serially follows recognition. EEG was collected from participants who viewed items varying in number and frequency of both orthographic neighbors and lexical associates. Regression analysis of single item ERPs replicated past findings, showing that N400 amplitudes are greater for items with more neighbors, and further revealed that N400 amplitudes increase for items with more lexical associates and with higher frequency neighbors or associates. Together, the data suggest that in the N400 time window semantic features of items broadly related to inputs are active, consistent with models in which semantic access takes place in parallel with stimulus recognition. PMID:20624252
Hydrogen Financial Analysis Scenario Tool (H2FAST)
basic financial performance metrics change by varying up to 20 user inputs. Enter your own input values or adjust the slider bars to see how the results change. Please note that currency is expressed in
Mixing-model Sensitivity to Initial Conditions in Hydrodynamic Predictions
NASA Astrophysics Data System (ADS)
Bigelow, Josiah; Silva, Humberto; Truman, C. Randall; Vorobieff, Peter
2017-11-01
Amagat and Dalton mixing-models were studied to compare their thermodynamic predictions of shock states. Numerical simulations with the Sandia National Laboratories shock hydrodynamic code CTH modeled University of New Mexico (UNM) shock tube laboratory experiments shocking a 1:1 molar mixture of helium (He) and sulfur hexafluoride (SF6). Five input parameters were varied for sensitivity analysis: driver section pressure, driver section density, test section pressure, test section density, and mixture ratio (mole fraction). We show via incremental Latin hypercube sampling (LHS) analysis that significant differences exist between Amagat and Dalton mixing-model predictions. The differences observed in predicted shock speeds, temperatures, and pressures grow more pronounced with higher shock speeds. Supported by NNSA Grant DE-0002913.
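Latin hypercube sampling itself is straightforward to sketch: each input's range is divided into equal-probability strata, one point is drawn per stratum, and the strata are permuted independently across inputs so every sample row covers a distinct stratum of every input. The parameter names and bounds below are invented for illustration and are not the study's actual values.

```python
import random

def latin_hypercube(n, bounds, seed=0):
    """Latin hypercube sample of size n over named parameter bounds.

    Each parameter's (lo, hi) range is split into n equal strata; one point
    is drawn uniformly inside each stratum, and the strata are shuffled
    independently per parameter.
    """
    rng = random.Random(seed)
    samples = [{} for _ in range(n)]
    for name, (lo, hi) in bounds.items():
        strata = list(range(n))
        rng.shuffle(strata)
        for row, s in zip(samples, strata):
            u = (s + rng.random()) / n          # uniform point inside stratum s
            row[name] = lo + u * (hi - lo)
    return samples

# Five inputs, as in the shock-tube study; these bounds are illustrative only.
design = latin_hypercube(10, {
    "driver_pressure": (1.0e5, 5.0e5),
    "driver_density": (1.0, 5.0),
    "test_pressure": (1.0e4, 1.0e5),
    "test_density": (0.1, 1.0),
    "He_mole_fraction": (0.4, 0.6),
})
print(len(design), sorted(design[0]))
```

Compared with plain random sampling, the stratification guarantees that even a small design covers the full range of every input, which is why LHS is a standard choice for expensive hydrocode runs.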
CLASSIFYING MEDICAL IMAGES USING MORPHOLOGICAL APPEARANCE MANIFOLDS.
Varol, Erdem; Gaonkar, Bilwaj; Davatzikos, Christos
2013-12-31
Input features for medical image classification algorithms are extracted from raw images using a series of preprocessing steps. One common preprocessing step in computational neuroanatomy and functional brain mapping is the nonlinear registration of raw images to a common template space. Typically, the registration methods used are parametric and their output varies greatly with changes in parameters. Most results reported previously perform registration using a fixed parameter setting and use the results as input to the subsequent classification step. The variation in registration results due to choice of parameters thus translates to variation in performance of the classifiers that depend on the registration step for input. Analogous issues have been investigated in the computer vision literature, where image appearance varies with pose and illumination, thereby making classification vulnerable to these confounding parameters. The proposed methodology addresses this issue by sampling image appearances as registration parameters vary, and shows that better classification accuracies can be obtained this way, compared to the conventional approach.
Lacy, Joyce W.; Yassa, Michael A.; Stark, Shauna M.; Muftuler, L. Tugan; Stark, Craig E.L.
2011-01-01
Producing and maintaining distinct (orthogonal) neural representations for similar events is critical to avoiding interference in long-term memory. Recently, our laboratory provided the first evidence for separation-like signals in the human CA3/dentate. Here, we extended this by parametrically varying the change in input (similarity) while monitoring CA1 and CA3/dentate for separation and completion-like signals using high-resolution fMRI. In the CA1, activity varied in a graded fashion in response to increases in the change in input. In contrast, the CA3/dentate showed a stepwise transfer function that was highly sensitive to small changes in input. PMID:21164173
Rene, Eldon R.; López, M. Estefanía; Kim, Jung Hoon; Park, Hung Suck
2013-01-01
Lab-scale studies were conducted to evaluate the performance of two simultaneously operated immobilized cell biofilters (ICBs) for removing hydrogen sulphide (H2S) and ammonia (NH3) from the gas phase. The removal efficiencies (REs) of the biofilter treating H2S varied from 50 to 100% at inlet loading rates (ILRs) of up to 13 g H2S/m3·h, while the NH3 biofilter showed REs ranging from 60 to 100% at ILRs varying between 0.5 and 5.5 g NH3/m3·h. An application of the back propagation neural network (BPNN) to predict the performance parameter, namely RE (%), using these experimental data is presented in this paper. The input parameters to the network were unit flow (per min) and inlet concentrations (ppmv), respectively. The accuracy of the BPNN-based model predictions was evaluated by providing the trained network topology with a test dataset and also by calculating the regression coefficient (R2) values. The results from this predictive modeling work showed that BPNNs were able to predict the RE of both ICBs efficiently. PMID:24307999
Numerical Analysis of the Heat Transfer Characteristics within an Evaporating Meniscus
NASA Astrophysics Data System (ADS)
Ball, Gregory
A numerical analysis was performed to investigate the heat transfer characteristics of an evaporating thin-film meniscus. A mathematical model was used in the formulation of a third-order ordinary differential equation. This equation governs the evaporating thin film through use of the continuity, momentum, and energy equations and the Kelvin-Clapeyron model. The governing equation was treated as an initial value problem and was solved numerically using a Runge-Kutta technique. The numerical model uses varying thermophysical properties and boundary conditions such as channel width, applied superheat, accommodation coefficient, and working fluid, which can be tailored by the user. This work focused mainly on the effects of altering the accommodation coefficient and applied superheat. A unified solution is also presented which models the meniscus to half the channel width. The model was validated through comparison to literature values. In varying the input values, the following was determined: increasing superheat shortened the film thickness and greatly increased the interfacial curvature overshoot values, while decreasing the accommodation coefficient lengthened the thin film and retarded the evaporative effects.
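The numerical approach, recasting a third-order governing ODE as a first-order system and marching it as an initial value problem with a Runge-Kutta scheme, can be sketched generically; the actual thin-film equation and property values are not reproduced here, and the sanity check uses a textbook equation with a known solution.

```python
import math

def rk4_third_order(f, y0, x0, x1, n=1000):
    """Integrate y''' = f(x, y, y', y'') as an initial value problem.

    The third-order equation is rewritten as the first-order system
    s = (y, y', y'') and marched with classic fourth-order Runge-Kutta.
    """
    def deriv(x, s):
        y, yp, ypp = s
        return (yp, ypp, f(x, y, yp, ypp))

    h = (x1 - x0) / n
    x, s = x0, tuple(y0)
    for _ in range(n):
        k1 = deriv(x, s)
        k2 = deriv(x + h / 2, tuple(v + h / 2 * k for v, k in zip(s, k1)))
        k3 = deriv(x + h / 2, tuple(v + h / 2 * k for v, k in zip(s, k2)))
        k4 = deriv(x + h, tuple(v + h * k for v, k in zip(s, k3)))
        s = tuple(v + h / 6 * (a + 2 * b + 2 * c + d)
                  for v, a, b, c, d in zip(s, k1, k2, k3, k4))
        x += h
    return s

# Sanity check on y''' = y with y(0) = y'(0) = y''(0) = 1, whose exact
# solution is y = e^x: the numerical value at x = 1 should match e.
y, yp, ypp = rk4_third_order(lambda x, y, yp, ypp: y, (1.0, 1.0, 1.0), 0.0, 1.0)
print(abs(y - math.e) < 1e-8)   # True
```

For the meniscus problem, f would encode the Kelvin-Clapeyron closure and the user-supplied properties (superheat, accommodation coefficient, channel width), but the marching loop is unchanged.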
A dynamic nitrogen budget model of a Pacific Northwest salt ...
The role of salt marshes as either nitrogen sinks or sources in relation to their adjacent estuaries has been a focus of ecosystem service research for many decades. The complex hydrology of these systems is driven by tides, upland surface runoff, precipitation, evapotranspiration, and groundwater inputs, all of which can vary significantly on timescales ranging from sub-daily to seasonal. Additionally, many of these hydrologic drivers may vary with a changing climate. Due to this temporal variation in hydrology, it is difficult to represent salt marsh nitrogen budgets as steady-state models. A dynamic nitrogen budget model that varies based on hydrologic conditions may more accurately describe the role of salt marshes in nitrogen cycling. In this study we aim to develop a hydrologic model that is coupled with a process-based nitrogen model to simulate nitrogen dynamics at multiple temporal scales. To construct and validate our model we will use hydrologic and nitrogen species data collected from 2010 to present, from a 1.8 hectare salt marsh in the Yaquina Estuary, OR, USA. Hydrologic data include water table levels at two transects, upland tributary flow, tidal channel stage and flow, and vertical hydraulic head gradients. Nitrogen pool data include concentrations of nitrate and ammonium in porewater, tidal channel water, and extracted from soil cores. Nitrogen flux data include denitrification rates, nitrogen concentrations in upland runoff, and tida
Sequential dynamics in visual short-term memory.
Kool, Wouter; Conway, Andrew R A; Turk-Browne, Nicholas B
2014-10-01
Visual short-term memory (VSTM) is thought to help bridge across changes in visual input, and yet many studies of VSTM employ static displays. Here we investigate how VSTM copes with sequential input. In particular, we characterize the temporal dynamics of several different components of VSTM performance, including: storage probability, precision, variability in precision, guessing, and swapping. We used a variant of the continuous-report VSTM task developed for static displays, quantifying the contribution of each component with statistical likelihood estimation, as a function of serial position and set size. In Experiments 1 and 2, storage probability did not vary by serial position for small set sizes, but showed a small primacy effect and a robust recency effect for larger set sizes; precision did not vary by serial position or set size. In Experiment 3, the recency effect was shown to reflect an increased likelihood of swapping out items from earlier serial positions and swapping in later items, rather than an increased rate of guessing for earlier items. Indeed, a model that incorporated responding to non-targets provided a better fit to these data than alternative models that did not allow for swapping or that tried to account for variable precision. These findings suggest that VSTM is updated in a first-in-first-out manner, and they bring VSTM research into closer alignment with classical working memory research that focuses on sequential behavior and interference effects.
NASA Astrophysics Data System (ADS)
Huang, Chengjun; Chen, Xiang; Cao, Shuai; Qiu, Bensheng; Zhang, Xu
2017-08-01
Objective. To realize accurate muscle force estimation, a novel framework is proposed in this paper which can extract the input of the prediction model from the appropriate activation area of the skeletal muscle. Approach. Surface electromyographic (sEMG) signals from the biceps brachii muscle during isometric elbow flexion were collected with a high-density (HD) electrode grid (128 channels) and the external force at three contraction levels was measured at the wrist synchronously. The sEMG envelope matrix was factorized into a matrix of basis vectors with each column representing an activation pattern and a matrix of time-varying coefficients by a nonnegative matrix factorization (NMF) algorithm. The activation pattern with the highest activation intensity, which was defined as the sum of the absolute values of the time-varying coefficient curve, was considered as the major activation pattern, and its channels with high weighting factors were selected to extract the input activation signal of a force estimation model based on the polynomial fitting technique. Main results. Compared with conventional methods using the whole channels of the grid, the proposed method could significantly improve the quality of force estimation and reduce the electrode number. Significance. The proposed method provides a way to find proper electrode placement for force estimation, which can be further employed in muscle heterogeneity analysis, myoelectric prostheses and the control of exoskeleton devices.
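The NMF step can be illustrated with the classic Lee-Seung multiplicative updates; this is a generic sketch of the factorization V ≈ WH, not the paper's processing pipeline, and the rank-1 toy matrix used below is illustrative.

```python
import random

def nmf(V, k, iters=500, eps=1e-9, seed=0):
    """Factorize a nonnegative matrix V (m rows of n floats) as V ~ W @ H
    with k components, via Lee-Seung multiplicative updates for squared
    error. Columns of W play the role of activation patterns; rows of H
    are their time-varying coefficients."""
    rng = random.Random(seed)
    m, n = len(V), len(V[0])
    W = [[rng.random() + 0.1 for _ in range(k)] for _ in range(m)]
    H = [[rng.random() + 0.1 for _ in range(n)] for _ in range(k)]

    def mul(A, B):  # plain matrix product
        return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
                for row in A]

    def T(A):       # transpose
        return [list(c) for c in zip(*A)]

    for _ in range(iters):
        WH = mul(W, H)
        num, den = mul(T(W), V), mul(T(W), WH)      # H <- H * (W^T V)/(W^T W H)
        H = [[H[i][j] * num[i][j] / (den[i][j] + eps) for j in range(n)]
             for i in range(k)]
        WH = mul(W, H)
        num, den = mul(V, T(H)), mul(WH, T(H))      # W <- W * (V H^T)/(W H H^T)
        W = [[W[i][j] * num[i][j] / (den[i][j] + eps) for j in range(k)]
             for i in range(m)]
    return W, H

# A rank-1 toy "envelope matrix" is recovered almost exactly.
V = [[4.0, 5.0], [8.0, 10.0], [12.0, 15.0]]
W, H = nmf(V, 1)
err = max(abs(W[i][0] * H[0][j] - V[i][j]) for i in range(3) for j in range(2))
print(err)
```

In the paper's setting, V would be the 128-channel sEMG envelope matrix, the column of W with the highest activation intensity would be taken as the major activation pattern, and its strongly weighted channels would feed the force model.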
Autonomous Planning and Replanning for Mine-Sweeping Unmanned Underwater Vehicles
NASA Technical Reports Server (NTRS)
Gaines, Daniel M.
2010-01-01
This software generates high-quality plans for carrying out mine-sweeping activities under resource constraints. The autonomous planning and replanning system for unmanned underwater vehicles (UUVs) takes as input a set of prioritized mine-sweep regions, and a specification of available UUV resources including available battery energy, data storage, and time available for accomplishing the mission. Mine-sweep areas vary in location, size of area to be swept, and importance of the region. The planner also works with a model of the UUV, as well as a model of the power consumption of the vehicle when idle and when moving.
Robust preview control for a class of uncertain discrete-time systems with time-varying delay.
Li, Li; Liao, Fucheng
2018-02-01
This paper proposes a concept of robust preview tracking control for uncertain discrete-time systems with time-varying delay. Firstly, a model transformation is employed for an uncertain discrete system with time-varying delay. Then, auxiliary variables related to the system state and input are introduced to derive an augmented error system that includes future information on the reference signal. This leads to the tracking problem being transformed into a regulator problem. Finally, for the augmented error system, a sufficient condition for asymptotic stability is derived and the preview controller design method is proposed based on the scaled small gain theorem and linear matrix inequality (LMI) technique. The method proposed in this paper not only solves the difficult problem of applying the difference operator to time-varying matrices but also simplifies the structure of the augmented error system. A numerical simulation example also illustrates the effectiveness of the results presented in the paper. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Prasad, B. S. N.; Gayathri, H. B.; Muralikrishnan, N.
1992-01-01
Global UV-B flux (sum of direct and diffuse radiations) data at four wavelengths 280, 290, 300 and 310 nm are recorded at several locations in India as part of the Indian Middle Atmosphere Programme (IMAP). The stations have been selected considering distinct geographic features and the possible influence of atmospheric aerosols and particulates on the ground-reaching UV-B flux. Mysore (12.6°N, 76.6°E) has been selected as a continental station largely free from any industrial pollution and large-scale biomass burning. An examination of the ground-reaching UV-B flux at Mysore shows a marked diurnal and seasonal asymmetry. This can be attributed to the seasonally varying atmospheric aerosols and particulates which influence the scattering of UV-B radiation. The available parameterization models are used to reproduce the experimental UV-B irradiance by varying the input parameters to the models. These results on the diurnal and seasonal variation of global UV-B flux from experiment and models are discussed in this paper.
Hepatic transporter drug-drug interactions: an evaluation of approaches and methodologies.
Williamson, Beth; Riley, Robert J
2017-12-01
Drug-drug interactions (DDIs) continue to account for 5% of hospital admissions and therefore remain a major regulatory concern. Effective, quantitative prediction of DDIs will reduce unexpected clinical findings and encourage projects to frontload DDI investigations rather than concentrating on risk management ('manage the baggage') later in drug development. A key challenge in DDI prediction is the discrepancies between reported models. Areas covered: The current synopsis focuses on four recent influential publications on hepatic drug transporter DDIs using static models that tackle interactions with individual transporters and in combination with other drug transporters and metabolising enzymes. These models vary in their assumptions (including input parameters), transparency, reproducibility and complexity. In this review, these facets are compared and contrasted with recommendations made as to their application. Expert opinion: Over the past decade, static models have evolved from simple [I]/k i models to incorporate victim and perpetrator disposition mechanisms including the absorption rate constant, the fraction of the drug metabolised/eliminated and/or clearance concepts. Nonetheless, models that comprise additional parameters and complexity do not necessarily out-perform simpler models with fewer inputs. Further, consideration of the property space to exploit some drug target classes has also highlighted the fine balance required between frontloading and back-loading studies to design out or 'manage the baggage'.
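The simplest static model mentioned above, the basic [I]/Ki ratio, and its common extension with the fraction metabolised can be written down directly. The example concentrations below are illustrative only, and regulatory guidance documents define the actual decision thresholds.

```python
def r_value(i_conc, ki):
    """Basic static competitive-inhibition ratio: R = 1 + [I]/Ki."""
    return 1.0 + i_conc / ki

def auc_ratio(r, fm):
    """Predicted fold-change in victim AUC when a fraction fm of its
    clearance runs through the inhibited pathway:
        AUCR = 1 / (fm / R + (1 - fm))."""
    return 1.0 / (fm / r + (1.0 - fm))

# Illustrative only: an inhibitor at 1 uM with Ki = 0.5 uM gives R = 3;
# if 80% of the victim's clearance is via that pathway, AUC rises ~2.1-fold.
r = r_value(1.0, 0.5)
print(r, round(auc_ratio(r, 0.8), 2))   # 3.0 2.14
```

The contrast between the two functions mirrors the review's point: adding the fm term changes the prediction substantially, yet more parameters do not automatically make a static model more accurate.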
Distributed Optimal Consensus Control for Multiagent Systems With Input Delay.
Zhang, Huaipin; Yue, Dong; Zhao, Wei; Hu, Songlin; Dou, Chunxia
2018-06-01
This paper addresses the problem of distributed optimal consensus control for a continuous-time heterogeneous linear multiagent system subject to time-varying input delays. First, by discretization and model transformation, the continuous-time input-delayed system is converted into a discrete-time delay-free system. Two delicate performance index functions are defined for these two systems. It is shown that the performance index functions are equivalent and the optimal consensus control problem of the input-delayed system can be cast into that of the delay-free system. Second, by virtue of the Hamilton-Jacobi-Bellman (HJB) equations, an optimal control policy for each agent is designed based on the delay-free system and a novel value iteration algorithm is proposed to learn the solutions to the HJB equations online. The proposed adaptive dynamic programming algorithm is implemented on the basis of a critic-action neural network (NN) structure. Third, it is proved that local consensus errors of the two systems and weight estimation errors of the critic-action NNs are uniformly ultimately bounded while the approximated control policies converge to their target values. Finally, two simulation examples are presented to illustrate the effectiveness of the developed method.
Nonlinear tuning techniques of plasmonic nano-filters
NASA Astrophysics Data System (ADS)
Kotb, Rehab; Ismail, Yehea; Swillam, Mohamed A.
2015-02-01
In this paper, a fitting model for the propagation constant and losses of a Metal-Insulator-Metal (MIM) plasmonic waveguide is proposed. Using this model, the modal characteristics of the MIM plasmonic waveguide can be obtained directly, without solving Maxwell's equations from scratch. As a consequence, the simulation time and computational cost needed to predict the response of different plasmonic structures can be reduced significantly. This fitting model is used to develop a closed-form model that describes the behavior of a plasmonic nano-filter. Easy and accurate mechanisms to tune the filter are investigated and analyzed. The filter tunability is based on using a nonlinear dielectric material with a Pockels or Kerr effect. Tunability is achieved by applying an external voltage or by controlling the input light intensity. The proposed nano-filter supports both red and blue shifts in the resonance response, depending on the type of nonlinear material used. A new approach to controlling the input light intensity by applying an external voltage to a previous stage is investigated: the filter tunability of a stage containing a Kerr material can be achieved by applying a voltage to a previous stage containing a Pockels material. Using this method, the Kerr effect can be achieved electrically instead of by varying the intensity of the input source. This technique enhances the suitability of the device for on-chip integration. Tuning of the resonance wavelength with high accuracy, minimum insertion loss and high quality factor is obtained using these approaches.
Liu, Spencer S; John, Raymond S
2010-01-01
Ultrasound guidance for regional anesthesia has increased in popularity. However, the cost of ultrasound versus nerve stimulator guidance is controversial, as multiple and varying cost inputs are involved. Sensitivity analysis allows modeling of different scenarios and determination of the relative importance of each cost input for a given scenario. We modeled cost per patient of ultrasound versus nerve stimulator using single-factor sensitivity analysis for 4 different clinical scenarios designed to span the expected financial impact of ultrasound guidance. The primary cost factors for ultrasound were revenue from billing for ultrasound (85% of variation in final cost), number of patients examined per ultrasound machine (10%), and block success rate (2.6%). In contrast, the most important input factors for nerve stimulator were the success rate of the nerve stimulator block (89%) and the amount of liability payout for failed airway due to rescue general anesthesia (9%). Depending on clinical scenario, ultrasound was either a profit or cost center. If revenue is generated, then ultrasound-guided blocks consistently become a profit center regardless of clinical scenario in our model. Without revenue, the clinical scenario dictates the cost of ultrasound. In an ambulatory setting, ultrasound is highly competitive with nerve stimulator and requires at least a 96% success rate with nerve stimulator before becoming more expensive. In a hospitalized scenario, ultrasound is consistently more expensive as the uniform use of general anesthesia and hospitalization negate any positive cost effects from greater efficiency with ultrasound.
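A single-factor (one-factor-at-a-time) sensitivity analysis of the kind described above can be sketched as follows. The cost model, input names, and baseline values are all hypothetical illustrations, not the study's actual figures; only the mechanics of varying one input at a time and ranking the resulting output swings are shown.

```python
# Hypothetical per-patient cost model: amortized equipment cost minus
# billing revenue, plus the expected cost of managing a failed block.
def cost_per_patient(revenue, patients_per_machine, success_rate,
                     machine_cost=40000.0, failed_block_cost=150.0):
    amortized = machine_cost / patients_per_machine
    expected_failure = (1.0 - success_rate) * failed_block_cost
    return amortized - revenue + expected_failure

baseline = dict(revenue=60.0, patients_per_machine=2000.0, success_rate=0.97)

def ofat_swings(model, baseline, spans):
    """Vary one input at a time over its plausible span; report the
    swing in model output attributable to each input."""
    swings = {}
    for name, (lo, hi) in spans.items():
        outs = []
        for v in (lo, hi):
            args = dict(baseline)
            args[name] = v
            outs.append(model(**args))
        swings[name] = abs(outs[1] - outs[0])
    return swings

spans = {"revenue": (0.0, 120.0),
         "patients_per_machine": (500.0, 4000.0),
         "success_rate": (0.90, 0.995)}
swings = ofat_swings(cost_per_patient, baseline, spans)
# Inputs ranked by influence on final cost, largest swing first.
ranked = sorted(swings, key=swings.get, reverse=True)
```

With these invented spans, billing revenue dominates the swing, echoing the qualitative ranking reported in the abstract.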
A Sensitivity Analysis Method to Study the Behavior of Complex Process-based Models
NASA Astrophysics Data System (ADS)
Brugnach, M.; Neilson, R.; Bolte, J.
2001-12-01
The use of process-based models as a tool for scientific inquiry is becoming increasingly relevant in ecosystem studies. Process-based models are artificial constructs that simulate the system by mechanistically mimicking the functioning of its component processes. Structurally, a process-based model can be characterized in terms of its processes and the relationships established among them. Each process comprises a set of functional relationships among several model components (e.g., state variables, parameters and input data). While not encoded explicitly, the dynamics of the model emerge from this set of components and interactions organized in terms of processes. It is the task of the modeler to guarantee that the dynamics generated are appropriate and semantically equivalent to the phenomena being modeled. Despite the availability of techniques to characterize and understand model behavior, they do not suffice to completely and easily understand how a complex process-based model operates. For example, sensitivity analysis studies model behavior by determining the rate of change in model output as parameters or input data are varied. One problem with this approach is that it treats the model as a "black box", explaining model behavior solely through the input-output relationship. Since these models have a high degree of non-linearity, understanding how an input affects an output can be an extremely difficult task. Operationally, the application of this technique may also be challenging because complex process-based models are generally characterized by a large parameter space. To overcome some of these difficulties, we propose a method of sensitivity analysis applicable to complex process-based models. This method focuses sensitivity analysis at the process level, and it aims to determine how sensitive the model output is to variations in the processes.
Once the processes that exert the major influence in the output are identified, the causes of its variability can be found. Some of the advantages of this approach are that it reduces the dimensionality of the search space, it facilitates the interpretation of the results and it provides information that allows exploration of uncertainty at the process level, and how it might affect model output. We present an example using the vegetation model BIOME-BGC.
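The process-level sensitivity idea can be sketched on a toy model: each named process receives a multiplicative perturbation, and output sensitivity is attributed to whole processes rather than to individual parameters. The two-process carbon model below is invented for illustration and is not BIOME-BGC.

```python
# Toy two-process ecosystem model: carbon uptake (photosynthesis) and
# carbon loss (respiration), each scaled by a process-level multiplier.
def toy_model(photosynthesis_scale=1.0, respiration_scale=1.0, days=100):
    carbon = 10.0
    for _ in range(days):
        uptake = photosynthesis_scale * 0.5       # process 1: uptake
        loss = respiration_scale * 0.04 * carbon  # process 2: respiration
        carbon += uptake - loss
    return carbon

def process_sensitivity(model, processes, delta=0.1):
    """Normalized central-difference sensitivity of the output to a
    +/- delta multiplicative perturbation of each whole process."""
    base = model()
    sens = {}
    for p in processes:
        up = model(**{p: 1.0 + delta})
        down = model(**{p: 1.0 - delta})
        sens[p] = (up - down) / (2 * delta * base)
    return sens
```

Searching over a handful of processes instead of a large parameter space is exactly the dimensionality reduction the method above advertises.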
Adjusting particle-size distributions to account for aggregation in tephra-deposit model forecasts
Mastin, Larry G.; Van Eaton, Alexa; Durant, A.J.
2016-01-01
Volcanic ash transport and dispersion (VATD) models are used to forecast tephra deposition during volcanic eruptions. Model accuracy is limited by the fact that fine-ash aggregates (clumps into clusters), thus altering patterns of deposition. In most models this is accounted for by ad hoc changes to model input, representing fine ash as aggregates with density ρagg, and a log-normal size distribution with median μagg and standard deviation σagg. Optimal values may vary between eruptions. To test the variance, we used the Ash3d tephra model to simulate four deposits: 18 May 1980 Mount St. Helens; 16–17 September 1992 Crater Peak (Mount Spurr); 17 June 1996 Ruapehu; and 23 March 2009 Mount Redoubt. In 192 simulations, we systematically varied μagg and σagg, holding ρagg constant at 600 kg m−3. We evaluated the fit using three indices that compare modeled versus measured (1) mass load at sample locations; (2) mass load versus distance along the dispersal axis; and (3) isomass area. For all deposits, under these inputs, the best-fit value of μagg ranged narrowly between ∼ 2.3 and 2.7φ (0.20–0.15 mm), despite large variations in erupted mass (0.25–50 Tg), plume height (8.5–25 km), mass fraction of fine ( < 0.063 mm) ash (3–59 %), atmospheric temperature, and water content between these eruptions. This close agreement suggests that aggregation may be treated as a discrete process that is insensitive to eruptive style or magnitude. This result offers the potential for a simple, computationally efficient parameterization scheme for use in operational model forecasts. Further research may indicate whether this narrow range also reflects physical constraints on processes in the evolving cloud.
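The systematic variation of aggregate-size parameters described above amounts to a grid search scored against deposit observations. The sketch below uses a deliberately crude, hypothetical forward model in place of a real VATD code such as Ash3d; only the search-and-score mechanics are illustrated.

```python
import math

def forward_mass_load(distance_km, mu_agg, sigma_agg):
    # Hypothetical stand-in for a VATD run: mass load decays with
    # distance, with an e-folding length that depends on the assumed
    # aggregate size distribution (mu_agg in phi units).
    efold = 50.0 * mu_agg / max(sigma_agg, 0.1)
    return 100.0 * math.exp(-distance_km / efold)

def rmse(model_vals, obs_vals):
    return math.sqrt(sum((m - o) ** 2 for m, o in zip(model_vals, obs_vals))
                     / len(obs_vals))

def grid_search(distances, observed, mus, sigmas):
    """Systematically vary (mu_agg, sigma_agg); return best (error, mu, sigma)."""
    best = None
    for mu in mus:
        for sigma in sigmas:
            pred = [forward_mass_load(d, mu, sigma) for d in distances]
            err = rmse(pred, observed)
            if best is None or err < best[0]:
                best = (err, mu, sigma)
    return best

# Synthetic "observations" generated with mu=2.5, sigma=1.0, then recovered.
distances = [10.0, 50.0, 100.0, 200.0, 400.0]
observed = [forward_mass_load(d, 2.5, 1.0) for d in distances]
err, mu_best, sigma_best = grid_search(distances, observed,
                                       mus=[1.5, 2.0, 2.5, 3.0],
                                       sigmas=[0.5, 1.0, 1.5])
```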
Time-response shaping using output to input saturation transformation
NASA Astrophysics Data System (ADS)
Chambon, E.; Burlion, L.; Apkarian, P.
2018-03-01
For linear systems, the control law design is often performed so that the resulting closed loop meets specific frequency-domain requirements. However, in many cases, it may be observed that the obtained controller does not enforce time-domain requirements amongst which the objective of keeping a scalar output variable in a given interval. In this article, a transformation is proposed to convert prescribed bounds on an output variable into time-varying saturations on the synthesised linear scalar control law. This transformation uses some well-chosen time-varying coefficients so that the resulting time-varying saturation bounds do not overlap in the presence of disturbances. Using an anti-windup approach, it is obtained that the origin of the resulting closed loop is globally asymptotically stable and that the constrained output variable satisfies the time-domain constraints in the presence of an unknown finite-energy-bounded disturbance. An application to a linear ball and beam model is presented.
NASA Astrophysics Data System (ADS)
Gallagher, Kerry
2016-05-01
Flowers et al. (2015) propose a framework for reporting modeling results for thermochronological data problems, particularly when using inversion approaches. In the final paragraph, they state 'we hope that the suggested reporting table template will stimulate additional community discussion about modeling philosophies and reporting formats'. In this spirit, the purpose of this comment is to suggest that they have underplayed the importance of presenting a comparison of the model predictions with the observations. An inversion-based modeling approach aims to identify those models that make predictions consistent, perhaps to varying degrees, with the observed data. The concluding section includes the phrase 'clear documentation of the model inputs and outputs', but their example from the Grand Canyon shows only the observed data.
Computational Fluid Dynamics Uncertainty Analysis Applied to Heat Transfer over a Flat Plate
NASA Technical Reports Server (NTRS)
Groves, Curtis Edward; Ilie, Marcel; Schallhorn, Paul A.
2013-01-01
There have been few discussions of using Computational Fluid Dynamics (CFD) without experimental validation. Pairing experimental data, uncertainty analysis, and analytical predictions provides a comprehensive approach to verification and is the current state of the art. With pressed budgets, however, collecting experimental data is rare or non-existent. This paper investigates and proposes a method to perform CFD uncertainty analysis from computational data alone. The method uses current CFD uncertainty techniques coupled with the Student's t-distribution to predict the heat transfer coefficient over a flat plate. The inputs to the CFD model are varied within a specified tolerance or bias error, and the differences in the results are used to estimate the uncertainty. The variation in each input is ranked from least to greatest to determine the order of importance. The results are compared to heat transfer correlations, and conclusions are drawn about the feasibility of using CFD without experimental data. The results provide a tactic for analytically estimating the uncertainty in a CFD model when experimental data are unavailable.
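The Student's t element of the approach can be sketched as follows: outputs from runs with perturbed inputs are pooled, and a t-based interval accounts for the small sample size. The run values below are invented stand-ins for perturbed heat-transfer coefficients, and the critical value is taken from standard t-tables.

```python
from statistics import mean, stdev

def t_interval(samples, t_crit):
    """Two-sided confidence half-width for the mean of a small sample.
    t_crit must be supplied for the chosen confidence level and
    len(samples) - 1 degrees of freedom (e.g. from standard t-tables)."""
    n = len(samples)
    return t_crit * stdev(samples) / n ** 0.5

# Heat-transfer coefficients from runs with each input perturbed in turn
# (hypothetical values, W/m^2/K):
runs = [102.0, 98.5, 101.2, 99.7, 100.6]
h_mean = mean(runs)
# 95% two-sided critical value for 4 degrees of freedom (from t-tables):
half_width = t_interval(runs, t_crit=2.776)
```

The point is that the spread of the perturbed-input runs, not any experimental scatter, supplies the uncertainty estimate.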
An overview of particulate emissions from residential biomass combustion
NASA Astrophysics Data System (ADS)
Vicente, E. D.; Alves, C. A.
2018-01-01
Residential biomass burning has been pointed out as one of the largest sources of fine particles in the global troposphere with serious impacts on air quality, climate and human health. Quantitative estimations of the contribution of this source to the atmospheric particulate matter levels are hard to obtain, because emission factors vary greatly with wood type, combustion equipment and operating conditions. Updated information should improve not only regional and global biomass burning emission inventories, but also the input for atmospheric models. In this work, an extensive tabulation of particulate matter emission factors obtained worldwide is presented and critically evaluated. Existing quantifications and the suitability of specific organic markers to assign the input of residential biomass combustion to the ambient carbonaceous aerosol are also discussed. Based on these organic markers or other tracers, estimates of the contribution of this sector to observed particulate levels by receptor models for different regions around the world are compiled. Key areas requiring future research are highlighted and briefly discussed.
Sahasranamam, Ajith; Vlachos, Ioannis; Aertsen, Ad; Kumar, Arvind
2016-01-01
Spike patterns are among the most common electrophysiological descriptors of neuron types. Surprisingly, it is not clear how the diversity in firing patterns of the neurons in a network affects its activity dynamics. Here, we introduce the state-dependent stochastic bursting neuron model allowing for a change in its firing patterns independent of changes in its input-output firing rate relationship. Using this model, we show that the effect of single neuron spiking on the network dynamics is contingent on the network activity state. While spike bursting can both generate and disrupt oscillations, these patterns are ineffective in large regions of the network state space in changing the network activity qualitatively. Finally, we show that when single-neuron properties are made dependent on the population activity, a hysteresis like dynamics emerges. This novel phenomenon has important implications for determining the network response to time-varying inputs and for the network sensitivity at different operating points. PMID:27212008
Sahasranamam, Ajith; Vlachos, Ioannis; Aertsen, Ad; Kumar, Arvind
2016-05-23
Spike patterns are among the most common electrophysiological descriptors of neuron types. Surprisingly, it is not clear how the diversity in firing patterns of the neurons in a network affects its activity dynamics. Here, we introduce the state-dependent stochastic bursting neuron model allowing for a change in its firing patterns independent of changes in its input-output firing rate relationship. Using this model, we show that the effect of single neuron spiking on the network dynamics is contingent on the network activity state. While spike bursting can both generate and disrupt oscillations, these patterns are ineffective in large regions of the network state space in changing the network activity qualitatively. Finally, we show that when single-neuron properties are made dependent on the population activity, a hysteresis like dynamics emerges. This novel phenomenon has important implications for determining the network response to time-varying inputs and for the network sensitivity at different operating points.
Assessing the performance of eight real-time updating models and procedures for the Brosna River
NASA Astrophysics Data System (ADS)
Goswami, M.; O'Connor, K. M.; Bhattarai, K. P.; Shamseldin, A. Y.
2005-10-01
The flow forecasting performance of eight updating models, incorporated in the Galway River Flow Modelling and Forecasting System (GFMFS), was assessed using daily data (rainfall, evaporation and discharge) of the Irish Brosna catchment (1207 km2), considering their one to six days lead-time discharge forecasts. The Perfect Forecast of Input over the Forecast Lead-time scenario was adopted, where required, in place of actual rainfall forecasts. The eight updating models were: (i) the standard linear Auto-Regressive (AR) model, applied to the forecast errors (residuals) of a simulation (non-updating) rainfall-runoff model; (ii) the Neural Network Updating (NNU) model, also using such residuals as input; (iii) the Linear Transfer Function (LTF) model, applied to the simulated and the recently observed discharges; (iv) the Non-linear Auto-Regressive eXogenous-Input Model (NARXM), also a neural network-type structure, but having wide options of using recently observed values of one or more of the three data series, together with non-updated simulated outflows, as inputs; (v) the Parametric Simple Linear Model (PSLM), of LTF-type, using recent rainfall and observed discharge data; (vi) the Parametric Linear perturbation Model (PLPM), also of LTF-type, using recent rainfall and observed discharge data, (vii) n-AR, an AR model applied to the observed discharge series only, as a naïve updating model; and (viii) n-NARXM, a naive form of the NARXM, using only the observed discharge data, excluding exogenous inputs. The five GFMFS simulation (non-updating) models used were the non-parametric and parametric forms of the Simple Linear Model and of the Linear Perturbation Model, the Linearly-Varying Gain Factor Model, the Artificial Neural Network Model, and the conceptual Soil Moisture Accounting and Routing (SMAR) model. 
As the SMAR model performance was found to be the best among these models, in terms of the Nash-Sutcliffe R2 value, both in calibration and in verification, the simulated outflows of this model only were selected for the subsequent exercise of producing updated discharge forecasts. All the eight forms of updating models for producing lead-time discharge forecasts were found to be capable of producing relatively good lead-1 (1-day ahead) forecasts, with R2 values almost 90% or above. However, for higher lead time forecasts, only three updating models, viz., NARXM, LTF, and NNU, were found to be suitable, with lead-6 values of R2 about 90% or higher. Graphical comparisons were made of the lead-time forecasts for the two largest floods, one in the calibration period and the other in the verification period.
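The standard AR error-updating scheme (model (i) above) can be sketched minimally: an autoregressive model fitted to the simulation residuals propagates the last known error forward and corrects the raw forecast at each lead time. The residual series below is synthetic; only the updating mechanics are shown.

```python
def fit_ar1(residuals):
    """Least-squares AR(1) coefficient for a (roughly zero-mean)
    residual series: phi = sum(e_t * e_{t-1}) / sum(e_{t-1}^2)."""
    num = sum(residuals[t] * residuals[t - 1] for t in range(1, len(residuals)))
    den = sum(r * r for r in residuals[:-1])
    return num / den

def updated_forecast(simulated, last_residual, phi, lead):
    """Add the AR(1)-propagated residual to the non-updated simulation.
    The correction decays as phi**lead, so updating helps most at lead 1."""
    return simulated + (phi ** lead) * last_residual

# Synthetic residuals with positive persistence:
residuals = [1.0, 0.8, 0.7, 0.5, 0.45, 0.3]
phi = fit_ar1(residuals)
lead1 = updated_forecast(simulated=50.0, last_residual=residuals[-1],
                         phi=phi, lead=1)
lead6 = updated_forecast(simulated=50.0, last_residual=residuals[-1],
                         phi=phi, lead=6)
```

The geometric decay of the correction is consistent with the finding above that simple updating is strongest at lead 1 and fades at longer lead times.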
Patterns of new versus recycled primary production in the terrestrial biosphere
Cleveland, Cory C.; Houlton, Benjamin Z.; Smith, W. Kolby; Marklein, Alison R.; Reed, Sasha C.; Parton, William; Del Grosso, Stephen J.; Running, Steven W.
2013-01-01
Nitrogen (N) and phosphorus (P) availability regulate plant productivity throughout the terrestrial biosphere, influencing the patterns and magnitude of net primary production (NPP) by land plants both now and into the future. These nutrients enter ecosystems via geologic and atmospheric pathways and are recycled to varying degrees through the plant–soil–microbe system via organic matter decay processes. However, the proportion of global NPP that can be attributed to new nutrient inputs versus recycled nutrients is unresolved, as are the large-scale patterns of variation across terrestrial ecosystems. Here, we combined satellite imagery, biogeochemical modeling, and empirical observations to identify previously unrecognized patterns of new versus recycled nutrient (N and P) productivity on land. Our analysis points to tropical forests as a hotspot of new NPP fueled by new N (accounting for 45% of total new NPP globally), much higher than previous estimates from temperate and high-latitude regions. The large fraction of tropical forest NPP resulting from new N is driven by the high capacity for N fixation, although this varies considerably within this diverse biome; N deposition explains a much smaller proportion of new NPP. By contrast, the contribution of new N to primary productivity is lower outside the tropics, and worldwide, new P inputs are uniformly low relative to plant demands. These results imply that new N inputs have the greatest capacity to fuel additional NPP by terrestrial plants, whereas low P availability may ultimately constrain NPP across much of the terrestrial biosphere. PMID:23861492
High-Tech Versus High-Touch: Components of Hospital Costs Vary Widely.
Song, Paula H; Reiter, Kristin L; Yi Xu, Wendy
The recent release by the Centers for Medicare & Medicaid Services of hospital charge and payment data to the public has renewed a national dialogue on hospital costs and prices. However, to better understand the driving forces behind hospital pricing and to develop strategies for controlling expenditures, it is important to understand the underlying costs of providing hospital services. We use Medicare Provider Analysis and Review inpatient claims data and Medicare cost report data for fiscal years 2008 and 2012 to examine variations in the contribution of "high-tech" resources (i.e., technology/medical device-intensive resources) versus "high-touch" resources (i.e., labor-intensive resources) to the total costs of providing two common services, as well as to assess how these costs have changed over time. We found that high-tech inputs accounted for a greater proportion of the total costs of surgical services, whereas medical service costs were primarily attributable to high-touch inputs. Although the total costs of services did not change significantly over time, the distribution of high-tech, high-touch, and other costs for each service varied considerably across hospitals. Understanding resource inputs and the varying contribution of these inputs by clinical condition is an important first step in developing effective cost control strategies.
Optimal Control for Aperiodic Dual-Rate Systems With Time-Varying Delays
Salt, Julián; Guinaldo, María; Chacón, Jesús
2018-01-01
In this work, we consider a dual-rate scenario with slow input and fast output. Our objective is the maximization of the decay rate of the system through the suitable choice of the n-input signals between two measures (periodic sampling) and their times of application. The optimization algorithm is extended for time-varying delays in order to make possible its implementation in networked control systems. We provide experimental results in an air levitation system to verify the validity of the algorithm in a real plant. PMID:29747441
Optimal Control for Aperiodic Dual-Rate Systems With Time-Varying Delays.
Aranda-Escolástico, Ernesto; Salt, Julián; Guinaldo, María; Chacón, Jesús; Dormido, Sebastián
2018-05-09
In this work, we consider a dual-rate scenario with slow input and fast output. Our objective is the maximization of the decay rate of the system through the suitable choice of the n-input signals between two measures (periodic sampling) and their times of application. The optimization algorithm is extended for time-varying delays in order to make possible its implementation in networked control systems. We provide experimental results in an air levitation system to verify the validity of the algorithm in a real plant.
Computer Simulations of Deltas with Varying Fluvial Input and Tidal Forcing
NASA Astrophysics Data System (ADS)
Sun, T.
2015-12-01
Deltas are important depositional systems because many large hydrocarbon reservoirs in the world today are found in delta deposits. Deltas form when water and sediments carried by fluvial channels are emptied into an open body of water, building delta-shaped deposits. Depending on the relative importance of the physical processes that control the formation and growth of deltas, deltas can often be classified into three types: fluvial-, tidal- and wave-dominated. Many previous works, using examples from modern systems, tank experiments, outcrops, and 2D and 3D seismic data sets, have studied the shape, morphology and stratigraphic architectures corresponding to each delta type. However, few studies have focused on how these properties change as a function of the relative change in the key controls, and most of those studies are qualitative. Here, using computer simulations, the dynamics of delta evolution under an increasing amount of tidal influence are studied. The computer model used is fully based on the physics of fluid flow and sediment transport. In the model, tidal influences are taken into account by setting proper boundary conditions that vary both temporally and spatially. The model is capable of capturing many important natural geomorphic and sedimentary processes in fluvial and tidal systems, such as channel initiation, formation of channel levees, growth of mouth bars, bifurcation of channels around channel mouth bars, and channel avulsion.
By systematically varying tidal range and fluvial input, the following properties are investigated quantitatively: (1) the presence and form of tidal beds as a function of tidal range; (2) changes in the stratigraphic architecture of distributary channel mouth bars or tidal bars as tidal range changes; (3) the transport and sorting of different grain sizes and the overall facies distributions in the delta under different tidal ranges; and (4) the conditions and locations of mud drapes under different magnitudes of tidal forcing.
Influenza forecasting in human populations: a scoping review.
Chretien, Jean-Paul; George, Dylan; Shaman, Jeffrey; Chitale, Rohit A; McKenzie, F Ellis
2014-01-01
Forecasts of influenza activity in human populations could help guide key preparedness tasks. We conducted a scoping review to characterize these methodological approaches and identify research gaps. Adapting the PRISMA methodology for systematic reviews, we searched PubMed, CINAHL, Project Euclid, and Cochrane Database of Systematic Reviews for publications in English since January 1, 2000 using the terms "influenza AND (forecast* OR predict*)", excluding studies that did not validate forecasts against independent data or incorporate influenza-related surveillance data from the season or pandemic for which the forecasts were applied. We included 35 publications describing population-based (N = 27), medical facility-based (N = 4), and regional or global pandemic spread (N = 4) forecasts. They included areas of North America (N = 15), Europe (N = 14), and/or Asia-Pacific region (N = 4), or had global scope (N = 3). Forecasting models were statistical (N = 18) or epidemiological (N = 17). Five studies used data assimilation methods to update forecasts with new surveillance data. Models used virological (N = 14), syndromic (N = 13), meteorological (N = 6), internet search query (N = 4), and/or other surveillance data as inputs. Forecasting outcomes and validation metrics varied widely. Two studies compared distinct modeling approaches using common data, 2 assessed model calibration, and 1 systematically incorporated expert input. Of the 17 studies using epidemiological models, 8 included sensitivity analysis. This review suggests need for use of good practices in influenza forecasting (e.g., sensitivity analysis); direct comparisons of diverse approaches; assessment of model calibration; integration of subjective expert input; operational research in pilot, real-world applications; and improved mutual understanding among modelers and public health officials.
Influenza Forecasting in Human Populations: A Scoping Review
Chretien, Jean-Paul; George, Dylan; Shaman, Jeffrey; Chitale, Rohit A.; McKenzie, F. Ellis
2014-01-01
Forecasts of influenza activity in human populations could help guide key preparedness tasks. We conducted a scoping review to characterize these methodological approaches and identify research gaps. Adapting the PRISMA methodology for systematic reviews, we searched PubMed, CINAHL, Project Euclid, and Cochrane Database of Systematic Reviews for publications in English since January 1, 2000 using the terms “influenza AND (forecast* OR predict*)”, excluding studies that did not validate forecasts against independent data or incorporate influenza-related surveillance data from the season or pandemic for which the forecasts were applied. We included 35 publications describing population-based (N = 27), medical facility-based (N = 4), and regional or global pandemic spread (N = 4) forecasts. They included areas of North America (N = 15), Europe (N = 14), and/or Asia-Pacific region (N = 4), or had global scope (N = 3). Forecasting models were statistical (N = 18) or epidemiological (N = 17). Five studies used data assimilation methods to update forecasts with new surveillance data. Models used virological (N = 14), syndromic (N = 13), meteorological (N = 6), internet search query (N = 4), and/or other surveillance data as inputs. Forecasting outcomes and validation metrics varied widely. Two studies compared distinct modeling approaches using common data, 2 assessed model calibration, and 1 systematically incorporated expert input. Of the 17 studies using epidemiological models, 8 included sensitivity analysis. This review suggests need for use of good practices in influenza forecasting (e.g., sensitivity analysis); direct comparisons of diverse approaches; assessment of model calibration; integration of subjective expert input; operational research in pilot, real-world applications; and improved mutual understanding among modelers and public health officials. PMID:24714027
Effects of Varying Cloud Cover on Springtime Runoff in California's Sierra Nevada
NASA Astrophysics Data System (ADS)
Sumargo, E.; Cayan, D. R.
2017-12-01
This study investigates how cloud cover modifies snowmelt-runoff processes in Sierra Nevada watersheds during dry and wet periods. We use two of the California Department of Water Resources' (DWR's) quasi-operational models of the Tuolumne and Merced River basins developed from the USGS Precipitation-Runoff Modeling System (PRMS). Model simulations are conducted after a validated optimization of model performance in simulating recent (1996-2014) historical variability in the Tuolumne and Merced basins, using solar radiation (Qsi) derived from Geostationary Operational Environmental Satellite (GOES) remote sensing. Specifically, the questions we address are: 1) how sensitive are snowmelt and runoff in the Tuolumne and Merced River basins to Qsi variability associated with cloud cover variations? and 2) does this sensitivity change in dry vs. wet years? To address these questions, we conduct two experiments, in which: E1) theoretical clear-sky Qsi is used as an input to PRMS, and E2) the annual harmonic cycle of Qsi is used as an input to PRMS. The resulting hydrographs from these experiments exhibit changes in peak streamflow timing of several days to a few weeks and smaller streamflow variability when compared to the actual flows and the original simulations. For E1, despite some variations, this pattern persists when the result is evaluated for dry-year and wet-year subsets, reflecting the consistently higher Qsi input available. For E2, the hydrograph shows a later spring-summer streamflow peak in the dry-year subset when compared to the original simulations, indicating the relative importance of the modulating effect of cloud cover on snowmelt-runoff in drier years.
Crops Models for Varying Environmental Conditions
NASA Technical Reports Server (NTRS)
Jones, Harry; Cavazzoni, James; Keas, Paul
2001-01-01
New variable-environment Modified Energy Cascade (MEC) crop models were developed for all the Advanced Life Support (ALS) candidate crops and implemented in SIMULINK. The MEC models are based on the Volk, Bugbee, and Wheeler Energy Cascade (EC) model and are derived from more recent Top-Level Energy Cascade (TLEC) models. The MEC models simulate crop plant responses to day-to-day changes in photosynthetic photon flux, photoperiod, carbon dioxide level, temperature, and relative humidity. The original EC model allows changes in light energy but uses a less accurate linear approximation. The simulation outputs of the new MEC models for constant nominal environmental conditions are very similar to those of earlier EC models that use parameters produced by the TLEC models, with a few differences. The new MEC models allow setting the time of seed emergence, have realistic exponential canopy growth, and have corrected harvest dates for potato and tomato. The new MEC models indicate that the maximum edible biomass per square meter per day is produced at the maximum allowed carbon dioxide level, the nominal temperatures, and the maximum light input. Reducing the carbon dioxide level from the maximum to the minimum allowed in the model reduces crop production significantly. Increasing temperature decreases production more than it decreases the time to harvest, so productivity in edible biomass per square meter per day is greater at nominal than at maximum temperatures. Productivity in edible biomass per square meter per day is greatest at the maximum light energy input allowed in the model, but the edible biomass produced per unit of light energy input is lower than at nominal light levels; reducing light levels increases light and power use efficiency. The MEC models suggest that light energy can be adjusted day-to-day to accommodate power shortages or use excess power while monitoring and controlling edible biomass production.
Feedback topology and XOR-dynamics in Boolean networks with varying input structure
NASA Astrophysics Data System (ADS)
Ciandrini, L.; Maffi, C.; Motta, A.; Bassetti, B.; Cosentino Lagomarsino, M.
2009-08-01
We analyze a model of fixed in-degree random Boolean networks in which the fraction of input-receiving nodes is controlled by the parameter γ. We investigate analytically and numerically the dynamics of graphs under a parallel XOR updating scheme. This scheme is interesting because it is accessible analytically and its phenomenology is at the same time under control and as rich as that of general Boolean networks. We give analytical formulas for the dynamics on general graphs, showing that with an XOR-type evolution rule, dynamic features are direct consequences of the topological feedback structure, in analogy with the role of relevant components in Kauffman networks. Considering graphs with fixed in-degree, we characterize analytically and numerically the feedback regions using graph decimation algorithms (Leaf Removal). With varying γ, this graph ensemble shows a phase transition that separates a treelike graph region from one in which feedback components emerge. Networks near the transition point have feedback components made of disjoint loops, in which each node has exactly one incoming and one outgoing link. Using this fact, we provide analytical estimates of the maximum period starting from topological considerations.
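The parallel XOR update and the role of γ can be illustrated with a toy simulation (a sketch under stated assumptions, not the authors' code: here a fraction γ of nodes receives exactly k random inputs, input-free nodes hold their state, and the attractor period is found by brute-force cycle detection):

```python
import random

def xor_network_step(state, inputs):
    """Synchronous XOR update: each wired node takes the parity (XOR)
    of its inputs; nodes with no inputs hold their current value."""
    return [
        (sum(state[j] for j in inp) % 2) if inp else state[i]
        for i, inp in enumerate(inputs)
    ]

def attractor_period(n=12, k=2, gamma=0.5, seed=1):
    """Brute-force cycle detection: iterate until a state repeats and
    return the length of the attractor cycle."""
    rng = random.Random(seed)
    # A fraction gamma of nodes receives exactly k random inputs.
    inputs = [
        tuple(rng.sample(range(n), k)) if rng.random() < gamma else ()
        for _ in range(n)
    ]
    state = [rng.randint(0, 1) for _ in range(n)]
    seen = {}
    t = 0
    while tuple(state) not in seen:
        seen[tuple(state)] = t
        state = xor_network_step(state, inputs)
        t += 1
    return t - seen[tuple(state)]
```

Because the state space is finite and the update is deterministic, every trajectory eventually falls onto a cycle, which is what the paper's analytical period estimates characterize.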
Synthetic Proxy Infrastructure for Task Evaluation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Junghans, Christoph; Pavel, Robert
The Synthetic Proxy Infrastructure for Task Evaluation is a proxy application designed to support application developers in gauging the performance of various task granularities when determining how best to utilize task-based programming models. The infrastructure is designed to provide examples of common communication patterns with a synthetic workload intended to provide performance data for evaluating programming model and platform overheads for the purpose of determining task granularity for task decomposition. This is presented as a reference implementation of a proxy application with run-time configurable input and output task dependencies ranging from an embarrassingly parallel scenario to patterns with stencil-like dependencies upon their nearest neighbors. Once all inputs, if any, are satisfied, each task executes a synthetic workload (a simple DGEMM in this case) of varying size and passes all outputs, if any, to the next tasks. The intent is for this reference implementation to be ported to different programming models so as to provide the same infrastructure and to allow application developers to simulate their own communication needs to assist in task decomposition under various models on a given platform.
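The core pattern, a task that fires once its dependencies are satisfied and then times a DGEMM-like synthetic workload, can be sketched as follows (the `dgemm` and `run_task` names are illustrative assumptions, not the proxy application's actual API):

```python
import time

def dgemm(a, b):
    """Naive dense matrix multiply, standing in for the synthetic
    DGEMM workload of the proxy application."""
    n, m, p = len(a), len(b[0]), len(b)
    return [[sum(a[i][q] * b[q][j] for q in range(p)) for j in range(m)]
            for i in range(n)]

def run_task(size, inputs=()):
    """A task fires once all of its (possibly zero) input dependencies
    are satisfied, then times a workload whose cost grows with `size`."""
    assert all(inputs), "unsatisfied dependency"
    mat = [[1.0] * size for _ in range(size)]
    t0 = time.perf_counter()
    out = dgemm(mat, mat)
    return out, time.perf_counter() - t0
```

Sweeping `size` while holding the dependency pattern fixed is the kind of measurement that lets developers trade task granularity against scheduling overhead.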
Simulating the x-ray image contrast to setup techniques with desired flaw detectability
NASA Astrophysics Data System (ADS)
Koshti, Ajay M.
2015-04-01
The paper provides simulation data extending previous work by the author on developing a model for estimating the detectability of crack-like flaws in radiography. The methodology was developed to support implementation of the NASA Special x-ray radiography qualification, but is generically applicable to radiography. The paper describes a method for characterizing the detector resolution, and the applicability of ASTM E 2737 resolution requirements to the model is also discussed. The paper describes a model for simulating the detector resolution. A computer calculator application, discussed here, also performs predicted contrast and signal-to-noise ratio calculations. Results of simulation runs calculating the x-ray flaw size parameter and image contrast for varying input parameters such as crack depth, crack width, part thickness, x-ray angle, part-to-detector distance, part-to-source distance, source size, and detector sensitivity and resolution are given as 3D surfaces. These results demonstrate the effect of the input parameters on the flaw size parameter and the simulated image contrast of the crack, and they demonstrate the utility of the flaw size parameter model in setting up x-ray techniques that provide the desired flaw detectability. The method is applicable to film radiography, computed radiography, and digital radiography.
Neuscamman, Stephanie J.; Yu, Kristen L.
2016-05-01
The results of the National Atmospheric Release Advisory Center (NARAC) model simulations are compared to measured data from the Full-Scale Radiological Dispersal Device (FSRDD) field trials. The series of explosive radiological dispersal device (RDD) experiments was conducted in 2012 by Defence Research and Development Canada (DRDC) and collaborating organizations. During the trials, a wealth of data was collected, including a variety of deposition and air concentration measurements. The experiments were conducted with one of the stated goals being to provide measurements to atmospheric dispersion modelers; these measurements can be used to facilitate important model validation studies. For this study, meteorological observations recorded during the tests are input to the diagnostic meteorological model, ADAPT, which provides 3-D, time-varying mean wind and turbulence fields to the LODI dispersion model. LODI concentration and deposition results are compared to the measured data, and the sensitivity of the model results to changes in input conditions (such as the particle activity size distribution of the source) and model physics (such as the rise of the buoyant cloud of explosive products) is explored. The NARAC simulations predicted the experimentally measured deposition results reasonably well considering the complexity of the release. Lastly, changes to the activity size distribution of the modeled particles can improve the agreement of the model results with measurement.
Encoding model of temporal processing in human visual cortex.
Stigliani, Anthony; Jeska, Brianna; Grill-Spector, Kalanit
2017-12-19
How is temporal information processed in human visual cortex? Visual input is relayed to V1 through segregated transient and sustained channels in the retina and lateral geniculate nucleus (LGN). However, there is intense debate as to how sustained and transient temporal channels contribute to visual processing beyond V1. The prevailing view associates transient processing predominately with motion-sensitive regions and sustained processing with ventral stream regions, while the opposing view suggests that both temporal channels contribute to neural processing beyond V1. Using fMRI, we measured cortical responses to time-varying stimuli and then implemented a two temporal channel-encoding model to evaluate the contributions of each channel. Different from the general linear model of fMRI that predicts responses directly from the stimulus, the encoding approach first models neural responses to the stimulus from which fMRI responses are derived. This encoding approach not only predicts cortical responses to time-varying stimuli from milliseconds to seconds but also, reveals differential contributions of temporal channels across visual cortex. Consistent with the prevailing view, motion-sensitive regions and adjacent lateral occipitotemporal regions are dominated by transient responses. However, ventral occipitotemporal regions are driven by both sustained and transient channels, with transient responses exceeding the sustained. These findings propose a rethinking of temporal processing in the ventral stream and suggest that transient processing may contribute to rapid extraction of the content of the visual input. Importantly, our encoding approach has vast implications, because it can be applied with fMRI to decipher neural computations in millisecond resolution in any part of the brain. Copyright © 2017 the Author(s). Published by PNAS.
Modeling influence of tide stages on forecasts of the 2010 Chilean tsunami
NASA Astrophysics Data System (ADS)
Uslu, B. U.; Chamberlin, C.; Walsh, D.; Eble, M. C.
2010-12-01
The impact of the 2010 Chilean tsunami is studied using the NOAA high-resolution tsunami forecast model augmented to include modeled tide heights in addition to deep-water tsunami propagation as boundary-condition input. The Chilean tsunami was observed at the Los Angeles tide station at mean low water, at Hilo at low tide, at Pago Pago at mid tide, and at Wake Island near high tide. Because the tsunami arrived at coastal communities at a representative variety of tide stages, the 2010 Chilean tsunami provides an opportunity to study tsunami impacts on different communities at different tide levels. The current forecast models are computed with a constant tidal stage, and this study evaluates techniques for adding a varying predicted tidal component in a forecasting context. Computed wave amplitudes, currents and flooding are compared at locations around the Pacific, and the difference in tsunami impact due to tidal stage is studied. This study focuses on how tsunami impacts vary with tide level, and helps us understand how the inclusion of tidal components can improve real-time forecast accuracy.
NASA Astrophysics Data System (ADS)
Zia, Asim; Bomblies, Arne; Schroth, Andrew W.; Koliba, Christopher; Isles, Peter D. F.; Tsai, Yushiou; Mohammed, Ibrahim N.; Bucini, Gabriela; Clemins, Patrick J.; Turnbull, Scott; Rodgers, Morgan; Hamed, Ahmed; Beckage, Brian; Winter, Jonathan; Adair, Carol; Galford, Gillian L.; Rizzo, Donna; Van Houten, Judith
2016-11-01
Global climate change (GCC) is projected to bring higher-intensity precipitation and higher-variability temperature regimes to the Northeastern United States. The interactive effects of GCC with anthropogenic land use and land cover changes (LULCCs) are unknown for watershed level hydrological dynamics and nutrient fluxes to freshwater lakes. Increased nutrient fluxes can promote harmful algal blooms, also exacerbated by warmer water temperatures due to GCC. To address the complex interactions of climate, land and humans, we developed a cascading integrated assessment model to test the impacts of GCC and LULCC on the hydrological regime, water temperature, water quality, bloom duration and severity through 2040 in transnational Lake Champlain’s Missisquoi Bay. Temperature and precipitation inputs were statistically downscaled from four global circulation models (GCMs) for three Representative Concentration Pathways. An agent-based model was used to generate four LULCC scenarios. Combined climate and LULCC scenarios drove a distributed hydrological model to estimate river discharge and nutrient input to the lake. Lake nutrient dynamics were simulated with a 3D hydrodynamic-biogeochemical model. We find accelerated GCC could drastically limit land management options to maintain water quality, but the nature and severity of this impact varies dramatically by GCM and GCC scenario.
An experimental approach to identify dynamical models of transcriptional regulation in living cells
NASA Astrophysics Data System (ADS)
Fiore, G.; Menolascina, F.; di Bernardo, M.; di Bernardo, D.
2013-06-01
We describe an innovative experimental approach, and a proof of principle investigation, for the application of System Identification techniques to derive quantitative dynamical models of transcriptional regulation in living cells. Specifically, we constructed an experimental platform for System Identification based on a microfluidic device, a time-lapse microscope, and a set of automated syringes all controlled by a computer. The platform allows delivering a time-varying concentration of any molecule of interest to the cells trapped in the microfluidics device (input) and real-time monitoring of a fluorescent reporter protein (output) at a high sampling rate. We tested this platform on the GAL1 promoter in the yeast Saccharomyces cerevisiae driving expression of a green fluorescent protein (Gfp) fused to the GAL1 gene. We demonstrated that the System Identification platform enables accurate measurements of the input (sugars concentrations in the medium) and output (Gfp fluorescence intensity) signals, thus making it possible to apply System Identification techniques to obtain a quantitative dynamical model of the promoter. We explored and compared linear and nonlinear model structures in order to select the most appropriate to derive a quantitative model of the promoter dynamics. Our platform can be used to quickly obtain quantitative models of eukaryotic promoters, currently a complex and time-consuming process.
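The system-identification step, fitting a dynamical input/output model to the measured sugar concentration (input) and Gfp fluorescence (output), can be illustrated with a minimal first-order linear fit (a hypothetical sketch; the authors explored richer linear and nonlinear model structures):

```python
def fit_first_order(u, y):
    """Least-squares fit of y[t+1] = a*y[t] + b*u[t], solving the 2x2
    normal equations in closed form (no linear-algebra library needed)."""
    syy = syu = suu = sy1y = sy1u = 0.0
    for t in range(len(y) - 1):
        syy += y[t] * y[t]
        syu += y[t] * u[t]
        suu += u[t] * u[t]
        sy1y += y[t + 1] * y[t]
        sy1u += y[t + 1] * u[t]
    det = syy * suu - syu * syu
    a = (sy1y * suu - sy1u * syu) / det
    b = (sy1u * syy - sy1y * syu) / det
    return a, b
```

With noiseless data generated by the model itself, the fit recovers the true parameters exactly; with real fluorescence data, the residuals are what drive the choice between linear and nonlinear model structures.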
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brinkman, Kyle; Bordia, Rajendra; Reifsnider, Kenneth
This project fabricated model multiphase ceramic waste forms with processing-controlled microstructures followed by advanced characterization with synchrotron and electron microscopy-based 3D tomography to provide elemental and chemical state-specific information resulting in compositional phase maps of ceramic composites. Details of 3D microstructural features were incorporated into computer-based simulations using durability data for individual constituent phases as inputs in order to predict the performance of multiphase waste forms with varying microstructure and phase connectivity.
NASA Technical Reports Server (NTRS)
Cho, Jeongho; Principe, Jose C.; Erdogmus, Deniz; Motter, Mark A.
2005-01-01
The next generation of aircraft will have dynamics that vary considerably over the operating regime, and a single controller will have difficulty meeting the design specifications. In this paper, a SOM-based local linear modeling scheme for an unmanned aerial vehicle (UAV) is developed to design a set of inverse controllers. The SOM selects the operating regime depending only on the embedded output space information and avoids normalization of the input data. Each local linear model is associated with a linear controller, which is easy to design. Switching of the controllers is done synchronously with the active local linear model that tracks the different operating conditions. The proposed multiple modeling and control strategy has been successfully tested in a simulator that models the LoFLYTE UAV.
The intensity of knock in an internal combustion engine: An experimental and modeling study
NASA Astrophysics Data System (ADS)
Cowart, J. S.; Haghooie, M.; Newman, C. E.; Davis, G. C.; Pitz, W. J.; Westbrook, C. K.
1992-09-01
Experimental data have been obtained that characterize knock occurrence times and knock intensities in a spark ignition engine operating on indolene and 91 primary reference fuel, as spark timing and inlet temperature were varied. Individual, in-cylinder pressure histories measured under knocking conditions were conditioned and averaged to obtain representative pressure traces. These averaged pressure histories were used as input to a reduced and detailed chemical kinetic model. The time derivative of CO concentration and temperature were correlated with the measured knock intensity and percent cycles knocking. The goal was to evaluate the potential of using homogeneous, chemical kinetic models as predictive tools for knock intensity.
Kraus, Johanna M.; Pletcher, Leanna T.; Vonesh, James R.
2010-01-01
1. Cross-ecosystem movements of resources, including detritus, nutrients and living prey, can strongly influence food web dynamics in recipient habitats. Variation in resource inputs is thought to be driven by factors external to the recipient habitat (e.g. donor habitat productivity and boundary conditions). However, inputs of or by ‘active’ living resources may be strongly influenced by recipient habitat quality when organisms exhibit behavioural habitat selection when crossing ecosystem boundaries. 2. To examine whether behavioural responses to recipient habitat quality alter the relative inputs of ‘active’ living and ‘passive’ detrital resources to recipient food webs, we manipulated the presence of caged predatory fish and measured biomass, energy and organic content of inputs to outdoor experimental pools of adult aquatic insects, frog eggs, terrestrial plant matter and terrestrial arthropods. 3. Caged fish reduced the biomass, energy and organic matter donated to pools by tree frog eggs by ∼70%, but did not alter insect colonisation or passive allochthonous inputs of terrestrial arthropods and plant material. Terrestrial plant matter and adult aquatic insects provided the most energy and organic matter inputs to the pools (40–50%), while terrestrial arthropods provided the least (7%). Inputs of frog eggs were relatively small but varied considerably among pools and over time (3%, range = 0–20%). Absolute and proportional amounts varied by input type. 4. We conclude that aquatic predators can strongly affect the magnitude of active, but not passive, inputs and that the effect of recipient habitat quality on active inputs is variable. Furthermore, some active inputs (i.e. aquatic insect colonists) can provide similar amounts of energy and organic matter as passive inputs of terrestrial plant matter, which are well known to be important.
Because inputs differ in quality and the trophic level they subsidise, proportional changes in input type could have strong effects on recipient food webs. 5. Cross-ecosystem resource inputs have previously been characterised as donor-controlled. However, control by the recipient food web could lead to greater feedback between resource flow and consumer dynamics than has been appreciated so far.
Advanced ion thruster research
NASA Technical Reports Server (NTRS)
Wilbur, P. J.
1984-01-01
A simple model describing the discharge chamber performance of high strength, cusped magnetic field ion thrusters is developed. The model is formulated in terms of the energy cost of producing ions in the discharge chamber and the fraction of ions produced in the discharge chamber that are extracted to form the ion beam. The accuracy of the model is verified experimentally in a series of tests wherein the discharge voltage, propellant, grid transparency to neutral atoms, beam diameter and discharge chamber wall temperature are varied. The model is exercised to demonstrate what variations in performance might be expected by varying discharge chamber parameters. The results of a study of xenon and argon orificed hollow cathodes are reported. These results suggest that a hollow cathode model developed from research conducted on mercury cathodes can also be applied to xenon and argon. Primary electron mean free paths observed in argon and xenon cathodes that are larger than those found in mercury cathodes are identified as a cause of performance differences between mercury and inert gas cathodes. Data required as inputs to the inert gas cathode model are presented so it can be used as an aid in cathode design.
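The abstract's two model quantities, the energy cost of producing an ion in the discharge chamber and the fraction of those ions extracted into the beam, are often combined into a discharge energy cost per beam ion. The simple relation below (an assumption of this sketch, not stated in the abstract) treats every unextracted ion as lost to the chamber walls:

```python
def beam_ion_energy_cost(plasma_ion_cost_eV, extracted_fraction):
    """Discharge energy expended per beam ion under the simple assumption
    that ions not extracted into the beam are lost to the chamber walls."""
    if not 0.0 < extracted_fraction <= 1.0:
        raise ValueError("extracted fraction must be in (0, 1]")
    return plasma_ion_cost_eV / extracted_fraction
```

For example, halving the extracted fraction doubles the discharge energy charged against each beam ion, which is why the extracted fraction appears alongside the ion production cost in the performance model.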
A potato model intercomparison across varying climates and productivity levels.
Fleisher, David H; Condori, Bruno; Quiroz, Roberto; Alva, Ashok; Asseng, Senthold; Barreda, Carolina; Bindi, Marco; Boote, Kenneth J; Ferrise, Roberto; Franke, Angelinus C; Govindakrishnan, Panamanna M; Harahagazwe, Dieudonne; Hoogenboom, Gerrit; Naresh Kumar, Soora; Merante, Paolo; Nendel, Claas; Olesen, Jorgen E; Parker, Phillip S; Raes, Dirk; Raymundo, Rubi; Ruane, Alex C; Stockle, Claudio; Supit, Iwan; Vanuytrecht, Eline; Wolf, Joost; Woli, Prem
2017-03-01
A potato crop multimodel assessment was conducted to quantify variation among models and evaluate responses to climate change. Nine modeling groups simulated agronomic and climatic responses at low-input (Chinoli, Bolivia and Gisozi, Burundi)- and high-input (Jyndevad, Denmark and Washington, United States) management sites. Two calibration stages were explored, partial (P1), where experimental dry matter data were not provided, and full (P2). The median model ensemble response outperformed any single model in terms of replicating observed yield across all locations. Uncertainty in simulated yield decreased from 38% to 20% between P1 and P2. Model uncertainty increased with interannual variability, and predictions for all agronomic variables were significantly different from one model to another (P < 0.001). Uncertainty averaged 15% higher for low- vs. high-input sites, with larger differences observed for evapotranspiration (ET), nitrogen uptake, and water use efficiency as compared to dry matter. A minimum of five partial, or three full, calibrated models was required for an ensemble approach to keep variability below that of common field variation. Model variation was not influenced by change in carbon dioxide (C), but increased as much as 41% and 23% for yield and ET, respectively, as temperature (T) or rainfall (W) moved away from historical levels. Increases in T accounted for the highest amount of uncertainty, suggesting that methods and parameters for T sensitivity represent a considerable unknown among models. Using median model ensemble values, yield increased on average 6% per 100-ppm C, declined 4.6% per °C, and declined 2% for every 10% decrease in rainfall (for nonirrigated sites). Differences in predictions due to model representation of light utilization were significant (P < 0.01). 
These are the first reported results quantifying uncertainty for tuber/root crops and suggest modeling assessments of climate change impact on potato may be improved using an ensemble approach. © 2016 John Wiley & Sons Ltd.
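The median-ensemble idea reported above (the median across models outperforming any single model) can be sketched in a few lines (a toy illustration with made-up numbers, not the assessment's code):

```python
import math
import statistics

def rmse(pred, obs):
    """Root-mean-square error between a prediction series and observations."""
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs))

def ensemble_median(predictions):
    """Element-wise median across the model ensemble: one value per
    time step (or site), taken over all participating models."""
    return [statistics.median(vals) for vals in zip(*predictions)]
```

Intuitively, the median discards each model's most extreme errors at every point, which is why it tends to track observations at least as well as the best single model.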
A design philosophy for multi-layer neural networks with applications to robot control
NASA Technical Reports Server (NTRS)
Vadiee, Nader; Jamshidi, MO
1989-01-01
A system is proposed which receives input information from many sensors that may have diverse scaling, dimension, and data representations. The proposed system tolerates sensory information with faults. The proposed self-adaptive processing technique has great promise in integrating the techniques of artificial intelligence and neural networks in an attempt to build a more intelligent computing environment. The proposed architecture can provide a detailed decision tree based on the input information, information stored in a long-term memory, and the adapted rule-based knowledge. A mathematical model for analysis will be obtained to validate the cited hypotheses. An extensive software program will be developed to simulate a typical example of pattern recognition problem. It is shown that the proposed model displays attention, expectation, spatio-temporal, and predictory behavior which are specific to the human brain. The anticipated results of this research project are: (1) creation of a new dynamic neural network structure, and (2) applications to and comparison with conventional multi-layer neural network structures. The anticipated benefits from this research are vast. The model can be used in a neuro-computer architecture as a building block which can perform complicated, nonlinear, time-varying mapping from a multitude of input excitory classes to an output or decision environment. It can be used for coordinating different sensory inputs and past experience of a dynamic system and actuating signals. The commercial applications of this project can be the creation of a special-purpose neuro-computer hardware which can be used in spatio-temporal pattern recognitions in such areas as air defense systems, e.g., target tracking, and recognition. Potential robotics-related applications are trajectory planning, inverse dynamics computations, hierarchical control, task-oriented control, and collision avoidance.
Propagation of hypergeometric Gaussian beams in strongly nonlocal nonlinear media
NASA Astrophysics Data System (ADS)
Tang, Bin; Bian, Lirong; Zhou, Xin; Chen, Kai
2018-01-01
Optical vortex beams have attracted considerable interest due to their potential applications in image processing, optical trapping, optical communications, etc. In this work, we theoretically and numerically investigated the propagation properties of hypergeometric Gaussian (HyGG) beams in strongly nonlocal nonlinear media. Based on the Snyder-Mitchell model, analytical expressions for propagation of the HyGG beams in strongly nonlocal nonlinear media were obtained. The influence of input power and optical parameters on the evolution of the beam width and radius of curvature is illustrated. The results show that the beam width and radius of curvature of the HyGG beams remain invariant, like a soliton, when the input power is equal to the critical power. Otherwise, they vary periodically, like a breather, as a result of competition between beam diffraction and the nonlinearity of the medium.
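The soliton/breather behavior described above can be sketched with a hedged form of the accessible-soliton (Snyder-Mitchell) width evolution; the normalization of the oscillation length is an assumption of this illustration, not taken from the paper:

```python
import math

def beam_width(z, w0, power_ratio, zc=1.0):
    """Width evolution in the accessible-soliton picture: at the critical
    power (power_ratio = P/Pc = 1) the width stays constant (soliton);
    otherwise it oscillates periodically in z (breather). The oscillation
    length zc is a free normalization in this sketch."""
    c, s = math.cos(z / zc), math.sin(z / zc)
    return w0 * math.sqrt(c * c + s * s / power_ratio)
```

At `power_ratio = 1` the cosine and sine terms sum to one for every `z`, reproducing the invariant soliton width; above critical power the beam initially narrows, below it the beam initially spreads.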
NASA Astrophysics Data System (ADS)
Rossi, A.; Montefoschi, F.; Rizzo, A.; Diligenti, M.; Festucci, C.
2017-10-01
Machine learning applied to automatic audio surveillance has been attracting increasing attention in recent years. In spite of several investigations based on a large number of different approaches, little attention has been paid to the temporal evolution of the environmental input signal. In this work, we propose an exploration in this direction, comparing the temporal correlations extracted at the feature level with those learned by a representational structure. To this aim we analysed the prediction performance of a Recurrent Neural Network architecture while varying the length of the processed input sequence and the size of the time window used in the feature extraction. Results corroborated the hypothesis that sequential models work better when dealing with data characterized by temporal order. However, the optimization of the temporal dimension so far remains an open issue.
A Predictor Analysis Framework for Surface Radiation Budget Reprocessing Using Design of Experiments
NASA Astrophysics Data System (ADS)
Quigley, Patricia Allison
Earth's Radiation Budget (ERB) is an accounting of all incoming energy from the sun and outgoing energy reflected and radiated to space by earth's surface and atmosphere. The National Aeronautics and Space Administration (NASA)/Global Energy and Water Cycle Experiment (GEWEX) Surface Radiation Budget (SRB) project produces and archives long-term datasets representative of this energy exchange system on a global scale. The data comprise the longwave and shortwave radiative components of the system and are algorithmically derived from satellite and atmospheric assimilation products and acquired atmospheric data. They are stored as 3-hourly, daily, monthly/3-hourly, and monthly averages of 1° x 1° grid cells. Input parameters used by the algorithms are a key source of variability in the resulting output data sets. Sensitivity studies have been conducted to estimate the effects this variability has on the output data sets using linear techniques. This entails varying one input parameter at a time while keeping all others constant, or increasing all input parameters by equal random percentages, in effect changing input values for every cell for every three-hour period and for every day in each month. This equates to almost 11 million independent changes without ever taking into consideration the interactions or dependencies among the input parameters. A more comprehensive method is proposed here for evaluating the shortwave algorithm to identify both the input parameters and parameter interactions that most significantly affect the output data. This research utilized designed experiments that systematically and simultaneously varied all of the input parameters of the shortwave algorithm. A D-optimal design of experiments (DOE) was chosen to accommodate the 14 types of atmospheric properties computed by the algorithm and to reduce the number of trials required by a full factorial study from millions to 128.
A modified version of the algorithm was made available for testing such that global calculations of the algorithm were tuned to accept information for a single temporal and spatial point and for one month of averaged data. The points were from each of four atmospherically distinct regions to include the Amazon Rainforest, Sahara Desert, Indian Ocean and Mt. Everest. The same design was used for all of the regions. Least squares multiple regression analysis of the results of the modified algorithm identified those parameters and parameter interactions that most significantly affected the output products. It was found that Cosine solar zenith angle was the strongest influence on the output data in all four regions. The interaction of Cosine Solar Zenith Angle and Cloud Fraction had the strongest influence on the output data in the Amazon, Sahara Desert and Mt. Everest Regions, while the interaction of Cloud Fraction and Cloudy Shortwave Radiance most significantly affected output data in the Indian Ocean region. Second order response models were built using the resulting regression coefficients. A Monte Carlo simulation of each model extended the probability distribution beyond the initial design trials to quantify variability in the modeled output data.
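The final step, Monte Carlo simulation of a fitted second-order response model, can be sketched as follows (a toy illustration: the two coded factors stand for quantities such as cosine solar zenith angle and cloud fraction, and the coefficients are hypothetical, not the study's fitted values):

```python
import random

def response(x1, x2, beta):
    """Second-order response surface in two coded factors: intercept,
    two main effects, one interaction, and two quadratic terms."""
    b0, b1, b2, b12, b11, b22 = beta
    return (b0 + b1 * x1 + b2 * x2 + b12 * x1 * x2
            + b11 * x1 * x1 + b22 * x2 * x2)

def monte_carlo(beta, n=10000, seed=0):
    """Propagate uniform variability in the coded factors through the
    fitted response model; return the sample mean and variance."""
    rng = random.Random(seed)
    samples = [response(rng.uniform(-1.0, 1.0), rng.uniform(-1.0, 1.0), beta)
               for _ in range(n)]
    mean = sum(samples) / n
    var = sum((s - mean) ** 2 for s in samples) / n
    return mean, var
```

Sampling the inexpensive response surface rather than rerunning the full shortwave algorithm is what lets the variability estimate extend far beyond the 128 design trials.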
A Cascade Approach to Uncertainty Estimation for the Hydrological Simulation of Droughts
NASA Astrophysics Data System (ADS)
Smith, Katie; Tanguy, Maliko; Parry, Simon; Prudhomme, Christel
2016-04-01
Uncertainty poses a significant challenge in environmental research, and the characterisation and quantification of uncertainty has become a research priority over the past decade. Studies of extreme events are particularly affected by issues of uncertainty. This study focusses on the sources of uncertainty in the modelling of streamflow droughts in the United Kingdom. Droughts are a poorly understood natural hazard with no universally accepted definition. Meteorological, hydrological and agricultural droughts have different meanings and vary both spatially and temporally, yet each is inextricably linked to the others. The work presented here is part of two extensive interdisciplinary projects investigating drought reconstruction and drought forecasting capabilities in the UK. Lumped catchment models are applied to simulate streamflow drought, and uncertainties from five different sources are investigated: climate input data, potential evapotranspiration (PET) method, hydrological model, within-model structure, and model parameterisation. Latin Hypercube sampling is applied to develop large parameter ensembles for each model structure, which are run using parallel computing on a high-performance computer cluster. Parameterisations are assessed using multi-objective evaluation criteria that include both general and drought performance metrics. The effect of different climate input data and PET methods on model output is then considered using the accepted model parameterisations. The uncertainty from each of the sources creates a cascade, and when presented as such, the relative importance of each aspect of uncertainty can be determined.
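Latin Hypercube sampling, used above to build the parameter ensembles, can be sketched in a few lines of stdlib Python (a minimal illustration on the unit cube; the study's parameter ranges and any correlation handling are not reproduced here):

```python
import random

def latin_hypercube(n_samples, n_params, seed=0):
    """Latin Hypercube sample on the unit cube: each parameter's [0, 1)
    range is cut into n_samples equal strata, every stratum is used
    exactly once, and strata are shuffled independently per parameter."""
    rng = random.Random(seed)
    columns = []
    for _ in range(n_params):
        strata = list(range(n_samples))
        rng.shuffle(strata)
        columns.append([(s + rng.random()) / n_samples for s in strata])
    return [list(row) for row in zip(*columns)]
```

Unlike plain random sampling, every one-dimensional stratum is guaranteed to be covered, so even modest ensembles spread evenly across each parameter's range before being scaled to the physical bounds.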
Interacting with an artificial partner: modeling the role of emotional aspects.
Cattinelli, Isabella; Goldwurm, Massimiliano; Borghese, N Alberto
2008-12-01
In this paper we introduce a simple model based on probabilistic finite state automata to describe an emotional interaction between a robot and a human user, or between simulated agents. Based on the agent's personality, attitude, and nature, and on the emotional inputs it receives, the model will determine the next emotional state displayed by the agent itself. The probabilistic and time-varying nature of the model yields rich and dynamic interactions, and an autonomous adaptation to the interlocutor. In addition, a reinforcement learning technique is applied to have one agent drive its partner's behavior toward desired states. The model may also be used as a tool for behavior analysis, by extracting high probability patterns of interaction and by resorting to the ergodic properties of Markov chains.
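The probabilistic-automaton idea can be sketched with a single row-stochastic transition matrix. In the paper's model the transition probabilities would additionally depend on the agent's personality, attitude, nature, and the emotional input received, so the states and numbers here are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative emotional states and a row-stochastic transition matrix for one
# fixed emotional input; each row gives next-state probabilities.
states = ["neutral", "happy", "sad"]
P = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.7, 0.1],
              [0.3, 0.2, 0.5]])

def step(state_idx: int) -> int:
    """Sample the next emotional state from the current row of P."""
    return rng.choice(len(states), p=P[state_idx])

# Simulate a long interaction and estimate the stationary (ergodic)
# distribution empirically, as used for behavior analysis in the paper.
s, visits = 0, np.zeros(3)
for _ in range(20_000):
    s = step(s)
    visits[s] += 1
pi_hat = visits / visits.sum()
```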
Evaluation of INL Supplied MOOSE/OSPREY Model: Modeling Water Adsorption on Type 3A Molecular Sieve
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pompilio, L. M.; DePaoli, D. W.; Spencer, B. B.
The purpose of this study was to evaluate Idaho National Lab's Multiphysics Object-Oriented Simulation Environment (MOOSE) software in modeling the adsorption of water onto type 3A molecular sieve (3AMS). MOOSE can be thought of as a computing framework within which applications modeling specific coupled phenomena can be developed and run. The application titled Off-gas SeParation and REcoverY (OSPREY) has been developed to model gas sorption in packed columns. The sorbate breakthrough curve calculated by MOOSE/OSPREY was compared to results previously obtained in the deep bed hydration tests conducted at Oak Ridge National Laboratory. The coding framework permits selection of various options, when they exist, for modeling a process. For example, the OSPREY module includes options to model the adsorption equilibrium with a Langmuir model or a generalized statistical thermodynamic adsorption (GSTA) model. The vapor-solid equilibria and the operating conditions of the process (e.g., gas phase concentration) are required to calculate the concentration gradient driving the mass transfer between phases. Both the Langmuir and GSTA models were tested in this evaluation. Input variables were either known from experimental conditions, available from the literature (e.g., density), or estimated from the literature (e.g., thermal conductivity of sorbent). Variables were considered independent of time; i.e., rather than having a mass transfer coefficient that varied with time or position in the bed, the parameter was set to remain constant. The calculated results did not coincide with data from laboratory tests. The model accurately estimated the number of bed volumes processed for the given operating parameters, but breakthrough times were not accurately predicted, varying 50% or more from the data. The shape of the breakthrough curves also differed from the experimental data, indicating a much wider sorption band.
Model modifications are needed to improve its utility and predictive capability. Recommended improvements include: greater flexibility for input of mass transfer parameters, time-variable gas inlet concentration, direct output of loading and temperature profiles along the bed, and capability to conduct simulations of beds in series.
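As a sketch of the Langmuir equilibrium option mentioned above (the GSTA model is more involved), with illustrative rather than fitted 3AMS parameters:

```python
def langmuir_loading(p, q_max, b):
    """Equilibrium loading q = q_max * b * p / (1 + b * p) (Langmuir isotherm).

    q_max is the saturation loading and b the affinity constant; the values
    used below are illustrative, not fitted type-3A molecular sieve data.
    """
    return q_max * b * p / (1.0 + b * p)

# Equilibrium loading at a given water vapor partial pressure; in a
# linear-driving-force rate model this feeds dq/dt = k * (q_eq - q), with the
# mass transfer coefficient k held constant in time, as in the evaluation.
q_eq = langmuir_loading(p=0.02, q_max=0.21, b=500.0)
```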
NASA Astrophysics Data System (ADS)
Müller Schmied, Hannes; Döll, Petra
2017-04-01
The estimation of the world's water resources has a long tradition, and numerous methods for quantification exist. The resulting numbers vary significantly, leaving room for improvement. For several decades, global hydrological models (GHMs) have been used for large-scale water budget assessments. GHMs are designed to represent macro-scale hydrological processes, and many of these models include human water management, e.g. irrigation or reservoir operation, making them currently the first choice for global-scale assessments of the terrestrial water balance within the Anthropocene. The Water - Global Assessment and Prognosis (WaterGAP) model framework comprises both the natural and human water dimensions and has been in development and application since the 1990s. In recent years, efforts were made to assess the sensitivity of water balance components to alternative climate forcing input data and, e.g., how this sensitivity is affected by WaterGAP's calibration scheme. This presentation shows the current best estimate of terrestrial water balance components as simulated with WaterGAP by 1) assessing global and continental water balance components for the climate period 1971-2000 and the IPCC reference period 1986-2005 for the most current WaterGAP version using homogenized climate forcing data, 2) investigating variations of water balance components for a number of state-of-the-art climate forcing data sets, and 3) discussing the benefit of the calibration approach for a better observation-constrained global water budget. For the most current WaterGAP version 2.2b and a homogenized combination of the two WATCH Forcing Datasets, global-scale (excluding Antarctica and Greenland) river discharge into oceans and inland sinks (Q) is assessed to be 40 000 km3 yr-1 for 1971-2000 and 39 200 km3 yr-1 for 1986-2005. Actual evapotranspiration (AET) is similar for the two periods at around 70 600 (70 700) km3 yr-1, as is water consumption at 1000 (1100) km3 yr-1.
The main reason for the differing Q is varying precipitation (P, 111 600 km3 yr-1 vs. 110 900 km3 yr-1). The sensitivity of water balance components to alternative climate forcing data is high. Applying 5 state-of-the-art climate forcing data sets, long-term average P differs globally by 8000 km3 yr-1, mainly due to different handling of precipitation undercatch correction (or neglecting it). AET differs by 5500 km3 yr-1, whereas Q varies by 3000 km3 yr-1. The sensitivity of human water consumption to alternative climate input data is only about 5%. WaterGAP's calibration approach forces simulated long-term river discharge to be approximately equal to observed values at 1319 gauging stations during the time period selected for calibration. This scheme greatly reduces the impact of uncertain climate input on simulated Q in these upstream drainage basins (as well as downstream). In calibration areas, the Q variation among the climate input data sets is much lower (1.6%) than in non-calibrated areas (18.5%). However, variation of Q at the grid-cell level is still high (an average of 37% for Q in grid cells in calibration areas vs. 74% outside). Due to the closed water balance, variation of AET is higher in calibrated areas than in non-calibrated areas. The main challenges in assessing the world's water resources with GHMs like WaterGAP are 1) the need for consistent long-term climate forcing input data sets, especially a suitable handling of P undercatch, 2) the accessibility of in-situ river discharge data, or alternative calibration data, for currently non-calibrated areas, and 3) improved simulation in semi-arid and arid river basins. As an outlook, a multi-model, multi-forcing study of global water balance components within the frame of the Inter-Sectoral Impact Model Intercomparison Project is proposed.
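The reported global numbers can be checked for long-term water balance closure (Q ≈ P − AET − consumption, with storage change assumed negligible over three decades):

```python
# Long-term terrestrial water balance (km^3/yr), 1971-2000 values from the
# abstract: precipitation P, actual evapotranspiration AET, and human water
# consumption. Residual should match the reported river discharge Q.
P, AET, consumption = 111_600, 70_600, 1_000
Q_balance = P - AET - consumption
print(Q_balance)  # 40000, matching the reported discharge for 1971-2000
```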
Code of Federal Regulations, 2014 CFR
2014-10-01
... must be considered as essential variables: Number of passes; thickness of plate; heat input per pass... not be used. The number of passes, thickness of plate, and heat input per pass may not vary more than... machine heat processes, provided such surfaces are remelted in the subsequent welding process. Where there...
Code of Federal Regulations, 2013 CFR
2013-10-01
... must be considered as essential variables: Number of passes; thickness of plate; heat input per pass... not be used. The number of passes, thickness of plate, and heat input per pass may not vary more than... machine heat processes, provided such surfaces are remelted in the subsequent welding process. Where there...
Code of Federal Regulations, 2012 CFR
2012-10-01
... must be considered as essential variables: Number of passes; thickness of plate; heat input per pass... not be used. The number of passes, thickness of plate, and heat input per pass may not vary more than... machine heat processes, provided such surfaces are remelted in the subsequent welding process. Where there...
Code of Federal Regulations, 2011 CFR
2011-10-01
... must be considered as essential variables: Number of passes; thickness of plate; heat input per pass... not be used. The number of passes, thickness of plate, and heat input per pass may not vary more than... machine heat processes, provided such surfaces are remelted in the subsequent welding process. Where there...
Vugrin, Eric D.; Rostron, Brian L.; Verzi, Stephen J.; ...
2015-03-27
Background Recent declines in US cigarette smoking prevalence have coincided with increases in use of other tobacco products. Multiple product tobacco models can help assess the population health impacts associated with use of a wide range of tobacco products. Methods and Findings We present a multi-state, dynamical systems population structure model that can be used to assess the effects of tobacco product use behaviors on population health. The model incorporates transition behaviors, such as initiation, cessation, switching, and dual use, related to the use of multiple products. The model tracks product use prevalence and mortality attributable to tobacco use for the overall population and by sex and age group. The model can also be used to estimate differences in these outcomes between scenarios by varying input parameter values. We demonstrate model capabilities by projecting future cigarette smoking prevalence and smoking-attributable mortality and then simulating the effects of introduction of a hypothetical new lower-risk tobacco product under a variety of assumptions about product use. Sensitivity analyses were conducted to examine the range of population impacts that could occur due to differences in input values for product use and risk. We demonstrate that potential benefits from cigarette smokers switching to the lower-risk product can be offset over time through increased initiation of this product. Model results show that population health benefits are particularly sensitive to product risks and initiation, switching, and dual use behaviors. Conclusion Our model incorporates the variety of tobacco use behaviors and risks that occur with multiple products. As such, it can evaluate the population health impacts associated with the introduction of new tobacco products or policies that may result in product switching or dual use.
Further model development will include refinement of data inputs for non-cigarette tobacco products and inclusion of health outcomes such as morbidity and disability.
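A multi-state model of the kind described can be sketched as a discrete-time projection. The states follow the abstract's behaviors (initiation, switching, dual use, cessation, relapse), but the transition probabilities and initial prevalence shares below are illustrative placeholders, not the paper's calibrated inputs.

```python
import numpy as np

# States: never-user, cigarette-only, new-product-only, dual user, former user.
states = ["never", "cig", "new", "dual", "former"]

# Illustrative annual transition probabilities (each row sums to 1).
T = np.array([
    [0.96, 0.02, 0.01, 0.00, 0.01],   # never  -> initiation paths
    [0.00, 0.85, 0.05, 0.05, 0.05],   # cig    -> switching / dual / cessation
    [0.00, 0.02, 0.90, 0.03, 0.05],   # new    -> switching / dual / cessation
    [0.00, 0.10, 0.10, 0.75, 0.05],   # dual   -> single-product / cessation
    [0.00, 0.03, 0.02, 0.00, 0.95],   # former -> relapse
])

pop = np.array([0.60, 0.25, 0.02, 0.03, 0.10])  # initial prevalence shares
for _ in range(10):                              # project 10 years forward
    pop = pop @ T
```

Scenario comparisons, as in the paper, amount to re-running the projection with altered entries of `T` (e.g. higher initiation into the new product) and differencing the resulting prevalence and attributable-mortality trajectories.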
Vugrin, Eric D.; Rostron, Brian L.; Verzi, Stephen J.; Brodsky, Nancy S.; Brown, Theresa J.; Choiniere, Conrad J.; Coleman, Blair N.; Paredes, Antonio; Apelberg, Benjamin J.
2015-01-01
Background Recent declines in US cigarette smoking prevalence have coincided with increases in use of other tobacco products. Multiple product tobacco models can help assess the population health impacts associated with use of a wide range of tobacco products. Methods and Findings We present a multi-state, dynamical systems population structure model that can be used to assess the effects of tobacco product use behaviors on population health. The model incorporates transition behaviors, such as initiation, cessation, switching, and dual use, related to the use of multiple products. The model tracks product use prevalence and mortality attributable to tobacco use for the overall population and by sex and age group. The model can also be used to estimate differences in these outcomes between scenarios by varying input parameter values. We demonstrate model capabilities by projecting future cigarette smoking prevalence and smoking-attributable mortality and then simulating the effects of introduction of a hypothetical new lower-risk tobacco product under a variety of assumptions about product use. Sensitivity analyses were conducted to examine the range of population impacts that could occur due to differences in input values for product use and risk. We demonstrate that potential benefits from cigarette smokers switching to the lower-risk product can be offset over time through increased initiation of this product. Model results show that population health benefits are particularly sensitive to product risks and initiation, switching, and dual use behaviors. Conclusion Our model incorporates the variety of tobacco use behaviors and risks that occur with multiple products. As such, it can evaluate the population health impacts associated with the introduction of new tobacco products or policies that may result in product switching or dual use. 
Further model development will include refinement of data inputs for non-cigarette tobacco products and inclusion of health outcomes such as morbidity and disability. PMID:25815840
Nonlinear dynamic characteristics of dielectric elastomer membranes
NASA Astrophysics Data System (ADS)
Fox, Jason W.; Goulbourne, Nakhiah C.
2008-03-01
The dynamic response of dielectric elastomer membranes subject to time-varying voltage inputs for various initial inflation states is investigated. These results provide new insight into the differences observed between quasi-static and dynamic actuation and present a new challenge to modeling efforts. Dielectric elastomer membranes are a potentially enabling technology for soft robotics and biomedical devices such as implants and surgical tools. In this work, two key system parameters are varied: the chamber volume and the voltage signal offset. The chamber volume experiments reveal that increasing the size of the chamber onto which the membrane is clamped increases the deformations and causes the membrane's resonance peaks to shift and change in number. For prestretched dielectric elastomer membranes at the smallest chamber volume, the maximum actuation displacement is 81 microns, while at the largest chamber volume it is 1431 microns, corresponding to a 1767% increase in maximum pole displacement. In addition, actuating the membrane at the resonance frequencies provides a several-hundred-percent increase in strain compared to the quasi-static strain. Adding a voltage offset to the time-varying input signal causes the membrane to oscillate at two distinct frequencies rather than one and also presents a unique opportunity to increase the output displacement without electrically overloading the membrane. Experiments capturing the entire motion of the membrane reveal that classical membrane mode shapes are electrically generated, although all points of the membrane do not pass through equilibrium at the same moments in time.
NASA Astrophysics Data System (ADS)
Balasubramanian, S.; Koloutsou-Vakakis, S.; Rood, M. J.
2014-12-01
Improving modeling predictions of atmospheric particulate matter and deposition of reactive nitrogen requires representative emission inventories of precursor species, such as ammonia (NH3). Anthropogenic NH3 is primarily emitted to the atmosphere from agricultural sources (80-90%), with dominant contributions (56%) from chemical fertilizer usage (CFU) in regions like the Midwest USA. Local crop management practices vary spatially and temporally, which influences regional air quality. To model the impact of CFU, NH3 emission inputs to chemical transport models are obtained from the National Emission Inventory (NEI). NH3 emissions from CFU are typically estimated by combining annual fertilizer sales data with emission factors. The Sparse Matrix Operator Kernel Emissions (SMOKE) model is used to disaggregate annual emissions to the hourly scale using temporal factors. These factors are estimated by apportioning emissions within each crop season in proportion to the nitrogen applied, time-averaged to the hourly scale. Such an approach does not reflect the influence of CFU for different crops or of local weather and soil conditions. This study provides an alternate approach for estimating temporal factors for NH3 emissions. The DeNitrification DeComposition (DNDC) model was used to estimate daily variations in NH3 emissions from CFU at 14 Central Illinois locations for 2002-2011. Weather, crop, and soil data were provided as inputs. A method was developed to estimate site-level CFU by combining planting and harvesting dates, nitrogen management, and fertilizer sales data. DNDC results indicated that annual NH3 emissions were within ±15% of SMOKE estimates. Daily modeled emissions across 10 years followed similar distributions but varied in magnitude within ±20%. Individual emission peaks on days after CFU were 2.5-8 times greater than existing estimates from SMOKE.
By identifying the episodic nature of NH3 emissions from CFU, this study is expected to provide improvements in predicting atmospheric particulate matter concentrations and deposition of reactive nitrogen.
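The temporal-factor idea, apportioning an annual total across days in proportion to daily weights, can be sketched as below; the pulse shape, application date, and magnitudes are hypothetical illustrations, not DNDC output.

```python
import numpy as np

# Temporal disaggregation sketch: an annual emission total is split across
# days in proportion to daily weights, mirroring how SMOKE-style temporal
# factors apportion annual inventories. The episodic post-application pulse
# here is a hypothetical shape, not DNDC model output.
annual_nh3 = 1000.0                      # illustrative annual emission (Mg)
days = np.arange(365)
application_day = 110                    # hypothetical spring fertilizer date
weights = np.exp(-np.clip(days - application_day, 0, None) / 5.0)
weights[days < application_day] = 0.01   # small background before application
factors = weights / weights.sum()        # temporal factors sum to 1
daily_nh3 = annual_nh3 * factors         # episodic daily emission series
```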
NASA Astrophysics Data System (ADS)
Kim, S.; Arii, M.; Jackson, T. J.
2017-12-01
L-band airborne synthetic aperture radar (SAR) observations at 7-m spatial resolution were made over California shrublands to better understand the effects of soil and vegetation parameters on the backscattering coefficient (σ0). Temporal changes in σ0 of up to 3 dB were highly correlated with surface soil moisture but not with vegetation, even though vegetation water content (VWC) varied seasonally by a factor of two. HH was always greater than VV, suggesting the importance of double-bounce scattering by the woody parts. However, the geometric and dielectric properties of the woody parts did not vary significantly over time. Instead, the changes in VWC occurred primarily in thin leaves that may not meaningfully influence absorption and scattering. A physically based model for single scattering by discrete plant elements successfully simulated the magnitude of the temporal variations in HH, VV, and HH/VV with a difference of less than 0.9 dB. To simulate the observations, the plant's VWC input to the model was formulated as a function of the plant's dielectric property (water fraction) while the plant geometry remained static in time. In comparison, when the VWC input was characterized by the geometry of a growing plant, the model performed poorly in describing the observed patterns in the σ0 changes. The modeling results explain the observed high correlation of soil moisture with σ0: the dominant mechanisms for HH and VV are double-bounce scattering by trunks and soil surface scattering, respectively. The time-series inversion of the physical model was able to retrieve soil moisture with a mean difference of -0.037 m3/m3, a standard deviation of 0.025 m3/m3, and a correlation of 0.89. Together with previous results over croplands, where the SAR data offered 0.05 m3/m3 retrieval accuracy, we will demonstrate the efficacy of model-based time-series soil moisture retrieval at field scales.
NASA Astrophysics Data System (ADS)
Rodeghiero, Mirco; Martinez, Cristina; Gianelle, Damiano; Camin, Federica; Zanotelli, Damiano; Magnani, Federico
2013-04-01
Terrestrial plant carbon partitioning to above- and below-ground compartments can be better understood by integrating studies on biomass allocation and estimates of root carbon input based on the use of stable isotopes. These experiments are essential to model ecosystem metabolism and predict the effects of global change on carbon cycling. Using in-growth soil cores in conjunction with the 13C natural abundance method, we quantified net plant-derived root carbon input into the soil, which has been pointed out as the main unaccounted NPP (net primary productivity) component. Four land use types located in the Trentino Region (northern Italy) and representing a range of aboveground net primary productivity (ANPP) values (155-868 gC m-2 y-1) were investigated: conifer forest, apple orchard, vineyard, and grassland. Cores filled with soil of a known C4 isotopic signature were inserted at 18 sampling points per site and left in place for twelve months. After extraction, cores were analysed for %C and δ13C, which were used to calculate the proportion of new plant-derived root C input by applying a mass balance equation. The GPP (gross primary productivity) of each ecosystem was determined by the eddy covariance technique, whereas ANPP was quantified with a repeated inventory approach. We found a strong and significant relationship (R2 = 0.93; p = 0.03) between ANPP and the fraction of GPP transferred to the soil as root C input across the investigated sites. This percentage varied between 10 and 25% of GPP, with the grassland having the lowest value and the apple orchard the highest. Mechanistic ecosystem carbon balance models could benefit from this general relationship, since ANPP is routinely and easily measured at many sites. This result also suggests that by quantifying site-specific ANPP, root carbon input can be reliably estimated, as opposed to using arbitrary root/shoot ratios which may under- or over-estimate C partitioning.
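The two-source mass balance used with the 13C natural abundance method can be sketched as follows; the δ13C end-member values are typical C3/C4 figures, not the study's measurements.

```python
def new_carbon_fraction(delta_sample, delta_c4_soil, delta_c3_input):
    """Two-source 13C mass balance.

    Returns the fraction of core carbon derived from new plant (C3) root
    input into soil with a C4 isotopic signature:
        f = (d_sample - d_c4) / (d_c3 - d_c4)
    Delta values are in permil (vs. VPDB); the numbers used below are typical
    C3/C4 end-members, not the study's measured data.
    """
    return (delta_sample - delta_c4_soil) / (delta_c3_input - delta_c4_soil)

f = new_carbon_fraction(delta_sample=-16.0, delta_c4_soil=-12.0,
                        delta_c3_input=-28.0)
```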
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sastry, S. S.; Desoer, C. A.
1980-01-01
Fixed point methods from nonlinear analysis are used to establish conditions under which the uniform complete controllability of linear time-varying systems is preserved under nonlinear perturbations in the state dynamics, and the zero-input uniform complete observability of linear time-varying systems is preserved under nonlinear perturbations in the state dynamics and output readout map. Algorithms for computing the specific input to steer the perturbed systems from a given initial state to a given final state are also presented. As an application, a very specific emergency control of an interconnected power system is formulated as a steering problem, and it is shown that this emergency control is indeed possible in finite time.
Use of MODIS Data in Dynamic SPARROW Analysis of Watershed Loading Reductions
NASA Astrophysics Data System (ADS)
Smith, R. A.; Schwarz, G. E.; Brakebill, J. W.; Hoos, A.; Moore, R. B.; Nolin, A. W.; Shih, J. S.; Journey, C. A.; Macauley, M.
2014-12-01
Predicting the temporal response of stream water quality to a proposed reduction in contaminant loading is a major watershed management problem due to temporary storage of contaminants in groundwater, vegetation, snowpack, etc. We describe the response of dynamically calibrated SPARROW models of total nitrogen (TN) flux to hypothetical reductions in reactive nitrogen inputs in three sub-regional watersheds: the Potomac River Basin (Chesapeake Bay drainage), the Long Island Sound drainage, and the South Carolina coastal drainage. The models are based on seasonal water quality and watershed input data from 170 monitoring stations for the period 2002 to 2008. The spatial reference frames of the three models are stream networks containing an average of 38,000 catchments, and the time step is seasonal. We use MODIS Enhanced Vegetation Index (EVI) and snow/ice cover data to parameterize seasonal uptake and release of nitrogen from vegetation and snowpack. The model accounts for storage of total nitrogen inputs from fertilized cropland, pasture, urban land, and atmospheric deposition. Model calibration is by non-linear regression. Model source terms based on previous-season export allow for recursive simulation of stream flux and can be used to estimate the approximate residence times of TN in the watersheds. Catchment residence times in the Long Island Sound Basin are shorter (typically < 1 year) than in the Potomac or South Carolina Basins (typically > 1 year), in part because a significant fraction of nitrogen flux derives from snowmelt and occurs within one season of snowfall. We use the calibrated models to examine the response of TN flux to hypothetical step reductions in source inputs at the beginning of the 2002-2008 period and the influence of observed fluctuations in precipitation, temperature, vegetation growth, and snowmelt over the period.
Following non-point source reductions of up to 100%, stream flux was found to continue to vary greatly for several years as a function of seasonal conditions, with high values in both winter (January, February, March) and spring due to high precipitation and snow melt, but much lower summer yields due to low precipitation and nitrogen retention in growing vegetation (EVI). Temporal variations in stream flux are large enough to potentially mask water quality improvements for several years.
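The recursive source term described above, where a season's flux includes a carry-over fraction of the previous season's export, can be sketched with illustrative, uncalibrated coefficients.

```python
# Recursive seasonal-storage sketch: this season's flux is new delivered load
# plus a carry-over fraction of last season's export, mimicking a dynamic
# SPARROW-style previous-season source term. Coefficients are illustrative,
# not calibrated model values.
def simulate_flux(inputs, delivery=0.6, carryover=0.3):
    flux, prev = [], 0.0
    for load in inputs:
        out = delivery * load + carryover * prev
        flux.append(out)
        prev = out
    return flux

seasonal_inputs = [120.0, 80.0, 40.0, 100.0]   # e.g. TN load per season
flux = simulate_flux(seasonal_inputs)
```

With a carry-over fraction c, stored mass drains geometrically, giving a mean residence time on the order of 1/(1 − c) seasons, which is how a recursive term of this shape yields residence-time estimates.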
Analysis, design, and control of a transcutaneous power regulator for artificial hearts.
Qianhong Chen; Siu Chung Wong; Tse, C K; Xinbo Ruan
2009-02-01
Based on a generic transcutaneous transformer model, a remote power supply using a resonant topology for use in artificial hearts is analyzed and designed for easy controllability and high efficiency. The primary and secondary windings of the transcutaneous transformer are positioned outside and inside the human body, respectively. In such a transformer, the alignment and gap may change with external positioning. As a result, the coupling coefficient of the transcutaneous transformer varies, and so do the two large leakage inductances and the mutual inductance. Resonant-tank circuits with a varying resonant frequency are formed from the transformer inductors and external capacitors. For a given range of coupling coefficients, an operating frequency corresponding to a particular coupling coefficient can be found for which the voltage transfer function is insensitive to load. Prior works have used frequency modulation to regulate the output voltage under varying load and transformer coupling. The use of frequency modulation may require a wide control frequency range, which may extend well above the load-insensitive frequency. In this paper, the input-to-output voltage transfer function is studied, and a control method is proposed to lock the switching frequency at just above the load-insensitive frequency for optimized efficiency at heavy loads. Specifically, above-resonant operation of the resonant circuits is maintained under a varying coupling coefficient. Using a digital phase-locked loop (PLL), zero-voltage switching is achieved in a full-bridge converter, which is also programmed to provide output voltage regulation via pulsewidth modulation (PWM). A prototype transcutaneous power regulator is built and found to perform excellently, with high efficiency and tight regulation under variations of the alignment or gap of the transcutaneous transformer, load, and input voltage.
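The load-insensitive operating point can be illustrated with a simple series-resonant divider: at the resonant frequency the inductive and capacitive impedances cancel, so the input-to-output gain is unity regardless of load. The component values are illustrative, and this sketch neglects the transformer's mutual-inductance branch.

```python
import numpy as np

# Series-resonant divider: V_out/V_in = R / (R + jwL + 1/(jwC)).
# At w0 = 1/sqrt(L*C) the reactive terms cancel, so the gain is ~1 for any
# load R -- the "load insensitive" operating point exploited by the
# controller. Component values are illustrative only.
L, C = 40e-6, 100e-9
w0 = 1.0 / np.sqrt(L * C)

def gain(w, R):
    z = R + 1j * w * L + 1.0 / (1j * w * C)
    return abs(R / z)

g_light = gain(w0, R=100.0)   # light load
g_heavy = gain(w0, R=5.0)     # heavy load
```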
Effective techniques for the identification and accommodation of disturbances
NASA Technical Reports Server (NTRS)
Johnson, C. D.
1989-01-01
The successful control of dynamic systems such as space stations or launch vehicles requires a controller design methodology that acknowledges and addresses the disruptive effects caused by external and internal disturbances that inevitably act on such systems. These disturbances, technically defined as uncontrollable inputs, typically vary with time in an uncertain manner and usually cannot be directly measured in real time. A relatively new non-statistical technique for modeling, and (on-line) identification of, those complex uncertain disturbances that are not as erratic and capricious as random noise is described. This technique applies to multi-input cases and to many of the practical disturbances associated with the control of space stations or launch vehicles. Then, a collection of smart controller design techniques is described that allows controlled dynamic systems, with possibly multi-input controls, to accommodate (cope with) such disturbances with extraordinary effectiveness. These new smart controllers are designed by non-statistical techniques and typically turn out to be unconventional forms of dynamic linear controllers (compensators) with constant coefficients. The simplicity and reliability of linear, constant-coefficient controllers is well known in the aerospace field.
Nonlinear Transfer of Signal and Noise Correlations in Cortical Networks
Lyamzin, Dmitry R.; Barnes, Samuel J.; Donato, Roberta; Garcia-Lazaro, Jose A.; Keck, Tara
2015-01-01
Signal and noise correlations, a prominent feature of cortical activity, reflect the structure and function of networks during sensory processing. However, in addition to reflecting network properties, correlations are also shaped by intrinsic neuronal mechanisms. Here we show that spike threshold transforms correlations by creating nonlinear interactions between signal and noise inputs; even when input noise correlation is constant, spiking noise correlation varies with both the strength and correlation of signal inputs. We characterize these effects systematically in vitro in mice and demonstrate their impact on sensory processing in vivo in gerbils. We also find that the effects of nonlinear correlation transfer on cortical responses are stronger in the synchronized state than in the desynchronized state, and show that they can be reproduced and understood in a model with a simple threshold nonlinearity. Since these effects arise from an intrinsic neuronal property, they are likely to be present across sensory systems and, thus, our results are a critical step toward a general understanding of how correlated spiking relates to the structure and function of cortical networks. PMID:26019325
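The threshold effect described above can be reproduced with a simple dichotomized-Gaussian sketch (parameters illustrative, not fitted to the recordings): the input noise correlation is held fixed while the signal strength moves the inputs relative to a spike threshold.

```python
import numpy as np
rng = np.random.default_rng(0)

def spiking_noise_corr(mu, rho_noise=0.3, theta=1.0, n=200_000):
    """Binary 'spike' correlation for two units receiving a common signal
    drive mu plus noise with fixed input noise correlation rho_noise."""
    shared = rng.normal(size=n)
    n1 = np.sqrt(rho_noise) * shared + np.sqrt(1 - rho_noise) * rng.normal(size=n)
    n2 = np.sqrt(rho_noise) * shared + np.sqrt(1 - rho_noise) * rng.normal(size=n)
    s1 = (mu + n1 > theta).astype(float)   # spike-threshold nonlinearity
    s2 = (mu + n2 > theta).astype(float)
    return np.corrcoef(s1, s2)[0, 1]

r_weak = spiking_noise_corr(mu=0.0)    # signal drive below threshold
r_strong = spiking_noise_corr(mu=1.0)  # signal drive at threshold
```

With these numbers the spiking noise correlation rises as the drive approaches threshold, even though the input noise correlation never changes, which is the qualitative effect the abstract reports.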
What determines transitions between energy- and moisture-limited evaporative regimes?
NASA Astrophysics Data System (ADS)
Haghighi, E.; Gianotti, D.; Akbar, R.; Salvucci, G.; Entekhabi, D.
2017-12-01
The relationship between evaporative fraction (EF) and soil moisture (SM) has traditionally been used in atmospheric and land-surface modeling communities to determine the strength of land-atmosphere coupling in the context of the dominant evaporative regime (energy- or moisture-limited). However, recent field observations reveal that the EF-SM relationship is not unique and can vary substantially with surface and/or meteorological conditions. This implies that conventional EF-SM relationships (exclusive of surface and meteorological conditions) are embedded in more complex dependencies and that EF is in fact a multi-dimensional function. To fill the fundamental knowledge gaps on the important role of varying surface and meteorological conditions not accounted for by the traditional evaporative regime conceptualization, we propose a generalized EF framework using a mechanistic pore-scale model for evaporation and energy partitioning over drying soil surfaces. Nonlinear interactions among the components of the surface energy balance are reflected in a critical SM that marks the onset of transition between energy- and moisture-limited evaporative regimes. The new generalized EF framework enables physically based estimates of the critical SM, and provides new insights into the origin of land surface EF partitioning linked to meteorological input data and the evolution of land surface temperature during surface drying that affect the relative efficiency of surface energy balance components. Our results offer new opportunities to advance predictive capabilities quantifying land-atmosphere coupling for a wide range of present and projected meteorological input data.
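For contrast with the generalized framework, the traditional EF-SM conceptualization the abstract builds on reduces to a few lines: EF rises linearly with soil moisture up to a critical value (moisture-limited regime) and is constant above it (energy-limited regime). Parameter values here are illustrative only.

```python
def evaporative_fraction(sm, sm_crit=0.25, ef_max=0.75):
    """Textbook piecewise-linear EF-SM curve (not the paper's generalized
    framework): moisture-limited below sm_crit, energy-limited above it."""
    return ef_max * min(sm / sm_crit, 1.0)
```

The abstract's contribution is precisely that sm_crit is not a fixed constant but emerges from surface and meteorological conditions.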
Biosurveillance applying scan statistics with multiple, disparate data sources.
Burkom, Howard S
2003-06-01
Researchers working on the Department of Defense Global Emerging Infections System (DoD-GEIS) pilot system, the Electronic Surveillance System for the Early Notification of Community-Based Epidemics (ESSENCE), have applied scan statistics for early outbreak detection using both traditional and nontraditional data sources. These sources include medical data indexed by International Classification of Disease, 9th Revision (ICD-9) diagnosis codes, as well as less-specific, but potentially timelier, indicators such as records of over-the-counter remedy sales and of school absenteeism. Early efforts employed the Kulldorff scan statistic as implemented in the SaTScan software of the National Cancer Institute. A key obstacle to this application is that the input data streams are typically based on time-varying factors, such as consumer behavior, rather than simply on the populations of the component subregions. We have used both modeling and recent historical data distributions to obtain background spatial distributions. Data analyses have provided guidance on how to condition and model input data to avoid excessive clustering. We have used this methodology in combining data sources for both retrospective studies of known outbreaks and surveillance of high-profile events of concern to local public health authorities. We have integrated the scan statistic capability into a Microsoft Access-based system in which we may include or exclude data sources, vary time windows separately for different data sources, censor data from subsets of individual providers or subregions, adjust the background computation method, and run retrospective or simulated studies.
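A drastically simplified one-dimensional temporal cousin of the scan-statistic idea (not Kulldorff's spatial version in SaTScan) illustrates the likelihood-ratio scoring over candidate windows against a modeled baseline, which is the role the historical-data background plays in the system described above.

```python
import numpy as np

def temporal_scan(counts, baseline, max_window=7):
    """Poisson scan over trailing windows: returns (max LLR, window length).
    The baseline is the expected count from modeling or recent history."""
    best_llr, best_w = 0.0, 0
    for w in range(1, max_window + 1):
        c = counts[-w:].sum()
        b = baseline[-w:].sum()
        if c > b > 0:
            llr = c * np.log(c / b) - (c - b)  # Poisson log-likelihood ratio
            if llr > best_llr:
                best_llr, best_w = llr, w
    return best_llr, best_w
```

On a series with a recent jump, the scan picks out the window matching the anomaly rather than the longest or shortest window.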
Step-control of electromechanical systems
Lewis, Robert N.
1979-01-01
The response of an automatic control system to a general input signal is improved by applying a test input signal, observing the response to the test input signal and determining correctional constants necessary to provide a modified input signal to be added to the input to the system. A method is disclosed for determining correctional constants. The modified input signal, when applied in conjunction with an operating signal, provides a total system output exhibiting an improved response. This method is applicable to open-loop or closed-loop control systems. The method is also applicable to unstable systems, thus allowing controlled shut-down before dangerous or destructive response is achieved and to systems whose characteristics vary with time, thus resulting in improved adaptive systems.
The application of geostationary propagation models to non-geostationary propagation measurements
NASA Astrophysics Data System (ADS)
Haddock, Paul Christopher
Atmospheric attenuation becomes evident above 10 GHz due to the absorption of microwave energy by the molecular motion of the atmospheric constituents, so atmospheric effects on satellite communications systems operating at frequencies greater than 10 GHz become more pronounced. Most geostationary (GEO) climate models, which predict the fading statistics for earth-space telecommunications, have satellite elevation angle as one of the input parameters. There has been interest in the industry in applying the propagation models developed for GEO satellites to the non-geostationary (NGO) satellite case. With NGO satellites, the elevation angle to the satellite is time-variable, and as a result the earth-space propagation medium is time-varying. We can calculate the expected probability that a satellite, in a given orbit, will be found at a given elevation angle as a percentage of the year based on the satellite orbital elements, the minimum elevation angle allowed in the constellation operation plan, and the constellation configuration. From this calculation, we can develop an empirical fit to a given probability density function (PDF) to account for the distribution of elevation angles. This PDF serves as a weighting function for the elevation input into the GEO climate model to produce the overall fading statistics for the NGO case. In this research, a Ka-band total power radiometer was developed to measure the down-welling incoherent radiant electromagnetic energy from the atmosphere. This whole-sky sampling radiometer collected 1 year of radiometric measurements. These observations occurred at varying elevation and azimuthal angles, in close proximity to a weak water vapor absorption line. By referencing the output power of the radiometer to known radiometric emissions and by performing frequent internal calibrations, the developed radiometer provided long-term, highly accurate, and stable low-level derived attenuation measurements.
By correlating the 1 year of atmospheric measurements to the modified GEO climate model, the hypothesis is tested that, by application of the proper elevation weighting factors, the GEO model is applicable to the NGO case, where the time-varying elevation-angle changes occur over short time periods. Finally, we examine the joint statistics of multiple link failures. Using the 1 year of observed attenuations for multiple sky sections, we show, for a given sky section, the probability that its attenuation level will be equaled or exceeded in each of the remaining sky sections.
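The elevation-weighting construction described above can be sketched as follows. Both the exceedance model and the elevation PDF below are toy stand-ins (not an ITU-R method or an orbit-derived distribution): the NGO fading statistic is the per-elevation GEO statistic mixed over the elevation PDF.

```python
import numpy as np

els = np.arange(10.0, 90.1, 10.0)   # elevation-angle bins (deg)
# Hypothetical PDF of time spent at each elevation; a real weighting would
# come from the orbital elements and the minimum operational elevation.
w = np.sin(np.radians(els))
w /= w.sum()

def p_exceed_geo(att_db, el_deg):
    """Toy GEO exceedance model: fades deepen at low elevation (illustrative)."""
    scale_db = 3.0 / np.sin(np.radians(el_deg))
    return np.exp(-att_db / scale_db)

def p_exceed_ngo(att_db):
    """Mix the per-elevation GEO statistics with the elevation PDF."""
    return sum(wi * p_exceed_geo(att_db, el) for wi, el in zip(w, els))
```

The NGO exceedance curve necessarily lies between the best-case (zenith) and worst-case (low-elevation) GEO curves, weighted by how much time the satellite spends at each elevation.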
Post-Launch Calibration and Testing of Space Weather Instruments on GOES-R Satellite
NASA Technical Reports Server (NTRS)
Tadikonda, Sivakumara S. K.; Merrow, Cynthia S.; Kronenwetter, Jeffrey A.; Comeyne, Gustave J.; Flanagan, Daniel G.; Todirita, Monica
2016-01-01
The Geostationary Operational Environmental Satellite - R (GOES-R) is the first of a series of satellites to be launched, with the first launch scheduled for October 2016. The three instruments - Solar Ultraviolet Imager (SUVI), Extreme ultraviolet and X-ray Irradiance Sensor (EXIS), and Space Environment In-Situ Suite (SEISS) - provide the data needed as inputs for the product updates the National Oceanic and Atmospheric Administration (NOAA) provides to the public. SUVI is a full-disk extreme ultraviolet imager enabling Active Region characterization, filament eruption, and flare detection. EXIS provides inputs to solar backgrounds/events impacting climate models. SEISS provides particle measurements over a wide energy-and-flux range that varies by several orders of magnitude, and these data enable updates to spacecraft charge models for electrostatic discharge. EXIS and SEISS have been tested and calibrated end-to-end in ground test facilities around the United States. Due to the complexity of the SUVI design, data from component tests were used in a model to predict on-orbit performance. The ground tests and model updates provided inputs for designing the on-orbit calibration tests. A series of such tests has been planned for the Post-Launch Testing (PLT) of each of these instruments, and specific parameters have been identified that will be updated in the Ground Processing Algorithms, on-orbit parameter tables, or both. Some of the SUVI and EXIS calibrations require slewing them off the Sun, while no such maneuvers are needed for SEISS. After a six-month PLT period the GOES-R is expected to be operational. The calibration details are presented in this paper.
A new look at mobility metrics for pyroclastic density currents: collection, interpretation, and use
NASA Astrophysics Data System (ADS)
Ogburn, S. E.; Lopes, D.; Calder, E. S.
2012-12-01
Mitigation of risk associated with pyroclastic density currents (PDCs) depends upon accurate forecasting of possible flow paths, often using empirical models that rely on mobility metrics or the stochastic application of computational flow models. Mobility metrics often inform computational models, sometimes as direct model inputs (e.g. the energy cone model) or as estimates for input parameters (e.g. the basal friction parameter in TITAN2D). These mobility metrics are often compiled from PDCs at many volcanoes, generalized to reveal empirical constants, or sampled for use in probabilistic models. In practice, however, there are often inconsistencies in how mobility metrics have been collected, reported, and used. For instance, the measured runout of PDCs often varies depending on the method used (e.g. manually measured from a paper map, automated using GIS software), and the distance traveled by the center of mass of PDCs is rarely reported due to the difficulty in locating it. This work reexamines the way we measure, report, and analyze PDC mobility metrics. Several metrics, such as the Heim coefficient (height dropped/runout, H/L) and the proportionality of inundated area to volume (A ∝ V^(2/3)), have been used successfully with PDC data (Sparks 1976; Nairn and Self 1977; Sheridan 1979; Hayashi and Self 1992; Calder et al. 1999; Widiwijayanti et al. 2008) in addition to the non-volcanic flows they were originally developed for. Other mobility metrics have been investigated by the debris avalanche community but have not yet been extensively applied to pyroclastic flows (e.g. the initial aspect ratio of the collapsing pile). We investigate the relative merits and suitability of contrasting mobility metrics for different types of PDCs (e.g. dome-collapse pyroclastic flows, ash-cloud surges, pumice flows), and indicate the circumstances under which each model performs optimally.
We show that these metrics can be used (with varying success) to predict the runout of a PDC of given volume, or vice versa. The problem of locating the center of mass of PDCs is also investigated by comparing field measurements, geometric centroids, linear thickness models, and computational flow models. Comparing center of mass measurements with runout provides insight into the relative roles of sliding vs. spreading in PDC emplacement. The effect of topography on mobility is explored by comparing mobility metrics to valley morphology measurements, including sinuosity, cross-sectional area, and valley slope. Lastly, we examine the problem of compiling and generalizing mobility data from worldwide databases using a hierarchical Bayes model for weighting mobility metrics for use as model inputs, which offers an improved method over simple space-filling strategies. This is especially useful for calibrating models at data-sparse volcanoes.
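The two mobility metrics named above reduce to one-line formulas. The coefficient values below are illustrative placeholders; in practice they are fitted per flow type, as the cited studies do.

```python
def heim_coefficient(fall_height_m, runout_m):
    """H/L mobility ratio; smaller values indicate higher mobility."""
    return fall_height_m / runout_m

def predicted_runout_m(fall_height_m, h_over_l):
    """Invert H/L to forecast runout for an assumed mobility."""
    return fall_height_m / h_over_l

def inundated_area_m2(volume_m3, c=35.0):
    """A = c * V**(2/3); the coefficient c here is illustrative and is
    normally fitted per flow type from a PDC database."""
    return c * volume_m3 ** (2.0 / 3.0)
```

These are the forms used either way around: predicting runout or area from volume, or back-calculating mobility from mapped deposits.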
A Comparison of Agent-Based Models and the Parametric G-Formula for Causal Inference.
Murray, Eleanor J; Robins, James M; Seage, George R; Freedberg, Kenneth A; Hernán, Miguel A
2017-07-15
Decision-making requires choosing from treatments on the basis of correctly estimated outcome distributions under each treatment. In the absence of randomized trials, 2 possible approaches are the parametric g-formula and agent-based models (ABMs). The g-formula has been used exclusively to estimate effects in the population from which data were collected, whereas ABMs are commonly used to estimate effects in multiple populations, necessitating stronger assumptions. Here, we describe potential biases that arise when ABM assumptions do not hold. To do so, we estimated 12-month mortality risk in simulated populations differing in prevalence of an unknown common cause of mortality and a time-varying confounder. The ABM and g-formula correctly estimated mortality and causal effects when all inputs were from the target population. However, whenever any inputs came from another population, the ABM gave biased estimates of mortality-and often of causal effects even when the true effect was null. In the absence of unmeasured confounding and model misspecification, both methods produce valid causal inferences for a given population when all inputs are from that population. However, ABMs may result in bias when extrapolated to populations that differ on the distribution of unmeasured outcome determinants, even when the causal network linking variables is identical. © The Author(s) 2017. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
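The transportability bias can be demonstrated with a toy simulation in the spirit of the abstract: 12-month mortality depends on an unknown common cause U, and carrying inputs over from a population with a different U prevalence misestimates risk in the target population. All prevalences and risks below are illustrative.

```python
import numpy as np
rng = np.random.default_rng(1)

def mortality_risk(p_u, n=200_000):
    """Observed 12-month mortality in a population where an unknown common
    cause U (prevalence p_u) raises risk; numbers are illustrative."""
    u = rng.random(n) < p_u
    risk = np.where(u, 0.30, 0.10)
    return float((rng.random(n) < risk).mean())

target = mortality_risk(p_u=0.6)       # all inputs from the target population
transported = mortality_risk(p_u=0.2)  # inputs carried over from another population
```

With inputs from the target population the estimate is unbiased (the g-formula setting); transporting inputs across populations that differ on U (the ABM setting criticized above) shifts the estimate even though the causal network is identical.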
NASA Technical Reports Server (NTRS)
Minow, Joseph I.; Coffey, Victoria N.; Parker, Linda N.; Blackwell, William C., Jr.; Jun, Insoo; Garrett, Henry B.
2007-01-01
The NUMIT 1-dimensional bulk charging model is used as a screening tool for evaluating time-dependent bulk (internal, or deep dielectric) charging of dielectrics exposed to penetrating electron environments. The code is modified to accept time-dependent electron flux time series along satellite orbits for the electron environment inputs, instead of the static electron flux environment input originally used by the code and widely adopted in bulk charging models. Application of the screening technique is demonstrated for three cases of spacecraft exposure within the Earth's radiation belts, including a geostationary transfer orbit and an Earth-Moon transit trajectory for a range of orbit inclinations. Electric fields and charge densities are computed for dielectric materials with varying electrical properties exposed to relativistic electron environments along the orbits. Our objective is to demonstrate a preliminary application of the time-dependent environments input to the NUMIT code for evaluating charging risks to exposed dielectrics used on spacecraft when exposed to the Earth's radiation belts. The results demonstrate that the NUMIT electric field values in GTO orbits with multiple encounters with the Earth's radiation belts are consistent with previous studies of charging in GTO orbits, and that potential threat conditions for electrostatic discharge exist on lunar transit trajectories depending on the electrical properties of the materials exposed to the radiation environment.
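A single-layer sketch of the time-dependent charging calculation, far simpler than NUMIT's coupled charge-transport treatment, integrates dE/dt = (J - σE)/ε for a time-varying deposited current density. The material constants below are illustrative, not values from the study.

```python
import numpy as np

eps = 2.1 * 8.854e-12   # permittivity of a PTFE-like dielectric (F/m); illustrative
sigma = 1e-16           # bulk conductivity (S/m); illustrative

def internal_field(times_s, J_e):
    """Explicit-Euler integration of dE/dt = (J_e - sigma*E)/eps for a
    deposited electron current density J_e(t) along an orbit."""
    E = np.zeros_like(times_s)
    for k in range(1, len(times_s)):
        dt = times_s[k] - times_s[k - 1]
        E[k] = E[k - 1] + dt * (J_e[k - 1] - sigma * E[k - 1]) / eps
    return E

t = np.linspace(0.0, 1.0e4, 1001)    # a 10^4 s segment of belt exposure
J = np.full_like(t, 1.0e-12)         # constant 1 pA/m^2 deposited current
E = internal_field(t, J)             # field builds toward J/sigma with tau = eps/sigma
```

Because the conduction time constant ε/σ (about two days for these values) far exceeds the exposure segment, the field grows almost linearly; a threat assessment then compares the accumulated field against the material's breakdown strength, which is why the material's electrical properties dominate the conclusions above.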
Next generation lightweight mirror modeling software
NASA Astrophysics Data System (ADS)
Arnold, William R.; Fitzgerald, Matthew; Rosa, Rubin Jaca; Stahl, H. Philip
2013-09-01
The advances in manufacturing techniques for lightweight mirrors, such as EXELSIS deep core low temperature fusion, Corning's continued improvements in the Frit bonding process, and the ability to cast large complex designs, combined with water-jet and conventional diamond machining of glasses and ceramics, have created the need for more efficient means of generating finite element models of these structures. Traditional methods of assembling 400,000+ element models can take weeks of effort, severely limiting the range of possible optimization variables. This paper introduces model generation software developed under NASA sponsorship for the design of both terrestrial and space based mirrors. The software deals with any current mirror manufacturing technique, single substrates, and multiple arrays of substrates, and can merge submodels into a single large model. The modeler generates both mirror and suspension system elements; suspensions can be created either for each individual petal or for the whole mirror. A typical model generation of 250,000 nodes and 450,000 elements takes only 3-5 minutes, much of that time being variable input time. The program can create input decks for ANSYS, ABAQUS and NASTRAN. An archive/retrieval system permits creation of complete trade studies, varying cell size, depth, petal size, and suspension geometry, with the ability to recall a particular set of parameters and make small or large changes with ease. The input decks created by the modeler are text files which can be modified by any text editor; all the shell thickness parameters and suspension spring rates are accessible, and comments in the deck identify which groups of elements are associated with these parameters. This again makes optimization easier.
With ANSYS decks, the nodes representing support attachments are grouped into components; in ABAQUS these are SETS, and in NASTRAN GRIDPOINT SETS; this makes integration of these models into larger telescope or satellite models easier.
Schaffranek, Raymond W.
2004-01-01
A numerical model for simulation of surface-water integrated flow and transport in two (horizontal-space) dimensions is documented. The model solves vertically integrated forms of the equations of mass and momentum conservation and solute transport equations for heat, salt, and constituent fluxes. An equation of state for salt balance directly couples solution of the hydrodynamic and transport equations to account for the horizontal density gradient effects of salt concentrations on flow. The model can be used to simulate the hydrodynamics, transport, and water quality of well-mixed bodies of water, such as estuaries, coastal seas, harbors, lakes, rivers, and inland waterways. The finite-difference model can be applied to geographical areas bounded by any combination of closed land or open water boundaries. The simulation program accounts for sources of internal discharges (such as tributary rivers or hydraulic outfalls), tidal flats, islands, dams, and movable flow barriers or sluices. Water-quality computations can treat reactive and (or) conservative constituents simultaneously. Input requirements include bathymetric and topographic data defining land-surface elevations, time-varying water level or flow conditions at open boundaries, and hydraulic coefficients. Optional input includes the geometry of hydraulic barriers and constituent concentrations at open boundaries. Time-dependent water level, flow, and constituent-concentration data are required for model calibration and verification. Model output consists of printed reports and digital files of numerical results in forms suitable for postprocessing by graphical software programs and (or) scientific visualization packages. The model is compatible with most mainframe, workstation, mini- and micro-computer operating systems and FORTRAN compilers. 
This report defines the mathematical formulation and computational features of the model, explains the solution technique and related model constraints, describes the model framework, documents the type and format of inputs required, and identifies the type and format of output available.
Impact of inherent meteorology uncertainty on air quality ...
It is well established that there are a number of different classifications and sources of uncertainties in environmental modeling systems. Air quality models rely on two key inputs, namely, meteorology and emissions. When using air quality models for decision making, it is important to understand how uncertainties in these inputs affect the simulated concentrations. Ensembles are one method to explore how uncertainty in meteorology affects air pollution concentrations. Most studies explore this uncertainty by running different meteorological models or the same model with different physics options and in some cases combinations of different meteorological and air quality models. While these have been shown to be useful techniques in some cases, we present a technique that leverages the initial condition perturbations of a weather forecast ensemble, namely, the Short-Range Ensemble Forecast system, to drive the four-dimensional data assimilation in the Weather Research and Forecasting (WRF)-Community Multiscale Air Quality (CMAQ) model with a key focus being the response of ozone chemistry and transport. Results confirm that a sizable spread in WRF solutions, including common weather variables of temperature, wind, boundary layer depth, clouds, and radiation, can cause a relatively large range of ozone-mixing ratios. Pollutant transport can be altered by hundreds of kilometers over several days. Ozone-mixing ratios of the ensemble can vary as much as 10–20 ppb.
NASA Astrophysics Data System (ADS)
Zhang, X.; Srinivasan, R.
2008-12-01
In this study, a user-friendly GIS tool was developed for evaluating and improving NEXRAD precipitation estimates using raingauge data. This GIS tool can automatically read in raingauge and NEXRAD data, evaluate the accuracy of NEXRAD for each time unit, implement several geostatistical methods to improve the accuracy of NEXRAD using raingauge data, and output spatial precipitation maps for distributed hydrologic models. The geostatistical methods incorporated in this tool include Simple Kriging with varying local means, Kriging with External Drift, Regression Kriging, Co-Kriging, and a method newly developed by Li et al. (2008). The tool was applied in two test watersheds at hourly and daily temporal scales. Preliminary cross-validation results show that incorporating raingauge data to calibrate NEXRAD can markedly change the spatial pattern of NEXRAD and improve its accuracy. Using different geostatistical methods, the GIS tool was applied to produce long-term precipitation input for a distributed hydrologic model, the Soil and Water Assessment Tool (SWAT). Animated video was generated to vividly illustrate the effect of using different precipitation input data on distributed hydrologic modeling. Currently, this GIS tool is developed as an extension of SWAT, which is used as a water quantity and quality modeling tool by the USDA and EPA. The flexible, module-based design of this tool also makes it easy to adapt to other hydrologic models for hydrological modeling and water resources management.
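Of the methods listed, Simple Kriging with varying local means is the most compact to sketch: NEXRAD supplies the spatially varying mean, and the gauge-minus-NEXRAD residuals are kriged and added back. The exponential covariance parameters below are placeholders, not fitted values, and the function names are hypothetical.

```python
import numpy as np

def sk_varying_local_means(gauge_xy, gauge_val, nexrad_at_gauge,
                           grid_xy, nexrad_grid,
                           rng_km=20.0, sill=1.0, nugget=0.05):
    """Simple kriging with varying local means: krige the gauge residuals
    against NEXRAD, then add them back onto the NEXRAD field."""
    res = gauge_val - nexrad_at_gauge

    def cov(h):  # exponential covariance model (parameters are placeholders)
        return (sill - nugget) * np.exp(-3.0 * h / rng_km)

    d = np.linalg.norm(gauge_xy[:, None, :] - gauge_xy[None, :, :], axis=-1)
    C = cov(d) + nugget * np.eye(len(gauge_xy))
    est = []
    for g, nex in zip(grid_xy, nexrad_grid):
        c0 = cov(np.linalg.norm(gauge_xy - g, axis=1))
        wts = np.linalg.solve(C, c0)     # simple kriging weights
        est.append(nex + wts @ res)
    return np.array(est)
```

Far from any gauge the weights vanish and the estimate reverts to NEXRAD; at a gauge the estimate is pulled toward the observed value, which is the behavior driving the cross-validation improvement reported above.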
Dynamic modeling of neuronal responses in fMRI using cubature Kalman filtering
Havlicek, Martin; Friston, Karl J.; Jan, Jiri; Brazdil, Milan; Calhoun, Vince D.
2011-01-01
This paper presents a new approach to inverting (fitting) models of coupled dynamical systems based on state-of-the-art (cubature) Kalman filtering. Crucially, this inversion furnishes posterior estimates of both the hidden states and parameters of a system, including any unknown exogenous input. Because the underlying generative model is formulated in continuous time (with a discrete observation process) it can be applied to a wide variety of models specified with either ordinary or stochastic differential equations. These are an important class of models that are particularly appropriate for biological time-series, where the underlying system is specified in terms of kinetics or dynamics (i.e., dynamic causal models). We provide comparative evaluations with generalized Bayesian filtering (dynamic expectation maximization) and demonstrate marked improvements in accuracy and computational efficiency. We compare the schemes using a series of difficult (nonlinear) toy examples and conclude with a special focus on hemodynamic models of evoked brain responses in fMRI. Our scheme promises to provide a significant advance in characterizing the functional architectures of distributed neuronal systems, even in the absence of known exogenous (experimental) input; e.g., resting state fMRI studies and spontaneous fluctuations in electrophysiological studies. Importantly, unlike current Bayesian filters (e.g. DEM), our scheme provides estimates of time-varying parameters, which we will exploit in future work on the adaptation and enabling of connections in the brain. PMID:21396454
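The numerical core of the cubature approach is the spherical-radial rule: 2n equally weighted points propagate a Gaussian through a nonlinearity. A minimal sketch of that transform (the prediction-step core only, not the full filter or the continuous-discrete formulation of the paper):

```python
import numpy as np

def cubature_points(mean, cov):
    """2n equally weighted cubature points: mean +/- sqrt(n) times the
    columns of the Cholesky factor (the third-degree spherical-radial rule)."""
    n = len(mean)
    S = np.linalg.cholesky(cov)
    xi = np.sqrt(n) * np.hstack([np.eye(n), -np.eye(n)])
    return mean[:, None] + S @ xi

def cubature_transform(f, mean, cov):
    """Approximate mean and covariance of f(x) for x ~ N(mean, cov)."""
    pts = cubature_points(mean, cov)
    fp = np.column_stack([f(pts[:, j]) for j in range(pts.shape[1])])
    m = fp.mean(axis=1)                      # equal weights 1/(2n)
    dev = fp - m[:, None]
    return m, dev @ dev.T / pts.shape[1]
```

For a linear map the transform recovers the exact mean and covariance, and unlike the unscented transform it needs no tuning parameters, which is part of its appeal for the hemodynamic models discussed above.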
Chi, Chih-Lin; Zeng, Wenjun; Oh, Wonsuk; Borson, Soo; Lenskaia, Tatiana; Shen, Xinpeng; Tonellato, Peter J
2017-12-01
Prediction of onset and progression of cognitive decline and dementia is important both for understanding the underlying disease processes and for planning health care for populations at risk. Predictors identified in research studies are typically assessed at one point in time. In this manuscript, we argue that an accurate model for predicting cognitive status over relatively long periods requires inclusion of time-varying components that are sequentially assessed at multiple time points (e.g., in multiple follow-up visits). We developed a pilot model to test the feasibility of using either estimated or observed risk factors to predict cognitive status. We developed two models, the first using a sequential estimation of risk factors originally obtained from 8 years prior, then improved by optimization. This model can predict how cognition will change over relatively long time periods. The second model uses observed rather than estimated time-varying risk factors and, as expected, results in better prediction. This model can predict when newly observed data are acquired in a follow-up visit. The performance of both models, evaluated in 10-fold cross-validation and in various patient subgroups, provides supporting evidence for these pilot models. Each model consists of multiple base prediction units (BPUs), which were trained using the same set of data. The difference in usage and function between the two models is the source of input data: either estimated or observed data. In the next step of model refinement, we plan to integrate the two types of data together to flexibly predict dementia status and changes over time, when some time-varying predictors are measured only once and others are measured repeatedly. Computationally, the two data types provide upper and lower bounds for predictive performance. Copyright © 2017 Elsevier Inc. All rights reserved.
Numerical analysis of the heat source characteristics of a two-electrode TIG arc
NASA Astrophysics Data System (ADS)
Ogino, Y.; Hirata, Y.; Nomura, K.
2011-06-01
Various kinds of multi-electrode welding processes are used to ensure high productivity in industrial fields such as shipbuilding, automotive manufacturing and pipe fabrication. However, it is difficult to obtain the optimum welding conditions for a specific product, because there are many operating parameters, and because welding phenomena are very complicated. In the present research, the heat source characteristics of a two-electrode TIG arc were numerically investigated using a 3D arc plasma model with a focus on the distance between the two electrodes. The arc plasma shape changed significantly, depending on the electrode spacing. The heat source characteristics, such as the heat input density and the arc pressure distribution, changed significantly when the electrode separation was varied. The maximum arc pressure of the two-electrode TIG arc was much lower than that of a single-electrode TIG. However, the total heat input of the two-electrode TIG arc was nearly constant and was independent of the electrode spacing. These heat source characteristics of the two-electrode TIG arc are useful for controlling the heat input distribution at a low arc pressure. Therefore, these results indicate the possibility of a heat source based on a two-electrode TIG arc that is capable of high heat input at low pressures.
NASA Astrophysics Data System (ADS)
Yuan, Chang-Qing; Zhao, Tong-Jun; Zhan, Yong; Zhang, Su-Hua; Liu, Hui; Zhang, Yu-Hong
2009-11-01
Based on the well-accepted Hodgkin-Huxley neuron model, the neuronal intrinsic excitability is studied when the neuron is subject to varying environmental temperatures, a typical environmental influence on its regulation. Computer simulations show that altering the environmental temperature can enhance or inhibit the neuronal intrinsic excitability and thereby influence the neuronal spiking properties. These environmental impacts can be understood in terms of the neuronal spiking threshold, which is essentially influenced by fluctuations in the environment. As the environmental temperature varies, burst spiking of the neuronal membrane voltage is realized because of the environment-dependent spiking threshold. This bursting induced by changes in the spiking threshold is different from that excited by input currents or other stimuli.
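A generic Hodgkin-Huxley sketch with Q10 = 3 temperature scaling of the gating rates (the standard squid-axon convention, not necessarily the exact variant used in the study) illustrates how temperature enters the model through the kinetics:

```python
import numpy as np

def hh_spike_count(temp_c, i_inj=10.0, t_ms=50.0, dt=0.01):
    """Hodgkin-Huxley membrane with temperature-scaled gate kinetics.
    A Q10 = 3 factor referenced to 6.3 C multiplies all gating rates."""
    phi = 3.0 ** ((temp_c - 6.3) / 10.0)
    gna, gk, gl = 120.0, 36.0, 0.3            # max conductances (mS/cm^2)
    ena, ek, el = 50.0, -77.0, -54.387        # reversal potentials (mV)
    v, m, h, n = -65.0, 0.053, 0.596, 0.317   # near-resting initial state
    spikes, above = 0, False
    for _ in range(int(t_ms / dt)):
        am = 0.1 * (v + 40.0) / (1.0 - np.exp(-(v + 40.0) / 10.0))
        bm = 4.0 * np.exp(-(v + 65.0) / 18.0)
        ah = 0.07 * np.exp(-(v + 65.0) / 20.0)
        bh = 1.0 / (1.0 + np.exp(-(v + 35.0) / 10.0))
        an = 0.01 * (v + 55.0) / (1.0 - np.exp(-(v + 55.0) / 10.0))
        bn = 0.125 * np.exp(-(v + 65.0) / 80.0)
        # exponential Euler keeps the gates in [0, 1] even for large phi
        m += (am / (am + bm) - m) * (1.0 - np.exp(-dt * phi * (am + bm)))
        h += (ah / (ah + bh) - h) * (1.0 - np.exp(-dt * phi * (ah + bh)))
        n += (an / (an + bn) - n) * (1.0 - np.exp(-dt * phi * (an + bn)))
        ina = gna * m**3 * h * (v - ena)
        ik = gk * n**4 * (v - ek)
        il = gl * (v - el)
        v += dt * (i_inj - ina - ik - il)     # Cm = 1 uF/cm^2
        if v > -20.0 and not above:           # upward spike crossing
            spikes, above = spikes + 1, True
        elif v < -40.0:
            above = False
    return spikes
```

Because phi multiplies every gating rate, changing temperature reshapes the balance of activation and inactivation and hence the effective spiking threshold, which is the mechanism the abstract attributes the bursting to.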
Insolation-oriented model of photovoltaic module using Matlab/Simulink
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tsai, Huan-Liang
2010-07-15
This paper presents a novel model of a photovoltaic (PV) module which is implemented and analyzed using the Matlab/Simulink software package. Taking into account the effect of sunlight irradiance on cell temperature, the proposed model takes ambient temperature as the reference input and uses solar insolation as the unique varying parameter. The cell temperature is then explicitly affected by the sunlight intensity. The output current and power characteristics are simulated and analyzed using the proposed PV model. The model has been verified through experimental measurement. The impact of solar irradiation on cell temperature makes the output characteristics more practical. In addition, the insolation-oriented PV model enables the dynamics of a PV power system to be analyzed and optimized more easily by applying the environmental parameters of ambient temperature and solar irradiance. (author)
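A minimal Python sketch in the spirit of the insolation-oriented approach described above: cell temperature is derived from ambient temperature and insolation via a NOCT-style rule, and module current follows a single-diode equation. All module parameters here (`Isc_ref`, `Ki`, `Ns`, `Voc_ref`, the NOCT value) are illustrative placeholders, not values from the paper or its Simulink implementation, and series/shunt resistances are neglected.

```python
import math

def cell_temperature(T_amb, G, noct=45.0):
    """Cell temperature (deg C) from ambient temperature and insolation G
    (W/m^2), using the standard NOCT rule."""
    return T_amb + (noct - 20.0) / 800.0 * G

def pv_current(V, G, T_amb, Isc_ref=8.21, Ki=0.0032, Ns=54,
               n=1.3, Eg=1.12, Voc_ref=32.9):
    """Module current (A) at terminal voltage V: single-diode model with
    insolation-driven cell temperature; Rs and Rsh neglected."""
    q, kB = 1.602e-19, 1.381e-23
    T = cell_temperature(T_amb, G) + 273.15
    T_ref = 298.15
    Vt = n * kB * T / q                                # modified thermal voltage
    Iph = (Isc_ref + Ki * (T - T_ref)) * G / 1000.0    # photocurrent
    # saturation current anchored to Voc at STC, scaled with temperature
    I0_ref = Isc_ref / (math.exp(Voc_ref / (Ns * n * kB * T_ref / q)) - 1.0)
    I0 = (I0_ref * (T / T_ref) ** 3
          * math.exp(q * Eg / (n * kB) * (1.0 / T_ref - 1.0 / T)))
    return Iph - I0 * (math.exp(V / (Ns * Vt)) - 1.0)
```

Sweeping `V` at fixed `G` traces the I-V curve; varying only `G`, as the model intends, moves both the photocurrent and the cell temperature at once.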
Resynchronization of circadian oscillators and the east-west asymmetry of jet-lag
NASA Astrophysics Data System (ADS)
Lu, Zhixin; Klein-Cardeña, Kevin; Lee, Steven; Antonsen, Thomas M.; Girvan, Michelle; Ott, Edward
2016-09-01
Cells in the brain's Suprachiasmatic Nucleus (SCN) are known to regulate circadian rhythms in mammals. We model synchronization of SCN cells using the forced Kuramoto model, which consists of a large population of coupled phase oscillators (modeling individual SCN cells) with heterogeneous intrinsic frequencies and external periodic forcing. Here, the periodic forcing models diurnally varying external inputs such as sunrise, sunset, and alarm clocks. We reduce the dimensionality of the system using the ansatz of Ott and Antonsen and then study the effect of a sudden change of clock phase to simulate cross-time-zone travel. We estimate model parameters from previous biological experiments. By examining the phase space dynamics of the model, we study the mechanism leading to the difference typically experienced in the severity of jet-lag resulting from eastward and westward travel.
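The reduction referred to above can be sketched as follows: the Ott-Antonsen ansatz collapses the forced Kuramoto model to a single complex ODE for the order parameter, and a time-zone jump becomes a sudden rotation of the drive phase. Parameter values and the east/west sign convention below are illustrative assumptions, not the estimates used in the paper.

```python
import numpy as np

TWO_PI = 2 * np.pi

def oa_rhs(z, K, F, delta, omega0, sigma):
    """Ott-Antonsen reduced dynamics of the forced Kuramoto model,
    in the frame co-rotating with the external 24-h drive."""
    H = K * z + F                      # mean field: coupling + forcing
    return (-delta + 1j * (omega0 - sigma)) * z + 0.5 * H - 0.5 * np.conj(H) * z ** 2

def reentrainment_time(shift_hours, K=1.0, F=0.6, delta=0.1,
                       period_h=24.5, dt=0.001, tol=0.1):
    """Days for the order parameter to return to its entrained state after
    an abrupt shift of the drive phase (cross-time-zone travel)."""
    sigma = TWO_PI                       # drive frequency: one cycle per day
    omega0 = TWO_PI * 24.0 / period_h    # intrinsic frequency (24.5-h period)
    z = 0.5 + 0j
    for _ in range(int(50 / dt)):        # relax onto the entrained state
        z = z + dt * oa_rhs(z, K, F, delta, omega0, sigma)
    z_star = z
    z = z * np.exp(-1j * TWO_PI * shift_hours / 24.0)  # sudden time-zone jump
    t = 0.0
    while abs(z - z_star) > tol and t < 200:
        z = z + dt * oa_rhs(z, K, F, delta, omega0, sigma)
        t += dt
    return t
```

Comparing `reentrainment_time(6)` with `reentrainment_time(-6)` probes the east-west asymmetry: because the intrinsic period exceeds 24 h, the entrained phase sits off-center in its basin and recovery from the two shift directions differs.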
Orshansky, Jr., deceased, Elias; Weseloh, William E.
1979-01-01
A power transmission having two planetary assemblies, each having its own carrier and its own planet, sun, and ring gears. A speed-varying module is connected in driving relation to the input shaft and in driving relationship to the two sun gears, which are connected together. The speed-varying means may comprise a pair of hydraulic units hydraulically interconnected so that one serves as a pump while the other serves as a motor and vice versa, one of the units having a variable stroke and being connected in driving relation to the input shaft, the other unit, which may have a fixed stroke, being connected in driving relation to the sun gears. A brake grounds the first carrier in the first range and in reverse and causes drive to be delivered to the output shaft through the first ring gear in a hydrostatic mode, the first ring gear being rigidly connected to the output shaft. The input shaft also is clutchable to either the carrier or the ring gear of the second planetary assembly. The output shaft is also clutchable to the carrier of the second planetary assembly when the input is clutched to the ring gear of the second planetary assembly, and is clutchable to the ring gear of the second planetary assembly when the input is clutched to the carrier thereof.
Hua, Yongzhao; Dong, Xiwang; Li, Qingdong; Ren, Zhang
2017-05-18
This paper investigates the time-varying formation robust tracking problems for high-order linear multiagent systems with a leader of unknown control input in the presence of heterogeneous parameter uncertainties and external disturbances. The followers need to accomplish an expected time-varying formation in the state space and track the state trajectory produced by the leader simultaneously. First, a time-varying formation robust tracking protocol with a totally distributed form is proposed utilizing the neighborhood state information. With the adaptive updating mechanism, neither any global knowledge about the communication topology nor the upper bounds of the parameter uncertainties, external disturbances and leader's unknown input are required in the proposed protocol. Then, in order to determine the control parameters, an algorithm with four steps is presented, where feasible conditions for the followers to accomplish the expected time-varying formation tracking are provided. Furthermore, based on the Lyapunov-like analysis theory, it is proved that the formation tracking error can converge to zero asymptotically. Finally, the effectiveness of the theoretical results is verified by simulation examples.
Optimal control of LQR for discrete time-varying systems with input delays
NASA Astrophysics Data System (ADS)
Yin, Yue-Zhu; Yang, Zhong-Lian; Yin, Zhi-Xiang; Xu, Feng
2018-04-01
In this work, we consider the optimal control problem of linear quadratic regulation for discrete time-varying systems with a single input and multiple input delays. An innovative and simple method to derive the optimal controller is given. The studied problem is first equivalently converted into a problem subject to a constraint condition. Then, using the established duality, the problem is transformed into a static mathematical optimisation problem without input delays. The optimal control input minimising the performance index function is derived by solving this optimisation problem with two methods. A numerical simulation example is carried out, and its results show that both approaches are feasible and very effective.
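For context, the delay-free problem that the duality argument ultimately reduces to is the standard finite-horizon discrete-time LQR, solvable by a backward Riccati recursion. The sketch below is that textbook recursion on an illustrative double-integrator system; it is not the authors' delay-handling method itself.

```python
import numpy as np

def lqr_finite_horizon(A, B, Q, R, Qf, N):
    """Backward Riccati recursion for finite-horizon discrete-time LQR.
    Returns time-varying gains K[0..N-1] for the law u_k = -K[k] @ x_k."""
    P = Qf
    gains = []
    for _ in range(N):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ A - A.T @ P @ B @ K
        gains.append(K)
    gains.reverse()                      # recursion runs backward in time
    return gains

# illustrative double-integrator example (not a system from the paper)
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
Q, R, Qf = np.eye(2), np.array([[1.0]]), 10.0 * np.eye(2)
Ks = lqr_finite_horizon(A, B, Q, R, Qf, N=100)

x = np.array([[1.0], [0.0]])             # regulate this state to the origin
for K in Ks:
    x = (A - B @ K) @ x
```

With input delays, the same quadratic cost applies, but the control chosen now only acts on the state several steps later, which is what the paper's constrained reformulation and duality are designed to handle.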
Golkar, Mahsa A.; Sobhani Tehrani, Ehsan; Kearney, Robert E.
2017-01-01
Dynamic joint stiffness is a dynamic, nonlinear relationship between the position of a joint and the torque acting about it, which can be used to describe the biomechanics of the joint and associated limb(s). This paper models and quantifies changes in ankle dynamic stiffness and its individual elements, intrinsic and reflex stiffness, in healthy human subjects during isometric, time-varying (TV) contractions of the ankle plantarflexor muscles. A subspace, linear parameter varying, parallel-cascade (LPV-PC) algorithm was used to identify the model from measured input position perturbations and output torque data using voluntary torque as the LPV scheduling variable (SV). Monte-Carlo simulations demonstrated that the algorithm is accurate, precise, and robust to colored measurement noise. The algorithm was then used to examine stiffness changes associated with TV isometric contractions. The SV was estimated from the Soleus EMG using a Hammerstein model of EMG-torque dynamics identified from unperturbed trials. The LPV-PC algorithm identified (i) a non-parametric LPV impulse response function (LPV IRF) for intrinsic stiffness and (ii) a LPV-Hammerstein model for reflex stiffness consisting of a LPV static nonlinearity followed by a time-invariant state-space model of reflex dynamics. The results demonstrated that: (a) intrinsic stiffness, in particular ankle elasticity, increased significantly and monotonically with activation level; (b) the gain of the reflex pathway increased from rest to around 10–20% of subject's MVC and then declined; and (c) the reflex dynamics were second order. These findings suggest that in healthy human ankle, reflex stiffness contributes most at low muscle contraction levels, whereas, intrinsic contributions monotonically increase with activation level. PMID:28579954
Heat Transfer Model for Hot Air Balloons
NASA Astrophysics Data System (ADS)
Llado-Gambin, Adriana
A heat transfer model and analysis for hot air balloons is presented in this work, supported by a flow simulation using SolidWorks. The objective is to understand the major heat losses in the balloon and to identify the parameters that most affect its flight performance. Results show that more than 70% of the heat losses are due to the emitted radiation from the balloon envelope and that convection losses represent around 20% of the total. A simulated heating source is also included in the modeling based on typical thermal input from a balloon propane burner. The burner duty cycle to keep a constant altitude can vary from 10% to 28% depending on the atmospheric conditions, and the ambient temperature is the parameter that most affects the total thermal input needed. The simulation and analysis also predict that the gas temperature inside the balloon decreases at a rate of -0.25 K/s when there is no burner activity, and it increases at a rate of +1 K/s when the balloon pilot operates the burner. The results were compared to actual flight data and they show very good agreement, indicating that the major physical processes responsible for balloon performance aloft are accurately captured in the simulation.
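The reported rates admit a quick energy-balance check: at steady altitude the duty cycle d must satisfy d x (+1 K/s) = (1 - d) x (0.25 K/s), giving d = 20%, inside the reported 10-28% band. A minimal sketch using only the rates quoted above (the burner cycle period and starting gas temperature are assumed, not from the work):

```python
def simulate_gas_temp(duty_cycle, t_total=600.0, dt=0.1, T0=380.0,
                      heat_rate=1.0, cool_rate=0.25, period=30.0):
    """Envelope gas temperature (K) under a periodic burner duty cycle,
    using the reported rates: +1 K/s burner on, -0.25 K/s burner off.
    Burner period and initial temperature are assumed values."""
    T, t = T0, 0.0
    while t < t_total:
        burner_on = (t % period) < duty_cycle * period
        T += dt * (heat_rate if burner_on else -cool_rate)
        t += dt
    return T

# steady altitude requires heating to balance cooling over a cycle:
# d * heat_rate = (1 - d) * cool_rate  =>  d = cool_rate / (heat_rate + cool_rate)
d_eq = 0.25 / (1.0 + 0.25)   # = 0.20, inside the reported 10-28% band
```

Running the simulation at duty cycles above or below `d_eq` shows the gas steadily warming or cooling, i.e. the balloon climbing or descending.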
Wu, Xiaolin; Davie-Martin, Cleo L; Steinlin, Christine; Hageman, Kimberly J; Cullen, Nicolas J; Bogdal, Christian
2017-10-17
Melting glaciers release previously ice-entrapped chemicals to the surrounding environment. As glacier melting accelerates under future climate warming, chemical release may also increase. This study investigated the behavior of semivolatile pesticides over the course of one year and predicted their behavior under two future climate change scenarios. Pesticides were quantified in air, lake water, glacial meltwater, and streamwater in the catchment of Lake Brewster, an alpine glacier-fed lake located in the Southern Alps of New Zealand. Two historic-use pesticides (endosulfan I and hexachlorobenzene) and three current-use pesticides (dacthal, triallate, and chlorpyrifos) were frequently found in both air and water samples from the catchment. Regression analysis indicated that the pesticide concentrations in glacial meltwater and lake water were strongly correlated. A multimedia environmental fate model was developed for these five chemicals in Brewster Lake. Modeling results indicated that seasonal lake ice cover melt, and varying contributions of input from glacial melt and streamwater, created pulses in pesticide concentrations in lake water. Under future climate scenarios, the concentration pulse was altered and glacial melt made a greater contribution (as mass flux) to pesticide input in the lake water.
Orchestrating TRANSP Simulations for Interpretative and Predictive Tokamak Modeling with OMFIT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grierson, B. A.; Yuan, X.; Gorelenkova, M.
TRANSP simulations are being used in the OMFIT workflow manager to enable a machine-independent means of experimental analysis, postdictive validation, and predictive time-dependent simulations on the DIII-D, NSTX, JET and C-MOD tokamaks. The procedures for preparing the input data from plasma profile diagnostics and equilibrium reconstruction, as well as processing of the time-dependent heating and current drive sources and assumptions about the neutral recycling, vary across machines, but are streamlined by using a common workflow manager. Settings for TRANSP simulation fidelity are incorporated into the OMFIT framework, contrasting between-shot analysis, power balance, and fast-particle simulations. A previously established series of data consistency metrics are computed, such as comparison of experimental vs. calculated neutron rate, equilibrium stored energy vs. total stored energy from profile and fast-ion pressure, and experimental vs. computed surface loop voltage. Discrepancies between data consistency metrics can indicate errors in input quantities such as electron density profile or Zeff, or indicate anomalous fast-particle transport. Measures to assess the sensitivity of the verification metrics to input quantities are provided by OMFIT, including scans of the input profiles and standardized post-processing visualizations. For predictive simulations, TRANSP uses GLF23 or TGLF to predict core plasma profiles, with user-defined boundary conditions in the outer region of the plasma. ITPA validation metrics are provided in post-processing to assess the transport model validity. By using OMFIT to orchestrate the steps for experimental data preparation, selection of operating mode, submission, post-processing and visualization, we have streamlined and standardized the usage of TRANSP.
Collaborative learning framework for online stakeholder engagement.
Khodyakov, Dmitry; Savitsky, Terrance D; Dalal, Siddhartha
2016-08-01
Public and stakeholder engagement can improve the quality of both research and policy decision making. However, such engagement poses significant methodological challenges in terms of collecting and analysing input from large, diverse groups. Our objectives are to explain how online approaches can facilitate iterative stakeholder engagement, to describe how input from large and diverse stakeholder groups can be analysed, and to propose a collaborative learning framework (CLF) to interpret stakeholder engagement results. We use 'A National Conversation on Reducing the Burden of Suicide in the United States' as a case study of online stakeholder engagement and employ a Bayesian data modelling approach to develop a CLF. Our data modelling results identified six distinct stakeholder clusters that varied in the degree of individual articulation and group agreement and exhibited one of three learning styles: learning towards consensus, learning by contrast and groupthink. Learning by contrast was the most common, or dominant, learning style in this study. Study results were used to develop a CLF, which helps explore a multitude of stakeholder perspectives; identifies clusters of participants with similar shifts in beliefs; offers an empirically derived indicator of engagement quality; and helps determine the dominant learning style. The ability to detect learning by contrast helps illustrate differences in stakeholder perspectives, which may help policymakers, including the Patient-Centered Outcomes Research Institute, make better decisions by soliciting and incorporating input from patients, caregivers, health-care providers and researchers. Study results have important implications for soliciting and incorporating input from stakeholders with different interests and perspectives. © 2015 The Authors. Health Expectations Published by John Wiley & Sons Ltd.
A data-centric approach to understanding the pricing of financial options
NASA Astrophysics Data System (ADS)
Healy, J.; Dixon, M.; Read, B.; Cai, F. F.
2002-05-01
We investigate what can be learned from a purely phenomenological study of options prices without modelling assumptions. We fitted neural net (NN) models to LIFFE "ESX" European-style FTSE 100 index options using daily data from 1992 to 1997. These non-parametric models reproduce the Black-Scholes (BS) analytic model in terms of fit and performance measures using just the usual five inputs (S, X, t, r, IV). We found that adding transaction costs (bid-ask spread) to these standard five parameters gives a comparable fit and performance. Tests show that the bid-ask spread can be a statistically significant explanatory variable for option prices. The difference in option prices between the models with transaction costs and those without ranges from about -3.0 to +1.5 index points, varying with maturity date. However, the difference depends on the moneyness (S/X), being greatest in-the-money. This suggests that use of a five-factor model can result in a pricing difference of up to £10 to £30 per call option contract compared with modelling under transaction costs. We found that the influence of transaction costs varied between different yearly subsets of the data. Open interest is also a significant explanatory variable, but volume is not.
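The parametric benchmark the neural nets are compared against is the Black-Scholes formula on the same five inputs (S, X, t, r, IV). A minimal reference implementation follows; the test point is a standard textbook case, not data from the study.

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, X, t, r, iv):
    """Black-Scholes European call price from the five standard inputs:
    spot S, strike X, time to expiry t (years), risk-free rate r,
    implied volatility iv."""
    d1 = (log(S / X) + (r + 0.5 * iv * iv) * t) / (iv * sqrt(t))
    d2 = d1 - iv * sqrt(t)
    return S * norm_cdf(d1) - X * exp(-r * t) * norm_cdf(d2)
```

Since each FTSE 100 index point on these contracts is conventionally valued at £10, a model disagreement of 1-3 index points corresponds to the £10-£30 per contract figure quoted above.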
The Role of Learner and Input Variables in Learning Inflectional Morphology
ERIC Educational Resources Information Center
Brooks, Patricia J.; Kempe, Vera; Sionov, Ariel
2006-01-01
To examine effects of input and learner characteristics on morphology acquisition, 60 adult English speakers learned to inflect masculine and feminine Russian nouns in nominative, dative, and genitive cases. By varying training vocabulary size (i.e., type variability), holding constant the number of learning trials, we tested whether learners…
Songs as Ambient Language Input in Phonology Acquisition
ERIC Educational Resources Information Center
Au, Terry Kit-fong
2013-01-01
Children cannot learn to speak a language simply from occasional noninteractive exposure to native speakers' input (e.g., by hearing television dialogues), but can they learn something about its phonology? To answer this question, the present study varied ambient hearing experience for 126 5- to 7-year-old native Cantonese-Chinese speakers…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roberts, Jesse D.; Grace Chang; Jason Magalen
An industry-standard wave modeling tool was utilized to investigate model sensitivity to input parameters and wave energy converter (WEC) array deployment scenarios. Wave propagation was investigated downstream of the WECs to evaluate overall near- and far-field effects of WEC arrays. The sensitivity study illustrated that both wave height and near-bottom orbital velocity were subject to the largest potential variations, each decreasing in sensitivity as the transmission coefficient increased, as the number and spacing of WEC devices decreased, and as the deployment location moved offshore. Wave direction was affected consistently for all parameters, and wave period was not affected (or negligibly affected) by varying model parameters or WEC configuration.
Evaluation of the Emergency Response Dose Assessment System(ERDAS)
NASA Technical Reports Server (NTRS)
Evans, Randolph J.; Lambert, Winifred C.; Manobianco, John T.; Taylor, Gregory E.; Wheeler, Mark M.; Yersavich, Ann M.
1996-01-01
The emergency response dose assessment system (ERDAS) is a prototype software and hardware system configured to produce routine mesoscale meteorological forecasts and enhanced dispersion estimates on an operational basis for the Kennedy Space Center (KSC)/Cape Canaveral Air Station (CCAS) region. ERDAS provides emergency response guidance for operations at KSC/CCAS in the case of an accidental hazardous material release or an aborted vehicle launch. This report describes the evaluation of ERDAS, including: evaluation of sea breeze predictions, comparison of launch plume location and concentration predictions, a case study of a toxic release, evaluation of model sensitivity to varying input parameters, evaluation of the user interface, assessment of ERDAS's operational capabilities, and a comparison of ERDAS models to the Ocean Breeze/Dry Gulch diffusion model.
NASA Technical Reports Server (NTRS)
Cook, A. B.; Fuller, C. R.; O'Brien, W. F.; Cabell, R. H.
1992-01-01
A method of indirectly monitoring component loads through common flight variables is proposed which requires an accurate model of the underlying nonlinear relationships. An artificial neural network (ANN) model learns relationships through exposure to a database of flight variable records and corresponding load histories from an instrumented military helicopter undergoing standard maneuvers. The ANN model, utilizing eight standard flight variables as inputs, is trained to predict normalized time-varying mean and oscillatory loads on two critical components over a range of seven maneuvers. Both interpolative and extrapolative capabilities are demonstrated with agreement between predicted and measured loads on the order of 90 percent to 95 percent. This work justifies pursuing the ANN method of predicting loads from flight variables.
A Reexamination of the Emergy Input to a System from the ...
The wind energy absorbed in the global boundary layer (GBL, 900 mb surface) is the basis for calculating the wind emergy input for any system on the Earth's surface. Estimates of the wind emergy input to a system depend on the amount of wind energy dissipated, which can have a range of magnitudes for a given velocity depending on surface drag and atmospheric stability at the location and time period under study. In this study, we develop a method to consider this complexity in estimating the emergy input to a system from the wind. A new calculation of the transformity of the wind energy dissipated in the GBL (900 mb surface) based on general models of atmospheric circulation in the planetary boundary layer (PBL, 100 mb surface) is presented and expressed on the 12.0E+24 seJ y-1 geobiosphere baseline to complete the information needed to calculate the emergy input from the wind to the GBL of any system. The average transformity of wind energy dissipated in the GBL (below 900 mb) was 1241±650 seJ J-1. The analysis showed that the transformity of the wind varies over the course of a year, such that summer processes may require a different wind transformity than processes occurring with a winter or annual time boundary. This is a paper in the proceedings of Emergy Synthesis 9, thus it will be available online for those interested in this subject. The paper describes a new and more accurate way to estimate the wind energy input to any system.
Uncertainty in tsunami sediment transport modeling
Jaffe, Bruce E.; Goto, Kazuhisa; Sugawara, Daisuke; Gelfenbaum, Guy R.; La Selle, SeanPaul M.
2016-01-01
Erosion and deposition from tsunamis record information about tsunami hydrodynamics and size that can be interpreted to improve tsunami hazard assessment. We explore sources and methods for quantifying uncertainty in tsunami sediment transport modeling. Uncertainty varies with tsunami, study site, available input data, sediment grain size, and model. Although uncertainty has the potential to be large, published case studies indicate that both forward and inverse tsunami sediment transport models perform well enough to be useful for deciphering tsunami characteristics, including size, from deposits. New techniques for quantifying uncertainty, such as Ensemble Kalman Filtering inversion, and more rigorous reporting of uncertainties will advance the science of tsunami sediment transport modeling. Uncertainty may be decreased with additional laboratory studies that increase our understanding of the semi-empirical parameters and physics of tsunami sediment transport, standardized benchmark tests to assess model performance, and development of hybrid modeling approaches to exploit the strengths of forward and inverse models.
Numerical Analysis of Modeling Based on Improved Elman Neural Network
Jie, Shao
2014-01-01
A modeling based on the improved Elman neural network (IENN) is proposed to analyze the nonlinear circuits with the memory effect. The hidden layer neurons are activated by a group of Chebyshev orthogonal basis functions instead of sigmoid functions in this model. The error curves of the sum of squared error (SSE) varying with the number of hidden neurons and the iteration step are studied to determine the number of the hidden layer neurons. Simulation results of the half-bridge class-D power amplifier (CDPA) with two-tone signal and broadband signals as input have shown that the proposed behavioral modeling can reconstruct the system of CDPAs accurately and depict the memory effect of CDPAs well. Compared with Volterra-Laguerre (VL) model, Chebyshev neural network (CNN) model, and basic Elman neural network (BENN) model, the proposed model has better performance. PMID:25054172
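The basis-function idea above can be sketched briefly: hidden-layer activations are Chebyshev polynomials generated by the three-term recurrence, and the SSE shrinks as hidden neurons are added. The toy target function and the plain least-squares output layer below are illustrative simplifications, not the IENN's recurrent training procedure.

```python
import numpy as np

def chebyshev_basis(x, n_hidden):
    """Hidden-layer activations: Chebyshev polynomials T_0..T_{n-1},
    generated by the recurrence T_k(x) = 2x*T_{k-1}(x) - T_{k-2}(x)."""
    x = np.clip(x, -1.0, 1.0)            # Chebyshev domain
    T = [np.ones_like(x), x]
    for _ in range(2, n_hidden):
        T.append(2 * x * T[-1] - T[-2])
    return np.stack(T[:n_hidden])

# illustrative: SSE vs number of hidden (basis) neurons for a toy target
x = np.linspace(-1, 1, 200)
y = np.tanh(3 * x) + 0.1 * x ** 2        # toy nonlinear "circuit" response

def sse(n_hidden):
    """Sum of squared error of a least-squares output layer on n_hidden bases."""
    Phi = chebyshev_basis(x, n_hidden).T
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return float(np.sum((Phi @ w - y) ** 2))
```

Plotting `sse(n)` for increasing `n` reproduces, in miniature, the error-curve study used in the paper to choose the number of hidden-layer neurons.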
Studying the mechanisms of language learning by varying the learning environment and the learner
Goldin-Meadow, Susan
2015-01-01
Language learning is a resilient process, and many linguistic properties can be developed under a wide range of learning environments and learners. The first goal of this review is to describe properties of language that can be developed without exposure to a language model – the resilient properties of language – and to explore conditions under which more fragile properties emerge. But even if a linguistic property is resilient, the developmental course that the property follows is likely to vary as a function of learning environment and learner, that is, there are likely to be individual differences in the learning trajectories children follow. The second goal is to consider how the resilient properties are brought to bear on language learning when a child is exposed to a language model. The review ends by considering the implications of both sets of findings for mechanisms, focusing on the role that the body and linguistic input play in language learning. PMID:26668813
Dynamic sensitivity analysis of biological systems
Wu, Wu Hsiung; Wang, Feng Sheng; Chang, Maw Shang
2008-01-01
Background A mathematical model to understand, predict, control, or even design a real biological system is a central theme in systems biology. A dynamic biological system is always modeled as a nonlinear ordinary differential equation (ODE) system. How to simulate the dynamic behavior and dynamic parameter sensitivities of systems described by ODEs efficiently and accurately is a critical job. In many practical applications, e.g., the fed-batch fermentation systems, the system admissible input (corresponding to independent variables of the system) can be time-dependent. The main difficulty for investigating the dynamic log gains of these systems is the infinite dimension due to the time-dependent input. The classical dynamic sensitivity analysis does not take into account this case for the dynamic log gains. Results We present an algorithm with an adaptive step size control that can be used for computing the solution and dynamic sensitivities of an autonomous ODE system simultaneously. Although our algorithm is one of the decouple direct methods in computing dynamic sensitivities of an ODE system, the step size determined by model equations can be used on the computations of the time profile and dynamic sensitivities with moderate accuracy even when sensitivity equations are more stiff than model equations. To show this algorithm can perform the dynamic sensitivity analysis on very stiff ODE systems with moderate accuracy, it is implemented and applied to two sets of chemical reactions: pyrolysis of ethane and oxidation of formaldehyde. The accuracy of this algorithm is demonstrated by comparing the dynamic parameter sensitivities obtained from this new algorithm and from the direct method with Rosenbrock stiff integrator based on the indirect method. 
The same dynamic sensitivity analysis was performed on an ethanol fed-batch fermentation system with a time-varying feed rate to evaluate the applicability of the algorithm to realistic models with time-dependent admissible input. Conclusion By combining the accuracy we show with the efficiency of being a decouple direct method, our algorithm is an excellent method for computing dynamic parameter sensitivities in stiff problems. We extend the scope of classical dynamic sensitivity analysis to the investigation of dynamic log gains of models with time-dependent admissible input. PMID:19091016
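The joint computation of solution and sensitivities described above can be illustrated on a toy problem: for dx/dt = -kx, the sensitivity s = dx/dk obeys ds/dt = -ks - x, and both equations can be integrated together and checked against the analytic answer. This is a minimal sketch of forward sensitivity analysis, not the authors' adaptive-step decoupled direct method.

```python
import numpy as np

def rhs(t, y, k):
    """Augmented system: state x with dx/dt = -k*x, and its parameter
    sensitivity s = dx/dk with ds/dt = (df/dx)*s + df/dk = -k*s - x."""
    x, s = y
    return np.array([-k * x, -k * s - x])

def rk4(f, y0, t_end, dt, k):
    """Classical fixed-step RK4 integration of the augmented system."""
    t, y = 0.0, np.array(y0, dtype=float)
    while t < t_end - 1e-12:
        h = min(dt, t_end - t)
        k1 = f(t, y, k)
        k2 = f(t + h / 2, y + h / 2 * k1, k)
        k3 = f(t + h / 2, y + h / 2 * k2, k)
        k4 = f(t + h, y + h * k3, k)
        y = y + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return y

# integrate state and sensitivity together; analytic answers are
# x(t) = exp(-k*t) and s(t) = -t*exp(-k*t)
x, s = rk4(rhs, [1.0, 0.0], t_end=2.0, dt=0.01, k=0.5)
```

For stiff systems like the pyrolysis and oxidation examples in the paper, the sensitivity equations can be stiffer than the model equations themselves, which is why the authors' step-size control based on the model equations alone is the interesting result.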
NASA Astrophysics Data System (ADS)
Virgen, Matthew Miguel
Two significant goals in solar plant operation are lower cost and higher efficiencies. To achieve those goals, a combined cycle gas turbine (CCGT) system, which uses the hot gas turbine exhaust to produce superheated steam for a bottoming Rankine cycle by way of a heat recovery steam generator (HRSG), is investigated in this work. Building on a previous gas turbine model created at the Combustion and Solar Energy Laboratory at SDSU, this work adds the HRSG and steam turbine models, which must handle significant changes in the mass flow and temperature of air exiting the gas turbine due to varying solar input. A wide range of cases were run to explore options for maximizing both power and efficiency from the proposed CSP CCGT plant. Variable guide vanes (VGVs) were found in the earlier model to be an effective tool in providing operational flexibility to address the variable nature of solar input. Combined cycle efficiencies in the range of 50% were found to result from this plant configuration. However, a combustor inlet temperature (CIT) limit leads to two distinct modes of operation, with a sharp drop in both plant efficiency and power occurring when the air flow through the receiver exceeds the CIT limit. This drawback can be partially addressed through strategic use of the VGVs. Since system response is fully established for the relevant range of solar input and variable guide vane angles, the System Advisor Model (SAM) from NREL can be used to find the actual expected solar input over the course of the day, and plan accordingly. While the SAM software is not yet equipped to model a Brayton cycle cavity receiver, appropriate approximations were made in order to produce a suitable heliostat field to fit this system.
Since the SPHER uses carbon nano-particles as the solar absorbers, questions of particle longevity and how the particles might affect the flame behavior in the combustor were addressed using the chemical kinetics software ChemkinPro by modeling the combustion characteristics both with and without the particles. This work is presented in the Appendix.
The impact of bathymetry input on flood simulations
NASA Astrophysics Data System (ADS)
Khanam, M.; Cohen, S.
2017-12-01
Flood prediction and mitigation systems are indispensable for improving public safety and community resilience worldwide. Hydraulic simulations of flood events are becoming an increasingly efficient tool for studying and predicting flood events and susceptibility. A consistent limitation of hydraulic simulations of riverine dynamics is the lack of information about river bathymetry, as most terrain data record water surface elevation. The impact of this limitation on the accuracy of hydraulic flood simulations has not been well studied over a large range of flood magnitudes and modeling frameworks. Advancing our understanding of this topic is timely given emerging national and global efforts to develop automated flood prediction systems (e.g. the NOAA National Water Center). Here we study the response of flood simulations to the incorporation of different bathymetry and floodplain survey sources. Two hydraulic models are compared: Mike-Flood, a 2D hydrodynamic model, and GSSHA, a hydrology/hydraulics model. We test the hypothesis that the impact of including or excluding bathymetry data on hydraulic model results will vary in magnitude as a function of river size. This will allow researchers and stakeholders to make more accurate predictions of flood events, providing useful information that will help local communities in vulnerable flood zones mitigate flood hazards. It will also help evaluate the accuracy and efficiency of different modeling frameworks and gauge their dependency on detailed bathymetry input data.
NASA Astrophysics Data System (ADS)
Lee, Cameron C.; Sheridan, Scott C.; Barnes, Brian B.; Hu, Chuanmin; Pirhalla, Douglas E.; Ransibrahmanakul, Varis; Shein, Karsten
2017-10-01
The coastal waters of the southeastern USA contain important protected habitats and natural resources that are vulnerable to climate variability and singular weather events. Water clarity, strongly affected by atmospheric events, is linked to substantial environmental impacts throughout the region. To assess this relationship over the long-term, this study uses an artificial neural network-based time series modeling technique known as non-linear autoregressive models with exogenous input (NARX models) to explore the relationship between climate and a water clarity index (KDI) in this area and to reconstruct this index over a 66-year period. Results show that synoptic-scale circulation patterns, weather types, and precipitation all play roles in impacting water clarity to varying degrees in each region of the larger domain. In particular, turbid water is associated with transitional weather and cyclonic circulation in much of the study region. Overall, NARX model performance also varies—regionally, seasonally and interannually—with wintertime estimates of KDI along the West Florida Shelf correlating to the actual KDI at r > 0.70. Periods of extreme (high) KDI in this area coincide with notable El Niño events. An upward trend in extreme KDI events from 1948 to 2013 is also present across much of the Florida Gulf coast.
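The NARX structure described above, in which lagged outputs and lagged exogenous inputs jointly predict the next output value, can be sketched with a linear ARX stand-in (the study uses neural-network NARX models; the data, lag orders, and coefficients below are illustrative assumptions, not the paper's):

```python
import numpy as np

# Linear ARX sketch: predict y[t] from lagged outputs y[t-1..t-ny] and
# lagged exogenous inputs x[t-1..t-nx], fitted by least squares.
def build_lagged_matrix(y, x, ny=2, nx=2):
    """Rows: [y[t-1..t-ny], x[t-1..t-nx]]; targets: y[t]."""
    rows, targets = [], []
    for t in range(max(ny, nx), len(y)):
        rows.append(np.concatenate([y[t - ny:t][::-1], x[t - nx:t][::-1]]))
        targets.append(y[t])
    return np.array(rows), np.array(targets)

rng = np.random.default_rng(0)
x = rng.normal(size=500)                 # synthetic exogenous driver
y = np.zeros(500)
for t in range(2, 500):                  # known ARX ground truth to recover
    y[t] = 0.6 * y[t - 1] - 0.2 * y[t - 2] + 0.8 * x[t - 1] + 0.01 * rng.normal()

A, b = build_lagged_matrix(y, x)
coef, *_ = np.linalg.lstsq(A, b, rcond=None)
pred = A @ coef
r = np.corrcoef(pred, b)[0, 1]           # in-sample correlation of fit
print(round(r, 2))
```

In the nonlinear case the least-squares regression is replaced by a neural network over the same lagged-feature matrix, which is the structural point this sketch illustrates.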
NASA Astrophysics Data System (ADS)
Edalati, L.; Khaki Sedigh, A.; Aliyari Shooredeli, M.; Moarefianpour, A.
2018-02-01
This paper deals with the design of adaptive fuzzy dynamic surface control for uncertain strict-feedback nonlinear systems with asymmetric time-varying output constraints in the presence of input saturation. To approximate the unknown nonlinear functions and overcome the problem of explosion of complexity, a fuzzy logic system is combined with dynamic surface control in the backstepping design technique. To ensure satisfaction of the output constraints, an asymmetric time-varying barrier Lyapunov function (BLF) is used. Moreover, by applying the minimal learning parameter technique, the number of online parameters updated for each subsystem is reduced to two. Hence, semi-global uniform ultimate boundedness (SGUUB) of all the closed-loop signals with appropriate tracking error convergence is guaranteed. The effectiveness of the proposed control is demonstrated by two simulation examples.
Yan, Kun; Liu, Yi; Zhang, Jitao; Correa, Santiago O; Shang, Wu; Tsai, Cheng-Chieh; Bentley, William E; Shen, Jana; Scarcelli, Giuliano; Raub, Christopher B; Shi, Xiao-Wen; Payne, Gregory F
2018-02-12
The growing importance of hydrogels in translational medicine has stimulated the development of top-down fabrication methods, yet often these methods lack the capabilities to generate the complex matrix architectures observed in biology. Here we show that temporally varying electrical signals can cue a self-assembling polysaccharide to controllably form a hydrogel with complex internal patterns. Evidence from theory and experiment indicate that internal structure emerges through a subtle interplay between the electrical current that triggers self-assembly and the electrical potential (or electric field) that recruits and appears to orient the polysaccharide chains at the growing gel front. These studies demonstrate that short sequences (minutes) of low-power (∼1 V) electrical inputs can provide the program to guide self-assembly that yields hydrogels with stable, complex, and spatially varying structure and properties.
Spectral characterization of Martian soil analogues
NASA Technical Reports Server (NTRS)
Agresti, David G.
1987-01-01
As previously reported, reflectance spectra of iron oxide precipitated as ultrafine particles, unlike ordinary fine-grained hematite, have significant similarities to reflectance spectra from the bright regions of Mars. These particles were characterized according to composition, magnetic properties, and particle size distribution. Mössbauer, magnetic susceptibility, and optical data were obtained for samples with a range of concentrations of iron oxide in silica gel of varying pore diameters. To analyze the Mössbauer spectra, a versatile fitting program was enhanced to provide user-friendly screen input and theoretical models appropriate for the superparamagnetic spectra obtained.
NASA Technical Reports Server (NTRS)
Dunbar, D. N.; Tunnah, B. G.
1978-01-01
A FORTRAN computer program is described for predicting the flow streams and material, energy, and economic balances of a typical petroleum refinery, with particular emphasis on production of aviation turbine fuel of varying end point and hydrogen content specifications. The program has provision for shale oil and coal oil in addition to petroleum crudes. A case study feature permits dependent cases to be run for parametric or optimization studies by input of only the variables which are changed from the base case.
NASA Technical Reports Server (NTRS)
Dunbar, D. N.; Tunnah, B. G.
1978-01-01
The FORTRAN computing program predicts flow streams and material, energy, and economic balances of a typical petroleum refinery, with particular emphasis on production of aviation turbine fuels of varying end point and hydrogen content specifications. The program has a provision for shale oil and coal oil in addition to petroleum crudes. A case study feature permits dependent cases to be run for parametric or optimization studies by input of only the variables which are changed from the base case.
Advanced Booster Liquid Engine Combustion Stability
NASA Technical Reports Server (NTRS)
Tucker, Kevin; Gentz, Steve; Nettles, Mindy
2015-01-01
Combustion instability is a phenomenon in liquid rocket engines caused by complex coupling between the time-varying combustion processes and the fluid dynamics in the combustor. Consequences of the large pressure oscillations associated with combustion instability often cause significant hardware damage and can be catastrophic. The current combustion stability assessment tools are limited by the level of empiricism in many inputs and embedded models. This limited predictive capability creates significant uncertainty in stability assessments. This large uncertainty then increases hardware development costs due to heavy reliance on expensive and time-consuming testing.
Low-noise encoding of active touch by layer 4 in the somatosensory cortex.
Hires, Samuel Andrew; Gutnisky, Diego A; Yu, Jianing; O'Connor, Daniel H; Svoboda, Karel
2015-08-06
Cortical spike trains often appear noisy, with the timing and number of spikes varying across repetitions of stimuli. Spiking variability can arise from internal (behavioral state, unreliable neurons, or chaotic dynamics in neural circuits) and external (uncontrolled behavior or sensory stimuli) sources. The amount of irreducible internal noise in spike trains, an important constraint on models of cortical networks, has been difficult to estimate, since behavior and brain state must be precisely controlled or tracked. We recorded from excitatory barrel cortex neurons in layer 4 during active behavior, where mice control tactile input through learned whisker movements. Touch was the dominant sensorimotor feature, with >70% spikes occurring in millisecond timescale epochs after touch onset. The variance of touch responses was smaller than expected from Poisson processes, often reaching the theoretical minimum. Layer 4 spike trains thus reflect the millisecond-timescale structure of tactile input with little noise.
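The sub-Poisson variability reported above is conventionally quantified by the Fano factor (spike-count variance divided by mean), which equals 1 for a Poisson process and falls below 1 for reliable responses. A minimal sketch with synthetic counts (none of the numbers come from the study):

```python
import numpy as np

# Fano factor: variance/mean of spike counts across trials.
def fano(counts):
    return counts.var(ddof=1) / counts.mean()

rng = np.random.default_rng(0)
# Reference Poisson counts (mean 5): Fano factor ~ 1.
poisson_counts = rng.poisson(5.0, size=5000)
# Binomial(6, 5/6) counts also have mean 5, but Fano = 1 - p ~ 0.17,
# i.e. strongly sub-Poisson, mimicking a reliable touch response.
reliable_counts = rng.binomial(6, 5.0 / 6.0, size=5000)

print(round(fano(poisson_counts), 2), round(fano(reliable_counts), 2))
```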
NASA Lewis Steady-State Heat Pipe Code Architecture
NASA Technical Reports Server (NTRS)
Mi, Ye; Tower, Leonard K.
2013-01-01
NASA Glenn Research Center (GRC) has developed the LERCHP code. The PC-based LERCHP code can be used to predict the steady-state performance of heat pipes, including the determination of operating temperature and operating limits which might be encountered under specified conditions. The code contains a vapor flow algorithm which incorporates vapor compressibility and axially varying heat input. For the liquid flow in the wick, Darcy's formula is employed. Thermal boundary conditions and geometric structures can be defined through an interactive input interface. A variety of fluid and material options, as well as user-defined options, can be chosen for the working fluid, wick, and pipe materials. This report documents the current effort at GRC to update the LERCHP code for operation in a Microsoft Windows (Microsoft Corporation) environment. A detailed analysis of the model is presented. The programming architecture for the numerical calculations is explained, and flowcharts of the key subroutines are given.
Wang, Chenliang; Wen, Changyun; Hu, Qinglei; Wang, Wei; Zhang, Xiuyu
2018-06-01
This paper is devoted to distributed adaptive containment control for a class of nonlinear multiagent systems with input quantization. By employing a matrix factorization and a novel matrix normalization technique, some assumptions involving control gain matrices in existing results are relaxed. By fusing the techniques of sliding mode control and backstepping control, a two-step design method is proposed to construct controllers and, with the aid of neural networks, all system nonlinearities are allowed to be unknown. Moreover, a linear time-varying model and a similarity transformation are introduced to circumvent the obstacle brought by quantization, and the controllers need no information about the quantizer parameters. The proposed scheme is able to ensure the boundedness of all closed-loop signals and steer the containment errors into an arbitrarily small residual set. The simulation results illustrate the effectiveness of the scheme.
Angulo-Garcia, David; Berke, Joshua D; Torcini, Alessandro
2016-02-01
Striatal projection neurons form a sparsely-connected inhibitory network, and this arrangement may be essential for the appropriate temporal organization of behavior. Here we show that a simplified, sparse inhibitory network of Leaky-Integrate-and-Fire neurons can reproduce some key features of striatal population activity, as observed in brain slices. In particular we develop a new metric to determine the conditions under which sparse inhibitory networks form anti-correlated cell assemblies with time-varying activity of individual cells. We find that under these conditions the network displays an input-specific sequence of cell assembly switching, that effectively discriminates similar inputs. Our results support the proposal that GABAergic connections between striatal projection neurons allow stimulus-selective, temporally-extended sequential activation of cell assemblies. Furthermore, we help to show how altered intrastriatal GABAergic signaling may produce aberrant network-level information processing in disorders such as Parkinson's and Huntington's diseases.
A wide bandwidth CCD buffer memory system
NASA Technical Reports Server (NTRS)
Siemens, K.; Wallace, R. W.; Robinson, C. R.
1978-01-01
A prototype system was implemented to demonstrate that CCDs can be applied advantageously to the problem of low-power digital storage, and particularly to the problem of interfacing widely varying data rates. CCD shift register memories (8K bit) were used to construct a feasibility-model 128K-bit buffer memory system. Serial data with rates between 150 kHz and 4.0 MHz can be stored in 4K-bit, randomly accessible memory blocks. Peak power dissipation during a data transfer is less than 7 W, while idle power is approximately 5.4 W. The system features automatic data input synchronization with the recirculating CCD memory block start address. System expansion to accommodate parallel inputs or a greater number of memory blocks can be performed in a modular fashion. Since the control logic does not grow proportionally with memory capacity, the power requirement per bit of storage can be reduced significantly in a larger system.
Thermophysical effects of ointments in cold: an experimental study with a skin model.
Lehmuskallio, E; Anttonen, H
1999-01-01
The use of emollients on the face is a traditional way to protect the skin against cold injuries in cold-climate countries like Finland, but their preventive effect against frostbite has been questioned. The purpose of this investigation was to define the thermal insulation and occlusivity of ointments in cold by using a skin model with a sweating hot plate. The properties of four different emollients were studied in both dry and humid conditions simulating transepidermal water loss, sweating, and a combination of sweating and drying. The thermal insulation of ointments applied on a dry surface was minimal. Evaporation of water from an oil-in-water cream caused significant cooling for 40 min after its application. The diffusion of water through the applied emollients changed their thermal effects depending on their composition and on the amount of water. A low input of water slightly increased, and a high input slightly diminished, the thermal resistance of the ointments. The minimal or even negative thermal insulation of emollients under varying conditions gives them at best a negligible, and at worst a disadvantageous, physical effect against cold.
A distributed analysis of Human impact on global sediment dynamics
NASA Astrophysics Data System (ADS)
Cohen, S.; Kettner, A.; Syvitski, J. P.
2012-12-01
Understanding riverine sediment dynamics is an important undertaking both for socially relevant issues such as agriculture, water security, and infrastructure management, and for scientific analysis of landscapes, river ecology, oceanography, and other disciplines. Providing good quantitative and predictive tools is therefore timely, particularly in light of predicted climate and land-use changes. Ever-increasing human activity during the Anthropocene has affected sediment dynamics in two major ways: (1) an increase in hillslope erosion due to agriculture, deforestation, and landscape engineering, and (2) trapping of sediment in dams and other man-made reservoirs. The intensity of and dynamics between these man-made factors vary widely across the globe and in time, and are therefore hard to predict. Using sophisticated numerical models is therefore warranted. Here we use a distributed global riverine sediment flux and water discharge model (WBMsed) to compare a pristine (without human input) and a disturbed (with human input) simulation. Using these 50-year simulations we will show and discuss the complex spatial and temporal patterns of human effects on riverine sediment flux and water discharge.
Bayesian population decoding of spiking neurons.
Gerwinn, Sebastian; Macke, Jakob; Bethge, Matthias
2009-01-01
The timing of action potentials in spiking neurons depends on the temporal dynamics of their inputs and contains information about temporal fluctuations in the stimulus. Leaky integrate-and-fire neurons constitute a popular class of encoding models, in which spike times depend directly on the temporal structure of the inputs. However, optimal decoding rules for these models have only been studied explicitly in the noiseless case. Here, we study decoding rules for probabilistic inference of a continuous stimulus from the spike times of a population of leaky integrate-and-fire neurons with threshold noise. We derive three algorithms for approximating the posterior distribution over stimuli as a function of the observed spike trains. In addition to a reconstruction of the stimulus we thus obtain an estimate of the uncertainty as well. Furthermore, we derive a 'spike-by-spike' online decoding scheme that recursively updates the posterior with the arrival of each new spike. We use these decoding rules to reconstruct time-varying stimuli represented by a Gaussian process from spike trains of single neurons as well as neural populations.
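The encoding side of the problem above, a leaky integrate-and-fire neuron turning a time-varying stimulus into spike times, can be sketched in a few lines; the time constant, threshold, and stimulus are illustrative assumptions, not the paper's parameters:

```python
import math

# Minimal leaky integrate-and-fire (LIF) encoder: the membrane voltage
# integrates a time-varying input with leak, and a spike time is recorded
# whenever the voltage crosses threshold (followed by a reset).
def lif_spike_times(stim, dt=0.001, tau=0.02, v_th=1.0, v_reset=0.0):
    v, spikes = 0.0, []
    for k, s in enumerate(stim):
        v += dt * (-v / tau + s)      # forward-Euler leaky integration
        if v >= v_th:
            spikes.append(k * dt)     # spike time in seconds
            v = v_reset
    return spikes

# Rectified 2 Hz sinusoidal stimulus over 1 s: spikes should cluster
# where the input is strong.
stim = [60.0 * max(0.0, math.sin(2 * math.pi * 2.0 * t / 1000)) for t in range(1000)]
spikes = lif_spike_times(stim)
print(len(spikes))
```

Decoding then amounts to inverting this map probabilistically: given the spike times, infer a posterior over the stimulus, which is what the paper's algorithms do.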
Fabric filter model sensitivity analysis. Final report Jun 1978-Feb 1979
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dennis, R.; Klemm, H.A.; Battye, W.
1979-04-01
The report gives results of a series of sensitivity tests of a GCA fabric filter model, as a precursor to further laboratory and/or field tests. Preliminary tests had shown good agreement with field data. However, the apparent agreement between predicted and actual values was based on limited comparisons: validation was carried out without regard to optimization of the data inputs selected by the filter users or manufacturers. The sensitivity tests involved introducing into the model several hypothetical data inputs that reflect the expected ranges in the principal filter system variables. Such factors as air/cloth ratio, cleaning frequency, amount of cleaning, specific resistance coefficient K2, the number of compartments, and inlet concentration were examined in various permutations. A key objective of the tests was to determine the variables that require the greatest accuracy in estimation, based on their overall impact on model output. For K2 variations, the system resistance and emission properties showed little change, but the cleaning requirement changed drastically. On the other hand, considerable difference in outlet dust concentration was indicated when the degree of fabric cleaning was varied. To make the findings more useful to persons assessing the probable success of proposed or existing filter systems, much of the data output is presented in graphs or charts.
Population density equations for stochastic processes with memory kernels
NASA Astrophysics Data System (ADS)
Lai, Yi Ming; de Kamps, Marc
2017-06-01
We present a method for solving population density equations (PDEs), a mean-field technique describing homogeneous populations of uncoupled neurons, where the populations can be subject to non-Markov noise for arbitrary distributions of jump sizes. The method combines recent developments in two different disciplines that traditionally have had limited interaction: computational neuroscience and the theory of random networks. The method uses a geometric binning scheme, based on the method of characteristics, to capture the deterministic neurodynamics of the population, separating the deterministic and stochastic processes cleanly. We can independently vary the choice of the deterministic model and the model for the stochastic process, leading to a highly modular numerical solution strategy. We demonstrate this by replacing the master equation implicit in many formulations of the PDE formalism with a generalization called the generalized Montroll-Weiss equation, a recent result from random network theory, describing a random walker subject to transitions realized by a non-Markovian process. We demonstrate the method for leaky and quadratic integrate-and-fire neurons subject to spike trains with Poisson and gamma-distributed interspike intervals. We are able to model jump responses for both models accurately, for both excitatory and inhibitory input, under the assumption that all inputs are generated by one renewal process.
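The two input classes the method is demonstrated on, Poisson and gamma-distributed interspike intervals, can both be generated as renewal processes; a sketch with illustrative rates (the coefficient of variation of the ISIs is what distinguishes them: 1 for Poisson, 1/sqrt(shape) for gamma):

```python
import random

# Generate a renewal-process spike train by drawing successive
# interspike intervals (ISIs) until the end time is reached.
def renewal_train(draw_isi, t_end):
    t, spikes = 0.0, []
    while True:
        t += draw_isi()
        if t > t_end:
            return spikes
        spikes.append(t)

def cv_isi(spikes):
    """Coefficient of variation of the interspike intervals."""
    isis = [b - a for a, b in zip(spikes, spikes[1:])]
    m = sum(isis) / len(isis)
    v = sum((x - m) ** 2 for x in isis) / (len(isis) - 1)
    return v ** 0.5 / m

random.seed(0)
rate, shape = 20.0, 4.0   # both trains have mean rate 20 Hz
poisson = renewal_train(lambda: random.expovariate(rate), 50.0)
gamma = renewal_train(lambda: random.gammavariate(shape, 1.0 / (shape * rate)), 50.0)
print(round(cv_isi(poisson), 2), round(cv_isi(gamma), 2))
```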
Simulating the X-Ray Image Contrast to Set-Up Techniques with Desired Flaw Detectability
NASA Technical Reports Server (NTRS)
Koshti, Ajay M.
2015-01-01
The paper provides simulation data from previous work by the author in developing a model for estimating detectability of crack-like flaws in radiography. The methodology is being developed to help in implementation of the NASA Special x-ray radiography qualification, but is generically applicable to radiography. The paper describes a method for characterizing X-ray detector resolution for crack detection. Applicability of ASTM E 2737 resolution requirements to the model is also discussed. The paper describes a model for simulating the detector resolution. A computer calculator application, discussed here, also performs predicted contrast and signal-to-noise ratio calculations. Results of various simulation runs calculating the x-ray flaw size parameter and image contrast for varying input parameters, such as crack depth, crack width, part thickness, x-ray angle, part-to-detector distance, part-to-source distance, source size, and detector sensitivity and resolution, are given as 3D surfaces. These results demonstrate the effect of the input parameters on the flaw size parameter and the simulated image contrast of the crack. These simulations demonstrate the utility of the flaw size parameter model in setting up x-ray techniques that provide the desired flaw detectability in radiography. The method is applicable to film radiography, computed radiography, and digital radiography.
Probabilistic seismic hazard study based on active fault and finite element geodynamic models
NASA Astrophysics Data System (ADS)
Kastelic, Vanja; Carafa, Michele M. C.; Visini, Francesco
2016-04-01
We present a probabilistic seismic hazard analysis (PSHA) that is based exclusively on active faults and geodynamic finite element input models; seismic catalogues were used only in a posterior comparison. We applied the developed model in the External Dinarides, a slowly deforming thrust-and-fold belt at the contact between Adria and Eurasia. Our method consists of establishing two earthquake rupture forecast models: (i) a geological active fault input (GEO) model and (ii) a finite element (FEM) model. The GEO model is based on an active fault database that provides information on fault location and its geometric and kinematic parameters, together with estimates of its slip rate. By default in this model, all deformation is set to be released along the active faults. The FEM model is based on a numerical geodynamic model developed for the region of study. In this model the deformation is released not only along the active faults but also in the volumetric continuum elements. From both models we calculated the corresponding activity rates, earthquake rates, and final expected peak ground accelerations. We investigated both the source model and the earthquake model uncertainties by varying the main active fault and earthquake rate calculation parameters, constructing corresponding branches of the seismic hazard logic tree. Hazard maps and UHS curves have been produced for horizontal ground motion on bedrock conditions (VS30 ≥ 800 m/s), thereby not considering local site amplification effects. The hazard was computed over a 0.2° spaced grid considering 648 branches of the logic tree and the mean value of the 10% probability of exceedance in 50 years hazard level, while the 5th and 95th percentiles were also computed to investigate the model limits. We conducted a sensitivity analysis to determine which of the input parameters influence the final hazard results, and to what degree.
This comparison shows that the deformation model, with its internal variability, and the choice of the ground motion prediction equations (GMPEs) are the most influential parameters; both have a significant effect on the hazard results. Good knowledge of the existence of active faults and of their geometric and activity characteristics is thus of key importance. We also show that PSHA models based exclusively on active faults and geodynamic inputs, which are thus not dependent on past earthquake occurrences, provide a valid method for seismic hazard calculation.
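At its core, the logic-tree machinery mentioned above reduces to a weighted combination of branch hazard estimates into a mean hazard curve; a minimal sketch with invented branch weights and exceedance rates (not the study's 648-branch tree):

```python
import math

# Each logic-tree branch supplies an annual exceedance rate per
# ground-motion level; branch weights must sum to 1. All values below
# are made up for illustration.
pga_levels = [0.1, 0.2, 0.4]              # peak ground acceleration, g
branches = [
    (0.5, [2e-3, 8e-4, 2e-4]),            # (weight, rates per PGA level)
    (0.3, [3e-3, 1e-3, 3e-4]),
    (0.2, [1e-3, 5e-4, 1e-4]),
]
assert abs(sum(w for w, _ in branches) - 1.0) < 1e-9

# Weighted mean annual exceedance rate at each ground-motion level.
mean_rates = [sum(w * r[i] for w, r in branches) for i in range(len(pga_levels))]

# Probability of exceedance in 50 years under a Poisson occurrence model.
poe_50yr = [1.0 - math.exp(-rate * 50.0) for rate in mean_rates]
print([round(p, 3) for p in poe_50yr])
```

The 5th/95th percentile curves reported in the study come from the distribution of the individual branch curves rather than this weighted mean.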
Peak-Seeking Control Using Gradient and Hessian Estimates
NASA Technical Reports Server (NTRS)
Ryan, John J.; Speyer, Jason L.
2010-01-01
A peak-seeking control method is presented which utilizes a linear time-varying Kalman filter. Performance function coordinate and magnitude measurements are used by the Kalman filter to estimate the gradient and Hessian of the performance function. The gradient and Hessian are used to command the system toward a local extremum. The method is naturally applied to multiple-input multiple-output systems. Applications of this technique to a single-input single-output example and a two-input one-output example are presented.
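The command law implied above, stepping the input toward a local extremum using gradient and Hessian estimates, can be sketched with central finite differences standing in for the paper's time-varying Kalman filter estimator (the performance function is a made-up quadratic with a known peak):

```python
import numpy as np

# Made-up concave performance function with its peak at (1, -0.5).
def f(x):
    return -((x[0] - 1.0) ** 2 + 2.0 * (x[1] + 0.5) ** 2)

# Central-difference estimates of gradient and Hessian (the paper obtains
# these from Kalman-filtered measurements instead).
def grad_hess(f, x, h=1e-3):
    n = len(x)
    g, H = np.zeros(n), np.zeros((n, n))
    for i in range(n):
        e = np.zeros(n); e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
        H[i, i] = (f(x + e) - 2 * f(x) + f(x - e)) / h ** 2
    for i in range(n):
        for j in range(i + 1, n):
            ei = np.zeros(n); ei[i] = h
            ej = np.zeros(n); ej[j] = h
            H[i, j] = H[j, i] = (f(x + ei + ej) - f(x + ei - ej)
                                 - f(x - ei + ej) + f(x - ei - ej)) / (4 * h ** 2)
    return g, H

x = np.array([4.0, 3.0])          # initial operating point
for _ in range(5):                # Newton command: x <- x - H^{-1} g
    g, H = grad_hess(f, x)
    x = x - np.linalg.solve(H, g)
print(np.round(x, 3))
```

For a quadratic performance function a single Newton step lands on the extremum; noisy measurements are what motivate the Kalman-filter estimation in the paper.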
Disinfection by electrohydraulic treatment.
Allen, M; Soike, K
1967-04-28
Electrohydraulic treatment was applied to suspensions of Escherichia coli, spores of Bacillus subtilis var. niger, Saccharomyces cerevisiae, and bacteriophage T2 at an input energy that, in most cases, was below the energy required to sterilize. The input energy was held relatively constant for each of these microorganisms, but the capacitance and voltage were varied. Data are presented which show the degree of disinfection as a function of capacitance and voltage. In all cases, the degree of disinfection for a given input energy increases as both capacitance and voltage are lowered.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Milkov, Mihail M.
A comparator circuit suitable for use in a column-parallel single-slope analog-to-digital converter comprises a comparator, an input voltage sampling switch, a sampling capacitor arranged to store a voltage which varies with an input voltage when the sampling switch is closed, and a local ramp buffer arranged to buffer a global voltage ramp applied at an input. The comparator circuit is arranged such that its output toggles when the buffered global voltage ramp exceeds the stored voltage. Both DC- and AC-coupled comparator embodiments are disclosed.
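The single-slope conversion described above can be sketched behaviorally: the comparator output toggles at the ramp count where the buffered ramp first exceeds the sampled input, and that count is the digital code (resolution and reference voltage below are illustrative, not from the patent):

```python
# Behavioral sketch of a single-slope ADC conversion: sweep a digital
# ramp and return the count at which the comparator toggles.
def single_slope_convert(v_in, v_ref=1.0, n_bits=8):
    steps = 1 << n_bits
    for code in range(steps):
        ramp = v_ref * code / (steps - 1)   # buffered global voltage ramp
        if ramp >= v_in:                    # comparator output toggles here
            return code
    return steps - 1                        # input at or above full scale

codes = [single_slope_convert(v) for v in (0.0, 0.25, 0.5, 1.0)]
print(codes)  # [0, 64, 128, 255]
```

In the column-parallel arrangement the same global ramp is shared by every column, and each column's comparator latches its own toggle count.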
Neural tuning matches frequency-dependent time differences between the ears
Benichoux, Victor; Fontaine, Bertrand; Franken, Tom P; Karino, Shotaro; Joris, Philip X; Brette, Romain
2015-01-01
The time it takes a sound to travel from source to ear differs between the ears and creates an interaural delay. It varies systematically with spatial direction and is generally modeled as a pure time delay, independent of frequency. In acoustical recordings, we found that interaural delay varies with frequency at a fine scale. In physiological recordings of midbrain neurons sensitive to interaural delay, we found that preferred delay also varies with sound frequency. Similar observations reported earlier were not incorporated in a functional framework. We find that the frequency dependence of acoustical and physiological interaural delays are matched in key respects. This suggests that binaural neurons are tuned to acoustical features of ecological environments, rather than to fixed interaural delays. Using recordings from the nerve and brainstem we show that this tuning may emerge from neurons detecting coincidences between input fibers that are mistuned in frequency. DOI: http://dx.doi.org/10.7554/eLife.06072.001 PMID:25915620
Membrane Potential Dynamics of CA1 Pyramidal Neurons During Hippocampal Ripples in Awake Mice
Hulse, Brad K.; Moreaux, Laurent C.; Lubenov, Evgueniy V.; Siapas, Athanassios G.
2016-01-01
Ripples are high-frequency oscillations associated with population bursts in area CA1 of the hippocampus that play a prominent role in theories of memory consolidation. While spiking during ripples has been extensively studied, our understanding of the subthreshold behavior of hippocampal neurons during these events remains incomplete. Here, we combine in vivo whole-cell and multisite extracellular recordings to characterize the membrane potential dynamics of identified CA1 pyramidal neurons during ripples. We find that the subthreshold depolarization during ripples is uncorrelated with the net excitatory input to CA1, while the post-ripple hyperpolarization varies proportionately. This clarifies the circuit mechanism keeping most neurons silent during ripples. On a finer time scale, the phase delay between intracellular and extracellular ripple oscillations varies systematically with membrane potential. Such smoothly varying delays are inconsistent with models of intracellular ripple generation involving perisomatic inhibition alone. Instead, they suggest that ripple-frequency excitation leading inhibition shapes intracellular ripple oscillations. PMID:26889811
Li, Ya Ni; Lu, Lei; Liu, Yong
2017-12-01
The tasseled cap triangle (TCT)-leaf area index (LAI) isoline is a model that reflects the distribution of LAI isolines in the spectral space constituted by reflectance in the red and near-infrared (NIR) bands, and LAI retrieval models developed on this basis are more accurate than the commonly used statistical relationship models. This study used ground-based measurements of a rice field, validated the applicability of the PROSAIL model in simulating canopy reflectance of the rice field, and calibrated the input parameters of the model. The ranges of values of the PROSAIL input parameters for simulating rice canopy reflectance were determined. On this basis, the TCT-LAI isoline model of the rice field was established, and a look-up table (LUT) required for remote sensing retrieval of LAI was developed. The LUT was then applied to Landsat 8 and WorldView 3 data to retrieve LAI of the rice field. The results showed that the LAI retrieved using the LUT developed from the TCT-LAI isoline model had a good linear relationship with the measured LAI (R2 = 0.76, RMSE = 0.47). Compared with the LAI retrieved from Landsat 8, LAI values retrieved from WorldView 3 varied over a wider range, and the data distribution was more scattered. When the Landsat 8 and WorldView 3 reflectance data were resampled to 1 km to retrieve LAI, the MODIS LAI product was found to be significantly underestimated compared to the retrieved LAI.
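LUT-based retrieval of the kind described above amounts to finding the table entry whose simulated red/NIR reflectance pair best matches the observed pixel; a nearest-neighbour sketch with invented entries (toy values, not PROSAIL output):

```python
# Toy look-up table: (red reflectance, NIR reflectance, LAI).
# Denser canopies absorb more red and scatter more NIR.
lut = [
    (0.08, 0.30, 0.5),
    (0.06, 0.38, 1.5),
    (0.04, 0.45, 3.0),
    (0.03, 0.50, 5.0),
]

def retrieve_lai(red, nir):
    """Return the LAI of the LUT entry closest in (red, NIR) space."""
    return min(lut, key=lambda e: (e[0] - red) ** 2 + (e[1] - nir) ** 2)[2]

print(retrieve_lai(0.05, 0.44))  # matches the LAI = 3.0 entry
```

A real LUT would be far denser and the match criterion may weight the bands; the nearest-neighbour search is the generic part.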
Sharma, Ram C; Hara, Keitarou; Hirayama, Hidetake
2017-01-01
This paper presents the performance and evaluation of a number of machine learning classifiers for discriminating between vegetation physiognomic classes using satellite-based time series of surface reflectance data. Discrimination of six vegetation physiognomic classes was addressed: Evergreen Coniferous Forest, Evergreen Broadleaf Forest, Deciduous Coniferous Forest, Deciduous Broadleaf Forest, Shrubs, and Herbs. Rich-feature data were prepared from the satellite time series for discrimination and cross-validation of the vegetation physiognomic types using a machine learning approach. A set of machine learning experiments, comprising a number of supervised classifiers with different model parameters, was conducted to assess how the discrimination of vegetation physiognomic classes varies with classifiers, input features, and ground truth data size. The performance of each experiment was evaluated using the 10-fold cross-validation method. The experiment using the Random Forests classifier provided the highest overall accuracy (0.81) and kappa coefficient (0.78). However, accuracy metrics did not vary much across experiments. Accuracy metrics were found to be very sensitive to input features and the size of the ground truth data. The results obtained in this research are expected to be useful for improving vegetation physiognomic mapping in Japan.
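The 10-fold cross-validation used for evaluation can be sketched independently of the classifier; here a nearest-centroid stand-in replaces the Random Forests of the study, and the two-band features and three classes are synthetic:

```python
import random

# Synthetic two-feature samples for three classes (labels 0, 1, 2).
random.seed(0)
data = [([random.gauss(c, 0.5), random.gauss(-c, 0.5)], c)
        for c in (0, 1, 2) for _ in range(60)]
random.shuffle(data)

def nearest_centroid_fit(train):
    """Per-class mean feature vector."""
    cent = {}
    for lab in set(l for _, l in train):
        pts = [x for x, l in train if l == lab]
        cent[lab] = [sum(col) / len(pts) for col in zip(*pts)]
    return cent

def predict(cent, x):
    return min(cent, key=lambda l: sum((a - b) ** 2 for a, b in zip(x, cent[l])))

# 10-fold cross-validation: hold out each fold in turn, train on the rest.
k = 10
folds = [data[i::k] for i in range(k)]
accs = []
for i in range(k):
    test = folds[i]
    train = [d for j in range(k) if j != i for d in folds[j]]
    cent = nearest_centroid_fit(train)
    accs.append(sum(predict(cent, x) == l for x, l in test) / len(test))
mean_acc = sum(accs) / k
print(round(mean_acc, 2))
```

Swapping in a Random Forest (or any other learner) changes only the fit/predict pair; the fold-splitting and accuracy accounting stay the same.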
NASA Astrophysics Data System (ADS)
Tsiaras, K. P.; Petihakis, G.; Kourafalou, V. H.; Triantafyllou, G.
2014-02-01
The impact of river load variability on North Aegean ecosystem functioning over the period 1980-2000 was investigated by means of a coupled hydrodynamic/biogeochemical model simulation. Model results were validated against available SeaWiFS Chl-a and in situ data. The simulated food web was found to be dominated by small cells, in agreement with observations, with most of the carbon channelled through the microbial loop. Diatoms and dinoflagellates showed a higher relative abundance in the more productive coastal areas. The increased phosphate river loads in the early 1980s resulted in nitrogen and silicate deficiency in coastal, river-influenced regions. Primary production showed a decreasing trend in most areas. During periods of increased phosphate/nitrate inputs, silicate deficiency resulted in a relative decrease of diatoms, triggering an increase of dinoflagellates. Such an increase was simulated in the late 1990s in the Thermaikos Gulf, in agreement with the observed increased occurrence of Harmful Algal Blooms. Microzooplankton was found to closely follow the relative increase of dinoflagellates under higher nutrient availability, showing a faster response than mesozooplankton. Sensitivity simulations with varying nutrient river inputs revealed a linear response of net primary production and plankton biomass. A stronger effect of river inputs was simulated in the enclosed Thermaikos Gulf, in terms of productivity and plankton composition, with a significant increase in dinoflagellate relative abundance under increased nutrient loads.
Ozbilgin, M.M.; Dickerman, D.C.
1984-01-01
The two-dimensional finite-difference model for simulation of groundwater flow was modified to enable simulation of surface-water/groundwater interactions during periods of low streamflow. Changes were made to the program code in order to calculate surface-water heads for, and flow either to or from, contiguous surface-water bodies; and to allow for more convenient data input. Methods of data input and output were modified and entries (RSORT and HDRIVER) were added to the COEF and CHECKI subroutines to calculate surface-water heads. A new subroutine CALC was added to the program which initiates surface-water calculations. If CALC is not specified as a simulation option, the program runs the original version. The subroutines which solve the ground-water flow equations were not changed. Recharge, evapotranspiration, surface-water inflow, number of wells, pumping rate, and pumping duration can be varied for any time period. The Manning formula was used to relate stream depth and discharge in surface-water streams. Interactions between surface water and ground water are represented by the leakage term in the ground-water flow and surface-water mass balance equations. Documentation includes a flow chart, data deck instructions, input data, output summary, and program listing. Numerical results from the modified program are in good agreement with published analytical results. (USGS)
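The report relates stream depth and discharge via the Manning formula. A minimal Python sketch of that relation (the original USGS program is Fortran-era code; a rectangular channel and SI units are assumed here for illustration) is:

```python
def manning_discharge(depth, width, slope, n_rough):
    """Discharge Q (m^3/s) from Manning's formula, Q = (1/n) A R^(2/3) S^(1/2),
    for a rectangular channel in SI units: A = flow area, R = hydraulic radius,
    S = channel slope, n = Manning roughness coefficient."""
    area = width * depth
    radius = area / (width + 2.0 * depth)   # R = A / wetted perimeter
    return (1.0 / n_rough) * area * radius ** (2.0 / 3.0) * slope ** 0.5

def depth_from_discharge(q, width, slope, n_rough, lo=1e-6, hi=50.0):
    """Invert the depth-discharge relation by bisection (Q grows with depth)."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if manning_discharge(mid, width, slope, n_rough) < q:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

The inversion is what a surface-water routine needs when discharge is known and stage (depth) must be recovered.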
Activities of the Department of Electrical Engineering, Howard University
NASA Technical Reports Server (NTRS)
Yalamanchili, Raj C.
1997-01-01
Theoretical derivations, computer analysis and test data are provided to demonstrate that the cavity model is a feasible one to analyze thin-substrate, rectangular-patch microstrip antennas. Seven separate antennas were tested. Most of the antennas were designed to resonate at L-band frequencies (1-2 GHz). One antenna was designed to resonate at an S-band (2-4 GHz) frequency of 2.025 GHz. All dielectric substrates were made of Duroid, and were of varying thicknesses and relative dielectric constant values. Theoretical derivations to calculate radiated free space electromagnetic fields and antenna input impedance were performed. MATHEMATICA 2.2 software was used to generate Smith Chart input impedance plots, normalized relative power radiation plots and to perform other numerical manipulations. Network Analyzer tests were used to verify the data from the computer programming (such as input impedance and VSWR). Finally, tests were performed in an anechoic chamber to measure receive-mode polar power patterns in the E and H planes. Agreement between computer analysis and test data is presented. The antenna with the thickest substrate (εr = 2.33, 62 mils thick) showed the worst match to theoretical impedance data. This is anticipated due to the fact that the cavity model generally loses accuracy when the dielectric substrate thickness exceeds 5% of the antenna's free space wavelength. A method of reducing computer execution time for impedance calculations is also presented.
A hierarchical stress release model for synthetic seismicity
NASA Astrophysics Data System (ADS)
Bebbington, Mark
1997-06-01
We construct a stochastic dynamic model for synthetic seismicity involving stochastic stress input, release, and transfer in an environment of heterogeneous strength and interacting segments. The model is not fault-specific, having a number of adjustable parameters with physical interpretation, namely, stress relaxation, stress transfer, stress dissipation, segment structure, strength, and strength heterogeneity, which affect the seismicity in various ways. Local parameters are chosen to be consistent with large historical events, other parameters to reproduce bulk seismicity statistics for the fault as a whole. The one-dimensional fault is divided into a number of segments, each comprising a varying number of nodes. Stress input occurs at each node in a simple random process, representing the slow buildup due to tectonic plate movements. Events are initiated, subject to a stochastic hazard function, when the stress on a node exceeds the local strength. An event begins with the transfer of excess stress to neighboring nodes, which may in turn transfer their excess stress to the next neighbor. If the event grows to include the entire segment, then most of the stress on the segment is transferred to neighboring segments (or dissipated) in a characteristic event. These large events may themselves spread to other segments. We use the Middle America Trench to demonstrate that this model, using simple stochastic stress input and triggering mechanisms, can produce behavior consistent with the historical record over five units of magnitude. We also investigate the effects of perturbing various parameters in order to show how the model might be tailored to a specific fault structure. The strength of the model lies in this ability to reproduce the behavior of a general linear fault system through the choice of a relatively small number of parameters. 
It remains to develop a procedure for estimating the internal state of the model from the historical observations in order to use the model for forward prediction.
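The stress input/failure/transfer mechanism described above can be illustrated with a toy single-segment version. All parameter values are arbitrary, and the hierarchical segment-level cascade and stochastic hazard function of the full model are omitted.

```python
import random

def simulate(n_nodes=50, steps=5000, strength=1.0, stress_in=0.4,
             transfer=0.6, seed=1):
    """Toy single-segment stress release model.  Random stress input builds up
    on nodes; a node exceeding its strength fails, dropping its stress to zero
    and passing a fraction `transfer` of the drop to its neighbours (the rest
    is dissipated).  Failures can cascade into multi-node events."""
    rng = random.Random(seed)
    stress = [0.0] * n_nodes
    events = []                                   # (time step, event size)
    for t in range(steps):
        stress[rng.randrange(n_nodes)] += stress_in   # tectonic loading
        queue = [j for j in range(n_nodes) if stress[j] >= strength]
        size = 0
        while queue:
            j = queue.pop()
            if stress[j] < strength:
                continue
            drop, stress[j] = stress[j], 0.0
            size += 1
            for k in (j - 1, j + 1):              # transfer to neighbours
                if 0 <= k < n_nodes:
                    stress[k] += transfer * drop / 2.0
                    if stress[k] >= strength:
                        queue.append(k)
        if size:
            events.append((t, size))
    return events
```

Because `transfer < 1`, every failure dissipates stress, so each cascade terminates; the event-size distribution is the quantity one would compare against seismicity statistics.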
Akam, Thomas E.; Kullmann, Dimitri M.
2012-01-01
The ‘communication through coherence’ (CTC) hypothesis proposes that selective communication among neural networks is achieved by coherence between firing rate oscillation in a sending region and gain modulation in a receiving region. Although this hypothesis has stimulated extensive work, it remains unclear whether the mechanism can in principle allow reliable and selective information transfer. Here we use a simple mathematical model to investigate how accurately coherent gain modulation can filter a population-coded target signal from task-irrelevant distracting inputs. We show that selective communication can indeed be achieved, although the structure of oscillatory activity in the target and distracting networks must satisfy certain previously unrecognized constraints. Firstly, the target input must be differentiated from distractors by the amplitude, phase or frequency of its oscillatory modulation. When distracting inputs oscillate incoherently in the same frequency band as the target, communication accuracy is severely degraded because of varying overlap between the firing rate oscillations of distracting inputs and the gain modulation in the receiving region. Secondly, the oscillatory modulation of the target input must be strong in order to achieve a high signal-to-noise ratio relative to stochastic spiking of individual neurons. Thus, whilst providing a quantitative demonstration of the power of coherent oscillatory gain modulation to flexibly control information flow, our results identify constraints imposed by the need to avoid interference between signals, and reveal a likely organizing principle for the structure of neural oscillations in the brain. PMID:23144603
NASA Astrophysics Data System (ADS)
Farr, T. G.; Fairbanks, A.
2017-12-01
Recent rains in California caused a pause, and in some areas even a reversal, of the subsidence that has plagued the Central Valley for decades. The three main drivers of surface deformation in the Central Valley are subsurface hydrogeology, precipitation and surface-water deliveries, and groundwater pumping. While the geology is essentially fixed in time, water inputs and outputs vary greatly in both time and space. And while subsurface geology and water inputs are reasonably well known, information about groundwater pumping amounts and rates is virtually non-existent in California. We have derived regional maps of surface deformation in the region for the period 2006-present, which allow reconstruction of seasonal and long-term changes. In order to understand the spatial and temporal patterns of subsidence and rebound in the Central Valley, we have been compiling information on the geology and water inputs and have attempted to infer pumping rates using maps of fallowed fields and published pumping information derived from hydrological models. In addition, the spatial and temporal patterns of hydraulic head measured in wells across the region allow us to infer the spatial and temporal patterns of groundwater pumping and recharge more directly. A better understanding of how different areas of the Central Valley (overlying different stratigraphy) respond to water inputs and outputs will allow a predictive capability, potentially defining sustainable pumping rates related to water inputs. *Work performed under contract to NASA and the CA Dept. of Water Resources
A third-order class-D amplifier with and without ripple compensation
NASA Astrophysics Data System (ADS)
Cox, Stephen M.; du Toit Mouton, H.
2018-06-01
We analyse the nonlinear behaviour of a third-order class-D amplifier, and demonstrate the remarkable effectiveness of the recently introduced ripple compensation (RC) technique in reducing the audio distortion of the device. The amplifier converts an input audio signal to a high-frequency train of rectangular pulses, whose widths are modulated according to the input signal (pulse-width modulation) and employs negative feedback. After determining the steady-state operating point for constant input and calculating its stability, we derive a small-signal model (SSM), which yields in closed form the transfer function relating (infinitesimal) input and output disturbances. This SSM shows how the RC technique is able to linearise the small-signal response of the device. We extend this SSM through a fully nonlinear perturbation calculation of the dynamics of the amplifier, based on the disparity in time scales between the pulse train and the audio signal. We obtain the nonlinear response of the amplifier to a general audio signal, avoiding the linearisation inherent in the SSM; we thereby more precisely quantify the reduction in distortion achieved through RC. Finally, simulations corroborate our theoretical predictions and illustrate the dramatic deterioration in performance that occurs when the amplifier is operated in an unstable regime. The perturbation calculation is rather general, and may be adapted to quantify the way in which other nonlinear negative-feedback pulse-modulated devices track a time-varying input signal that slowly modulates the system parameters.
Active vibration suppression of self-excited structures using an adaptive LMS algorithm
NASA Astrophysics Data System (ADS)
Danda Roy, Indranil
The purpose of this investigation is to study the feasibility of an adaptive feedforward controller for active flutter suppression in representative linear wing models. The ability of the controller to suppress limit-cycle oscillations in wing models having root springs with freeplay nonlinearities has also been studied. For the purposes of numerical simulation, mathematical models of a rigid and a flexible wing structure have been developed. The rigid wing model is represented by a simple three-degree-of-freedom airfoil while the flexible wing is modelled by a multi-degree-of-freedom finite element representation with beam elements for bending and rod elements for torsion. Control action is provided by one or more flaps attached to the trailing edge and extending along the entire wing span for the rigid model and a fraction of the wing span for the flexible model. Both two-dimensional quasi-steady aerodynamics and time-domain unsteady aerodynamics have been used to generate the airforces in the wing models. An adaptive feedforward controller has been designed based on the filtered-X Least Mean Squares (LMS) algorithm. The control configuration for the rigid wing model is single-input single-output (SISO) while both SISO and multi-input multi-output (MIMO) configurations have been applied on the flexible wing model. The controller includes an on-line adaptive system identification scheme which provides the LMS controller with a reasonably accurate model of the plant. This enables the adaptive controller to track time-varying parameters in the plant and provide effective control. The wing models in closed-loop exhibit highly damped responses at airspeeds where the open-loop responses are destructive. Simulations with the rigid and the flexible wing models in a time-varying airstream show a 63% and 53% increase, respectively, over their corresponding open-loop flutter airspeeds. 
The ability of the LMS controller to suppress wing store flutter in the two models has also been investigated. With 10% measurement noise introduced in the flexible wing model, the controller demonstrated good robustness to the extraneous disturbances. In the examples studied it is found that adaptation is rapid enough to successfully control flutter at airstream accelerations of up to 15 ft/sec² for the rigid wing model and 9 ft/sec² for the flexible wing model.
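The filtered-X LMS update at the heart of such a controller can be sketched as follows. The two-tap FIR "plant" here is a toy secondary path, not the aeroelastic wing model, and the tonal disturbance is purely illustrative.

```python
import math

def fxlms(reference, disturbance, plant, order=8, mu=0.05):
    """Minimal filtered-X LMS loop: FIR controller weights w are adapted so
    that the plant (secondary-path) response to the controller output cancels
    the disturbance at the error sensor."""
    w = [0.0] * order
    x_buf = [0.0] * order            # reference history
    y_buf = [0.0] * len(plant)       # controller-output history
    fx_buf = [0.0] * order           # plant-filtered reference history
    errors = []
    for x, d in zip(reference, disturbance):
        x_buf = [x] + x_buf[:-1]
        y = sum(wi * xi for wi, xi in zip(w, x_buf))        # controller output
        y_buf = [y] + y_buf[:-1]
        e = d + sum(p * yi for p, yi in zip(plant, y_buf))  # residual error
        fx = sum(p * xi for p, xi in zip(plant, x_buf))     # filtered-X signal
        fx_buf = [fx] + fx_buf[:-1]
        w = [wi - mu * e * fxi for wi, fxi in zip(w, fx_buf)]  # LMS update
        errors.append(e)
    return errors

# Tonal disturbance correlated with the reference, toy secondary path.
n = 4000
ref = [math.sin(0.1 * k) for k in range(n)]
dist = [0.8 * math.sin(0.1 * k + 0.3) for k in range(n)]
err = fxlms(ref, dist, plant=[0.5, 0.2])
```

Filtering the reference through (a model of) the plant before the weight update is what distinguishes filtered-X LMS from plain LMS and keeps the adaptation stable despite the plant's phase response.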
Robust input design for nonlinear dynamic modeling of AUV.
Nouri, Nowrouz Mohammad; Valadi, Mehrdad
2017-09-01
Input design has a dominant role in developing the dynamic model of autonomous underwater vehicles (AUVs) through system identification. Optimal input design is the process of generating informative inputs that can be used to build a good-quality dynamic model of an AUV. In optimal input design, the desired input signal depends on the unknown system that is to be identified. In this paper, an input design approach that is robust to uncertainties in the model parameters is used. The Bayesian robust design strategy is applied to design input signals for dynamic modeling of AUVs. The employed approach can design multiple inputs and apply constraints on an AUV system's inputs and outputs. Particle swarm optimization (PSO) is employed to solve the constrained robust optimization problem. The presented algorithm is used to design the input signals for an AUV, and the estimate obtained by robust input design is compared with that of the optimal input design. According to the results, the proposed input design satisfies both robustness of the constraints and optimality. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
Neuromuscular mechanisms and neural strategies in the control of time-varying muscle contractions.
Erimaki, Sophia; Agapaki, Orsalia M; Christakos, Constantinos N
2013-09-01
The organization of the neural input to motoneurons that underlies time-varying muscle force is assumed to depend on muscle transfer characteristics and on neural strategies or control modes utilizing sensory signals. We jointly addressed these interlinked issues, previously studied only individually and in part, for sinusoidal (range 0.5-5.0 Hz) force-tracking contractions of a human finger muscle. Using spectral and correlation analyses of the target signal, force signal, and motor unit (MU) discharges, we studied 1) patterns of such discharges, allowing inferences on the motoneuronal input; 2) the transformation of MU population activity (EMG) into quasi-sinusoidal force; and 3) the relation of the force oscillation to the target, carrying information on the input's organization. A broad view of force control mechanisms and strategies emerged. Specifically, synchronized MU and EMG modulations, reflecting a frequency-modulated motoneuronal input, accompanied the force variations. Gain and delay drops between EMG modulation and force oscillation, critical for the appropriate organization of this input, occurred with increasing target frequency. According to our analyses, gain compensation was achieved primarily through rhythmical activation/deactivation of higher-threshold MUs and secondarily through the adaptation of the input's strength expected during tracking tasks. However, the input's timing was not adapted to delay behaviors and seemed to depend on the control modes employed. Thus, for low-frequency targets, the force oscillation was highly coherent with, but led, the target, a timing error compatible with predictive feedforward control partly based on the target's derivatives. In contrast, the force oscillation was weakly coherent, but in phase, with high-frequency targets, suggesting control mainly based on the target's rhythm.
Hamdan, Sadeque; Cheaitou, Ali
2017-08-01
This data article provides detailed optimization input and output datasets and the optimization code for the published research work titled "Dynamic green supplier selection and order allocation with quantity discounts and varying supplier availability" (Hamdan and Cheaitou, 2017, In press) [1]. Researchers may use these datasets as a baseline for future comparison and extensive analysis of the green supplier selection and order allocation problem with an all-unit quantity discount and a varying number of suppliers. More particularly, the datasets presented in this article allow researchers to generate the exact optimization outputs obtained by the authors of Hamdan and Cheaitou (2017, In press) [1] using the provided optimization code and then to use them for comparison with the outputs of other techniques or methodologies such as heuristic approaches. Moreover, this article includes the randomly generated optimization input data and the related outputs that are used as input data for the statistical analysis presented in Hamdan and Cheaitou (2017, In press) [1], in which two different approaches for ranking potential suppliers are compared. This article also provides the time analysis data used in Hamdan and Cheaitou (2017, In press) [1] to study the effect of problem size on computation time, as well as an additional time analysis dataset. The input data for the time study are generated randomly, with the problem size varied, and are then used by the optimization problem to obtain the corresponding optimal outputs as well as the corresponding computation times.
Elizabeth Hagen; Matthew McTammany; Jackson Webster; Ernest Benfield
2010-01-01
Relative contributions of allochthonous inputs and autochthonous production vary depending on terrestrial land use and biome. Terrestrially derived organic matter and in-stream primary production were measured in 12 headwater streams along an agricultural land-use gradient. Streams were examined to see how carbon (C) supply shifts from forested streams receiving...
Optimal filtering and Bayesian detection for friction-based diagnostics in machines.
Ray, L R; Townsend, J R; Ramasubramanian, A
2001-01-01
Non-model-based diagnostic methods typically rely on measured signals that must be empirically related to process behavior or incipient faults. The difficulty in interpreting a signal that is indirectly related to the fundamental process behavior is significant. This paper presents an integrated non-model and model-based approach to detecting when process behavior varies from a proposed model. The method, which is based on nonlinear filtering combined with maximum likelihood hypothesis testing, is applicable to dynamic systems whose constitutive model is well known, and whose process inputs are poorly known. Here, the method is applied to friction estimation and diagnosis during motion control in a rotating machine. A nonlinear observer estimates friction torque in a machine from shaft angular position measurements and the known input voltage to the motor. The resulting friction torque estimate can be analyzed directly for statistical abnormalities, or it can be directly compared to friction torque outputs of an applicable friction process model in order to diagnose faults or model variations. Nonlinear estimation of friction torque provides a variable on which to apply diagnostic methods that is directly related to model variations or faults. The method is evaluated experimentally by its ability to detect normal load variations in a closed-loop controlled motor driven inertia with bearing friction and an artificially-induced external line contact. Results show an ability to detect statistically significant changes in friction characteristics induced by normal load variations over a wide range of underlying friction behaviors.
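The maximum likelihood hypothesis test on the estimated friction torque can be sketched as follows. The Gaussian models and their means/standard deviations are illustrative placeholders, not values from the experiment; in the paper the "samples" would come from the nonlinear observer's friction-torque estimates.

```python
import math

def log_likelihood(samples, mean, std):
    """Gaussian log-likelihood of a sequence of friction-torque estimates."""
    return sum(-0.5 * math.log(2 * math.pi * std ** 2)
               - (s - mean) ** 2 / (2 * std ** 2) for s in samples)

def detect_fault(samples, nominal=(1.0, 0.1), faulty=(1.4, 0.1)):
    """Maximum-likelihood hypothesis test: True if the 'faulty' friction model
    explains the estimates better than the nominal one.  The (mean, std)
    pairs here are hypothetical."""
    l0 = log_likelihood(samples, *nominal)
    l1 = log_likelihood(samples, *faulty)
    return l1 > l0
```

A normal-load variation that shifts the friction level toward the faulty model's mean would flip the test's decision, which is the detection mechanism the abstract describes.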
Identification of gene regulation models from single-cell data
NASA Astrophysics Data System (ADS)
Weber, Lisa; Raymond, William; Munsky, Brian
2018-09-01
In quantitative analyses of biological processes, one may use many different scales of models (e.g. spatial or non-spatial, deterministic or stochastic, time-varying or at steady-state) or many different approaches to match models to experimental data (e.g. model fitting or parameter uncertainty/sloppiness quantification with different experiment designs). These different analyses can lead to surprisingly different results, even when applied to the same data and the same model. We use a simplified gene regulation model to illustrate many of these concerns, especially for ODE analyses of deterministic processes, chemical master equation and finite state projection analyses of heterogeneous processes, and stochastic simulations. For each analysis, we employ MATLAB and PYTHON software to consider a time-dependent input signal (e.g. a kinase nuclear translocation) and several model hypotheses, along with simulated single-cell data. We illustrate different approaches (e.g. deterministic and stochastic) to identify the mechanisms and parameters of the same model from the same simulated data. For each approach, we explore how uncertainty in parameter space varies with respect to the chosen analysis approach or specific experiment design. We conclude with a discussion of how our simulated results relate to the integration of experimental and computational investigations to explore signal-activated gene expression models in yeast (Neuert et al 2013 Science 339 584–7) and human cells (Senecal et al 2014 Cell Rep. 8 75–83).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sweetser, John David
2013-10-01
This report details Sculpt's implementation from a user's perspective. Sculpt is an automatic hexahedral mesh generation tool developed at Sandia National Labs by Steve Owen. Fifty-four predetermined test cases are studied while varying the input parameters (Laplace iterations, optimization iterations, optimization threshold, number of processors) and measuring the quality of the resultant mesh. This information is used to determine the optimal input parameters to use for an unknown input geometry. The overall characteristics are covered in Chapter 1. The specific details of every case are then given in Appendix A. Finally, example Sculpt inputs are given in B.1 and B.2.
Data Services in Support of High Performance Computing-Based Distributed Hydrologic Models
NASA Astrophysics Data System (ADS)
Tarboton, D. G.; Horsburgh, J. S.; Dash, P. K.; Gichamo, T.; Yildirim, A. A.; Jones, N.
2014-12-01
We have developed web-based data services to support the application of hydrologic models on High Performance Computing (HPC) systems. The purpose of these services is to give hydrologic researchers, modelers, water managers, and users access to HPC resources without requiring them to become HPC experts or to understand the intrinsic complexities of the data services, thereby reducing the time and effort spent finding and organizing the data required to execute hydrologic models and data preprocessing tools on HPC systems. These services address some of the data challenges faced by hydrologic models that strive to take advantage of HPC. The needed data are often not in the form required by such models, forcing researchers to spend time and effort on data preparation and preprocessing that inhibits or limits the application of these models. Another limitation is the difficult-to-use batch job control and queuing systems of HPC platforms. We have developed a REST-based gateway application programming interface (API) for authenticated access to HPC systems that abstracts away many of the details that are barriers to HPC use and enhances accessibility from desktop programming and scripting languages such as Python and R. We have used this gateway API to establish software services that support the delineation of watersheds to define a modeling domain, then extract terrain and land use information to automatically configure the inputs required for hydrologic models. These services support the Terrain Analysis Using Digital Elevation Models (TauDEM) tools for watershed delineation and generation of hydrology-based terrain information such as wetness index and stream networks. They also support the derivation of inputs for the Utah Energy Balance snowmelt model, used to address questions such as how climate, land cover, and land use change may affect snowmelt inputs to runoff generation.
To enhance access to the time varying climate data used to drive hydrologic models, we have developed services to downscale and re-grid nationally available climate analysis data from systems such as NLDAS and MERRA. These cases serve as examples for how this approach can be extended to other models to enhance the use of HPC for hydrologic modeling.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anderson, R.W.; Phillips, A.M.
1990-02-01
Low-permeability reservoirs are currently being propped with sand, resin-coated sand, intermediate-density proppants, and bauxite. This wide range of proppant cost and performance has resulted in the proliferation of proppant selection models. Initially, a rather vague relationship between well depth and proppant strength dictated the choice of proppant. More recently, computerized models of varying complexity that use net-present-value (NPV) calculations have become available. The input is based on the operator's performance goals for each well and specific reservoir properties. Simpler, noncomputerized approaches include cost/performance comparisons and nomographs. Each type of model, including several of the computerized models, is examined here. By use of these models and NPV calculations, optimum fracturing treatment designs have been developed for such low-permeability reservoirs as the Prue in Oklahoma. Typical well conditions are used in each of the selection models, and the results are compared.
NASA Astrophysics Data System (ADS)
Wang, Xing; Hill, Thomas L.; Neild, Simon A.; Shaw, Alexander D.; Haddad Khodaparast, Hamed; Friswell, Michael I.
2018-02-01
This paper proposes a model updating strategy for localised nonlinear structures. It utilises an initial finite-element (FE) model of the structure and primary harmonic response data taken from low and high amplitude excitations. The underlying linear part of the FE model is first updated using low-amplitude test data with established techniques. Then, using this linear FE model, the nonlinear elements are localised, characterised, and quantified with primary harmonic response data measured under stepped-sine or swept-sine excitations. Finally, the resulting model is validated by comparing the analytical predictions with both the measured responses used in the updating and with additional test data. The proposed strategy is applied to a clamped beam with a nonlinear mechanism and good agreements between the analytical predictions and measured responses are achieved. Discussions on issues of damping estimation and dealing with data from amplitude-varying force input in the updating process are also provided.
A Physiologically Based Model of Orexinergic Stabilization of Sleep and Wake
Fulcher, Ben D.; Phillips, Andrew J. K.; Postnova, Svetlana; Robinson, Peter A.
2014-01-01
The orexinergic neurons of the lateral hypothalamus (Orx) are essential for regulating sleep-wake dynamics, and their loss causes narcolepsy, a disorder characterized by severe instability of sleep and wake states. However, the mechanisms through which Orx stabilize sleep and wake are not well understood. In this work, an explanation of the stabilizing effects of Orx is presented using a quantitative model of important physiological connections between Orx and the sleep-wake switch. In addition to Orx and the sleep-wake switch, which is composed of mutually inhibitory wake-active monoaminergic neurons in brainstem and hypothalamus (MA) and the sleep-active ventrolateral preoptic neurons of the hypothalamus (VLPO), the model also includes the circadian and homeostatic sleep drives. It is shown that Orx stabilizes prolonged waking episodes via its excitatory input to MA and by relaying a circadian input to MA, thus sustaining MA firing activity during the circadian day. During sleep, both Orx and MA are inhibited by the VLPO, and the subsequent reduction in Orx input to the MA indirectly stabilizes sustained sleep episodes. Simulating a loss of Orx, the model produces dynamics resembling narcolepsy, including frequent transitions between states, reduced waking arousal levels, and a normal daily amount of total sleep. The model predicts a change in sleep timing with differences in orexin levels, with higher orexin levels delaying the normal sleep episode, suggesting that individual differences in Orx signaling may contribute to chronotype. Dynamics resembling sleep inertia also emerge from the model as a gradual sleep-to-wake transition on a timescale that varies with that of Orx dynamics. The quantitative, physiologically based model developed in this work thus provides a new explanation of how Orx stabilizes prolonged episodes of sleep and wake, and makes a range of experimentally testable predictions, including a role for Orx in chronotype and sleep inertia. 
PMID:24651580
NASA Astrophysics Data System (ADS)
Young, K. S.; Beganskas, S.; Fisher, A. T.
2015-12-01
We apply a USGS surface hydrology model, Precipitation-Runoff Modeling System (PRMS), to analyze stormwater runoff in Santa Cruz and Northern Monterey Counties, CA with the goal of supplying managed aquifer recharge (MAR) sites. Under the combined threats of multiyear drought and excess drawdown, this region's aquifers face numerous sustainability challenges, including seawater intrusion, chronic overdraft, increased contamination, and subsidence. This study addresses the supply side of this resource issue by increasing our knowledge of the spatial and temporal dynamics of runoff that could provide water for MAR. Ensuring the effectiveness of MAR using stormwater requires a thorough understanding of runoff distribution and site-specific surface and subsurface aquifer conditions. In this study we use a geographic information system (GIS) and a 3-m digital elevation model (DEM) to divide the region's four primary watersheds into Hydrologic Response Units (HRUs), or topographic sub-basins, that serve as discretized input cells for PRMS. We then assign vegetation, soil, land use, slope, aspect, and other characteristics to these HRUs, from a variety of data sources, and analyze runoff spatially using PRMS under varying precipitation conditions. We are exploring methods of linking spatially continuous and high-temporal-resolution precipitation datasets to generate input precipitation catalogs, facilitating analyses of a variety of regimes. To gain an understanding of how surface hydrology has responded to land development, we will also modify our input data to represent pre-development conditions. Coupled with a concurrent MAR suitability analysis, our model results will help screen for locations of future MAR projects and will improve our understanding of how changes in land use and climate impact hydrologic runoff and aquifer recharge.
Straver, J M; Janssen, A F W; Linnemann, A R; van Boekel, M A J S; Beumer, R R; Zwietering, M H
2007-09-01
This study aimed to characterize the number of Salmonella on chicken breast filet at the retail level and to evaluate whether this number affects the risk of salmonellosis. From October to December 2005, 220 chilled raw filets (without skin) were collected from five local retail outlets in The Netherlands. Filet rinses that were positive after enrichment were enumerated with a three-tube most-probable-number (MPN) assay. Nineteen filets (8.6%) were contaminated above the detection limit of the MPN method (10 Salmonella per filet). The number of Salmonella on positive filets varied from 1 to 3.81 log MPN per filet. The obtained enumeration data were applied in a risk assessment model. The model considered possible growth during domestic storage, cross-contamination from filet via a cutting board to lettuce, and possible illness due to consumption of the prepared lettuce. A screening analysis with expected-case and worst-case estimates for the input values of the model showed that variability in the inputs was relevant. Therefore, a Monte Carlo simulation with probability distributions for the inputs was carried out to predict the annual number of illnesses. Remarkably, over two-thirds of annual predicted illnesses were caused by the small fraction of filets containing more than 3 log Salmonella at retail (0.8% of all filets). The enumeration results can be used to confirm this hypothesis in a more elaborate risk assessment. Modeling of the supply chain can provide insight into possible intervention strategies to reduce the incidence of these rare but extreme contamination levels. Reduction seems feasible within current practices, because the retail market study indicated a significant difference between suppliers.
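The Monte Carlo structure described above (sampled retail counts, storage growth, cross-contamination transfer, then a dose-response step) can be sketched as follows. All distributions and the dose-response parameter are invented placeholders, not the study's fitted inputs:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Hypothetical input distributions (placeholders, not the study's values):
log10_retail = rng.normal(0.5, 1.0, n)          # log10 Salmonella per filet at retail
log10_growth = rng.uniform(0.0, 1.5, n)         # log10 growth during domestic storage
transfer = 10.0 ** rng.uniform(-4.0, -1.0, n)   # filet -> cutting board -> lettuce fraction

dose = 10.0 ** (log10_retail + log10_growth) * transfer
r = 2.0e-3                                      # exponential dose-response parameter (illustrative)
p_ill = 1.0 - np.exp(-r * dose)                 # probability of illness per serving

# Share of predicted illnesses attributable to heavily contaminated filets (>3 log at retail)
high = log10_retail > 3.0
share_high = p_ill[high].sum() / p_ill.sum()
```

With right-skewed inputs like these, a small high-count tail typically dominates the summed risk, which is the qualitative pattern the abstract reports.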
NASA Astrophysics Data System (ADS)
Hasbullah, Faried; Faris, Waleed F.
2017-12-01
In recent years, Active Disturbance Rejection Control (ADRC) has become a popular control alternative due to its easy applicability and robustness to varying processes. In this article, ADRC with input decoupling transformation (ADRC-IDT) is proposed to improve the ride comfort of a vehicle with an active suspension system using a half-car model. The ride performance of the ADRC-IDT is evaluated and compared with decentralized ADRC control as well as the passive system. Simulation results show that both ADRC and ADRC-IDT manage to appreciably reduce body accelerations and are able to cope well with varying conditions typically encountered in an active suspension system. Also, it is sufficient to control only the body motions with both active controllers to improve ride comfort while maintaining good road holding and small suspension working space.
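The core idea of ADRC, an extended state observer (ESO) that estimates the "total disturbance" (unknown dynamics plus external inputs) and cancels it in the control law, can be illustrated on a generic second-order plant. The plant, gains, and disturbance below are invented for illustration; this is not the half-car ADRC-IDT controller of the article:

```python
import numpy as np

def simulate_adrc(wo=40.0, wc=8.0, b0=1.0, T=2.0, dt=1e-3):
    """Linear ADRC on a toy 2nd-order plant (illustrative gains and dynamics)."""
    beta1, beta2, beta3 = 3*wo, 3*wo**2, wo**3     # ESO gains: observer poles at -wo
    kp, kd = wc**2, 2*wc                           # controller poles at -wc
    x1 = x2 = 0.0                                  # plant states (position, velocity)
    z1 = z2 = z3 = 0.0                             # ESO states (x1, x2, total disturbance)
    r, u = 1.0, 0.0                                # setpoint and control input
    out = []
    for k in range(int(T / dt)):
        # Plant: x'' = -2x' - x + d + b0*u, with d an unknown disturbance
        d = 0.5 * np.sin(5.0 * k * dt)
        x1 += dt * x2
        x2 += dt * (-2.0*x2 - x1 + d + b0*u)
        # Extended state observer driven by the output error
        e = x1 - z1
        z1 += dt * (z2 + beta1*e)
        z2 += dt * (z3 + b0*u + beta2*e)
        z3 += dt * (beta3*e)
        # Control law: PD on observer states, cancel estimated disturbance z3
        u = (kp*(r - z1) - kd*z2 - z3) / b0
        out.append(x1)
    return np.array(out)
```

Because the ESO lumps the plant's own `-2x' - x` terms into z3 along with d, the same controller works without a plant model, which is the robustness-to-varying-processes property the abstract refers to.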
NASA Technical Reports Server (NTRS)
Wickens, C.; Gill, R.; Kramer, A.; Ross, W.; Donchin, E.
1981-01-01
Three experiments are described in which tracking difficulty is varied in the presence of a covert tone discrimination task. Event related brain potentials (ERPs) elicited by the tones are employed as an index of the resource demands of tracking. The ERP measure reflected the control order variation, and this variable was thereby assumed to compete for perceptual/central processing resources. A fine-grained analysis of the results suggested that the primary demands of second order tracking involve the central processing operations of maintaining a more complex internal model of the dynamic system, rather than the perceptual demands of higher derivative perception. Experiment 3 varied tracking bandwidth in random input tracking, and the ERP was unaffected. Bandwidth was then inferred to compete for response-related processing resources that are independent of the ERP.
Exploring the effect of East Antarctic ice mass loss on GIA-induced horizontal bedrock motions
NASA Astrophysics Data System (ADS)
Konfal, S. A.; Whitehouse, P. L.; Hermans, T.; van der Wal, W.; Wilson, T. J.; Bevis, M. G.; Kendrick, E. C.; Dalziel, I.; Smalley, R., Jr.
2017-12-01
Ice history inputs used in Antarctic models of GIA include major centers of ice mass loss in West Antarctica. In the Transantarctic Mountains (TAM) region spanning the boundary between East and West Antarctica, horizontal crustal motions derived from GPS observations from the Antarctic Network (ANET) component of the Polar Earth Observing Network (POLENET) are towards these West Antarctic ice mass centers, opposite to the pattern of radial crustal motion expected in an unloading scenario. We investigate alternative ice history and earth structure inputs to GIA models in an attempt to reproduce observed crustal motions in the region. The W12 ice history model is altered to create scenarios including ice unloading in the Wilkes Subglacial Basin based on available glaciological records. These altered ice history models, along with the unmodified W12 ice history model, are coupled with 60 radially varying (1D) earth model combinations, including approximations of optimal earth profiles identified in published GIA models. The resulting model-predicted motions utilizing both the modified and unmodified ice history models fit ANET GPS-derived crustal motions in the northern TAM region for a suite of earth model combinations. Further south, where the influence of simulated Wilkes unloading is weakest and West Antarctic unloading is strongest, observed and predicted motions do not agree. The influence of simulated Wilkes ice unloading coupled with laterally heterogeneous earth models is also investigated. The resulting model-predicted motions do not differ significantly between the original W12 and W12 with simulated Wilkes unloading ice histories.
Modeling the reversible, diffusive sink effect in response to transient contaminant sources.
Zhao, D; Little, J C; Hodgson, A T
2002-09-01
A physically based diffusion model is used to evaluate the sink effect of diffusion-controlled indoor materials and to predict the transient contaminant concentration in indoor air in response to several time-varying contaminant sources. For simplicity, it is assumed the predominant indoor material is a homogeneous slab, initially free of contaminant, and the air within the room is well mixed. The model enables transient volatile organic compound (VOC) concentrations to be predicted based on the material/air partition coefficient (K) and the material-phase diffusion coefficient (D) of the sink. Model predictions are made for three scenarios, each mimicking a realistic situation in a building. Styrene, phenol, and naphthalene are used as representative VOCs. A styrene butadiene rubber (SBR) backed carpet, vinyl flooring (VF), and a polyurethane foam (PUF) carpet cushion are considered as typical indoor sinks. In scenarios involving a sinusoidal VOC input and a double exponential decaying input, the model predicts the sink has a modest impact for SBR/styrene, but the effect increases for VF/phenol and PUF/naphthalene. In contrast, for an episodic chemical spill, SBR is predicted to reduce the peak styrene concentration considerably. A parametric study reveals that, for systems involving a large partition coefficient (K), the diffusion coefficient (D) will govern the shape of the resulting gas-phase concentration profile. On the other hand, for systems with a relaxed mass transfer resistance, K will dominate the profile.
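A minimal numerical version of this setup is a well-mixed room mass balance coupled to an explicit finite-difference slab, with the slab surface held in partition equilibrium with room air. K, D, and the slab thickness are plausible orders of magnitude; the room size, ventilation rate, and decaying source are invented for illustration:

```python
import numpy as np

def sink_response(K=1000.0, D=1e-12, L=1e-3, hours=48, dt=1.0):
    """Well-mixed room coupled to a reversible diffusive slab sink.

    K: material/air partition coefficient (-)
    D: material-phase diffusion coefficient (m^2/s)
    L: slab thickness (m)
    Returns hourly room-air concentrations (arbitrary mass units / m^3).
    """
    nx = 20
    dx = L / nx
    A, V = 10.0, 30.0                    # sink surface area (m^2), room volume (m^3)
    Q = V * 0.5 / 3600.0                 # ventilation at 0.5 air changes per hour (m^3/s)
    c = np.zeros(nx)                     # material-phase concentration profile
    c_air = 0.0
    hourly = []
    for k in range(int(hours * 3600 / dt)):
        t = k * dt
        S = 100.0 * np.exp(-t / (6 * 3600.0))        # decaying source (mass/s)
        F = np.empty(nx + 1)                         # fluxes at cell faces
        F[0] = D * (K * c_air - c[0]) / (dx / 2)     # sorption (+) or re-emission (-)
        F[1:nx] = D * (c[:-1] - c[1:]) / dx
        F[nx] = 0.0                                  # impermeable backing
        c += dt * (F[:-1] - F[1:]) / dx              # explicit diffusion update
        c_air += dt * (S - Q * c_air - A * F[0]) / V # room mass balance
        if k % int(3600 / dt) == 0:
            hourly.append(c_air)
    return np.array(hourly)
```

The sign of the surface flux F[0] flips once the loaded slab exceeds equilibrium with room air, which is the reversible (re-emitting) sink behavior the model is meant to capture.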
Steiner, Malte; Claes, Lutz; Ignatius, Anita; Niemeyer, Frank; Simon, Ulrich; Wehner, Tim
2013-09-06
Numerical models of secondary fracture healing are based on mechanoregulatory algorithms that use distortional strain alone or in combination with either dilatational strain or fluid velocity as determining stimuli for tissue differentiation and development. Comparison of these algorithms has previously suggested that healing processes under torsional rotational loading can only be properly simulated by considering fluid velocity and deviatoric strain as the regulatory stimuli. We hypothesize that sufficient calibration on uncertain input parameters will enhance our existing model, which uses distortional and dilatational strains as determining stimuli, to properly simulate fracture healing under various loading conditions including also torsional rotation. Therefore, we minimized the difference between numerically simulated and experimentally measured courses of interfragmentary movements of two axial compressive cases and two shear load cases (torsional and translational) by varying several input parameter values within their predefined bounds. The calibrated model was then qualitatively evaluated on the ability to predict physiological changes of spatial and temporal tissue distributions, based on respective in vivo data. Finally, we corroborated the model on five additional axial compressive and one asymmetrical bending load case. We conclude that our model, using distortional and dilatational strains as determining stimuli, is able to simulate fracture-healing processes not only under axial compression and torsional rotation but also under translational shear and asymmetrical bending loading conditions.
Modeling road-cycling performance.
Olds, T S; Norton, K I; Lowe, E L; Olive, S; Reay, F; Ly, S
1995-04-01
This paper presents a complete set of equations for a "first principles" mathematical model of road-cycling performance, including corrections for the effect of winds, tire pressure and wheel radius, altitude, relative humidity, rotational kinetic energy, drafting, and changed drag. The relevant physiological, biophysical, and environmental variables were measured in 41 experienced cyclists completing a 26-km road time trial. The correlation between actual and predicted times was 0.89 (P < or = 0.0001), with a mean difference of 0.74 min (1.73% of mean performance time) and a mean absolute difference of 1.65 min (3.87%). Multiple simulations were performed where model inputs were randomly varied using a normal distribution about the measured values with a SD equivalent to the estimated day-to-day variability or technical error of measurement in each of the inputs. This analysis yielded 95% confidence limits for the predicted times. The model suggests that the main physiological factors contributing to road-cycling performance are maximal O2 consumption, fractional utilization of maximal O2 consumption, mechanical efficiency, and projected frontal area. The model is then applied to some practical problems in road cycling: the effect of drafting, the advantage of using smaller front wheels, the effects of added mass, the importance of rotational kinetic energy, the effect of changes in drag due to changes in bicycle configuration, the normalization of performances under different conditions, and the limits of human performance.
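A stripped-down version of such a first-principles balance (aerodynamic drag plus rolling resistance on a flat, windless road; the published model adds wind, altitude, drafting, and kinetic-energy terms) can be inverted for speed and perturbed Monte Carlo style, as in the paper's confidence-limit analysis. All numbers below are hypothetical:

```python
import numpy as np

def speed_from_power(P, m=75.0, CdA=0.35, Crr=0.004, rho=1.2, eff=0.975):
    """Steady-state speed (m/s) for a given power P (W) on flat, windless road.
    Solves P*eff_balance by bisection; parameter values are illustrative."""
    g = 9.81
    lo, hi = 0.1, 30.0
    for _ in range(60):
        v = 0.5 * (lo + hi)
        demand = (0.5 * rho * CdA * v**2 + Crr * m * g) * v / eff
        if demand < P:
            lo = v
        else:
            hi = v
    return 0.5 * (lo + hi)

# Monte Carlo: vary inputs about nominal values (hypothetical SDs) to get
# confidence limits on the predicted time for a 26-km time trial.
rng = np.random.default_rng(1)
P = rng.normal(280.0, 15.0, 5000)       # sustainable power, W
CdA = rng.normal(0.35, 0.02, 5000)      # drag area, m^2
times = np.array([26000.0 / speed_from_power(p, CdA=c) / 60.0
                  for p, c in zip(P, CdA)])
lo95, hi95 = np.percentile(times, [2.5, 97.5])
```

The percentile band plays the role of the paper's 95% confidence limits, with each input's spread set to its day-to-day variability or technical error of measurement.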
Yang, Jian-Feng; Zhao, Zhen-Hua; Zhang, Yu; Zhao, Li; Yang, Li-Ming; Zhang, Min-Ming; Wang, Bo-Yin; Wang, Ting; Lu, Bao-Chun
2016-04-07
To investigate the feasibility of a dual-input two-compartment tracer kinetic model for evaluating tumorous microvascular properties in advanced hepatocellular carcinoma (HCC). From January 2014 to April 2015, we prospectively measured and analyzed pharmacokinetic parameters [transfer constant (Ktrans), plasma flow (Fp), permeability surface area product (PS), efflux rate constant (kep), extravascular extracellular space volume ratio (ve), blood plasma volume ratio (vp), and hepatic perfusion index (HPI)] using dual-input two-compartment tracer kinetic models [a dual-input extended Tofts model and a dual-input 2-compartment exchange model (2CXM)] in 28 consecutive HCC patients. A well-known consensus that HCC is a hypervascular tumor supplied by the hepatic artery and the portal vein was used as a reference standard. A paired Student's t-test and a nonparametric paired Wilcoxon rank sum test were used to compare the equivalent pharmacokinetic parameters derived from the two models, and Pearson correlation analysis was also applied to observe the correlations among all equivalent parameters. The tumor size and pharmacokinetic parameters were tested by Pearson correlation analysis, while correlations among stage, tumor size and all pharmacokinetic parameters were assessed by Spearman correlation analysis. The Fp value was greater than the PS value (Fp = 1.07 mL/mL per minute, PS = 0.19 mL/mL per minute) in the dual-input 2CXM; HPI was 0.66 and 0.63 in the dual-input extended Tofts model and the dual-input 2CXM, respectively. There were no significant differences in the kep, vp, or HPI between the dual-input extended Tofts model and the dual-input 2CXM (P = 0.524, 0.569, and 0.622, respectively). 
All equivalent pharmacokinetic parameters, except for ve, were correlated in the two dual-input two-compartment pharmacokinetic models; both Fp and PS in the dual-input 2CXM were correlated with Ktrans derived from the dual-input extended Tofts model (P = 0.002, r = 0.566; P = 0.002, r = 0.570); kep, vp, and HPI between the two kinetic models were positively correlated (P = 0.001, r = 0.594; P = 0.0001, r = 0.686; P = 0.04, r = 0.391, respectively). In the dual-input extended Tofts model, ve was significantly less than that in the dual-input 2CXM (P = 0.004), and no significant correlation was seen between the two tracer kinetic models (P = 0.156, r = 0.276). Neither tumor size nor tumor stage was significantly correlated with any of the pharmacokinetic parameters obtained from the two models (P > 0.05). A dual-input two-compartment pharmacokinetic model (a dual-input extended Tofts model and a dual-input 2CXM) can be used in assessing the microvascular physiopathological properties before the treatment of advanced HCC. The dual-input extended Tofts model may be more stable in measuring the ve; however, the dual-input 2CXM may be more detailed and accurate in measuring microvascular permeability.
Halasa, Tariq; Bøtner, Anette; Mortensen, Sten; Christensen, Hanne; Toft, Nils; Boklund, Anette
2016-09-25
African swine fever (ASF) is a notifiable infectious disease with a considerable impact on animal health and is currently one of the most important emerging diseases of domestic pigs. ASF was introduced into Georgia in 2007 and subsequently spread to the Russian Federation and several Eastern European countries. Consequently, there is a non-negligible risk of ASF spread towards Western Europe. Therefore it is important to develop tools to improve our understanding of the spread and control of ASF for contingency planning. A stochastic and dynamic spatial spread model (DTU-DADS) was adjusted to simulate the spread of ASF virus between domestic swine herds exemplified by the Danish swine population. ASF was simulated to spread via animal movement, low- or medium-risk contacts and local spread. Each epidemic was initiated in a randomly selected herd - either in a nucleus herd, a sow herd, a randomly selected herd or in multiple herds simultaneously. A sensitivity analysis was conducted on input parameters. Given the inputs and assumptions of the model, epidemics of ASF in Denmark are predicted to be small, affecting about 14 herds in the worst-case scenario. The duration of an epidemic is predicted to vary from 1 to 76 days. Substantial economic damages are predicted, with median direct costs and export losses of €12 and €349 million, respectively, when epidemics were initiated in multiple herds. Each infectious herd resulted in 0 to 2 new infected herds on average, varying from 0 to 5, depending on the index herd type. Copyright © 2016 Elsevier B.V. All rights reserved.
Fallon, Nevada FORGE Thermal-Hydrological-Mechanical Models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blankenship, Doug; Sonnenthal, Eric
Archive contains thermal-mechanical simulation input/output files. Included are files which fall into the following categories: (1) Spreadsheets with various input parameter calculations; (2) Final Simulation Inputs; (3) Native-State Thermal-Hydrological Model Input File Folders; (4) Native-State Thermal-Hydrological-Mechanical Model Input Files; (5) THM Model Stimulation Cases. See the 'File Descriptions.xlsx' resource below for additional information on individual files.
Ogburn, Sarah E.; Calder, Eliza S
2017-01-01
High concentration pyroclastic density currents (PDCs) are hot avalanches of volcanic rock and gas and are among the most destructive volcanic hazards due to their speed and mobility. Mitigating the risk associated with these flows depends upon accurate forecasting of possible impacted areas, often using empirical or physical models. TITAN2D, VolcFlow, LAHARZ, and ΔH/L or energy cone models each employ different rheologies or empirical relationships and therefore differ in appropriateness of application for different types of mass flows and topographic environments. This work seeks to test different statistically- and physically-based models against a range of PDCs of different volumes, emplaced under different conditions, over different topography in order to test the relative effectiveness, operational aspects, and ultimately, the utility of each model for use in hazard assessments. The purpose of this work is not to rank models, but rather to understand the extent to which the different modeling approaches can replicate reality in certain conditions, and to explore the dynamics of PDCs themselves. In this work, these models are used to recreate the inundation areas of the dense-basal undercurrent of all 13 mapped, land-confined, Soufrière Hills Volcano dome-collapse PDCs emplaced from 1996 to 2010 to test the relative effectiveness of different computational models. Best-fit model results and their input parameters are compared with results using observation- and deposit-derived input parameters. Additional comparison is made between best-fit model results and those using empirically-derived input parameters from the FlowDat global database, which represent “forward” modeling simulations as would be completed for hazard assessment purposes. Results indicate that TITAN2D is able to reproduce inundated areas well using flux sources, although velocities are often unrealistically high. 
VolcFlow is also able to replicate flow runout well, but does not capture the lateral spreading in distal regions of larger-volume flows. Both models are better at reproducing the inundated area of single-pulse, valley-confined, smaller-volume flows than sustained, highly unsteady, larger-volume flows, which are often partially unchannelized. The simple rheological models of TITAN2D and VolcFlow are not able to recreate all features of these more complex flows. LAHARZ is fast to run and can give a rough approximation of inundation, but may not be appropriate for all PDCs and the designation of starting locations is difficult. The ΔH/L cone model is also very quick to run and gives reasonable approximations of runout distance, but does not inherently model flow channelization or directionality and thus unrealistically covers all interfluves. Empirically-based models like LAHARZ and ΔH/L cones can be quick, first-approximations of flow runout, provided a database of similar flows, e.g., FlowDat, is available to properly calculate coefficients or ΔH/L. For hazard assessment purposes, geophysical models like TITAN2D and VolcFlow can be useful for producing both scenario-based or probabilistic hazard maps, but must be run many times with varying input parameters. LAHARZ and ΔH/L cones can be used to produce simple modeling-based hazard maps when run with a variety of input volumes, but do not explicitly consider the probability of occurrence of different volumes. For forward modeling purposes, the ability to derive potential input parameters from global or local databases is crucial, though important input parameters for VolcFlow cannot be empirically estimated. Not only does this work provide a useful comparison of the operational aspects and behavior of various models for hazard assessment, but it also enriches conceptual understanding of the dynamics of the PDCs themselves.
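The two empirical approaches compared above reduce to one-line scalings, which is why they are so fast to run. A hedged sketch: the energy cone stops a flow where total drop divided by runout equals the chosen ΔH/L, and LAHARZ scales inundated areas with V^(2/3). The default coefficients shown are the published lahar values; PDCs require coefficients recalibrated from a flow database such as FlowDat:

```python
def energy_cone_runout(drop_height_m, h_over_l):
    """Energy cone: flow stops where (total drop) / (runout) = ΔH/L."""
    return drop_height_m / h_over_l

def laharz_areas(volume_m3, c_cross=0.05, c_plan=200.0):
    """LAHARZ-style V^(2/3) scaling of cross-sectional and planimetric
    inundated areas. Defaults are the classic lahar coefficients; other
    flow types (e.g. dense PDCs) need their own calibrated values."""
    a_cross = c_cross * volume_m3 ** (2.0 / 3.0)
    a_plan = c_plan * volume_m3 ** (2.0 / 3.0)
    return a_cross, a_plan
```

Running either with a range of input volumes, as the abstract describes, yields a simple volume-conditional hazard footprint without any flow physics.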
High-resolution DEM Effects on Geophysical Flow Models
NASA Astrophysics Data System (ADS)
Williams, M. R.; Bursik, M. I.; Stefanescu, R. E. R.; Patra, A. K.
2014-12-01
Geophysical mass flow models are numerical models that approximate pyroclastic flow events and can be used to assess the volcanic hazards certain areas may face. One such model, TITAN2D, approximates granular-flow physics based on a depth-averaged analytical model using inputs of basal and internal friction, material volume at a coordinate point, and a GIS in the form of a digital elevation model (DEM). The volume of modeled material propagates over the DEM in a way that is governed by the slope and curvature of the DEM surface and the basal and internal friction angles. Results from TITAN2D are highly dependent upon the inputs to the model. Here we focus on a single input: the DEM, which can vary in resolution. High resolution DEMs are advantageous in that they contain more surface details than lower-resolution models, presumably allowing modeled flows to propagate in a way more true to the real surface. However, very high resolution DEMs can create undesirable artifacts in the slope and curvature that corrupt flow calculations. With high-resolution DEMs becoming more widely available and preferable for use, determining the point at which high resolution data is less advantageous compared to lower resolution data becomes important. We find that in cases of high resolution, integer-valued DEMs, very high-resolution is detrimental to good model outputs when moderate-to-low (<10-15°) slope angles are involved. At these slope angles, multiple adjacent DEM cell elevation values are equal due to the need for the DEM to approximate the low slope with a limited set of integer values for elevation. The first derivative of the elevation surface thus becomes zero. In these cases, flow propagation is inhibited by these spurious zero-slope conditions. 
Here we present evidence for this "terracing effect" from 1) a mathematically defined simulated elevation model, to demonstrate the terracing effects of integer valued data, and 2) a real-world DEM where terracing must be addressed. We discuss the effect on the flow model output and present possible solutions for rectification of the problem.
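The spurious zero-slope cells described above are easy to reproduce: store a gentle uniform slope as integer elevations and most finite-difference slopes collapse to zero. The grid spacing and slope below are illustrative, not from the study's DEMs:

```python
import numpy as np

cell = 10.0                                   # DEM cell size (m)
x = np.arange(0.0, 500.0, cell)
z_true = 0.01 * x                             # uniform 1% slope (well below 10-15 degrees)
z_int = np.round(z_true).astype(int)          # integer-valued DEM elevations
slope_true = np.diff(z_true) / cell           # true slope everywhere: 0.01
slope_int = np.diff(z_int) / cell             # terraced: mostly zero, with 1-m steps
frac_flat = float(np.mean(slope_int == 0.0))  # fraction of spuriously flat cells
```

At a 1% slope the surface rises 1 m per 100 m, so roughly nine out of ten 10-m cells share the same integer elevation; a flow router or granular-flow solver driven by the first derivative sees zero slope there and stalls, which is the "terracing effect" in miniature.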
Grande, Giovanbattista; Bui, Tuan V; Rose, P Ken
2007-06-01
In the presence of monoamines, L-type Ca(2+) channels on the dendrites of motoneurons contribute to persistent inward currents (PICs) that can amplify synaptic inputs two- to sixfold. However, the exact location of the L-type Ca(2+) channels is controversial, and the importance of the location as a means of regulating the input-output properties of motoneurons is unknown. In this study, we used a computational strategy developed previously to estimate the dendritic location of the L-type Ca(2+) channels and test the hypothesis that the location of L-type Ca(2+) channels varies as a function of motoneuron size. Compartmental models were constructed based on dendritic trees of five motoneurons that ranged in size from small to large. These models were constrained by known differences in PIC activation reported for low- and high-conductance motoneurons and the relationship between somatic PIC threshold and the presence or absence of tonic excitatory or inhibitory synaptic activity. Our simulations suggest that L-type Ca(2+) channels are concentrated in hotspots whose distance from the soma increases with the size of the dendritic tree. Moving the hotspots away from these sites (e.g., using the hotspot locations from large motoneurons on intermediate-sized motoneurons) fails to replicate the shifts in PIC threshold that occur experimentally during tonic excitatory or inhibitory synaptic activity. In models equipped with a size-dependent distribution of L-type Ca(2+) channels, the amplification of synaptic current by PICs depends on motoneuron size and the location of the synaptic input on the dendritic tree.
NASA Astrophysics Data System (ADS)
Ward, N. K.; Maureira, F.; Yourek, M. A.; Brooks, E. S.; Stockle, C. O.
2014-12-01
The current use of synthetic nitrogen fertilizers in agriculture has many negative environmental and economic costs, necessitating improved nitrogen management. In the highly heterogeneous landscape of the Palouse region in eastern Washington and northern Idaho, crop nitrogen needs vary widely within a field. Site-specific nitrogen management is a promising strategy to reduce excess nitrogen lost to the environment while maintaining current yields by matching crop needs with inputs. This study used in-situ hydrologic, nutrient, and crop yield data from a heavily instrumented field site in the high precipitation zone of the wheat-producing Palouse region to assess the performance of the MicroBasin model. MicroBasin is a high-resolution watershed-scale ecohydrologic model with nutrient cycling and cropping algorithms based on the CropSyst model. Detailed soil mapping conducted at the site was used to parameterize the model and the model outputs were evaluated with observed measurements. The calibrated MicroBasin model was then used to evaluate the impact of various nitrogen management strategies on crop yield and nitrate losses. The strategies include uniform application as well as delineating the field into multiple zones of varying nitrogen fertilizer rates to optimize nitrogen use efficiency. We present how coupled modeling and in-situ data sets can inform agricultural management and policy to encourage improved nitrogen management.
Dynamic control modification techniques in teleoperation of a flexible manipulator. M.S. Thesis
NASA Technical Reports Server (NTRS)
Magee, David Patrick
1991-01-01
The objective of this research is to reduce the end-point vibration of a large, teleoperated manipulator while preserving the usefulness of the system motion. A master arm is designed to measure desired joint angles as the user specifies a desired tip motion. The desired joint angles from the master arm are the inputs to an adaptive PD control algorithm that positions the end-point of the manipulator. As the user moves the tip of the master, the robot will vibrate at its natural frequencies which makes it difficult to position the end-point. To eliminate the tip vibration during teleoperated motions, an input shaping method is presented. The input shaping method transforms each sample of the desired input into a new set of impulses that do not excite the system resonances. The method is explained using the equation of motion for a simple, second-order system. The impulse response of such a system is derived and the constraint equations for vibrationless motion are presented. To evaluate the robustness of the method, a different residual vibration equation from Singer's is derived that more accurately represents the input shaping technique. The input shaping method is shown to actually increase the residual vibration in certain situations when the system parameters are not accurately specified. Finally, the implementation of the input shaping method to a system with varying parameters is shown to induce a vibration into the system. To eliminate this vibration, a modified command shaping technique is developed. The ability of the modified command shaping method to reduce vibration at the system resonances is tested by varying input perturbations to trajectories in a range of possible user inputs. By comparing the frequency responses of the transverse acceleration at the end-point of the manipulator, the modified method is compared to the original PD routine. 
The control scheme that produces the smaller magnitude of resonant vibration at the first natural frequency is considered the more effective control method.
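The unmodified technique the thesis builds on is the standard two-impulse zero-vibration (ZV) shaper: each command sample is convolved with impulses whose amplitudes and spacing are chosen, from the estimated natural frequency and damping ratio, so that their residual vibrations cancel. A generic sketch of that baseline (not the thesis's modified command shaping scheme):

```python
import numpy as np

def zv_shaper(wn, zeta):
    """Two-impulse ZV shaper for natural frequency wn (rad/s), damping zeta."""
    wd = wn * np.sqrt(1.0 - zeta**2)               # damped natural frequency
    K = np.exp(-zeta * np.pi / np.sqrt(1.0 - zeta**2))
    A = np.array([1.0, K]) / (1.0 + K)             # amplitudes sum to 1 (no net scaling)
    t = np.array([0.0, np.pi / wd])                # second impulse half a damped period later
    return A, t

def shape(command, dt, wn, zeta):
    """Convolve a sampled command with the shaper impulses (times rounded to the grid)."""
    A, t = zv_shaper(wn, zeta)
    out = np.zeros(len(command) + int(round(t[1] / dt)))
    for a, ti in zip(A, t):
        k = int(round(ti / dt))
        out[k:k + len(command)] += a * command
    return out
```

The robustness issue the thesis analyzes follows directly: if wn or zeta is misestimated, the two impulse responses no longer cancel and residual vibration survives, motivating the modified scheme for systems with varying parameters.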
Dynamic modeling of neuronal responses in fMRI using cubature Kalman filtering.
Havlicek, Martin; Friston, Karl J; Jan, Jiri; Brazdil, Milan; Calhoun, Vince D
2011-06-15
This paper presents a new approach to inverting (fitting) models of coupled dynamical systems based on state-of-the-art (cubature) Kalman filtering. Crucially, this inversion furnishes posterior estimates of both the hidden states and parameters of a system, including any unknown exogenous input. Because the underlying generative model is formulated in continuous time (with a discrete observation process) it can be applied to a wide variety of models specified with either ordinary or stochastic differential equations. These are an important class of models that are particularly appropriate for biological time-series, where the underlying system is specified in terms of kinetics or dynamics (i.e., dynamic causal models). We provide comparative evaluations with generalized Bayesian filtering (dynamic expectation maximization) and demonstrate marked improvements in accuracy and computational efficiency. We compare the schemes using a series of difficult (nonlinear) toy examples and conclude with a special focus on hemodynamic models of evoked brain responses in fMRI. Our scheme promises to provide a significant advance in characterizing the functional architectures of distributed neuronal systems, even in the absence of known exogenous (experimental) input; e.g., resting state fMRI studies and spontaneous fluctuations in electrophysiological studies. Importantly, unlike current Bayesian filters (e.g. DEM), our scheme provides estimates of time-varying parameters, which we will exploit in future work on the adaptation and enabling of connections in the brain. Copyright © 2011 Elsevier Inc. All rights reserved.
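The cubature step at the heart of such a filter is compact: a third-degree spherical-radial rule with 2n equally weighted points placed at ±√n along the columns of a covariance square root. A minimal sketch of point generation and a prediction step (a generic CKF ingredient, not the authors' full continuous-discrete implementation):

```python
import numpy as np

def cubature_points(mean, cov):
    """The 2n third-degree spherical-radial cubature points for N(mean, cov)."""
    n = len(mean)
    S = np.linalg.cholesky(cov)                                   # cov = S @ S.T
    units = np.sqrt(n) * np.concatenate([np.eye(n), -np.eye(n)])  # +/- sqrt(n) * e_i
    return mean + units @ S.T                                     # equal weights 1/(2n)

def cubature_predict(f, mean, cov):
    """Propagate the cubature points through f and re-estimate the moments."""
    X = cubature_points(mean, cov)
    Y = np.array([f(x) for x in X])
    m = Y.mean(axis=0)
    P = (Y - m).T @ (Y - m) / len(Y)
    return m, P
```

Unlike a sigma-point set with tunable spread, the cubature rule is derivative-free and parameter-free, which is part of its appeal for the stiff, nonlinear hemodynamic models discussed in the paper.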
NASA Technical Reports Server (NTRS)
Johannsen, G.; Govindaraj, T.
1980-01-01
The influence of different types of predictor displays in a longitudinal vertical takeoff and landing (VTOL) hover task is analyzed in a theoretical study. Several cases with differing amounts of predictive and rate information are compared. The optimal control model of the human operator is used to estimate human and system performance in terms of root-mean-square (rms) values and to compute optimized attention allocation. The only part of the model which is varied to predict these data is the observation matrix. Typical cases are selected for a subsequent experimental validation. The rms values as well as eye-movement data are recorded. The results agree favorably with those of the theoretical study in terms of relative differences. Better matching is achieved by revised model input data.
An empirical model of the tidal currents in the Gulf of the Farallones
Steger, J.M.; Collins, C.A.; Schwing, F.B.; Noble, M.; Garfield, N.; Steiner, M.T.
1998-01-01
Candela et al. (1990, 1992) showed that tides in an open ocean region can be resolved using velocity data from a ship-mounted ADCP. We use their method to build a spatially varying model of the tidal currents in the Gulf of the Farallones, an area of complicated bathymetry where the tidal velocities in some parts of the region are weak compared to the mean currents. We describe the tidal fields for the M2, S2, K1, and O1 constituents and show that this method is sensitive to the model parameters and the quantity of input data. In areas with complex bathymetry and tidal structures, a large amount of spatial data is needed to resolve the tides. A method of estimating the associated errors inherent in the model is described.
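Candela et al.'s method additionally expands each constituent's amplitude as a spatial polynomial over the ship track; stripped of that spatial part, its temporal core is an ordinary least-squares harmonic fit at the known tidal frequencies. The constituent periods below are the standard astronomical values; the rest is a single-point sketch:

```python
import numpy as np

TIDAL_OMEGA = {                      # angular frequencies (rad/hour)
    "M2": 2 * np.pi / 12.4206,
    "S2": 2 * np.pi / 12.0,
    "K1": 2 * np.pi / 23.9345,
    "O1": 2 * np.pi / 25.8193,
}

def fit_tides(t_hours, u):
    """Least-squares harmonic fit of major constituents to a velocity series.
    Returns the amplitude of each constituent."""
    cols = [np.ones_like(t_hours)]                       # mean-flow column
    for w in TIDAL_OMEGA.values():
        cols += [np.cos(w * t_hours), np.sin(w * t_hours)]
    G = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(G, u, rcond=None)
    amps = {}
    for i, name in enumerate(TIDAL_OMEGA):
        a, b = coef[1 + 2 * i], coef[2 + 2 * i]
        amps[name] = float(np.hypot(a, b))
    return amps
```

Separating M2 from S2 requires a record longer than their roughly 14.8-day beat period, which hints at why sparse, spatially scattered ADCP data make the full spatially varying fit sensitive to the quantity of input data.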
The 3D modeling of high numerical aperture imaging in thin films
NASA Technical Reports Server (NTRS)
Flagello, D. G.; Milster, Tom
1992-01-01
A modelling technique is described which is used to explore three-dimensional (3D) image irradiance distributions formed by high numerical aperture (NA > 0.5) lenses in homogeneous, linear films. This work uses a 3D modelling approach that is based on a plane-wave decomposition in the exit pupil. Each plane wave component is weighted by factors due to polarization, aberration, and input amplitude and phase terms. This is combined with a modified thin-film matrix technique to derive the total field amplitude at each point in a film by a coherent vector sum over all plane waves. Then the total irradiance is calculated. The model is used to show how asymmetries present in the polarized image change with the influence of a thin film through varying degrees of focus.
NASA Astrophysics Data System (ADS)
Gladstone, Rupert M.; Lee, Victoria; Rougier, Jonathan; Payne, Antony J.; Hellmer, Hartmut; Le Brocq, Anne; Shepherd, Andrew; Edwards, Tamsin L.; Gregory, Jonathan; Cornford, Stephen L.
2012-06-01
A flowline ice sheet model is coupled to a box model for cavity circulation and configured for the Pine Island Glacier. An ensemble of 5000 simulations is carried out from 1900 to 2200 with varying inputs and parameters, forced by ocean temperatures predicted by a regional ocean model under the A1B ‘business as usual’ emissions scenario. Comparison is made against recent observations to provide a calibrated prediction in the form of a 95% confidence set. Predictions are for monotonic retreat of the grounding line (apart from some small-scale fluctuations in a minority of cases) over the next 200 yr, with large uncertainty in the rate of retreat. Full collapse of the main trunk of the Pine Island Glacier during the 22nd century remains a possibility.
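The ensemble-calibration idea in the abstract above can be sketched with a deliberately simple stand-in: a toy retreat law replaces the coupled ice-sheet/box model, members inconsistent with a synthetic "recent observation" are discarded, and the survivors form the confidence set. All numbers are invented.

```python
import numpy as np

# Illustrative sketch (not the actual flowline/box-model setup): an ensemble
# of simple grounding-line retreat trajectories is calibrated against a
# synthetic observation, and surviving members give a 95% interval for 2200.
rng = np.random.default_rng(42)
N = 5000
rate = rng.uniform(0.0, 2.0, N)      # km/yr retreat rate (hypothetical prior)
accel = rng.uniform(0.0, 0.01, N)    # km/yr^2 acceleration (hypothetical)

def retreat(years_since_1900, r, a):
    """Monotonic grounding-line retreat in km (toy model)."""
    return r * years_since_1900 + 0.5 * a * years_since_1900 ** 2

obs_2010, obs_sigma = 25.0, 5.0          # synthetic observation at year 110
sim_2010 = retreat(110.0, rate, accel)
keep = np.abs(sim_2010 - obs_2010) < 2 * obs_sigma  # crude calibration window
pred_2200 = retreat(300.0, rate[keep], accel[keep])
lo, hi = np.percentile(pred_2200, [2.5, 97.5])      # calibrated 95% interval
```

The wide interval between `lo` and `hi` mirrors the abstract's point: calibration constrains the ensemble but leaves large uncertainty in the rate of retreat.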
Enabling full-field physics-based optical proximity correction via dynamic model generation
NASA Astrophysics Data System (ADS)
Lam, Michael; Clifford, Chris; Raghunathan, Ananthan; Fenger, Germain; Adam, Kostas
2017-07-01
As extreme ultraviolet lithography moves closer to reality for high-volume production, its peculiar modeling challenges related to both inter- and intrafield effects have necessitated building an optical proximity correction (OPC) infrastructure that operates with field position dependency. Previous state-of-the-art approaches to modeling field dependency used piecewise constant models, where static input models are assigned to specific x/y-positions within the field. OPC and simulation could assign the proper static model based on simulation-level placement. However, in the realm of 7 and 5 nm feature sizes, small discontinuities in OPC from piecewise constant model changes can cause unacceptable levels of edge placement error. The introduction of dynamic model generation (DMG) effectively avoids these discontinuities by providing unique mask and optical models per simulation region, allowing a near continuum of models through the field. DMG allows unique models for electromagnetic field, apodization, aberrations, etc. to vary through the entire field and provides a capability to precisely and accurately model systematic field signatures.
Mapping risk of avian influenza transmission at the interface of domestic poultry and wild birds
Prosser, Diann J.; Hungerford, Laura L.; Erwin, R. Michael; Ottinger, Mary Ann; Takekawa, John Y.; Ellis, Erle C.
2013-01-01
The emergence of avian influenza viruses with high lethality to humans, such as the currently circulating highly pathogenic A(H5N1) (emerged in 1996) and A(H7N9), causes serious concern for the global economic and public health sectors. Understanding the spatial and temporal interface between wild and domestic populations, from which these viruses emerge, is fundamental to taking action. This information, however, is rarely considered in influenza risk models, partly due to a lack of data. We aim to identify areas of high transmission risk between domestic poultry and wild waterfowl in China, the epicenter of both viruses. Two levels of models were developed: one that predicts hotspots of novel virus emergence between domestic and wild birds, and one that incorporates H5N1 risk factors, for which input data exist. Models were produced at 1 and 30 km spatial resolutions and for two seasons. Patterns of risk varied between seasons, with higher risk in the northeast, central-east, and western regions of China during spring and summer, and in the central and southeastern regions during winter. Monte Carlo uncertainty analyses indicated varying levels of model confidence, with the lowest errors in the densely populated regions of eastern and southern China. Applications and limitations of the models are discussed.
Adaptive optimal stochastic state feedback control of resistive wall modes in tokamaks
NASA Astrophysics Data System (ADS)
Sun, Z.; Sen, A. K.; Longman, R. W.
2006-01-01
An adaptive optimal stochastic state feedback control is developed to stabilize the resistive wall mode (RWM) instability in tokamaks. The extended least-square method with exponential forgetting factor and covariance resetting is used to identify (experimentally determine) the time-varying stochastic system model. A Kalman filter is used to estimate the system states. The estimated system states are passed on to an optimal state feedback controller to construct control inputs. The Kalman filter and the optimal state feedback controller are periodically redesigned online based on the identified system model. This adaptive controller can stabilize the time-dependent RWM in a slowly evolving tokamak discharge. This is accomplished within a time delay of roughly four times the inverse of the growth rate for the time-invariant model used.
Adaptive Optimal Stochastic State Feedback Control of Resistive Wall Modes in Tokamaks
NASA Astrophysics Data System (ADS)
Sun, Z.; Sen, A. K.; Longman, R. W.
2007-06-01
An adaptive optimal stochastic state feedback control is developed to stabilize the resistive wall mode (RWM) instability in tokamaks. The extended least square method with exponential forgetting factor and covariance resetting is used to identify the time-varying stochastic system model. A Kalman filter is used to estimate the system states. The estimated system states are passed on to an optimal state feedback controller to construct control inputs. The Kalman filter and the optimal state feedback controller are periodically redesigned online based on the identified system model. This adaptive controller can stabilize the time dependent RWM in a slowly evolving tokamak discharge. This is accomplished within a time delay of roughly four times the inverse of the growth rate for the time-invariant model used.
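Both RWM abstracts above rest on online identification of a time-varying model by least squares with an exponential forgetting factor. The sketch below shows that ingredient in its simplest form; it is a stand-in (plain recursive least squares, not the extended method with covariance resetting, and all signals are invented).

```python
import numpy as np

# Minimal recursive-least-squares (RLS) sketch with exponential forgetting,
# the identification idea behind the adaptive RWM controller described above.
def rls_step(theta, P, phi, y, lam=0.98):
    """One RLS update for measurement y, regressor phi, forgetting factor lam."""
    K = P @ phi / (lam + phi @ P @ phi)     # gain vector
    theta = theta + K * (y - phi @ theta)   # parameter update
    P = (P - np.outer(K, phi @ P)) / lam    # covariance update with forgetting
    return theta, P

# Track a slowly drifting system y_k = a_k*y_{k-1} + b*u_k + noise.
rng = np.random.default_rng(1)
theta, P = np.zeros(2), np.eye(2) * 100.0
y_prev = 0.0
for k in range(2000):
    a_true, b_true = 0.9 - 0.0001 * k, 1.0  # parameter a drifts slowly in time
    u = rng.standard_normal()
    y = a_true * y_prev + b_true * u + 0.05 * rng.standard_normal()
    theta, P = rls_step(theta, P, np.array([y_prev, u]), y)
    y_prev = y

a_hat, b_hat = theta                        # tracked parameter estimates
```

The forgetting factor discounts old data geometrically, which is what lets the estimator follow a slowly evolving discharge rather than averaging over the whole history.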
NASA Astrophysics Data System (ADS)
Boyer, E. W.; Goodale, C. L.; Howarth, R. W.; VanBreemen, N.
2001-12-01
Inputs of nitrogen (N) to aquatic and terrestrial ecosystems have increased during recent decades, primarily from the production and use of fertilizers, the planting of N-fixing crops, and the combustion of fossil fuels. We present mass-balanced budgets of N for 16 catchments along a latitudinal profile from Maine to Virginia, which encompass a range of climatic variability and are major drainages to the coast of the North Atlantic Ocean. We quantify inputs of N to each catchment from atmospheric deposition, application of nitrogenous fertilizers, biological nitrogen fixation by crops and trees, and import of N in agricultural products (food and feed). We relate these input terms to losses of N (total, organic, and nitrate) in streamflow. The relative importance of N sources to N exports varies widely by watershed and is related to land use. Atmospheric deposition was the largest source of N to the forested catchments of northern New England (e.g., Penobscot and Kennebec); import of N in food was the largest source of N to the more populated regions of southern New England (e.g., Charles and Blackstone); and agricultural inputs were the dominant N sources in the Mid-Atlantic region (e.g., Schuylkill and Potomac). In all catchments, N inputs greatly exceed outputs, implying additional loss terms (e.g., denitrification or volatilization and transport of animal wastes) or changes in internal N stores (e.g., accumulation of N in vegetation, soil, or groundwater). We use our N budgets and several modeling approaches to constrain estimates of the fate of this excess N, including estimates of N storage in accumulating woody biomass, N losses due to in-stream denitrification, and more. This work is an effort of the SCOPE Nitrogen Project.
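The budgeting described above reduces to simple accounting per catchment; a minimal sketch follows, with all numbers invented placeholders rather than values from the study.

```python
# Minimal nitrogen mass-balance sketch in the spirit of the catchment budgets
# above. All quantities are hypothetical (kg N per km^2 per yr).
inputs = {
    "atmospheric_deposition": 900.0,
    "fertilizer": 450.0,
    "biological_fixation": 300.0,
    "net_food_feed_import": 600.0,
}
riverine_export = 550.0  # N measured in streamflow (hypothetical)

total_input = sum(inputs.values())
excess = total_input - riverine_export    # stored, denitrified, or volatilized
export_fraction = riverine_export / total_input
largest_source = max(inputs, key=inputs.get)
```

The `excess` term is exactly the unexplained residual the abstract discusses: inputs greatly exceed streamflow exports, so the balance must be closed by storage or gaseous loss.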
Conversion of Phase Information into a Spike-Count Code by Bursting Neurons
Samengo, Inés; Montemurro, Marcelo A.
2010-01-01
Single neurons in the cerebral cortex are immersed in a fluctuating electric field, the local field potential (LFP), which mainly originates from synchronous synaptic input into the local neural neighborhood. As shown by recent studies in visual and auditory cortices, the angular phase of the LFP at the time of spike generation adds significant extra information about the external world, beyond the one contained in the firing rate alone. However, no biologically plausible mechanism has yet been suggested that allows downstream neurons to infer the phase of the LFP at the soma of their pre-synaptic afferents. Therefore, so far there is no evidence that the nervous system can process phase information. Here we study a model of a bursting pyramidal neuron, driven by a time-dependent stimulus. We show that the number of spikes per burst varies systematically with the phase of the fluctuating input at the time of burst onset. The mapping between input phase and number of spikes per burst is a robust response feature for a broad range of stimulus statistics. Our results suggest that cortical bursting neurons could play a crucial role in translating LFP phase information into an easily decodable spike count code. PMID:20300632
Processing oscillatory signals by incoherent feedforward loops
NASA Astrophysics Data System (ADS)
Zhang, Carolyn; Wu, Feilun; Tsoi, Ryan; Shats, Igor; You, Lingchong
From the timing of amoeba development to the maintenance of stem cell pluripotency, many biological signaling pathways exhibit the ability to differentiate between pulsatile and sustained signals in the regulation of downstream gene expression. While networks underlying this signal decoding are diverse, many are built around a common motif, the incoherent feedforward loop (IFFL), where an input simultaneously activates an output and an inhibitor of the output. With appropriate parameters, this motif can generate temporal adaptation, where the system is desensitized to a sustained input. This property serves as the foundation for distinguishing signals with varying temporal profiles. Here, we use quantitative modeling to examine another property of IFFLs, the ability to process oscillatory signals. Our results indicate that the system's ability to translate pulsatile dynamics is limited by two constraints. The kinetics of IFFL components dictate the input range for which the network can decode pulsatile dynamics. In addition, a match between the network parameters and signal characteristics is required for optimal "counting". We elucidate one potential mechanism by which information processing occurs in natural networks with implications in the design of synthetic gene circuits for this purpose. This work was partially supported by the National Science Foundation Graduate Research Fellowship (CZ).
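The temporal adaptation property of the IFFL motif can be demonstrated with a few lines of numerical integration. The equations below are the classic "sniffer" form of the IFFL, not the specific model of the abstract, and all parameters are invented.

```python
import numpy as np

# Euler-integration sketch of a "sniffer" incoherent feedforward loop:
#   dY/dt = k1*X - k2*Y       (X activates the inhibitor Y)
#   dZ/dt = k3*X - k4*Y*Z     (X activates the output Z; Y degrades it)
# The output Z responds transiently to a sustained step in X, then adapts
# back to an X-independent steady state k2*k3/(k1*k4).
k1 = k2 = k3 = k4 = 1.0
dt, T = 0.001, 40.0
n = int(T / dt)
Y, Z = 1.0, 1.0                    # start at the X = 1 steady state
z_trace = np.empty(n)
for i in range(n):
    X = 1.0 if i * dt < 10.0 else 5.0   # sustained 5x step at t = 10
    Y += dt * (k1 * X - k2 * Y)
    Z += dt * (k3 * X - k4 * Y * Z)
    z_trace[i] = Z

peak = z_trace.max()               # transient response to the step
final = z_trace[-1]                # adapts back toward 1 despite X = 5
```

Because `final` returns near its pre-step value while `peak` does not, the motif responds to the change in input rather than its level, which is the basis for distinguishing pulsatile from sustained signals.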
Homeostasis in a feed forward loop gene regulatory motif.
Antoneli, Fernando; Golubitsky, Martin; Stewart, Ian
2018-05-14
The internal state of a cell is affected by inputs from the extra-cellular environment, such as external temperature. If some output, such as the concentration of a target protein, remains approximately constant as inputs vary, the system exhibits homeostasis. Special sub-networks called motifs are unusually common in gene regulatory networks (GRNs), suggesting that they may have a significant biological function. Potentially, one such function is homeostasis. In support of this hypothesis, we show that the feed-forward loop GRN produces homeostasis. Here the inputs are subsumed into a single parameter that affects only the first node in the motif, and the output is the concentration of a target protein. The analysis uses the notion of infinitesimal homeostasis, which occurs when the input-output map has a critical point (zero derivative). In model equations such points can be located using implicit differentiation. If the second derivative of the input-output map also vanishes, the critical point is a chair: the output rises roughly linearly, then flattens out (the homeostasis region or plateau), and then starts to rise again. Chair points are a common cause of homeostasis. In more complicated equations or networks, numerical exploration would have to augment analysis; thus, in terms of finding chairs, this paper presents a proof of concept. We apply this method to a standard family of differential equations modeling the feed-forward loop GRN, in which the input determines the production of a particular mRNA, and deduce analytically that chair points occur. The same method can potentially be used to find homeostasis regions in other GRNs. We also discuss why homeostasis in the motif may persist even when the rest of the network is taken into account. Copyright © 2018 Elsevier Ltd. All rights reserved.
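The "chair" geometry described above (first and second derivatives of the input-output map vanishing together) is easy to illustrate numerically. The cubic below is an invented stand-in for an input-output map, not the paper's GRN equations.

```python
import numpy as np

# Toy illustration of a chair point: near the chair, the first and second
# derivatives of the input-output map vanish together, producing a plateau
# (rise, flatten, rise again). The cubic is a hypothetical example.
def io_map(I):
    return 0.5 * (I - 2.0) ** 3 + 1.0   # chair point at I = 2, output 1

I = np.linspace(0.0, 4.0, 4001)
h = I[1] - I[0]
y = io_map(I)
dy = np.gradient(y, h)                  # numerical first derivative
d2y = np.gradient(dy, h)                # numerical second derivative

i_star = int(np.argmin(np.abs(dy)))     # where the output is flattest
chair_input = I[i_star]
is_chair = abs(dy[i_star]) < 1e-3 and abs(d2y[i_star]) < 1e-2
```

In a real model the input-output map would come from solving the motif's steady-state equations, and the critical point would be located by implicit differentiation as the abstract describes; the numerical check here plays the "proof of concept" role.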
Engelken, Rainer; Farkhooi, Farzad; Hansel, David; van Vreeswijk, Carl; Wolf, Fred
2016-01-01
Neuronal activity in the central nervous system varies strongly in time and across neuronal populations. It is a longstanding proposal that such fluctuations generically arise from chaotic network dynamics. Various theoretical studies predict that the rich dynamics of rate models operating in the chaotic regime can subserve circuit computation and learning. Neurons in the brain, however, communicate via spikes and it is a theoretical challenge to obtain similar rate fluctuations in networks of spiking neuron models. A recent study investigated spiking balanced networks of leaky integrate and fire (LIF) neurons and compared their dynamics to a matched rate network with identical topology, where single unit input-output functions were chosen from isolated LIF neurons receiving Gaussian white noise input. A mathematical analogy between the chaotic instability in networks of rate units and the spiking network dynamics was proposed. Here we revisit the behavior of the spiking LIF networks and these matched rate networks. We find expected hallmarks of a chaotic instability in the rate network: For supercritical coupling strength near the transition point, the autocorrelation time diverges. For subcritical coupling strengths, we observe critical slowing down in response to small external perturbations. In the spiking network, we found in contrast that the timescale of the autocorrelations is insensitive to the coupling strength and that rate deviations resulting from small input perturbations rapidly decay. The decay speed even accelerates for increasing coupling strength. In conclusion, our reanalysis demonstrates fundamental differences between the behavior of pulse-coupled spiking LIF networks and rate networks with matched topology and input-output function. In particular there is no indication of a corresponding chaotic instability in the spiking network.
Random vs. Combinatorial Methods for Discrete Event Simulation of a Grid Computer Network
NASA Technical Reports Server (NTRS)
Kuhn, D. Richard; Kacker, Raghu; Lei, Yu
2010-01-01
This study compared random and t-way combinatorial inputs to a network simulator to determine whether the two approaches produce significantly different deadlock detection for varying network configurations. Modeling deadlock detection is important for analyzing configuration changes that could inadvertently degrade network operations, or for determining modifications that could be made by attackers to deliberately induce deadlock. Discrete event simulation of a network may be conducted using random generation of inputs. In this study, we compare random with combinatorial generation of inputs. Combinatorial (or t-way) testing requires every combination of any t parameter values to be covered by at least one test. Combinatorial methods can be highly effective because empirical data suggest that nearly all failures involve the interaction of a small number of parameters (1 to 6). Thus, for example, if all deadlocks involve at most 5-way interactions between n parameters, then exhaustive testing of all n-way interactions adds no additional information that would not be obtained by testing all 5-way interactions. While the maximum degree of interaction between parameters involved in deadlocks clearly cannot be known in advance, covering all t-way interactions may be more efficient than random generation of inputs. In this study we tested this hypothesis for t = 2, 3, and 4 for deadlock detection in a network simulation. Achieving the same degree of coverage provided by 4-way tests would have required approximately 3.2 times as many random tests; thus combinatorial methods were more efficient for detecting deadlocks involving a higher degree of interaction. The paper reviews explanations for these results and implications for modeling and simulation.
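The coverage comparison at the heart of the study can be sketched by enumerating the 2-way interactions a random suite must hit. The parameters and values below are invented placeholders, not the simulator's actual configuration space.

```python
from itertools import combinations, product
import random

# Sketch of 2-way (pairwise) combination coverage: count how many random
# tests are needed before every 2-way parameter-value interaction of a small
# hypothetical configuration space has been exercised at least once.
params = {
    "topology": ["ring", "star", "mesh"],
    "buffer": ["small", "large"],
    "routing": ["static", "adaptive"],
    "load": ["low", "high"],
}

def two_way_pairs(test):
    """All 2-way (param, value) interactions a single test covers."""
    items = sorted(test.items())
    return {(a, b) for a, b in combinations(items, 2)}

# Every 2-way interaction that must be covered at least once.
names = sorted(params)
required = set()
for pa, pb in combinations(names, 2):
    for va, vb in product(params[pa], params[pb]):
        required.add(((pa, va), (pb, vb)))

rng = random.Random(0)
covered = set()
n_random_tests = 0
while covered != required:        # draw random tests until all pairs covered
    test = {p: rng.choice(vs) for p, vs in params.items()}
    covered |= two_way_pairs(test)
    n_random_tests += 1
```

A covering array could hit all 30 required pairs in as few as 6 tests here (the largest value-pair count for any two parameters), while random generation typically needs several times more, which is the efficiency gap the abstract quantifies for t = 4.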
Álvarez-Iglesias, P; Rubio, B; Millos, J
2012-10-15
San Simón Bay, the inner part of the Ría de Vigo (NW Spain), an area previously identified as highly polluted by Pb, was selected for the application of Pb stable isotope ratios as a fingerprinting tool in subtidal and intertidal sediment cores. Lead isotopic ratios were determined by inductively coupled plasma mass spectrometry on extracts from bulk samples after total acid digestion. Depth-wise profiles of ²⁰⁶Pb/²⁰⁷Pb, ²⁰⁶Pb/²⁰⁴Pb, ²⁰⁷Pb/²⁰⁴Pb, ²⁰⁸Pb/²⁰⁴Pb and ²⁰⁸Pb/²⁰⁷Pb ratios showed, in general, an upward decrease for both intertidal and subtidal sediments as a consequence of anthropogenic activities over the last century or centuries. Waste channel samples from a nearby ceramic factory showed characteristic Pb stable isotope ratios different from those typical of coal and petrol. Natural isotope ratios for non-polluted samples were established for the study area, differentiating sediments from granitic or schist-gneiss sources. A binary mixing model applied to the polluted samples allowed estimation of the anthropogenic inputs to the bay. These inputs represented between 25 and 98% of Pb inputs in intertidal samples and 9-84% in subtidal samples, their contributions varying with time. Anthropogenic sources were apportioned according to a three-source model. Coal combustion-related emissions were the main anthropogenic source of Pb to the bay (60-70%) before the establishment of the ceramic factory in the area (in the 1970s), which has since constituted the main source (95-100%), followed by petrol-related emissions. The history of Pb inputs was determined for the 20th century for the intertidal area and for the 19th and 20th centuries for the subtidal area. Copyright © 2012 Elsevier B.V. All rights reserved.
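The binary mixing calculation used above follows directly from two endmember ratios. The sketch below uses invented endmember values, not the paper's measurements, and ignores the concentration weighting a rigorous treatment would include.

```python
# Two-endmember (binary) mixing sketch for Pb isotopes: the anthropogenic
# fraction follows from a measured ratio lying between a natural endmember
# and an anthropogenic endmember. Values below are hypothetical.
R_natural = 1.20   # e.g. 206Pb/207Pb of unpolluted local sediment (invented)
R_anthro = 1.16    # e.g. 206Pb/207Pb of the pollution source (invented)

def anthropogenic_fraction(R_sample):
    """Fraction of Pb attributed to the anthropogenic endmember.
    Simple ratio mixing; a full treatment weights by Pb concentration."""
    f = (R_natural - R_sample) / (R_natural - R_anthro)
    return min(max(f, 0.0), 1.0)   # clamp to the physical range [0, 1]

f_mid = anthropogenic_fraction(1.18)   # halfway between endmembers
```

A sample ratio halfway between the endmembers yields an anthropogenic fraction of one half, and samples at either endmember yield 0 or 1, matching the 9-98% range of fractions reported down-core.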
NASA Astrophysics Data System (ADS)
Li, Xin; Cai, Yu; Moloney, Brendan; Chen, Yiyi; Huang, Wei; Woods, Mark; Coakley, Fergus V.; Rooney, William D.; Garzotto, Mark G.; Springer, Charles S.
2016-08-01
Dynamic-Contrast-Enhanced Magnetic Resonance Imaging (DCE-MRI) has been used widely for clinical applications. Pharmacokinetic modeling of DCE-MRI data that extracts quantitative contrast reagent/tissue-specific model parameters is the most investigated method. One of the primary challenges in pharmacokinetic analysis of DCE-MRI data is accurate and reliable measurement of the arterial input function (AIF), which is the driving force behind all pharmacokinetics. Because of effects such as inflow and partial volume averaging, AIF measured from individual arteries sometimes require amplitude scaling for better representation of the blood contrast reagent (CR) concentration time-courses. Empirical approaches like blinded AIF estimation or reference tissue AIF derivation can be useful and practical, especially when there is no clearly visible blood vessel within the imaging field-of-view (FOV). Similarly, these approaches generally also require magnitude scaling of the derived AIF time-courses. Since the AIF varies among individuals even with the same CR injection protocol and the perfect scaling factor for reconstructing the ground truth AIF often remains unknown, variations in estimated pharmacokinetic parameters due to varying AIF scaling factors are of special interest. In this work, using simulated and real prostate cancer DCE-MRI data, we examined parameter variations associated with AIF scaling. Our results show that, for both the fast-exchange-limit (FXL) Tofts model and the water exchange sensitized fast-exchange-regime (FXR) model, the commonly fitted CR transfer constant (Ktrans) and the extravascular, extracellular volume fraction (ve) scale nearly proportionally with the AIF, whereas the FXR-specific unidirectional cellular water efflux rate constant, kio, and the CR intravasation rate constant, kep, are both AIF scaling insensitive. 
This indicates that, for DCE-MRI of prostate cancer and possibly other cancers, kio and kep may be more suitable imaging biomarkers for cross-platform, multicenter applications. Data from our limited study cohort show that kio correlates with Gleason scores, suggesting that it may be a useful biomarker for prostate cancer disease progression monitoring.
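The AIF-scaling insensitivity of kep reported above follows from the linearity of the Tofts convolution, which a short numerical check makes concrete. The AIF shape and parameter values below are invented, and this is the standard (FXL) Tofts form only, not the FXR model of the study.

```python
import numpy as np

# Numerical sketch of why AIF amplitude scaling trades off against Ktrans but
# leaves kep untouched in the Tofts model:
#   Ct(t) = Ktrans * integral Cp(tau) * exp(-kep*(t - tau)) dtau.
# The model is linear in the AIF, so scaling Cp by s while dividing Ktrans
# by s yields an identical tissue curve; kep sets the curve *shape*.
dt = 0.01
t = np.arange(0, 10, dt)
aif = t * np.exp(-t)                 # toy AIF shape (not a measured Cp)
Ktrans, kep = 0.25, 0.5              # hypothetical parameter values

def tofts(aif, Ktrans, kep):
    kernel = np.exp(-kep * t)
    return Ktrans * np.convolve(aif, kernel)[: t.size] * dt

ct = tofts(aif, Ktrans, kep)
ct_rescaled = tofts(2.0 * aif, Ktrans / 2.0, kep)  # scaled AIF, compensated Ktrans
max_diff = np.max(np.abs(ct - ct_rescaled))        # identical curves
```

Because any AIF scale factor can be absorbed into Ktrans without changing the data, a fit cannot separate the two, whereas kep (and, in the FXR model, kio) is determined by the time course itself.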
Nabavi-Pelesaraei, Ashkan; Rafiee, Shahin; Mohtasebi, Seyed Saeid; Hosseinzadeh-Bandbafha, Homa; Chau, Kwok-Wing
2018-08-01
Prediction of agricultural energy output and environmental impacts plays an important role in energy management and environmental conservation, as it can help us evaluate agricultural energy efficiency, conduct crop production system commissioning, and detect and diagnose faults in crop production systems. Agricultural energy output and environmental impacts can be readily predicted by artificial intelligence (AI), owing to its ease of use and adaptability in seeking optimal solutions rapidly, as well as its use of historical data to predict future agricultural energy use patterns under constraints. This paper conducts energy output and environmental impact prediction of paddy production in Guilan province, Iran, based on two AI methods: artificial neural networks (ANNs) and the adaptive neuro-fuzzy inference system (ANFIS). The amounts of energy input and output are 51,585.61 MJ kg⁻¹ and 66,112.94 MJ kg⁻¹, respectively, in paddy production. Life Cycle Assessment (LCA) is used to evaluate environmental impacts of paddy production. Results show that, in paddy production, in-farm emission is a hotspot in the global warming, acidification, and eutrophication impact categories. An ANN model with a 12-6-8-1 structure is selected as the best for predicting energy output. The correlation coefficient (R) varies from 0.524 to 0.999 in training for energy input and environmental impacts in ANN models. The ANFIS model is developed based on a hybrid learning algorithm, with R for predicting output energy being 0.860 and, for environmental impacts, varying from 0.944 to 0.997. Results indicate that the multi-level ANFIS is a useful tool for managers in large-scale planning for forecasting energy output and environmental indices of agricultural production systems, owing to its higher speed of computation compared to the ANN model, despite ANN's higher accuracy. Copyright © 2018 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Chapman, Martin Colby
1998-12-01
The design earthquake selection problem is fundamentally probabilistic. Disaggregation of a probabilistic model of the seismic hazard offers a rational and objective approach that can identify the most likely earthquake scenario(s) contributing to hazard. An ensemble of time series can be selected on the basis of the modal earthquakes derived from the disaggregation. This gives a useful time-domain realization of the seismic hazard, to the extent that a single motion parameter captures the important time-domain characteristics. A possible limitation to this approach arises because most currently available motion prediction models for peak ground motion or oscillator response are essentially independent of duration, and modal events derived using the peak motions for the analysis may not represent the optimal characterization of the hazard. The elastic input energy spectrum is an alternative to the elastic response spectrum for these types of analyses. The input energy combines the elements of amplitude and duration into a single-parameter description of the ground motion that can be readily incorporated into standard probabilistic seismic hazard analysis methodology. This use of the elastic input energy spectrum is examined. Regression analysis is performed using strong motion data from Western North America and consistent data processing procedures for both the absolute input energy equivalent velocity (V_ea) and the elastic pseudo-relative velocity response (PSV) in the frequency range 0.5 to 10 Hz. The results show that the two parameters can be successfully fit with identical functional forms. The dependence of V_ea and PSV upon (NEHRP) site classification is virtually identical. The variance of V_ea is uniformly less than that of PSV, indicating that V_ea can be predicted with slightly less uncertainty as a function of magnitude, distance, and site classification. The effects of site class are important at frequencies less than a few Hertz. The regression modeling does not resolve significant effects due to site class at frequencies greater than approximately 5 Hz. Disaggregation of general seismic hazard models using V_ea indicates that the modal magnitudes for the higher frequency oscillators tend to be larger, and vary less with oscillator frequency, than those derived using PSV. Insofar as the elastic input energy may be a better parameter for quantifying the damage potential of ground motion, its use in probabilistic seismic hazard analysis could provide an improved means for selecting earthquake scenarios and establishing design earthquakes for many types of engineering analyses.
Witter, Robert C.; Zhang, Yinglong J.; Wang, Kelin; Priest, George R.; Goldfinger, Chris; Stimely, Laura; English, John T.; Ferro, Paul A.
2013-01-01
Characterizations of tsunami hazards along the Cascadia subduction zone hinge on uncertainties in megathrust rupture models used for simulating tsunami inundation. To explore these uncertainties, we constructed 15 megathrust earthquake scenarios using rupture models that supply the initial conditions for tsunami simulations at Bandon, Oregon. Tsunami inundation varies with the amount and distribution of fault slip assigned to rupture models, including models where slip is partitioned to a splay fault in the accretionary wedge and models that vary the updip limit of slip on a buried fault. Constraints on fault slip come from onshore and offshore paleoseismological evidence. We rank each rupture model using a logic tree that evaluates a model’s consistency with geological and geophysical data. The scenarios provide inputs to a hydrodynamic model, SELFE, used to simulate tsunami generation, propagation, and inundation on unstructured grids with <5–15 m resolution in coastal areas. Tsunami simulations delineate the likelihood that Cascadia tsunamis will exceed mapped inundation lines. Maximum wave elevations at the shoreline varied from ∼4 m to 25 m for earthquakes with 9–44 m slip and Mw 8.7–9.2. Simulated tsunami inundation agrees with sparse deposits left by the A.D. 1700 and older tsunamis. Tsunami simulations for large (22–30 m slip) and medium (14–19 m slip) splay fault scenarios encompass 80%–95% of all inundation scenarios and provide reasonable guidelines for land-use planning and coastal development. The maximum tsunami inundation simulated for the greatest splay fault scenario (36–44 m slip) can help to guide development of local tsunami evacuation zones.
NASA Technical Reports Server (NTRS)
MacLeod, Todd C.; Ho, Fat D.
2004-01-01
A model of an n-channel ferroelectric field effect transistor (FFET) has been developed based on both theoretical and empirical data. The model is based on an existing model that incorporates partitioning of the ferroelectric layer to calculate the polarization within the ferroelectric material. The model incorporates several new aspects that are useful to the user. It takes into account the effect of a non-saturating gate voltage only partially polarizing the ferroelectric material, based on the existing remnant polarization. The model also incorporates the decay of the remnant polarization based on the time history of the FFET. A gate pulse of a specific voltage will not put the ferroelectric material into a single polarization state for that voltage; instead, the polarization varies with the previous state of the material and the time since the last change to the gate voltage. The model also utilizes data from FFETs made from different types of ferroelectric materials, allowing the user simply to specify the material being used rather than recreate the entire model. The model also allows the user to input the quality of the ferroelectric material being used, from a theoretically perfect material with little loss and no decay to a less-than-perfect material with remnant losses and decay. This model is designed to be used by people who need to predict the external characteristics of an FFET before committing the time and expense of design and fabrication. It also allows parametric evaluation of the effect of ferroelectric film quality on the overall performance of the transistor.
Effects of uncertain topographic input data on two-dimensional flow modeling in a gravel-bed river
Legleiter, C.J.; Kyriakidis, P.C.; McDonald, R.R.; Nelson, J.M.
2011-01-01
Many applications in river research and management rely upon two-dimensional (2D) numerical models to characterize flow fields, assess habitat conditions, and evaluate channel stability. Predictions from such models are potentially highly uncertain due to the uncertainty associated with the topographic data provided as input. This study used a spatial stochastic simulation strategy to examine the effects of topographic uncertainty on flow modeling. Many, equally likely bed elevation realizations for a simple meander bend were generated and propagated through a typical 2D model to produce distributions of water-surface elevation, depth, velocity, and boundary shear stress at each node of the model's computational grid. Ensemble summary statistics were used to characterize the uncertainty associated with these predictions and to examine the spatial structure of this uncertainty in relation to channel morphology. Simulations conditioned to different data configurations indicated that model predictions became increasingly uncertain as the spacing between surveyed cross sections increased. Model sensitivity to topographic uncertainty was greater for base flow conditions than for a higher, subbankfull flow (75% of bankfull discharge). The degree of sensitivity also varied spatially throughout the bend, with the greatest uncertainty occurring over the point bar where the flow field was influenced by topographic steering effects. Uncertain topography can therefore introduce significant uncertainty to analyses of habitat suitability and bed mobility based on flow model output. In the presence of such uncertainty, the results of these studies are most appropriately represented in probabilistic terms using distributions of model predictions derived from a series of topographic realizations. Copyright 2011 by the American Geophysical Union.
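The stochastic-simulation strategy above (many equally likely bed realizations pushed through a flow model, summarized by ensemble statistics) can be sketched in one dimension. The toy below replaces the 2D hydrodynamic model with a uniform-flow stand-in (Manning's equation) and uses invented numbers throughout.

```python
import numpy as np

# Monte Carlo sketch of propagating topographic uncertainty through a flow
# model: bed-elevation realizations feed a deliberately simple stand-in
# "model", and the ensemble spread summarizes prediction uncertainty.
rng = np.random.default_rng(7)
n_real = 1000
ws_elev = 10.0                  # fixed water-surface elevation, m (invented)
bed_mean, bed_sigma = 8.0, 0.3  # surveyed bed elevation and its 1-sigma, m

bed = rng.normal(bed_mean, bed_sigma, n_real)    # equally likely realizations
depth = np.clip(ws_elev - bed, 0.0, None)
n_manning, slope = 0.035, 1e-3
velocity = (1.0 / n_manning) * depth ** (2.0 / 3.0) * np.sqrt(slope)

depth_sd = depth.std()                           # ensemble depth uncertainty
vel_lo, vel_hi = np.percentile(velocity, [2.5, 97.5])
```

As in the study, the output distributions, rather than a single deterministic run, are what an analysis of habitat suitability or bed mobility should consume; conditioning on denser survey data would shrink `bed_sigma` and hence the ensemble spread.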
Chen, Dingjiang; Lu, Jun; Wang, Hailong; Shen, Yena; Kimberley, Mark O
2010-02-01
Riverine retention decreases loads of nitrogen (N) and phosphorus (P) in running water. It is an important process in nutrient cycling in watersheds. However, temporal riverine nutrient retention capacity varies due to changes in hydrology, ecology, and nutrient inputs into the watershed. Quantitative information on seasonal riverine N and P retention is critical for developing strategies to combat diffuse source pollution and eutrophication in riverine and coastal systems. This study examined seasonal variation of riverine total N (TN) and total P (TP) retention in the ChangLe River, an agricultural drainage river in east China. Water quality, hydrological parameters, and hydrophyte coverage were monitored along the ChangLe River monthly during 2004-2006. Nutrient export loads (including chemical fertilizer, livestock, and domestic sources) entering the river from the catchment area were computed using an export coefficient model based on estimated nutrient sources. Riverine TN and TP retention loads (RNRL and RPRL) were estimated using mass balance calculations. Temporal variations in riverine nutrient retention were analyzed statistically. Estimated annual riverine retention loads ranged from 1,538 to 2,127 t year⁻¹ for RNRL and from 79.4 to 90.4 t year⁻¹ for RPRL. Monthly retention loads varied from 6.4 to 300.8 t month⁻¹ for RNRL and from 1.4 to 15.3 t month⁻¹ for RPRL. Both RNRL and RPRL increased with river flow, water temperature, hydrophyte coverage, monthly sunshine hours, and total TN and TP inputs. Dissolved oxygen concentration and the pH level of the river water decreased with RNRL and RPRL. Riverine nutrient retention ratios (retention as a percentage of total input) were only related to hydrophyte coverage and monthly sunshine hours. Monthly variations in RNRL and RPRL were functions of TN and TP loads. Riverine nutrient retention capacity varied with environmental conditions.
Annual RNRL and RPRL accounted for 30.3-48.3% and 52.5-71.2%, respectively, of the total input TN and TP loads in the ChangLe River. Monthly riverine retention ratios were 3.5-88.7% for TN and 20.5-92.6% for TP. Hydrophyte growth and coverage on the river bed are the main cause of seasonal variation in riverine nutrient retention capacity. The total input TN and TP loads were the best indicators of RNRL and RPRL, respectively. High riverine nutrient retention capacity during summer, driven by hydrophyte growth, helps prevent algal blooms in both river systems and coastal waters in southeast China. Policies should be developed to strictly control nutrient applications on agricultural lands, and strategies for promoting hydrophyte growth in rivers are desirable for water quality management.
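The mass-balance bookkeeping behind RNRL and RPRL can be sketched in a few lines. The monthly loads below are hypothetical values, not the ChangLe measurements; retention is taken as load entering the reach minus load leaving it, and the retention ratio as retention over total input.

```python
def retention_load(input_load, output_load):
    """Riverine retention by mass balance: load entering the reach minus load leaving it (t/month)."""
    return input_load - output_load

def retention_ratio(input_load, output_load):
    """Retention as a fraction of the total input load."""
    return retention_load(input_load, output_load) / input_load

# Hypothetical monthly TN loads (t/month) for a summer month with dense hydrophyte cover.
tn_in, tn_out = 400.0, 120.0
print(retention_load(tn_in, tn_out))             # 280.0 t retained
print(round(retention_ratio(tn_in, tn_out), 3))  # 0.7, i.e. 70% retention
```

A 70% monthly ratio would sit inside the 3.5-88.7% TN range reported above.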
Paninski, Liam; Haith, Adrian; Szirtes, Gabor
2008-02-01
We recently introduced likelihood-based methods for fitting stochastic integrate-and-fire models to spike train data. The key component of this method involves the likelihood that the model will emit a spike at a given time t. Computing this likelihood is equivalent to computing a Markov first passage time density (the probability that the model voltage crosses threshold for the first time at time t). Here we detail an improved method for computing this likelihood, based on solving a certain integral equation. This integral equation method has several advantages over the techniques discussed in our previous work: in particular, the new method has fewer free parameters and is easily differentiable (for gradient computations). The new method is also easily adaptable for the case in which the model conductance, not just the input current, is time-varying. Finally, we describe how to incorporate large deviations approximations to very small likelihoods.
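The abstract does not reproduce the integral equation itself. The sketch below illustrates the general idea on the simplest possible case, drifted Brownian motion crossing a fixed threshold, where the Fortet first-kind Volterra equation can be discretized at midpoints and cross-checked against the closed-form inverse Gaussian density; the function names and discretization are my own, and the time-varying-conductance case treated in the paper needs the full machinery.

```python
import math

def free_density(x, t, mu, sigma):
    """Transition density of drifted Brownian motion X_t = mu*t + sigma*W_t, evaluated at X_t = x."""
    return math.exp(-(x - mu * t) ** 2 / (2 * sigma ** 2 * t)) / (sigma * math.sqrt(2 * math.pi * t))

def fpt_density(a, mu, sigma, T, n):
    """Approximate the first-passage-time density of X_t through the level a > 0 by
    discretizing the first-kind Volterra (Fortet) equation
        p(a, t | 0) = integral_0^t f(s) p(a, t | a, s) ds
    at midpoints s_j = (j + 1/2) * dt; the returned f[j] approximates f(s_j)."""
    dt = T / n
    k0 = free_density(0.0, 0.5 * dt, mu, sigma)  # kernel value at the newest midpoint (lag dt/2)
    f = []
    for i in range(1, n + 1):
        t = i * dt
        acc = sum(f[j] * free_density(0.0, t - (j + 0.5) * dt, mu, sigma) for j in range(i - 1))
        f.append((free_density(a, t, mu, sigma) - dt * acc) / (dt * k0))
    return f

def inverse_gaussian_density(t, a, mu, sigma):
    """Closed-form first-passage density for this special case, used as a cross-check."""
    return a / (sigma * math.sqrt(2 * math.pi * t ** 3)) * math.exp(-(a - mu * t) ** 2 / (2 * sigma ** 2 * t))

# Threshold a=1, drift mu=1, sigma=1: the numerical solution should track the inverse Gaussian.
f = fpt_density(1.0, 1.0, 1.0, 3.0, 600)
```

The midpoint grid sidesteps the integrable square-root singularity of the kernel at s = t, which is why the newest-sample kernel is evaluated at lag dt/2.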
Coupled oscillators in identification of nonlinear damping of a real parametric pendulum
NASA Astrophysics Data System (ADS)
Olejnik, Paweł; Awrejcewicz, Jan
2018-01-01
A damped parametric pendulum with friction is identified twice, using a precise and an imprecise mathematical model. A laboratory test stand designed for experimental investigation of nonlinear effects caused by viscous resistance and the stick-slip phenomenon serves as the model mechanical system. The influence of the accuracy of the mathematical model on the time variability of the oscillator's nonlinear damping coefficient is demonstrated: the free decay response of a precisely and an imprecisely modeled physical pendulum depends on two different time-varying damping coefficients. The coefficients of the analyzed parametric oscillator are identified with a new semi-empirical method based on a coupled oscillators approach, which uses the fractional-order derivative of the discrete measurement series as an input to the numerical model. Results of applying the proposed identification method to the nonlinear coefficients of the damped parametric oscillator are illustrated and discussed in detail.
Tang, Xiaoming; Qu, Hongchun; Wang, Ping; Zhao, Meng
2015-03-01
This paper investigates an off-line synthesis approach to model predictive control (MPC) for a class of networked control systems (NCSs) with network-induced delays. A new augmented model, which can readily accommodate a time-varying control law, is proposed to describe the NCS, where bounded deterministic network-induced delays may occur in both the sensor-to-controller (S-C) and controller-to-actuator (C-A) links. Based on this augmented model, a sufficient condition for closed-loop stability is derived by applying the Lyapunov method. The off-line synthesis approach to model predictive control is then developed from these stability results and explicitly accounts for the satisfaction of input and state constraints. A numerical example illustrates the effectiveness of the proposed method. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
Real-time implementation of biofidelic SA1 model for tactile feedback.
Russell, A F; Armiger, R S; Vogelstein, R J; Bensmaia, S J; Etienne-Cummings, R
2009-01-01
For the functionality of an upper-limb prosthesis to approach that of a real limb, it must be able to convey sensory feedback to the limb user accurately and intuitively. This paper presents results of the real-time implementation of a 'biofidelic' model that describes mechanotransduction in Slowly Adapting Type 1 (SA1) afferent fibers. The model accurately predicts the timing of action potentials for arbitrary force or displacement stimuli, and its output can be used as stimulation times for peripheral nerve stimulation by a neuroprosthetic device. Model performance was verified by comparing the predicted action potential (spike) outputs against measured spike outputs for different vibratory stimuli. Furthermore, experiments were conducted to show that, like real SA1 fibers, the model's spike rate varies with input pressure and that a periodic 'tapping' stimulus evokes periodic spike outputs.
The human motor neuron pools receive a dominant slow‐varying common synaptic input
Negro, Francesco; Yavuz, Utku Şükrü
2016-01-01
Key points: Motor neurons in a pool receive both common and independent synaptic inputs, although the proportion and role of their common synaptic input is debated. Classic correlation techniques between motor unit spike trains do not measure the absolute proportion of common input and have limitations as a result of the non-linearity of motor neurons. We propose a method that for the first time allows an accurate quantification of the absolute proportion of low frequency common synaptic input (<5 Hz) to motor neurons in humans. We applied the proposed method to three human muscles and determined experimentally that they receive a similarly large amount (>60%) of common input, irrespective of their different functional and control properties. These results increase our knowledge about the role of common and independent input to motor neurons in force control. Abstract: Motor neurons receive both common and independent synaptic inputs. This observation is classically based on the presence of a significant correlation between pairs of motor unit spike trains. The functional significance of different relative proportions of common input across muscles, individuals and conditions is still debated. One of the limitations in our understanding of correlated input to motor neurons is that it has not been possible so far to quantify the absolute proportion of common input with respect to the total synaptic input received by the motor neurons. Indeed, correlation measures of pairs of output spike trains only allow for relative comparisons. In the present study, we report for the first time an approach for measuring the proportion of common input in the low frequency bandwidth (<5 Hz) to a motor neuron pool in humans. This estimate is based on a phenomenological model and the theoretical fitting of the experimental values of coherence between the permutations of groups of motor unit spike trains. We demonstrate the validity of this theoretical estimate with several simulations. 
Moreover, we applied this method to three human muscles: the abductor digiti minimi, tibialis anterior and vastus medialis. Despite these muscles having different functional roles and control properties, as confirmed by the results of the present study, we estimate that their motor pools receive a similar and large (>60%) proportion of common low frequency oscillations with respect to their total synaptic input. These results suggest that the central nervous system provides a large amount of common input to motor neuron pools, in a similar way to that for muscles with different functional and control properties. PMID:27151459
NASA Astrophysics Data System (ADS)
Flores, A. N.; Entekhabi, D.; Bras, R. L.
2007-12-01
Soil hydraulic and thermal properties (SHTPs) affect both the rate of moisture redistribution in the soil column and the volumetric soil water capacity. Adequately constraining these properties through field and lab analysis to parameterize spatially-distributed hydrology models is often prohibitively expensive. Because SHTPs vary significantly at small spatial scales, individual soil samples are only reliably indicative of local conditions, and these properties remain a significant source of uncertainty in soil moisture and temperature estimation. In ensemble-based soil moisture data assimilation, uncertainty in the model-produced prior estimate due to associated uncertainty in SHTPs must be taken into account to avoid under-dispersive ensembles. To treat SHTP uncertainty for purposes of supplying inputs to a distributed watershed model we use the restricted pairing (RP) algorithm, an extension of Latin Hypercube (LH) sampling. The RP algorithm generates an arbitrary number of SHTP combinations by sampling the appropriate marginal distributions of the individual soil properties using the LH approach, while imposing a target rank correlation among the properties. A previously-published meta-database of 1309 soils representing 12 textural classes is used to fit appropriate marginal distributions to the properties and compute the target rank correlation structure, conditioned on soil texture. Given categorical soil textures, our implementation of the RP algorithm generates an arbitrarily-sized ensemble of realizations of the SHTPs required as input to the TIN-based Realtime Integrated Basin Simulator with vegetation dynamics (tRIBS+VEGGIE) distributed parameter ecohydrology model. Soil moisture ensembles simulated with RP-generated SHTPs exhibit less variance than ensembles simulated with SHTPs generated by a scheme that neglects correlation among properties. 
Neglecting correlation among SHTPs can lead to physically unrealistic combinations of parameters that exhibit implausible hydrologic behavior when input to the tRIBS+VEGGIE model.
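The restricted pairing idea can be sketched as an Iman-Conover-style reordering: draw a Latin Hypercube sample from each marginal, then shuffle each column so the sample rank correlation approaches a target matrix while the marginals stay untouched. The two marginals and the target correlation below are illustrative stand-ins, not values from the 1309-soil database.

```python
import numpy as np

def latin_hypercube(n, inv_cdfs, rng):
    """One stratified draw per probability bin for each marginal (LH sampling)."""
    d = len(inv_cdfs)
    u = (rng.permuted(np.tile(np.arange(n), (d, 1)), axis=1).T + rng.random((n, d))) / n
    return np.column_stack([inv_cdfs[k](u[:, k]) for k in range(d)])

def restricted_pairing(x, target_corr, rng):
    """Iman-Conover-style reordering: shuffle each column of x so the sample rank
    correlation approximates target_corr without altering the marginals."""
    n, d = x.shape
    scores = rng.standard_normal((n, d))
    p = np.linalg.cholesky(np.corrcoef(scores, rowvar=False))
    t = scores @ np.linalg.inv(p).T @ np.linalg.cholesky(target_corr).T
    y = np.empty_like(x)
    for k in range(d):
        y[np.argsort(t[:, k]), k] = np.sort(x[:, k])  # order statistics placed by rank of t
    return y

def spearman(a, b):
    ra, rb = np.argsort(np.argsort(a)), np.argsort(np.argsort(b))
    return float(np.corrcoef(ra, rb)[0, 1])

# Two illustrative soil properties: a skewed conductivity-like marginal and a
# uniform porosity-like marginal, with a target rank correlation of -0.7.
rng = np.random.default_rng(0)
inv_cdfs = {0: lambda u: -np.log1p(-u), 1: lambda u: 0.3 + 0.2 * u}
target = np.array([[1.0, -0.7], [-0.7, 1.0]])
sample = restricted_pairing(latin_hypercube(500, inv_cdfs, rng), target, rng)
print(round(spearman(sample[:, 0], sample[:, 1]), 2))
```

Because only the pairing of existing draws changes, the stratified marginals from the LH step survive exactly, which is the point of the method.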
Feng, Yang; Friedrichs, Marjorie A M; Wilkin, John; Tian, Hanqin; Yang, Qichun; Hofmann, Eileen E; Wiggert, Jerry D; Hood, Raleigh R
2015-08-01
The Chesapeake Bay plays an important role in transforming riverine nutrients before they are exported to the adjacent continental shelf. Although the mean nitrogen budget of the Chesapeake Bay has been previously estimated from observations, uncertainties associated with interannually varying hydrological conditions remain. In this study, a land-estuarine-ocean biogeochemical modeling system is developed to quantify Chesapeake riverine nitrogen inputs, within-estuary nitrogen transformation processes and the ultimate export of nitrogen to the coastal ocean. Model skill was evaluated using extensive in situ and satellite-derived data, and a simulation using environmental conditions for 2001-2005 was conducted to quantify the Chesapeake Bay nitrogen budget. The 5 year simulation was characterized by large riverine inputs of nitrogen (154 × 10⁹ g N yr⁻¹) split roughly 60:40 between inorganic:organic components. Much of this was denitrified (34 × 10⁹ g N yr⁻¹) and buried (46 × 10⁹ g N yr⁻¹) within the estuarine system. A positive net annual ecosystem production for the bay further contributed to a large advective export of organic nitrogen to the shelf (91 × 10⁹ g N yr⁻¹) and negligible inorganic nitrogen export. Interannual variability was strong, particularly for the riverine nitrogen fluxes. In years with higher than average riverine nitrogen inputs, most of this excess nitrogen (50-60%) was exported from the bay as organic nitrogen, with the remainder split between burial, denitrification, and inorganic export to the coastal ocean. In comparison to previous simulations using generic shelf biogeochemical model formulations inside the estuary, the estuarine biogeochemical model described here produced more realistic and significantly greater exports of organic nitrogen and lower exports of inorganic nitrogen to the shelf.
NASA Astrophysics Data System (ADS)
Scott, M. E.; Sykes, J. F.
2006-12-01
The Grand River Watershed is one of the largest watersheds in southwestern Ontario, with an area of approximately 7000 square kilometers. Ninety percent of the watershed is classified as rural, and 80 percent of the watershed population relies on groundwater as their source of drinking water. Management of the watershed requires determining the effect of agricultural practices on long-term groundwater quality and identifying locations within the watershed that are at a higher risk of contamination. The study focuses on the transport of nitrate through the root zone as a result of agricultural inputs, with attenuation due to biodegradation. The driving force for transport is spatially and temporally varying groundwater recharge, a function of land use/land cover, soil and meteorological inputs, which yields 47,229 unique soil columns within the watershed. Fertilizer sources are determined from Statistics Canada's Agricultural Census and include livestock manure and a popular commercial fertilizer, urea. Accounting for different application rates yields 60,066 unique land parcels, of which 22,809 are classified as croplands where manure and inorganic fertilizers are directly applied. Transport for the croplands is simulated over a 14-year period to investigate the impact of seasonal applications of nitrate fertilizers on the concentration leaching from the root zone to the water table. Based on land use/land cover maps, ArcView GIS is used to define the location of fertilizer applications within the watershed and to spatially visualize data and analyze results. The large quantity of input data is stored and managed using MS-Access and a relational database management system. Nitrogen transformations and ammonium and nitrate uptake by plants and transport through the soil column are simulated on a daily basis using Visual Basic for Applications (VBA) within MS-Access modules. 
Nitrogen transformations within the soil column were simplified using parameters obtained from the literature or calculable from readily available soil information for the Grand River Watershed. Spatially and seasonally averaged results for the 14-year period indicate that nitrate leaching through the root zone does not exceed the maximum contaminant level (MCL) of 10 mg/l nitrate. However, in 1992, over 12 percent of the watershed area in crops exceeded the MCL during the winter season. The characteristically well-drained soils of the central region of the watershed are more susceptible to groundwater contamination following autumn manure-N applications, as no crop growth is present to remove excess nitrogen from the system. Therefore, farm best management practices do not ensure that groundwater contamination will not occur. This research is an important first step in developing agricultural contaminant loadings for a watershed-scale surface water and groundwater model. Municipalities can utilize this model as a management tool to determine the extent of contamination and delineate sensitive locations, such as well-head protection zones. Other applications of this model include risk assessments of contaminant migration under climate change predictions, varying fertilizer application practices, modifications in crop management and changes in land use. The impact of climate change on recharge has been investigated.
Anthropogenic nitrogen sources and exports in a village-scale catchment in Southeast China.
Cao, Wenzhi; Hong, Huasheng; Zhang, Yuzhen; Chen, Nengwang; Zeng, Yue; Wang, Weiping
2006-01-01
An experimental village-scale catchment was selected for investigation of nitrogen (N) sources and exports. The mean N application rate over the catchment was 350.2 kg N ha(-1), but this rate varied spatially and temporally. The N leaching loss rate varied from 8.1 to 52.7 kg N ha(-1) under different land use regimes. The average N leaching loss rate was 13.4 kg N ha(-1) over the whole catchment, representing about 3.8% of the total N inputs. The N export rate through stormflows was 28.8 kg N ha(-1), about 8.2% of the total N inputs. Seasonal patterns showed that 95% of N exports through stormflows occurred during July to September in 2002. Overall, the maximum riverine N exports were 12.1% of total N inputs and 15.5% of the inorganic fertilizer N applied. Understanding N sources and exports in a village-scale catchment can provide a knowledge base for amelioration of diffuse agricultural pollution.
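The reported percentages are internally consistent. Treating the mean application rate as the total N input, a quick check with the values quoted in the abstract:

```python
n_applied = 350.2                 # mean N application rate over the catchment, kg N/ha
leaching, stormflow = 13.4, 28.8  # leaching and stormflow export rates, kg N/ha

print(round(100 * leaching / n_applied, 1))   # 3.8 -> the reported leaching share (%)
print(round(100 * stormflow / n_applied, 1))  # 8.2 -> the reported stormflow share (%)
```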
Constraints in distortion-invariant target recognition system simulation
NASA Astrophysics Data System (ADS)
Iftekharuddin, Khan M.; Razzaque, Md A.
2000-11-01
Automatic target recognition (ATR) is a mature but active research area. In an earlier paper, we proposed a novel ATR approach for recognition of targets varying in fine details, rotation, and translation using a Learning Vector Quantization (LVQ) Neural Network (NN). The approach performed segmentation of multiple objects and identification of the objects using the LVQ NN. In the current paper, we extend the previous approach to recognition of targets varying in rotation, translation, scale, and combinations of all three distortions. We obtain analytical results for the system-level design to show that the approach performs well under some constraints. The first constraint determines the size of the input images and input filters. The second constraint limits the amount of rotation, translation, and scale of input objects. We present simulation verification of the constraints using DARPA's Moving and Stationary Target Recognition (MSTAR) images with different depression and pose angles. The simulation results using MSTAR images verify the analytical constraints of the system-level design.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gao, Ruilin; Yuan, Chengxun, E-mail: yuancx@hit.edu.cn, E-mail: zhouzx@hit.edu.cn; Jia, Jieshu
The interaction between microwaves and large-area plasmas is crucially important for space communication. Gas pressure, input power, and plasma volume are critical to both the microwave phase shift and the electron density. This paper presents a novel type of large coaxial gridded hollow cathode plasma, 50 cm in diameter and 40 cm thick. Microwave characteristics are studied using a measurement system that includes two broadband antennae covering 2 GHz to 18 GHz. The phase shift under varying gas pressure and input power is shown. In addition, the electron density nₑ, which varies from 1.2 × 10¹⁶ m⁻³ to 8.7 × 10¹⁶ m⁻³ under different discharge conditions, is diagnosed by the microwave system. The measured results accord well with Langmuir probe measurements and show that the microwave properties in the large-volume hollow cathode discharge depend significantly on the input power and gas pressure.
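The density diagnosis can be illustrated with the standard high-frequency (underdense) interferometry approximation, Δφ ≈ e²nₑL/(2ε₀mₑcω), a sketch rather than the authors' exact analysis; the 10 GHz probe frequency and 0.40 m path length below are illustrative choices within the reported antenna band and plasma thickness.

```python
import math

E = 1.602176634e-19      # elementary charge, C
ME = 9.1093837015e-31    # electron mass, kg
EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m
C = 2.99792458e8         # speed of light, m/s

def phase_shift(n_e, length, freq):
    """Phase shift (rad) of a probing wave crossing a uniform plasma slab,
    in the underdense limit n_e << n_critical."""
    omega = 2 * math.pi * freq
    return E ** 2 * n_e * length / (2 * EPS0 * ME * C * omega)

def critical_density(freq):
    """Cutoff density at which the plasma frequency equals the wave frequency."""
    omega = 2 * math.pi * freq
    return EPS0 * ME * omega ** 2 / E ** 2

# Densities spanning the reported 1.2e16 - 8.7e16 m^-3 range, 40 cm slab, 10 GHz probe.
for n_e in (1.2e16, 8.7e16):
    print(f"{phase_shift(n_e, 0.40, 10e9):.2f} rad")
```

At 10 GHz the cutoff density is about 1.2 × 10¹⁸ m⁻³, two orders of magnitude above the measured densities, so the linear phase-shift approximation is self-consistent here.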
Reconfigurable Drive Current System
NASA Technical Reports Server (NTRS)
Alhorn, Dean C. (Inventor); Dutton, Kenneth R. (Inventor); Howard, David E. (Inventor); Smith, Dennis A. (Inventor)
2017-01-01
A reconfigurable drive current system includes drive stages, each of which includes a high-side transistor and a low-side transistor in a totem pole configuration. A current monitor is coupled to an output of each drive stage. Input channels are provided to receive input signals. A processor is coupled to the input channels and to each current monitor for generating at least one drive signal using at least one of the input signals and current measured by at least one of the current monitors. A pulse width modulation generator is coupled to the processor and each drive stage for varying the drive signals as a function of time prior to being supplied to at least one of the drive stages.
Improved disturbance rejection for predictor-based control of MIMO linear systems with input delay
NASA Astrophysics Data System (ADS)
Shi, Shang; Liu, Wenhui; Lu, Junwei; Chu, Yuming
2018-02-01
In this paper, we are concerned with the predictor-based control of multi-input multi-output (MIMO) linear systems with input delay and disturbances. By taking the future values of disturbances into consideration, a new improved predictive scheme is proposed. Compared with the existing predictive schemes, our proposed predictive scheme can achieve a finite-time exact state prediction for some smooth disturbances including the constant disturbances, and a better disturbance attenuation can also be achieved for a large class of other time-varying disturbances. The attenuation of mismatched disturbances for second-order linear systems with input delay is also investigated by using our proposed predictor-based controller.
CMOS-based Stochastically Spiking Neural Network for Optimization under Uncertainties
2017-03-01
The circuit exhibits inverse-tangent characteristics at varying input voltage (VIN) [Fig. 3], making it suitable for kernel function implementation. By varying the bias, cost function/constraint variables are generated by inverse-transform sampling on a CDF: in Fig. 5, F⁻¹(u) for a uniformly distributed random number u in [0, 1] extracts random samples of x distributed according to the CDF F(x). In Fig. 6, a successive approximation (SA) circuit is presented to evaluate the inverse.
Implementation of ERDC HEP Geo-Material Model in CTH and Application
2011-11-02
TARDEC JWL inputs for C4 and Johnson-Cook strength inputs were used. TARDEC JC fracture model inputs for the 5083 plate were changed due to problems seen in the fracture inputs from IMD tests. LS-DYNA C4 JWL and Johnson-Cook strength inputs were used in the CTH runs. Results indicate that the TARDEC JC fracture model
Genetic Algorithm-Based Model Order Reduction of Aeroservoelastic Systems with Consistent States
NASA Technical Reports Server (NTRS)
Zhu, Jin; Wang, Yi; Pant, Kapil; Suh, Peter M.; Brenner, Martin J.
2017-01-01
This paper presents a model order reduction framework to construct linear parameter-varying reduced-order models of flexible aircraft for aeroservoelasticity analysis and control synthesis in broad two-dimensional flight parameter space. Genetic algorithms are used to automatically determine physical states for reduction and to generate reduced-order models at grid points within parameter space while minimizing the trial-and-error process. In addition, balanced truncation for unstable systems is used in conjunction with the congruence transformation technique to achieve locally optimal realization and weak fulfillment of state consistency across the entire parameter space. Therefore, aeroservoelasticity reduced-order models at any flight condition can be obtained simply through model interpolation. The methodology is applied to the pitch-plant model of the X-56A Multi-Use Technology Testbed currently being tested at NASA Armstrong Flight Research Center for flutter suppression and gust load alleviation. The present studies indicate that the reduced-order model with more than 12× reduction in the number of states relative to the original model is able to accurately predict system response among all input-output channels. The genetic-algorithm-guided approach exceeds manual and empirical state selection in terms of efficiency and accuracy. The interpolated aeroservoelasticity reduced order models exhibit smooth pole transition and continuously varying gains along a set of prescribed flight conditions, which verifies consistent state representation obtained by congruence transformation. The present model order reduction framework can be used by control engineers for robust aeroservoelasticity controller synthesis and novel vehicle design.
NASA Astrophysics Data System (ADS)
Wang, Lijuan; Yan, Yong; Wang, Xue; Wang, Tao
2017-03-01
Input variable selection is an essential step in the development of data-driven models for environmental, biological and industrial applications. Through input variable selection, irrelevant or redundant variables are eliminated and a suitable subset of variables is identified as the input of a model; at the same time, the model structure is simplified and the computational efficiency is improved. This paper describes the procedures of input variable selection for data-driven models for the measurement of liquid mass flowrate and gas volume fraction under two-phase flow conditions using Coriolis flowmeters. Three advanced input variable selection methods, including partial mutual information (PMI), genetic algorithm-artificial neural network (GA-ANN) and tree-based iterative input selection (IIS), are applied in this study. Typical data-driven models incorporating support vector machines (SVM) are established individually based on the input candidates resulting from the selection methods. The validity of the selection outcomes is assessed through an output performance comparison of the SVM-based data-driven models and sensitivity analysis. The validation and analysis results suggest that the input variables selected by the PMI algorithm provide more effective information for the models to measure liquid mass flowrate, while the IIS algorithm provides fewer but more effective variables for the models to predict gas volume fraction.
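As a minimal illustration of mutual-information-based input selection (plain MI with a histogram estimator rather than the partial MI used in the paper, on synthetic data):

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """Histogram (plug-in) estimate of I(X;Y) in nats."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of x
    py = pxy.sum(axis=0, keepdims=True)   # marginal of y
    mask = pxy > 0
    return float((pxy[mask] * np.log(pxy[mask] / (px @ py)[mask])).sum())

def rank_inputs(X, y, bins=16):
    """Order candidate input variables by estimated MI with the target output."""
    mi = [mutual_information(X[:, k], y, bins) for k in range(X.shape[1])]
    return sorted(range(X.shape[1]), key=lambda k: -mi[k]), mi

# Synthetic example: the target depends on inputs 0 and 1; inputs 2 and 3 are irrelevant.
rng = np.random.default_rng(0)
X = rng.standard_normal((4000, 4))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] + 0.1 * rng.standard_normal(4000)
order, mi = rank_inputs(X, y)
print(order[:2])  # the two informative inputs rank first
```

MI captures the non-monotonic sin(·) dependence that a linear correlation filter would understate, which is the usual motivation for MI-family selectors such as PMI.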
Beck, H J; Birch, G F
2013-06-01
Stormwater contaminant loading estimates using event mean concentration (EMC) calculations, rainfall/runoff relationship calculations and computer modelling (Model for Urban Stormwater Improvement Conceptualisation--MUSIC) demonstrated high variability in common methods of water quality assessment. Predictions of metal, nutrient and total suspended solid loadings for three highly urbanised catchments in Sydney estuary, Australia, varied greatly within and amongst the methods tested. EMC and rainfall/runoff relationship calculations produced similar estimates (within 1 SD) in a statistically significant number of trials; however, considerable variability within estimates (∼50 and ∼25 % relative standard deviation, respectively) calls the reliability of these methods into question. Likewise, upper and lower default inputs in a commonly used loading model (MUSIC) produced an extensive range of loading estimates (3.8-8.3 times above and 2.6-4.1 times below estimates from typical default inputs, respectively). Default and calibrated MUSIC simulations produced loading estimates that agreed with EMC and rainfall/runoff calculations in some trials (4-10 out of 18); however, agreement was not frequent enough to statistically infer that these methods produce the same results. The great variance within and amongst mean annual loads estimated by common methods of water quality assessment has important ramifications for water quality managers requiring accurate estimates of the quantities and nature of contaminants requiring treatment.
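The EMC method itself is simple bookkeeping: annual load = EMC × annual runoff volume, with runoff estimated from rainfall depth, catchment area, and a volumetric runoff coefficient. All numbers below are hypothetical, not values from the Sydney estuary study.

```python
def annual_load_kg(emc_mg_per_l, rainfall_mm, area_ha, runoff_coeff):
    """Annual contaminant load by the event-mean-concentration method:
    load = EMC x runoff volume, runoff = rainfall depth x area x runoff coefficient."""
    runoff_m3 = (rainfall_mm / 1000.0) * (area_ha * 10000.0) * runoff_coeff
    return emc_mg_per_l * runoff_m3 * 1000.0 / 1e6   # mg/L x L -> mg, then mg -> kg

# Hypothetical urban catchment: 120 ha, 1200 mm/yr rainfall, 60% runoff, TSS EMC 150 mg/L.
print(round(annual_load_kg(150.0, 1200.0, 120.0, 0.6)))  # 129600 kg/yr
```

The wide spread reported above comes largely from uncertainty in the EMC and runoff-coefficient inputs, which enter this calculation multiplicatively.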
Interactive High-Relief Reconstruction for Organic and Double-Sided Objects from a Photo.
Yeh, Chih-Kuo; Huang, Shi-Yang; Jayaraman, Pradeep Kumar; Fu, Chi-Wing; Lee, Tong-Yee
2017-07-01
We introduce an interactive user-driven method to reconstruct high-relief 3D geometry from a single photo. In particular, we consider two novel but challenging reconstruction issues: i) common non-rigid objects whose shapes are organic rather than polyhedral/symmetric, and ii) double-sided structures, where the front and back sides of some curvy object parts are revealed simultaneously in the image. To address these issues, we develop a three-stage computational pipeline. First, we construct a 2.5D model from the input image by user-driven segmentation, automatic layering, and region completion, handling three common types of occlusion. Second, users can interactively mark up slope and curvature cues on the image to guide our constrained optimization model to inflate and lift up the image layers. We provide a real-time preview of the inflated geometry to allow interactive editing. Third, we stitch and optimize the inflated layers to produce a high-relief 3D model. Compared to previous work, we can generate high-relief geometry with large viewing angles, handle complex organic objects with multiple occluded regions and varying shape profiles, and reconstruct objects with double-sided structures. Lastly, we demonstrate the applicability of our method on a wide variety of input images with humans, animals, flowers, etc.
Informing Selection of Nanomaterial Concentrations for ...
Little justification is generally provided for the selection of in vitro assay testing concentrations for engineered nanomaterials (ENMs). Selection of concentration levels for hazard evaluation based on real-world exposure scenarios is desirable. We reviewed published ENM concentrations measured in air in manufacturing and R&D labs to identify input levels for estimating ENM mass retained in the human lung using the Multiple-Path Particle Dosimetry (MPPD) model. Model input parameters were individually varied to estimate alveolar mass retained for different particle sizes (5-1000 nm), aerosol concentrations (0.1, 1 mg/m3), aspect ratios (2, 4, 10, 167), and exposure durations (24 hours and a working lifetime). The calculated lung surface concentrations were then converted to in vitro solution concentrations. Modeled alveolar mass retained after 24 hours is most affected by activity level and aerosol concentration. Alveolar retention for Ag and TiO2 nanoparticles and CNTs over a working lifetime (45 years) exposure duration is similar to the high-end concentrations (~30-400 μg/mL) typical of in vitro testing reported in the literature. The analyses performed are generally applicable for setting ENM testing concentrations for in vitro hazard screening studies, though further research is needed to improve the approach. Understanding the relationship between potential real-world exposures and in vitro test concentrations will facilitate interpretation of toxicological results.
A Discrete Fracture Network Model with Stress-Driven Nucleation and Growth
NASA Astrophysics Data System (ADS)
Lavoine, E.; Darcel, C.; Munier, R.; Davy, P.
2017-12-01
The realism of Discrete Fracture Network (DFN) models, beyond the bulk statistical properties, relies on the spatial organization of fractures, which purely stochastic DFN models do not reproduce. The realism can be improved by injecting prior information into DFNs from a better knowledge of the geological fracturing processes. We first develop a model using simple kinematic rules to mimic the growth of fractures from nucleation to arrest, in order to evaluate the consequences of the DFN structure on the network connectivity and flow properties. The model generates fracture networks with power-law scaling distributions and a percentage of T-intersections that are consistent with field observations. Nevertheless, a larger complexity arising from the spatial variability of natural fracture positions cannot be explained by the random nucleation process. We propose to introduce stress-driven nucleation in the timewise process of this kinematic model to study the correlations between nucleation, growth and existing fracture patterns. The method uses the stress field generated by existing fractures and the remote stress as an input for a Monte Carlo sampling of nuclei centers at each time step. The networks so generated are found to have correlations over a large range of scales, with a correlation dimension that varies with time and with the function that relates the nucleation probability to stress. A sensitivity analysis of input parameters has been performed in 3D to quantify the influence of fracture and remote stress field orientations.
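The Monte Carlo nucleation step can be sketched as weighted sampling of candidate sites. A minimal sketch, assuming a power-law mapping from local stress to nucleation probability — the actual stress-to-probability function is one of the model's inputs, and the power law here is only an example:

```python
import random

def sample_nuclei(sites, stress, n, alpha=2.0, seed=0):
    """Draw n nucleation centers from candidate sites with probability
    proportional to stress**alpha. The power-law form is an assumed
    example of the stress-to-nucleation-probability function; sampling
    is with replacement for simplicity."""
    rng = random.Random(seed)
    weights = [max(s, 0.0) ** alpha for s in stress]  # no nucleation at s <= 0
    return rng.choices(sites, weights=weights, k=n)
```

In the full model, the stress field would be recomputed from the growing fracture pattern plus the remote stress at every time step, and the sampling weights refreshed before each draw.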
A Flexible Cosmic Ultraviolet Background Model
NASA Astrophysics Data System (ADS)
McQuinn, Matthew
2016-10-01
HST studies of the IGM, of the CGM, and of reionization-era galaxies are all aided by ionizing background models, which are a critical input in modeling the ionization state of diffuse, 10^4 K gas. The ionization state in turn enables the determination of densities and sizes of absorbing clouds and, when applied to the Ly-a forest, the global ionizing emissivity of sources. Unfortunately, studies that use these background models have no way of gauging the amount of uncertainty in the adopted model other than to recompute their results using previous background models with outdated observational inputs. As of yet there has been no systematic study of uncertainties in the background model, and there unfortunately is no publicly available ultraviolet background code. A public code would enable users to update the calculation with the latest observational constraints, and it would allow users to experiment with varying the background model's assumptions regarding emissions and absorptions. We propose to develop a publicly available ionizing background code and, as an initial application, quantify the level of uncertainty in the ionizing background spectrum across cosmic time. As the background model improves, so does our understanding of (1) the sources that dominate ionizing emissions across cosmic time and (2) the properties of diffuse gas in the circumgalactic medium, the WHIM, and the Ly-a forest. HST is the primary telescope for studying both the highest-redshift galaxies and low-redshift diffuse gas. The proposed program would benefit HST studies of the Universe from z ~ 0 all the way up to z = 10, including high-z galaxies observed in the HST Frontier Fields.
NASA Astrophysics Data System (ADS)
Engeland, Kolbjorn; Steinsland, Ingelin
2016-04-01
The aim of this study is to investigate how the inclusion of uncertainties in inputs and observed streamflow influences parameter estimation, streamflow predictions and model evaluation. In particular, we wanted to answer the following research questions: • What is the effect of including a random error in the precipitation and temperature inputs? • What is the effect of decreased information about precipitation by excluding the nearest precipitation station? • What is the effect of the uncertainty in streamflow observations? • What is the effect of reduced information about the true streamflow by using a rating curve where the measurements of the highest and lowest streamflows are excluded when estimating the rating curve? To answer these questions, we designed a set of calibration experiments and evaluation strategies. We used the elevation-distributed HBV model operating on daily time steps, combined with a Bayesian formulation and the MCMC routine DREAM for parameter inference. The uncertainties in inputs were represented by creating ensembles of precipitation and temperature. The precipitation ensembles were created using a meta-Gaussian random field approach. The temperature ensembles were created using 3D Bayesian kriging with random sampling of the temperature lapse rate. The streamflow ensembles were generated by a Bayesian multi-segment rating curve model. Precipitation and temperatures were randomly sampled for every day, whereas the streamflow ensembles were generated from rating curve ensembles, and the same rating curve was always used for the whole time series in a calibration or evaluation run. We chose a catchment with a meteorological station measuring precipitation and temperature, and a rating curve of relatively high quality. This allowed us to investigate and further test the effect of having less information on precipitation and streamflow during model calibration, predictions and evaluation. 
The results showed that including uncertainty in the precipitation and temperature inputs has a negligible effect on the posterior distribution of parameters and on the Nash-Sutcliffe (NS) efficiency for the predicted flows, while the reliability and the continuous rank probability score (CRPS) improve. Reduced information in the precipitation input resulted in a shift in the water balance parameter Pcorr and a model producing smoother streamflow predictions, giving poorer NS and CRPS scores but higher reliability. The effect of calibrating the hydrological model using wrong rating curves is mainly seen as variability in the water balance parameter Pcorr. When evaluating predictions obtained using a wrong rating curve, the evaluation scores vary depending on the true rating curve. Generally, the best evaluation scores were achieved not for the rating curve used for calibration, but for rating curves giving low variance in streamflow observations. Reduced information in streamflow influenced the water balance parameter Pcorr and increased the spread in evaluation scores, giving both better and worse scores. This case study shows that estimating the water balance is challenging, since both precipitation inputs and streamflow observations have pronounced systematic components in their uncertainties.
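The sampling scheme described — fresh precipitation/temperature realizations every day, but a single rating curve held fixed per calibration run — can be sketched as follows. The error magnitudes and the power-law rating curve Q = a(h - h0)^b are illustrative assumptions, not the paper's fitted values:

```python
import random

def perturb_daily_inputs(precip, temp, rng):
    """Precipitation and temperature receive a fresh random perturbation
    every day (multiplicative lognormal and additive Gaussian error
    models are assumed forms for this sketch)."""
    p = [x * rng.lognormvariate(0.0, 0.2) for x in precip]
    t = [x + rng.gauss(0.0, 1.0) for x in temp]
    return p, t

def sample_rating_curve(rng):
    """One rating curve Q = a*(h - h0)**b is drawn per run and applied to
    the whole streamflow series (parameter spreads are illustrative)."""
    a = rng.lognormvariate(1.0, 0.1)
    b = rng.gauss(2.0, 0.1)
    h0 = rng.gauss(0.0, 0.05)
    return lambda h: a * max(h - h0, 0.0) ** b
```

Holding the rating curve fixed within a run is what makes streamflow error systematic rather than random, which is the distinction the study's conclusions turn on.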
Effect of corn stover compositional variability on minimum ethanol selling price (MESP).
Tao, Ling; Templeton, David W; Humbird, David; Aden, Andy
2013-07-01
A techno-economic sensitivity analysis was performed using a National Renewable Energy Laboratory (NREL) 2011 biochemical conversion design model while varying feedstock compositions. A total of 496 feedstock near-infrared (NIR) compositions from 47 locations in eight US Corn Belt states were used as inputs to calculate the minimum ethanol selling price (MESP), ethanol yield (gallons per dry ton of biomass feedstock), ethanol annual production, as well as the total installed project cost for each composition. From this study, the calculated MESP is $2.20 ± 0.21 (average ± 3 SD) per gallon of ethanol. Copyright © 2013. Published by Elsevier Ltd.
Dynamic analysis of a buckled asymmetric piezoelectric beam for energy harvesting
DOE Office of Scientific and Technical Information (OSTI.GOV)
Van Blarigan, Louis, E-mail: louis01@umail.ucsb.edu; Moehlis, Jeff
2016-03-15
A model of a buckled beam energy harvester is analyzed to determine the phenomena behind the transition between high and low power output levels. It is shown that the presence of a chaotic attractor is a sufficient condition to predict high power output, though there are relatively small areas where high output is achieved without a chaotic attractor. The chaotic attractor appears as a product of a period doubling cascade or a boundary crisis. Bifurcation diagrams provide insight into the development of the chaotic region as the input power level is varied, as well as the intermixed periodic windows.
NASA Technical Reports Server (NTRS)
Dunbar, D. N.; Tunnah, B. G.
1978-01-01
The FORTRAN computing program predicts the flow streams and material, energy, and economic balances of a typical petroleum refinery, with particular emphasis on the production of aviation turbine fuel of varying end point and hydrogen content specifications. The program has provision for shale oil and coal oil in addition to petroleum crudes. A case study feature permits dependent cases to be run for parametric or optimization studies by inputting only the variables that are changed from the base case. The report provides sufficient detail for most readers.
A New Paradigm for Diagnosing Contributions to Model Aerosol Forcing Error
NASA Astrophysics Data System (ADS)
Jones, A. L.; Feldman, D. R.; Freidenreich, S.; Paynter, D.; Ramaswamy, V.; Collins, W. D.; Pincus, R.
2017-12-01
A new paradigm in benchmark absorption-scattering radiative transfer is presented that enables both globally averaged and spatially resolved testing of climate model radiation parameterizations in order to uncover persistent sources of biases in the aerosol instantaneous radiative effect (IRE). A proof of concept is demonstrated with the Geophysical Fluid Dynamics Laboratory AM4 and Community Earth System Model 1.2.2 climate models. Instead of prescribing atmospheric conditions and aerosols, as in prior intercomparisons, native snapshots of the atmospheric state and aerosol optical properties from the participating models are used as inputs to an accurate radiation solver to uncover model-relevant biases. These diagnostic results show that the models' aerosol IRE bias is of the same magnitude as the persistent range cited (~1 W/m2) and also varies spatially and with intrinsic aerosol optical properties. The findings underscore the significance of native model error analysis and its dispositive ability to diagnose global biases, confirming its fundamental value for the Radiative Forcing Model Intercomparison Project.
A one-model approach based on relaxed combinations of inputs for evaluating input congestion in DEA
NASA Astrophysics Data System (ADS)
Khodabakhshi, Mohammad
2009-08-01
This paper provides a one-model approach to input congestion based on the input relaxation model developed in data envelopment analysis (e.g. [G.R. Jahanshahloo, M. Khodabakhshi, Suitable combination of inputs for improving outputs in DEA with determining input congestion -- Considering textile industry of China, Applied Mathematics and Computation (1) (2004) 263-273; G.R. Jahanshahloo, M. Khodabakhshi, Determining assurance interval for non-Archimedean element in the improving outputs model in DEA, Applied Mathematics and Computation 151 (2) (2004) 501-506; M. Khodabakhshi, A super-efficiency model based on improved outputs in data envelopment analysis, Applied Mathematics and Computation 184 (2) (2007) 695-703; M. Khodabakhshi, M. Asgharian, An input relaxation measure of efficiency in stochastic data envelopment analysis, Applied Mathematical Modelling 33 (2009) 2010-2023]). This approach reduces the three problems that must be solved in the two-model approach, introduced in the first of the above-mentioned references, to two problems, which is certainly important from a computational point of view. The model is applied to a set of data extracted from the ISI database to estimate the input congestion of 12 Canadian business schools.
Pragmatic geometric model evaluation
NASA Astrophysics Data System (ADS)
Pamer, Robert
2015-04-01
Quantification of subsurface model reliability is mathematically and technically demanding, as there are many different sources of uncertainty and some of the factors can be assessed only in a subjective way. For many practical applications in industry or risk assessment (e.g. geothermal drilling), a quantitative estimation of possible geometric variations in depth units is preferred over relative numbers because of cost calculations for different scenarios. The talk gives an overview of several factors that affect the geometry of structural subsurface models that are based upon typical geological survey organization (GSO) data such as geological maps, borehole data and conceptually driven construction of subsurface elements (e.g. fault networks). Within the context of the trans-European project "GeoMol", uncertainty analysis has to be very pragmatic, also because of different data rights, data policies and modelling software among the project partners. In a case study, a two-step evaluation methodology for geometric subsurface model uncertainty is being developed. In a first step, several models of the same volume of interest have been calculated by omitting successively more and more input data types (seismic constraints, fault network, outcrop data). The positions of the various horizon surfaces are then compared. The procedure is equivalent to comparing data of various levels of detail and therefore structural complexity. This gives a measure of the structural significance of each data set in space, and as a consequence areas of geometric complexity are identified. These areas are usually very data sensitive, hence geometric variability in between individual data points in these areas is higher than in areas of low structural complexity. 
Instead of calculating a multitude of different models by varying some input data or parameters, as is done in Monte Carlo simulations, the aim of the second step of the evaluation procedure (which is part of the ongoing work) is to calculate essentially two model variations that can be seen as geometric extremes of all available input data. This does not lead to a probability distribution for the spatial position of geometric elements, but it defines zones of major (or minor, respectively) geometric variations due to data uncertainty. Both model evaluations are then analyzed together to give ranges of possible model outcomes in metric units.
NASA Astrophysics Data System (ADS)
Patnaik, S.; Biswal, B.; Sharma, V. C.
2017-12-01
River flow varies greatly in space and time, and the single biggest challenge for hydrologists and ecologists around the world is the fact that most rivers are either ungauged or poorly gauged. Although it is relatively easy to predict the long-term average flow of a river using the 'universal' zero-parameter Budyko model, lack of data hinders short-term flow prediction at ungauged locations using traditional hydrological models, as they require observed flow data for model calibration. Flow prediction in ungauged basins thus requires a dynamic 'zero-parameter' hydrological model. One way to achieve this is to regionalize a dynamic hydrological model's parameters. However, a zero-parameter dynamic hydrological model obtained through regionalization is not 'universal'. An alternative attempt was made recently to develop a zero-parameter dynamic model by defining an instantaneous dryness index as a function of antecedent rainfall and solar energy inputs with the help of a decay function, and using the original Budyko function. The model was tested first in 63 US catchments and later in 50 Indian catchments. The median Nash-Sutcliffe efficiency (NSE) was found to be close to 0.4 in both cases. Although improvements need to be incorporated in order to use the model for reliable prediction, the main aim of this study was rather to understand hydrological processes. The overall results here seem to suggest that the dynamic zero-parameter Budyko model is 'universal.' In other words, natural catchments around the world are strikingly similar to each other in the way they respond to hydrologic inputs; we thus need to focus more on utilizing catchment similarities in hydrological modelling instead of over-parameterizing our models.
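The two ingredients of the model described — an exponentially decayed sum of antecedent inputs forming an instantaneous dryness index, and the original Budyko curve — can be sketched as below. The decay-function form and rate are assumptions for illustration, not the authors' exact formulation:

```python
import math

def decayed_sum(series, t, k=0.9):
    # exponentially decayed sum of antecedent inputs up to day t;
    # the geometric decay rate k is an assumed example
    return sum(series[i] * k ** (t - i) for i in range(t + 1))

def budyko(phi):
    # original Budyko curve: E/P as a function of the dryness index phi = PET/P
    return math.sqrt(phi * math.tanh(1.0 / phi) * (1.0 - math.exp(-phi)))

def instantaneous_dryness_index(rain, energy, t, k=0.9):
    """Ratio of decayed antecedent energy input to decayed antecedent
    rainfall, the 'instantaneous' analogue of the long-term PET/P."""
    p = decayed_sum(rain, t, k)
    e = decayed_sum(energy, t, k)
    return float("inf") if p == 0.0 else e / p
```

The Budyko curve maps the index to an evaporative fraction between 0 (energy-limited limit) and 1 (water-limited limit), which is what allows a dynamic model with no calibrated parameters.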
Lee, Cameron C; Sheridan, Scott C
2018-07-01
Temperature-mortality relationships are nonlinear, time-lagged, and can vary depending on the time of year and geographic location, all of which limits the applicability of simple regression models in describing these associations. This research demonstrates the utility of an alternative method for modeling such complex relationships that has gained recent traction in other environmental fields: nonlinear autoregressive models with exogenous input (NARX models). All-cause mortality data and multiple temperature-based data sets were gathered from 41 different US cities, for the period 1975-2010, and subjected to ensemble NARX modeling. Models generally performed better in larger cities and during the winter season. Across the US, median absolute percentage errors were 10% (ranging from 4% to 15% in various cities), the average improvement in the r-squared over that of a simple persistence model was 17% (6-24%), and the hit rate for modeling spike days in mortality (>80th percentile) was 54% (34-71%). Mortality responded acutely to hot summer days, peaking at 0-2 days of lag before dropping precipitously, and there was an extended mortality response to cold winter days, peaking at 2-4 days of lag and dropping slowly and continuing for multiple weeks. Spring and autumn showed both of the aforementioned temperature-mortality relationships, but generally to a lesser magnitude than what was seen in summer or winter. When compared to distributed lag nonlinear models, NARX model output was nearly identical. These results highlight the applicability of NARX models for use in modeling complex and time-dependent relationships for various applications in epidemiology and environmental sciences. Copyright © 2018 Elsevier Inc. All rights reserved.
A Physiologically Based, Multi-Scale Model of Skeletal Muscle Structure and Function
Röhrle, O.; Davidson, J. B.; Pullan, A. J.
2012-01-01
Models of skeletal muscle can be classified as phenomenological or biophysical. Phenomenological models predict the muscle’s response to a specified input based on experimental measurements. Prominent phenomenological models are the Hill-type muscle models, which have been incorporated into rigid-body modeling frameworks, and three-dimensional continuum-mechanical models. Biophysically based models attempt to predict the muscle’s response as emerging from the underlying physiology of the system. In this contribution, the conventional biophysically based modeling methodology is extended to include several structural and functional characteristics of skeletal muscle. The result is a physiologically based, multi-scale skeletal muscle finite element model that is capable of representing detailed, geometrical descriptions of skeletal muscle fibers and their grouping. Together with a well-established model of motor-unit recruitment, the electro-physiological behavior of single muscle fibers within motor units is computed and linked to a continuum-mechanical constitutive law. The bridging between the cellular level and the organ level has been achieved via a multi-scale constitutive law and homogenization. The effect of homogenization has been investigated by varying the number of embedded skeletal muscle fibers and/or motor units and computing the resulting exerted muscle forces while applying the same excitatory input. All simulations were conducted using an anatomically realistic finite element model of the tibialis anterior muscle. Given the fact that the underlying electro-physiological cellular muscle model is capable of modeling metabolic fatigue effects such as potassium accumulation in the T-tubular space and inorganic phosphate build-up, the proposed framework provides a novel simulation-based way to investigate muscle behavior ranging from motor-unit recruitment to force generation and fatigue. PMID:22993509
Applicability of models to estimate traffic noise for urban roads.
Melo, Ricardo A; Pimentel, Roberto L; Lacerda, Diego M; Silva, Wekisley M
2015-01-01
Traffic noise is a highly relevant environmental impact in cities. Models to estimate traffic noise, in turn, can be useful tools to guide mitigation measures. In this paper, the applicability of models to estimate noise levels produced by a continuous flow of vehicles on urban roads is investigated. The aim is to identify which models are more appropriate to estimate traffic noise in urban areas, since several of the models available were conceived to estimate noise from highway traffic. First, measurements of traffic noise, vehicle count and speed were carried out on five arterial urban roads of a Brazilian city. Together with geometric measurements of the width of lanes and the distance from the noise meter to the lanes, these data were input into several models to estimate traffic noise. The predicted noise levels were then compared to the respective measured counterparts for each road investigated. In addition, a chart showing mean differences in noise between estimations and measurements is presented, to evaluate the overall performance of the models. Measured Leq values varied from 69 to 79 dB(A) for traffic flows varying from 1618 to 5220 vehicles/h. Mean noise level differences between estimations and measurements for all urban roads investigated ranged from -3.5 to 5.5 dB(A). According to the results, deficiencies of some models are discussed, while other models are identified as applicable to noise estimations on urban roads in a condition of continuous flow. Key issues in applying such models to urban roads are highlighted.
NASA Astrophysics Data System (ADS)
Vairamuthu, G.; Thangagiri, B.; Sundarapandian, S.
2018-01-01
The present work investigates the effect of varying Nozzle Opening Pressures (NOP) from 220 bar to 250 bar on the performance, emissions and combustion characteristics of Calophyllum inophyllum Methyl Ester (CIME) in a constant-speed, Direct Injection (DI) diesel engine using an Artificial Neural Network (ANN) approach. An ANN model has been developed to predict a correlation between specific fuel consumption (SFC), brake thermal efficiency (BTE), exhaust gas temperature (EGT), unburnt hydrocarbons (UBHC), CO, CO2, NOx and smoke density using load, blend (B0 and B100) and NOP as input data. A standard Back-Propagation Algorithm (BPA) for the engine is used in this model. A Multi-Layer Perceptron (MLP) network is used for nonlinear mapping between the input and the output parameters. The ANN model can predict the performance of the diesel engine and the exhaust emissions with a correlation coefficient (R2) in the range of 0.98-1. Mean Relative Error (MRE) values are in the range of 0.46-5.8%, while the Mean Square Errors (MSE) are found to be very low. It is evident that ANN models are reliable tools for the prediction of DI diesel engine performance and emissions. The test results show that the optimum NOP is 250 bar with B100.
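The goodness-of-fit statistics quoted (R2, MRE, MSE) are standard and easily reproduced; a minimal sketch of how each would be computed from observed and ANN-predicted values:

```python
def r_squared(obs, pred):
    """Coefficient of determination: 1 - SS_residual / SS_total."""
    mean = sum(obs) / len(obs)
    ss_res = sum((o - p) ** 2 for o, p in zip(obs, pred))
    ss_tot = sum((o - mean) ** 2 for o in obs)
    return 1.0 - ss_res / ss_tot

def mean_relative_error_pct(obs, pred):
    """Mean absolute relative error, in percent (observations must be nonzero)."""
    return 100.0 * sum(abs((o - p) / o) for o, p in zip(obs, pred)) / len(obs)

def mean_squared_error(obs, pred):
    return sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs)
```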
Review of modelling air pollution from traffic at street-level - The state of the science.
Forehead, H; Huynh, N
2018-06-13
Traffic emissions are a complex and variable cocktail of toxic chemicals. They are the major source of atmospheric pollution in the parts of cities where people live, commute and work. Reducing exposure requires information about the distribution and nature of emissions. Spatially and temporally detailed data are required, because both the rate of production and the composition of emissions vary significantly with time of day and with local changes in wind, traffic composition and flow. Increasing computer processing power means that models can accept highly detailed inputs of fleet, fuels and road networks. State-of-the-science models can simulate the behaviour and emissions of all the individual vehicles on a road network, with a resolution of a second and tens of metres. The chemistry of the simulated emissions is also highly resolved, due to consideration of multiple engine processes, fuel evaporation and tyre wear. Good results can be achieved with both commercially available and open source models. The extent of a simulation is usually limited by processing capacity; the accuracy by the quality of traffic data. Recent studies have generated real-time, detailed emissions data by using inputs from novel traffic sensing technologies and data from intelligent traffic systems (ITS). Increasingly, detailed pollution data are being combined with spatially resolved demographic or epidemiological data for targeted risk analyses. Copyright © 2018 Elsevier Ltd. All rights reserved.
Mirus, Benjamin B.; Nimmo, J.R.
2013-01-01
The impact of preferential flow on recharge and contaminant transport poses a considerable challenge to water-resources management. Typical hydrologic models require extensive site characterization, but can underestimate fluxes when preferential flow is significant. A recently developed source-responsive model incorporates film-flow theory with conservation of mass to estimate unsaturated-zone preferential fluxes with readily available data. The term source-responsive describes the sensitivity of preferential flow in response to water availability at the source of input. We present the first rigorous tests of a parsimonious formulation for simulating water table fluctuations using two case studies, both in arid regions with thick unsaturated zones of fractured volcanic rock. Diffuse flow theory cannot adequately capture the observed water table responses at both sites; the source-responsive model is a viable alternative. We treat the active area fraction of preferential flow paths as a scaled function of water inputs at the land surface then calibrate the macropore density to fit observed water table rises. Unlike previous applications, we allow the characteristic film-flow velocity to vary, reflecting the lag time between source and deep water table responses. Analysis of model performance and parameter sensitivity for the two case studies underscores the importance of identifying thresholds for initiation of film flow in unsaturated rocks, and suggests that this parsimonious approach is potentially of great practical value.
Tao, Zhongping; Zhang, Mu
2014-01-01
Functional imaging studies have indicated hemispheric asymmetry of activation in the bilateral supplementary motor area (SMA) during unimanual motor tasks. However, the hemisphere-specific roles of the bilateral SMAs on the primary motor cortex (M1) in the effective connectivity networks (ECN) during lateralized tasks remain unclear. Aiming to study the differential contributions of the bilateral SMAs during motor execution and motor imagery tasks, and the hemispherically asymmetric patterns of the ECN among the regions involved, the present study used dynamic causal modeling to analyze the functional magnetic resonance imaging data of unimanual motor execution/imagery tasks in 12 right-handed subjects. Our results demonstrated that the distributions of network parameters underlying motor execution and motor imagery were significantly different. The variation was mainly induced by task-condition modulations of intrinsic coupling. In particular, regardless of the performing hand, the task input modulations of intrinsic coupling from the contralateral SMA to the contralateral M1 were positive during motor execution, but became negative during motor imagery. The results suggested that this inhibitory modulation suppressed overt movement during motor imagery. In addition, the left SMA also helped accomplish left-hand tasks through task input modulation of the left SMA→right SMA connection, implying that hemispheric recruitment occurred when performing nondominant-hand tasks. The results specified differential and altered contributions of the bilateral SMAs to the ECN during unimanual motor execution and motor imagery, and highlighted the contributions induced by the task input of motor execution/imagery. PMID:24606178
Orbital transfer rocket engine technology program: Soft wear ring seal technology
NASA Technical Reports Server (NTRS)
Lariviere, Brian W.
1992-01-01
Liquid oxygen (LOX) compatibility tests, including autogenous ignition, promoted ignition, LOX impact tests, and friction and wear tests at different PV products, were conducted for several polymer materials as verification for the implementation of soft wear ring seals in advanced rocket engine turbopumps. Thermoplastics, polyimide-based materials, and polyamide-imide-based materials were compared for oxygen compatibility, specific wear coefficient, wear debris production, and heat dissipation mechanisms. A thermal model was generated that simulated the frictional heating input and calculated the surface temperature and temperature distribution within the seal. The predictions were compared against measured values. Heat loads in the model were varied to better match the test data and determine the difference between the measured and the calculated coefficients of friction.
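The frictional heating input to a seal thermal model of this kind is commonly taken as the friction coefficient times the PV product. A minimal sketch of that standard relation (the numbers in the test below are illustrative, not values from the report):

```python
def frictional_heat_flux(mu, contact_pressure_pa, sliding_speed_m_s):
    """Heat generated per unit area at the seal rubbing face (W/m2),
    q = mu * P * V. In a model like the one described, q drives a
    conduction solution for the seal surface temperature, and the
    heat load is tuned until predictions match test data, which is
    one way of backing out an effective friction coefficient."""
    return mu * contact_pressure_pa * sliding_speed_m_s
```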
Effects of Meteorological Data Quality on Snowpack Modeling
NASA Astrophysics Data System (ADS)
Havens, S.; Marks, D. G.; Robertson, M.; Hedrick, A. R.; Johnson, M.
2017-12-01
Detailed quality control of meteorological inputs is the most time-intensive component of running the distributed, physically based iSnobal snow model, and the effect of input data quality on the model is unknown. The iSnobal model has been run operationally since WY2013, and is currently run in several basins in Idaho and California. The largest amount of user input during modeling is for the quality control of precipitation, temperature, relative humidity, solar radiation, wind speed and wind direction inputs. Precipitation inputs require detailed user input and are crucial to correctly modeling the snowpack mass. This research applies a range of quality control methods to the meteorological input, from raw input with minimal cleaning to complete user-applied quality control. The meteorological input cleaning generally falls into two categories. The first is global minimum/maximum and missing-value correction that can be corrected and/or interpolated with automated processing. The second category is quality control for inputs that are not globally erroneous, yet are still unreasonable and generally indicate malfunctioning measurement equipment, such as temperature or relative humidity that remains constant or does not correlate with daily trends observed at nearby stations. This research will determine how sensitive model outputs are to different levels of quality control and guide future operational applications.
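The first, automatable category of cleaning — global bounds, missing-value interpolation — plus a simple stuck-sensor screen from the second category can be sketched as follows; the thresholds are assumed examples, not the project's operational settings:

```python
def clean_series(values, vmin, vmax, missing=-9999.0):
    """Replace missing and out-of-range samples, then linearly
    interpolate interior gaps; leading/trailing gaps stay None."""
    out = [x if (x != missing and vmin <= x <= vmax) else None
           for x in values]
    i = 0
    while i < len(out):
        if out[i] is None:
            j = i
            while j < len(out) and out[j] is None:
                j += 1
            if i > 0 and j < len(out):  # interior gap: interpolate
                a, b = out[i - 1], out[j]
                for k in range(i, j):
                    out[k] = a + (b - a) * (k - i + 1) / (j - i + 1)
            i = j
        else:
            i += 1
    return out

def flag_flat_runs(values, min_run=24):
    """Flag runs of identical values (e.g. a stuck humidity sensor);
    the run-length threshold is an assumed example."""
    flags = [False] * len(values)
    run = 1
    for i in range(1, len(values)):
        run = run + 1 if values[i] == values[i - 1] else 1
        if run >= min_run:
            for k in range(i - run + 1, i + 1):
                flags[k] = True
    return flags
```

Flat-line and nearby-station-correlation checks only flag suspect data; whether to reject or repair those samples is the judgment call that still requires the user input described above.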
Minerals vs. Microbes: Biogeochemical Controls on Carbon Storage in Humid Tropical Forest Soils
NASA Astrophysics Data System (ADS)
Hall, S. J.; Silver, W. L.
2012-12-01
Humid tropical forest soils contain a substantial portion (~500 Pg) of the terrestrial carbon (C) pool, yet their response to climate change remains unclear due to mechanistic uncertainty in the biogeochemical controls on soil C storage in these ecosystems. Poorly-crystalline minerals have long been known to stabilize soil C, but few studies have explored their relative importance in comparison with other likely controls such as rhizosphere processes, oxygen deficiency (anaerobiosis), and C quality. We examined relationships among soil C and a suite of biogeochemical variables measured in 162 samples from surface soils (ultisols and oxisols) collected over scales of landforms to landscapes (m to km) in the Luquillo Experimental Forest, Puerto Rico. We measured iron (Fe), aluminum (Al), and manganese (Mn) oxides in 0.5M hydrochloric acid (HCl), sodium citrate/ascorbic acid (CA), and citrate/dithionite (CD) extractions, along with clay content, root biomass, C quality (C/N ratios), and anaerobiosis using HCl-extractable reduced iron (Fe(II)) concentrations as a proxy. We used mixed-effects models to compare the relative importance of the above variables (normalized by mean and standard deviation) as predictors of soil C, with random effects to account for spatial structure. Poorly-crystalline Al oxide concentrations (CA extraction), soil C/N ratio, and Fe(II) concentrations each had highly significant (p < 0.0001) positive relationships with soil C concentrations that conveyed equivalent explanatory power, assessed by comparing standardized regression coefficients. The optimal mixed model explained 82% of the variation of the residual sum of squares of soil C concentrations, which varied between 2 and 20% C among samples. 
Fine root biomass had a weak but significant positive association with soil C concentrations (p < 0.05), while crystalline Fe oxide concentrations (CD extraction) displayed a negative correlation (p < 0.01), and clay contents had no significant relationship. The latter results are surprising given the documented role of Fe oxides and clay minerals in C stabilization, yet may indicate the importance of C supply via roots in controlling C concentrations in humid tropical ecosystems. Samples associated with high concentrations of crystalline Fe and high clay contents may represent soils from deeper in the soil profile exposed by landslides, characterized by poorly-developed aggregate structure and fewer C inputs since disturbance. Our optimal mixed model suggested an equivalent importance of soil mineralogy, anaerobiosis, and C quality as correlates of soil C concentrations across tropical forest ecosystems varying in temperature, precipitation, and community composition. Whereas soil mineralogy may be relatively static over timescales of years to decades, O2 availability and the chemical composition of soil C inputs can potentially vary more rapidly. Our model suggests that changes in temperature and precipitation regimes that alter O2 availability and/or increase the lability of C inputs may lead to decreased soil C storage in humid tropical forest soils.
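The standardized-coefficient comparison used above can be sketched with a fixed-effects approximation (the study itself used mixed-effects models with random effects for spatial structure; the data and predictor names below are synthetic, purely illustrative):

```python
import numpy as np

def standardize(x):
    """Center by the mean and scale by the standard deviation, as the
    abstract describes ('normalized by mean and standard deviation')."""
    x = np.asarray(x, dtype=float)
    return (x - x.mean()) / x.std()

def standardized_coefs(y, predictors):
    """OLS on standardized response and predictors; the resulting
    coefficients are directly comparable as measures of relative
    explanatory power (a fixed-effects stand-in for the mixed model)."""
    X = np.column_stack([standardize(p) for p in predictors])
    X = np.column_stack([np.ones(len(X)), X])     # intercept column
    beta, *_ = np.linalg.lstsq(X, standardize(y), rcond=None)
    return beta[1:]                                # drop the intercept

# Synthetic example: two predictors, the first twice as influential.
rng = np.random.default_rng(1)
al_ca, fe2 = rng.normal(size=100), rng.normal(size=100)
soil_c = 2.0 * al_ca + fe2
coefs = standardized_coefs(soil_c, [al_ca, fe2])
```

A full reproduction would use a mixed-effects routine (e.g. `statsmodels` `MixedLM`) with a grouping factor for sampling location.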
User's manual: Subsonic/supersonic advanced panel pilot code
NASA Technical Reports Server (NTRS)
Moran, J.; Tinoco, E. N.; Johnson, F. T.
1978-01-01
Sufficient instructions for running the subsonic/supersonic advanced panel pilot code are provided. This software was developed as a vehicle for numerical experimentation and should not be construed to represent a finished production program. The pilot code is based on a higher-order panel method using linearly varying source and quadratically varying doublet distributions for computing both linearized supersonic and subsonic flow over arbitrary wings and bodies. This user's manual contains complete input and output descriptions. A brief description of the method is given, as well as practical instructions for proper configuration modeling. Computed results are also included to demonstrate some of the capabilities of the pilot code. The computer program is written in FORTRAN IV for the SCOPE 3.4.4 operating system of the Ames CDC 7600 computer. The program uses an overlay structure and thirteen disk files, and it requires approximately 132000 (octal) central memory words.
Resolving the Strange Behavior of Extraterrestrial Potassium in the Upper Atmosphere
NASA Technical Reports Server (NTRS)
Plane, J. M. C.; Feng, W.; Dawkins, E.; Chipperfield, M. P.; Hoeffner, J.; Janches, D.; Marsh, D. R.
2014-01-01
It has been known since the 1960s that the layers of Na and K atoms, which occur between 80 and 105 km in the Earth's atmosphere as a result of meteoric ablation, exhibit completely different seasonal behavior. In the extratropics Na varies annually, with a pronounced wintertime maximum and summertime minimum. However, K varies semiannually with a small summertime maximum and minima at the equinoxes. This contrasting behavior has never been satisfactorily explained. Here we use a combination of electronic structure and chemical kinetic rate theory to determine two key differences in the chemistries of K and Na. First, the neutralization of K+ ions is only favored at low temperatures during summer. Second, cycling between K and its major neutral reservoir KHCO3 is essentially temperature independent. A whole atmosphere model incorporating this new chemistry, together with a meteor input function, now correctly predicts the seasonal behavior of the K layer.
A dual-loop model of the human controller in single-axis tracking tasks
NASA Technical Reports Server (NTRS)
Hess, R. A.
1977-01-01
A dual loop model of the human controller in single axis compensatory tracking tasks is introduced. This model possesses an inner-loop closure which involves feeding back that portion of the controlled element output rate which is due to control activity. The sensory inputs to the human controller are assumed to be system error and control force. The former is assumed to be sensed via visual, aural, or tactile displays while the latter is assumed to be sensed in kinesthetic fashion. A nonlinear form of the model is briefly discussed. This model is then linearized and parameterized. A set of general adaptive characteristics for the parameterized model is hypothesized. These characteristics describe the manner in which the parameters in the linearized model will vary with such things as display quality. It is demonstrated that the parameterized model can produce controller describing functions which closely approximate those measured in laboratory tracking tasks for a wide variety of controlled elements.
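As background to the describing-function comparisons mentioned above, the classic single-loop crossover model gives a much simpler approximation of the combined human-plus-controlled-element open-loop describing function; the sketch below uses illustrative parameter values and is not Hess's dual-loop model itself:

```python
import numpy as np

def crossover_open_loop(omega, omega_c=4.0, tau_e=0.2):
    """Classic crossover-model approximation of the combined open-loop
    describing function of controller and controlled element:
        Yp*Yc(jw) ~= omega_c * exp(-j*w*tau_e) / (j*w)
    omega_c (crossover frequency, rad/s) and tau_e (effective time
    delay, s) are illustrative values, not parameters from the paper.
    """
    jw = 1j * np.asarray(omega, dtype=float)
    return omega_c * np.exp(-jw * tau_e) / jw

# Evaluate magnitude (dB) and phase (deg) across a frequency sweep,
# the quantities typically compared against measured describing
# functions in laboratory tracking tasks.
w = np.logspace(-1, 1, 50)
Y = crossover_open_loop(w)
mag_db = 20 * np.log10(np.abs(Y))
phase_deg = np.degrees(np.angle(Y))
```

By construction the magnitude crosses 0 dB at omega_c; the dual-loop model adds an inner closure on the control-induced output rate that this single-loop form does not capture.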
Mitten, H.T.; Lines, G.C.; Berenbrock, Charles; Durbin, T.J.
1988-01-01
Because of the imbalance between recharge and pumpage, groundwater levels declined as much as 100 ft in some areas of Borrego Valley, California, during 1945-80. As an aid to analyzing the effects of pumping on the groundwater system, a three-dimensional finite-element groundwater flow model was developed. The model was calibrated for both steady-state (1945) and transient-state (1946-79) conditions. For the steady-state calibration, hydraulic conductivities of the three aquifers were varied within reasonable limits to obtain an acceptable match between measured and computed hydraulic heads. Recharge from streamflow infiltration (4,800 acre-ft/yr) was balanced by computed evapotranspiration (3,900 acre-ft/yr) and computed subsurface outflow from the model area (930 acre-ft/yr). For the transient-state calibration, the volumes and distribution of net groundwater pumpage were estimated from land-use data and estimates of consumptive use for irrigated crops. The pumpage was assigned to the appropriate nodes in the model for each of seventeen 2-year time steps representing the period 1946-79. The specific yields of the three aquifers were varied within reasonable limits to obtain an acceptable match between measured and computed hydraulic heads. Groundwater pumpage input to the model was compensated by declines in both the computed evapotranspiration and the amount of groundwater in storage. (USGS)
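The steady-state budget reported above can be checked with simple arithmetic; the small residual reflects independent rounding of the published components, not a storage change under steady state:

```python
# Steady-state (1945) water budget from the abstract, in acre-ft/yr.
recharge = 4_800            # streamflow infiltration
evapotranspiration = 3_900  # computed ET
subsurface_outflow = 930    # computed outflow from the model area

# Under steady state, inflow should balance outflow; the ~30 acre-ft/yr
# residual here comes from rounding of the reported components.
residual = recharge - (evapotranspiration + subsurface_outflow)
print(residual)  # -30
```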
Prediction and Computation of Corrosion Rates of A36 Mild Steel in Oilfield Seawater
NASA Astrophysics Data System (ADS)
Paul, Subir; Mondal, Rajdeep
2018-04-01
The parameters that primarily control the corrosion rate and life of steel structures are numerous, and they vary across different oceans and seawaters as well as with depth. While the effect of each single parameter on corrosion behavior is known, the conjoint effects of multiple parameters and the interrelationships among the variables are complex. Millions of experiments would be required to fully understand the mechanism of corrosion failure. Statistical modeling, such as an artificial neural network (ANN), is one approach that can reduce the amount of experimentation required. An ANN model was developed using 170 sets of experimental data for A36 mild steel in simulated seawater, varying the corrosion-influencing parameters SO₄²⁻, Cl⁻, HCO₃⁻, CO₃²⁻, CO₂, O₂, pH, and temperature as inputs and the corrosion current as the output. About 60% of the experimental data were used to train the model, 20% for testing, and 20% for validation. The model was developed by programming in Matlab. The model correctly predicted the corrosion rate for 80% of the validation data. Corrosion rates predicted by the ANN model are displayed in 3D graphics, which reveal many interesting phenomena arising from the conjoint effects of multiple variables and might suggest new ways of mitigating corrosion by simply modifying the chemistry of the constituents. The model could also predict the corrosion rates of some real systems.
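The 60/20/20 split and network training described above can be sketched with a minimal one-hidden-layer network in NumPy (synthetic stand-in data, arbitrary layer width and learning rate; not the study's Matlab architecture or measurements):

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-in for the study's data: 170 samples, 8 inputs
# (sulfate, chloride, bicarbonate, carbonate, CO2, O2, pH, temperature),
# one output (corrosion current). Values are random, purely illustrative.
X = rng.normal(size=(170, 8))
y = (X @ rng.normal(size=8) + 0.1 * rng.normal(size=170)).reshape(-1, 1)

# 60/20/20 train/test/validation split, as in the abstract.
idx = rng.permutation(170)
train, test, val = idx[:102], idx[102:136], idx[136:]

# Minimal one-hidden-layer network trained by gradient descent.
W1 = rng.normal(scale=0.1, size=(8, 16)); b1 = np.zeros(16)
W2 = rng.normal(scale=0.1, size=(16, 1)); b2 = np.zeros(1)

def forward(X):
    h = np.tanh(X @ W1 + b1)          # hidden activations
    return h, h @ W2 + b2             # linear output layer

def mse(pred, target):
    return float(np.mean((pred - target) ** 2))

lr = 0.01
loss0 = mse(forward(X[train])[1], y[train])
for _ in range(500):
    h, pred = forward(X[train])
    g = 2 * (pred - y[train]) / len(train)   # dLoss/dpred
    gh = (g @ W2.T) * (1 - h ** 2)           # backprop through tanh
    W2 -= lr * h.T @ g;         b2 -= lr * g.sum(0)
    W1 -= lr * X[train].T @ gh; b1 -= lr * gh.sum(0)

val_loss = mse(forward(X[val])[1], y[val])
```

The held-out `val` split plays the role of the 20% validation set against which prediction accuracy was scored.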
Incorporating spike-rate adaptation into a rate code in mathematical and biological neurons
Ralston, Bridget N.; Flagg, Lucas Q.; Faggin, Eric
2016-01-01
For a slowly varying stimulus, the simplest relationship between a neuron's input and output is a rate code, in which the spike rate is a unique function of the stimulus at that instant. In the case of spike-rate adaptation, there is no unique relationship between input and output, because the spike rate at any time depends both on the instantaneous stimulus and on prior spiking (the “history”). To improve the decoding of spike trains produced by neurons that show spike-rate adaptation, we developed a simple scheme that incorporates “history” into a rate code. We utilized this rate-history code successfully to decode spike trains produced by 1) mathematical models of a neuron in which the mechanism for adaptation (IAHP) is specified, and 2) the gastropyloric receptor (GPR2), a stretch-sensitive neuron in the stomatogastric nervous system of the crab Cancer borealis, that exhibits long-lasting adaptation of unknown origin. Moreover, when we modified the spike rate either mathematically in a model system or by applying neuromodulatory agents to the experimental system, we found that changes in the rate-history code could be related to the biophysical mechanisms responsible for altering the spiking. PMID:26888106
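The idea of incorporating history into a rate code can be illustrated with a toy model in which the instantaneous rate is a static function of the stimulus minus an adaptation term driven by prior spiking; the functional form and parameters below are illustrative, not the paper's fitted model:

```python
import numpy as np

def rate_history_model(stimulus, dt=0.001, gain=50.0, k_adapt=0.5, tau=1.0):
    """Toy rate-history model: the rate is a rectified linear function
    of the stimulus minus an adaptation term that leakily integrates
    prior spiking (decay time constant tau, in seconds). All parameter
    values are illustrative assumptions."""
    stimulus = np.asarray(stimulus, dtype=float)
    rate = np.zeros_like(stimulus)
    history = 0.0
    for i, s in enumerate(stimulus):
        r = max(gain * s - k_adapt * history, 0.0)  # rate code minus history
        history += dt * (r - history / tau)         # leaky integration
        rate[i] = r
    return rate

# A sustained step stimulus yields a rate that peaks at onset and then
# adapts toward a lower steady level, as in spike-rate adaptation.
step = np.concatenate([np.zeros(100), np.ones(2000)])
r = rate_history_model(step)
```

Decoding then inverts this relationship: given the observed rate and accumulated history, the stimulus is recovered as `(r + k_adapt*history) / gain`.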
Effect of Common Cryoprotectants on Critical Warming Rates and Ice Formation in Aqueous Solutions
Hopkins, Jesse B.; Badeau, Ryan; Warkentin, Matthew; Thorne, Robert E.
2012-01-01
Ice formation on warming is of comparable or greater importance to ice formation on cooling in determining survival of cryopreserved samples. Critical warming rates required for ice-free warming of vitrified aqueous solutions of glycerol, dimethyl sulfoxide, ethylene glycol, polyethylene glycol 200 and sucrose have been measured for warming rates of order 10 to 10⁴ K/s. Critical warming rates are typically one to three orders of magnitude larger than critical cooling rates. Warming rates vary strongly with cooling rates, perhaps due to the presence of small ice fractions in nominally vitrified samples. Critical warming and cooling rate data spanning orders of magnitude in rates provide rigorous tests of ice nucleation and growth models and their assumed input parameters. Current models with current best estimates for input parameters provide a reasonable account of critical warming rates for glycerol solutions at high concentrations/low rates, but overestimate both critical warming and cooling rates by orders of magnitude at lower concentrations and larger rates. In vitrification protocols, minimizing concentrations of potentially damaging cryoprotectants while minimizing ice formation will require ultrafast warming rates, as well as fast cooling rates to minimize the required warming rates. PMID:22728046
Kaufman, Michael G.; Pelz-Stelinski, Kirsten S.; Yee, Donald A.; Juliano, Steven A.; Ostrom, Peggy H.; Walker, Edward D.
2010-01-01
1. Detritus that forms the basis for mosquito production in tree hole ecosystems can vary in type and timing of input. We investigated the contributions of plant- and animal-derived detritus to the biomass of Aedes triseriatus (Say) pupae and adults by using stable isotope (¹⁵N and ¹³C) techniques in lab experiments and field collections. 2. Lab-reared mosquito isotope values reflected their detrital resource base, providing a clear distinction between mosquitoes reared on plant or animal detritus. 3. Isotope values from field-collected pupae were intermediate between what would be expected if a single (either plant or animal) detrital source dominated the resource base. However, mosquito isotope values clustered most closely with plant-derived values, and a mixed feeding model analysis indicated tree floral parts contributed approximately 80% of mosquito biomass. The mixed model also indicated that animal detritus contributed approximately 30% of mosquito tissue nitrogen. 4. Pupae collected later in the season generally had isotope values that were consistent with an increased contribution from animal detritus, suggesting this resource became more nutritionally important for mosquitoes as plant inputs declined over the summer. PMID:21132121
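The mixed feeding model in point 3 can be reduced, for illustration, to a two-endmember, single-isotope linear mixing calculation (the endmember values below are hypothetical; the study used a multi-source, dual-isotope model):

```python
def plant_fraction(delta_mix, delta_plant, delta_animal):
    """Two-endmember linear mixing:
        delta_mix = f * delta_plant + (1 - f) * delta_animal
    solved for f, the fractional contribution of plant detritus."""
    return (delta_mix - delta_animal) / (delta_plant - delta_animal)

# Illustrative d13C values (permil); not measurements from the study.
f = plant_fraction(delta_mix=-26.0, delta_plant=-28.0, delta_animal=-20.0)
print(f)  # 0.75
```

An intermediate mixture value thus maps directly to a source fraction, which is the logic behind apportioning ~80% of biomass to plant detritus.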
NASA Astrophysics Data System (ADS)
Wang, Jing; Qi, Zhaohui; Wang, Gang
2017-10-01
The dynamic analysis of cable-pulley systems is investigated in this paper, where the time-varying length of the cable as well as the coupled motion between the cable and the pulleys are considered. The dynamic model for cable-pulley systems is presented based on the principle of virtual power. Firstly, cubic spline interpolation is adopted for modeling the flexible cable elements, and the virtual powers of the tensile strain, inertia, and gravity forces on the cable are formulated. Then, the coupled motions between the cable and the movable or fixed pulleys are described by the input and output contact points, based on the no-slip assumption and a spatial description. The virtual powers of the inertia, gravity, and applied forces on the contact segment of the cable and on the movable and fixed pulleys are formulated. In particular, the internal node degrees of freedom of the spline cable elements are reduced, so that only the independent description parameters of the nodes connected to the pulleys appear in the final governing dynamic equations. Finally, two cable-pulley lifting mechanisms are considered as demonstrative application examples, in which the vibration of the lifting process is investigated. A comparison with ADAMS models is given to demonstrate the validity of the proposed method.
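The cubic spline interpolation used for the flexible cable elements can be sketched as a standard natural cubic spline construction (a generic implementation, not the paper's element formulation; the node values are illustrative):

```python
import numpy as np

def natural_cubic_spline(x, y):
    """Build a natural cubic spline through nodes (x, y) by solving the
    standard tridiagonal system for the second derivatives M_i, with
    natural end conditions M_0 = M_{n-1} = 0. Returns an evaluator S(t)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    h = np.diff(x)
    A = np.zeros((n, n)); rhs = np.zeros(n)
    A[0, 0] = A[-1, -1] = 1.0                      # natural end conditions
    for i in range(1, n - 1):
        A[i, i - 1] = h[i - 1]
        A[i, i] = 2.0 * (h[i - 1] + h[i])
        A[i, i + 1] = h[i]
        rhs[i] = 6.0 * ((y[i + 1] - y[i]) / h[i]
                        - (y[i] - y[i - 1]) / h[i - 1])
    M = np.linalg.solve(A, rhs)

    def S(t):
        """Evaluate the piecewise-cubic interpolant at points t."""
        t = np.atleast_1d(np.asarray(t, float))
        i = np.clip(np.searchsorted(x, t) - 1, 0, n - 2)
        hi, xl, xr = h[i], x[i], x[i + 1]
        return (M[i] * (xr - t) ** 3 / (6 * hi)
                + M[i + 1] * (t - xl) ** 3 / (6 * hi)
                + (y[i] / hi - M[i] * hi / 6) * (xr - t)
                + (y[i + 1] / hi - M[i + 1] * hi / 6) * (t - xl))
    return S
```

In the paper's setting, each spatial coordinate of the cable centerline would be interpolated this way between element nodes, giving smooth curvature for the virtual-power terms.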
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gardner, W. Payton; Hokr, Milan; Shao, Hua
2016-10-19
We investigated the transit time distribution (TTD) of discharge collected from fractures in the Bedrichov Tunnel, Czech Republic, using lumped parameter models and multiple environmental tracers. We utilized time series of δ¹⁸O, δ²H, and ³H, along with CFC measurements from individual fractures, to investigate the TTD and the uncertainty in the estimated mean travel time in several fracture networks of varying length and discharge. We also compared several TTDs, including the dispersion distribution, the exponential distribution, and a newly developed TTD that includes the effects of matrix diffusion. The effect of seasonal recharge was explored by comparing several seasonal weighting functions used to derive the historical recharge concentration. We identified best-fit mean ages for each TTD by minimizing the error-weighted, multi-tracer χ² residual for each seasonal weighting function. We used this methodology to test the ability of each TTD and seasonal input function to fit the observed tracer concentrations, and the effect of choosing different TTD and seasonal recharge functions on the mean age estimate. We found that the estimated mean transit time is a function of both the assumed TTD and the seasonal weighting function. Best fits as measured by the χ² value were achieved for the dispersion model using the seasonal input function developed here at two of the three modeled sites, while at the third site, equally good fits were achieved with the exponential model and with the dispersion model and our seasonal input function. The average mean transit time for all TTDs and seasonal input functions converged to similar values at each location. The sensitivity of the estimated mean transit time to the seasonal weighting function was equal to that of the TTD.
These results indicate that understanding the seasonality of recharge is at least as important as the uncertainty in the flow path distribution in fracture networks, and that unique identification of the TTD and mean transit time is difficult given the uncertainty in the recharge function. However, the mean transit time appears to be relatively robust to the structural model uncertainty. The results presented here should be applicable to other studies using environmental tracers to constrain flow and transport properties in fractured rock systems.
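The lumped-parameter approach can be sketched as a convolution of a recharge concentration history with an assumed TTD, scored by the error-weighted χ² misfit; the exponential TTD below is one of the distributions compared, and all numerical values are illustrative:

```python
import numpy as np

def exponential_ttd(tau, mean_age):
    """Exponential transit time distribution g(tau) = exp(-tau/T)/T,
    where T is the mean transit time."""
    return np.exp(-tau / mean_age) / mean_age

def convolve_input(c_in, mean_age, dt=1.0):
    """Lumped-parameter model: outlet concentration is the recharge
    history convolved with the TTD,
        c_out(t) = integral g(tau) * c_in(t - tau) dtau.
    c_in is ordered oldest-first; the truncated kernel is renormalized."""
    tau = np.arange(len(c_in)) * dt
    g = exponential_ttd(tau, mean_age) * dt
    g /= g.sum()
    # reverse so the tau = 0 kernel weight pairs with the newest input
    return float(np.dot(g, c_in[::-1]))

def chi2(obs, sim, sigma):
    """Error-weighted misfit, summed over tracers, as minimized in the
    study to find best-fit mean ages."""
    obs, sim, sigma = map(np.asarray, (obs, sim, sigma))
    return float(np.sum(((obs - sim) / sigma) ** 2))
```

Fitting then amounts to scanning `mean_age` (and the choice of TTD and seasonal weighting of `c_in`) to minimize `chi2` against the observed δ¹⁸O, δ²H, ³H, and CFC concentrations.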