Sample records for inoperability input-output model

  1. Pandemic recovery analysis using the dynamic inoperability input-output model.

    PubMed

    Santos, Joost R; Orsi, Mark J; Bond, Erik J

    2009-12-01

    Economists have long conceptualized and modeled the inherent interdependent relationships among different sectors of the economy. This concept paved the way for input-output modeling, a methodology that accounts for sector interdependencies governing the magnitude and extent of ripple effects due to changes in the economic structure of a region or nation. Recent extensions to input-output modeling have enhanced the model's capabilities to account for the impact of an economic perturbation; two such examples are the inoperability input-output model(1,2) and the dynamic inoperability input-output model (DIIM).(3) These models introduced sector inoperability, or the inability to satisfy as-planned production levels, into input-output modeling. While these models provide insights for understanding the impacts of inoperability, there are several aspects of the current formulation that do not account for complexities associated with certain disasters, such as a pandemic. This article proposes further enhancements to the DIIM to account for economic productivity losses resulting primarily from workforce disruptions. A pandemic is a unique disaster because the majority of its direct impacts are workforce related. The article develops a modeling framework to account for workforce inoperability and recovery factors. The proposed workforce-explicit enhancements to the DIIM are demonstrated in a case study to simulate a pandemic scenario in the Commonwealth of Virginia.
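
    The dynamic recursion at the heart of the DIIM can be written as q(t+1) = q(t) + K[A* q(t) + c*(t) - q(t)], where q is the vector of sector inoperabilities, A* the normalized interdependency matrix, c* the demand-side perturbation, and K a diagonal matrix of resilience coefficients. The sketch below is a minimal, generic illustration of that recursion; the three-sector matrix, resilience values, and as-planned outputs are invented, and the workforce-specific extensions proposed in the article are not reproduced.

```python
import numpy as np

# Minimal sketch of the dynamic inoperability input-output model (DIIM):
#   q(t+1) = q(t) + K [A* q(t) + c*(t) - q(t)]
# The 3-sector A*, K, and perturbation below are illustrative values only.

A_star = np.array([[0.0, 0.2, 0.1],
                   [0.3, 0.0, 0.2],
                   [0.1, 0.1, 0.0]])   # normalized interdependency matrix
K = np.diag([0.4, 0.25, 0.5])          # sector resilience (recovery) coefficients
x_hat = np.array([120.0, 80.0, 200.0]) # as-planned output, $M per period

q = np.array([0.15, 0.05, 0.0])        # initial inoperability (e.g., workforce loss)
loss = 0.0
for t in range(30):
    c_star = np.zeros(3)               # no further demand perturbation after t = 0
    q = q + K @ (A_star @ q + c_star - q)
    q = np.clip(q, 0.0, 1.0)           # inoperability is bounded in [0, 1]
    loss += float(x_hat @ q)           # accumulate economic loss over the horizon

print("final inoperability:", np.round(q, 4))
print("cumulative economic loss ($M):", round(loss, 2))
```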

  2. Interdicting an Adversary’s Economy Viewed As a Trade Sanction Inoperability Input Output Model

    DTIC Science & Technology

    2017-03-01

    set of sectors. The design of an economic sanction, in the context of this thesis, is the selection of the sector or set of sectors to sanction... We propose two optimization models. The first, the Trade Sanction Inoperability Input-output Model (TS-IIM), selects the sector or set of sectors that... Interdependency analysis: Extensions to demand reduction inoperability input-output modeling and portfolio selection. Unpublished doctoral dissertation

  3. International trade inoperability input-output model (IT-IIM): theory and application.

    PubMed

    Jung, Jeesang; Santos, Joost R; Haimes, Yacov Y

    2009-01-01

    The inoperability input-output model (IIM) has been used for analyzing disruptions due to man-made or natural disasters that can adversely affect the operation of economic systems or critical infrastructures. Taking economic perturbations for each sector as inputs, the IIM provides the degree of economic production impact on all industry sectors as its outputs. The current version of the IIM does not provide a separate analysis for the international trade component of the inoperability. If an important port of entry (e.g., Port of Los Angeles) is disrupted, then international trade inoperability becomes a highly relevant subject for analysis. To complement the current IIM, this article develops the International Trade-IIM (IT-IIM). The IT-IIM investigates the international trade inoperability for all industry sectors resulting from disruptions to a major port of entry. Similar to traditional IIM analysis, the inoperability metrics that the IT-IIM provides can be used to prioritize economic sectors based on the losses they could potentially incur. The IT-IIM is used to analyze two types of direct perturbations: (1) the reduced capacity of ports of entry, including harbors and airports (e.g., a shutdown of any port of entry); and (2) restrictions on commercial goods that foreign countries trade with the base nation (e.g., embargo).

  4. Modeling uncertainties in workforce disruptions from influenza pandemics using dynamic input-output analysis.

    PubMed

    El Haimar, Amine; Santos, Joost R

    2014-03-01

    An influenza pandemic is a serious disaster that can pose significant disruptions to the workforce and associated economic sectors. This article examines the impact of an influenza pandemic on workforce availability within an interdependent set of economic sectors. We introduce a simulation model based on the dynamic input-output model to capture the propagation of pandemic consequences through the National Capital Region (NCR). The analysis conducted in this article is based on the 2009 H1N1 pandemic data. Two metrics were used to assess the impacts of the influenza pandemic on the economic sectors: (i) inoperability, which measures the percentage gap between the as-planned output and the actual output of a sector, and (ii) economic loss, which quantifies the associated monetary value of the degraded output. The inoperability and economic loss metrics generate two different rankings of the critical economic sectors. Results show that most of the critical sectors in terms of inoperability are sectors that are related to hospitals and health-care providers. On the other hand, most of the sectors that are critically ranked in terms of economic loss are sectors with significant total production outputs in the NCR such as federal government agencies. Therefore, policy recommendations relating to potential mitigation and recovery strategies should take into account the balance between the inoperability and economic loss metrics. © 2013 Society for Risk Analysis.
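
    The two metrics contrasted in this abstract are straightforward to compute once as-planned and degraded outputs are known. The toy sketch below uses invented sector names and numbers (not NCR data) simply to show why the two metrics can rank sectors differently.

```python
import numpy as np

# Toy illustration of the two ranking metrics: inoperability (normalized gap)
# versus economic loss (monetary gap). All figures are invented.
sectors   = ["hospitals", "federal_govt", "retail", "education"]
x_planned = np.array([ 40.0, 500.0, 120.0,  60.0])  # as-planned output, $B
x_actual  = np.array([ 34.0, 480.0, 114.0,  57.0])  # degraded output, $B

inoperability = (x_planned - x_actual) / x_planned  # percentage gap
economic_loss = x_planned - x_actual                # monetary value of the gap

print("ranked by inoperability:",
      [sectors[i] for i in np.argsort(-inoperability)])
print("ranked by economic loss:",
      [sectors[i] for i in np.argsort(-economic_loss)])
```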

  5. Risk-based input-output analysis of influenza epidemic consequences on interdependent workforce sectors.

    PubMed

    Santos, Joost R; May, Larissa; Haimar, Amine El

    2013-09-01

    Outbreaks of contagious diseases underscore the ever-looming threat of new epidemics. Compared to other disasters that inflict physical damage to infrastructure systems, epidemics can have more devastating and prolonged impacts on the population. This article investigates the interdependent economic and productivity risks resulting from epidemic-induced workforce absenteeism. In particular, we develop a dynamic input-output model capable of generating sector-disaggregated economic losses based on different magnitudes of workforce disruptions. An ex post analysis of the 2009 H1N1 pandemic in the National Capital Region (NCR) reveals the distribution of consequences across different economic sectors. Consequences are categorized into two metrics: (i) economic loss, which measures the magnitude of monetary losses incurred in each sector, and (ii) inoperability, which measures the normalized monetary losses incurred in each sector relative to the total economic output of that sector. For a simulated mild pandemic scenario in NCR, two distinct rankings are generated using the economic loss and inoperability metrics. Results indicate that the majority of the critical sectors ranked according to the economic loss metric comprise sectors that contribute the most to the NCR's gross domestic product (e.g., federal government enterprises). In contrast, the majority of the critical sectors generated by the inoperability metric include sectors that are involved with epidemic management (e.g., hospitals). Hence, prioritizing sectors for recovery necessitates consideration of the balance between economic loss, inoperability, and other objectives. Although applied specifically to the NCR, the proposed methodology can be customized for other regions. © 2012 Society for Risk Analysis.

  6. Risk-Based Input-Output Analysis of Influenza Epidemic Consequences on Interdependent Workforce Sectors

    PubMed Central

    Santos, Joost R.; May, Larissa; Haimar, Amine El

    2013-01-01

    Outbreaks of contagious diseases underscore the ever-looming threat of new epidemics. Compared to other disasters that inflict physical damage to infrastructure systems, epidemics can have more devastating and prolonged impacts on the population. This paper investigates the interdependent economic and productivity risks resulting from epidemic-induced workforce absenteeism. In particular, we develop a dynamic input-output model capable of generating sector-disaggregated economic losses based on different magnitudes of workforce disruptions. An ex post analysis of the 2009 H1N1 pandemic in the National Capital Region (NCR) reveals the distribution of consequences across different economic sectors. Consequences are categorized into two metrics: (i) economic loss, which measures the magnitude of monetary losses incurred in each sector, and (ii) inoperability, which measures the normalized monetary losses incurred in each sector relative to the total economic output of that sector. For a simulated mild pandemic scenario in NCR, two distinct rankings are generated using the economic loss and inoperability metrics. Results indicate that the majority of the critical sectors ranked according to the economic loss metric comprise sectors that contribute the most to the NCR's gross domestic product (e.g., federal government enterprises). In contrast, the majority of the critical sectors generated by the inoperability metric include sectors that are involved with epidemic management (e.g., hospitals). Hence, prioritizing sectors for recovery necessitates consideration of the balance between economic loss, inoperability, and other objectives. Although applied specifically to the NCR, the proposed methodology can be customized for other regions. PMID:23278756

  7. The spatial and temporal 'cost' of volcanic eruptions: assessing economic impact, business inoperability, and spatial distribution of risk in the Auckland region, New Zealand

    NASA Astrophysics Data System (ADS)

    McDonald, Garry W.; Smith, Nicola J.; Kim, Joon-hwan; Cronin, Shane J.; Proctor, Jon N.

    2017-07-01

    Volcanic risk assessment has historically concentrated on quantifying the frequency, magnitude, and potential diversity of physical processes of eruptions and their consequent impacts on life and property. A realistic socio-economic assessment of volcanic impact must, however, take into account dynamic properties of businesses and extend beyond only measuring direct infrastructure/property loss. The inoperability input-output model, heralded as one of the 10 most important accomplishments in risk analysis over the last 30 years (Kujawski, Syst Eng 9:281-295, 2006), has become prominent over the last decade in the economic impact assessment of business disruptions. We develop a dynamic inoperability input-output model to assess the economic impacts of a hypothetical volcanic event occurring at each of 7270 unique spatial locations throughout the Auckland Volcanic Field, New Zealand. This field of at least 53 volcanoes underlies the country's largest urban area, the Auckland region, which is home to 1.4 million people and responsible for 35.3% (NZ$81.2 billion in 2014 dollars) of the nation's GDP (Statistics New Zealand 2015). We apply volcanic event characteristics for a small-medium-scale volcanic eruption scenario and assess the economic impacts of an 'average' eruption in the Auckland region. Economic losses are quantified both with, and without, business mitigation and intervention responses in place. We combine this information with a recent spatial hazard probability map (Bebbington and Cronin, Bull Volcanol 73(1):55-72, 2011) to produce novel spatial economic activity 'at risk' maps. Our approach demonstrates how business inoperability losses sit alongside potential life and property damage assessment in enhancing our understanding of volcanic risk mitigation.

  8. Strategic preparedness for recovery from catastrophic risks to communities and infrastructure systems of systems.

    PubMed

    Haimes, Yacov Y

    2012-11-01

    Natural and human-induced disasters affect organizations in myriad ways because of the inherent interconnectedness and interdependencies among human, cyber, and physical infrastructures, but more importantly, because organizations depend on the effectiveness of people and on the leadership they provide to the organizations they serve and represent. These human-organizational-cyber-physical infrastructure entities are termed systems of systems. Given the multiple perspectives that characterize them, they cannot be modeled effectively with a single model. The focus of this article is: (i) the centrality of the states of a system in modeling; (ii) the efficacious role of shared states in modeling systems of systems, in identification, and in the meta-modeling of systems of systems; and (iii) the contributions of the above to strategic preparedness, response to, and recovery from catastrophic risk to such systems. Strategic preparedness connotes a decision-making process and its associated actions. These must be: implemented in advance of a natural or human-induced disaster, aimed at reducing consequences (e.g., recovery time, community suffering, and cost), and/or controlling their likelihood to a level considered acceptable (through the decisionmakers' implicit and explicit acceptance of various risks and tradeoffs). The inoperability input-output model (IIM), which is grounded on Leontief's input/output model, has enabled the modeling of interdependent subsystems. Two separate modeling structures are introduced. These are: phantom system models (PSM), where shared states constitute the essence of modeling coupled systems; and the IIM, where interdependencies among sectors of the economy are manifested by the Leontief matrix of technological coefficients. This article demonstrates the potential contributions of these two models to each other, and thus to more informative modeling of systems of systems schema. The contributions of shared states to this modeling and to systems identification are presented with case studies. © 2012 Society for Risk Analysis.

  9. When causality does not imply correlation: more spadework at the foundations of scientific psychology.

    PubMed

    Marken, Richard S; Horth, Brittany

    2011-06-01

    Experimental research in psychology is based on an open-loop causal model which assumes that sensory input causes behavioral output. This model was tested in a tracking experiment where participants were asked to control a cursor, keeping it aligned with a target by moving a mouse to compensate for disturbances of differing difficulty. Since cursor movements (inputs) are the only observable cause of mouse movements (outputs), the open-loop model predicts that there will be a correlation between input and output that increases as tracking performance improves. In fact, the correlation between sensory input and motor output is very low regardless of the quality of tracking performance; causality, in terms of the effect of input on output, does not seem to imply correlation in this situation. This surprising result can be explained by a closed-loop model which assumes that input is causing output while output is causing input.

  10. A mixed-unit input-output model for environmental life-cycle assessment and material flow analysis.

    PubMed

    Hawkins, Troy; Hendrickson, Chris; Higgins, Cortney; Matthews, H Scott; Suh, Sangwon

    2007-02-01

    Materials flow analysis models have traditionally been used to track the production, use, and consumption of materials. Economic input-output modeling has been used for environmental systems analysis, with a primary benefit being the capability to estimate direct and indirect economic and environmental impacts across the entire supply chain of production in an economy. We combine these two types of models to create a mixed-unit input-output model that is able to better track economic transactions and material flows throughout the economy associated with changes in production. A 13 by 13 economic input-output direct requirements matrix developed by the U.S. Bureau of Economic Analysis is augmented with material flow data derived from those published by the U.S. Geological Survey in the formulation of illustrative mixed-unit input-output models for lead and cadmium. The resulting model provides the capabilities of both material flow and input-output models, with detailed material tracking through entire supply chains in response to any monetary or material demand. Examples of these models are provided along with a discussion of uncertainty and extensions to these models.
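
    As a rough illustration of attaching physical material flows to an economic input-output table, the sketch below runs a simplified, environmentally extended IO calculation (a stripped-down stand-in for the paper's full mixed-unit formulation); the three-sector matrix and the per-dollar lead coefficients are invented.

```python
import numpy as np

# Simplified sketch: an economic direct-requirements matrix extended with a
# material (lead) intensity vector; not the paper's full mixed-unit model.
A = np.array([[0.10, 0.05, 0.02],
              [0.20, 0.10, 0.10],
              [0.05, 0.15, 0.05]])     # direct requirements, $ per $ of output
m = np.array([0.8, 0.1, 0.02])         # direct lead use, kg per $ of sector output

L = np.linalg.inv(np.eye(3) - A)       # Leontief total requirements matrix
y = np.array([100.0, 50.0, 200.0])     # final demand, $

x = L @ y                              # total output needed across the supply chain
total_lead = m @ x                     # lead mobilized directly and indirectly (kg)

print("total output by sector ($):", np.round(x, 1))
print("total lead requirement (kg):", round(float(total_lead), 1))
```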

  11. Evaluation of input-output efficiency of oil field considering undesirable output — A case study of sandstone reservoir in Xinjiang oilfield

    NASA Astrophysics Data System (ADS)

    Zhang, Shuying; Wu, Xuquan; Li, Deshan; Xu, Yadong; Song, Shulin

    2017-06-01

    Based on the input and output data of a sandstone reservoir in the Xinjiang oilfield, the SBM-Undesirable model is used to study the technical efficiency of each block. Results show that the SBM-Undesirable model evaluates efficiency while avoiding the defects caused by the radial and angular assumptions of traditional DEA models, thereby improving the accuracy of the efficiency evaluation. By analyzing the projections of the oil blocks, we find that each block suffers the negative external effects of input redundancy, expected-output deficiency, and undesirable output, and that production efficiency differs considerably across blocks. The way to improve the input-output efficiency of the oilfield is to optimize the allocation of resources, reduce the undesirable output, and increase the expected output.

  12. Control and optimization system

    DOEpatents

    Xinsheng, Lou

    2013-02-12

    A system for optimizing a power plant includes a chemical loop having an input for receiving an input parameter (270) and an output for outputting an output parameter (280), a control system operably connected to the chemical loop and having a multiple controller part (230) comprising a model-free controller. The control system receives the output parameter (280), optimizes the input parameter (270) based on the received output parameter (280), and outputs an optimized input parameter (270) to the input of the chemical loop to control a process of the chemical loop in an optimized manner.

  13. Method and apparatus for loss of control inhibitor systems

    NASA Technical Reports Server (NTRS)

    A'Harrah, Ralph C. (Inventor)

    2007-01-01

    Active and adaptive systems and methods to prevent loss of control incidents by providing tactile feedback to a vehicle operator are disclosed. According to the present invention, an operator gives a control input to an inceptor. An inceptor sensor measures an inceptor input value of the control input. The inceptor input is used as an input to a Steady-State Inceptor Input/Effector Output Model that models the vehicle control system design. A desired effector output from the inceptor input is generated from the model. The desired effector output is compared to an actual effector output to get a distortion metric. A feedback force is generated as a function of the distortion metric. The feedback force is used as an input to a feedback force generator which generates a loss of control inhibitor system (LOCIS) force back to the inceptor. The LOCIS force is felt by the operator through the inceptor.

  14. Synaptic control of the shape of the motoneuron pool input-output function

    PubMed Central

    Heckman, Charles J.

    2017-01-01

    Although motoneurons have often been considered to be fairly linear transducers of synaptic input, recent evidence suggests that strong persistent inward currents (PICs) in motoneurons allow neuromodulatory and inhibitory synaptic inputs to induce large nonlinearities in the relation between the level of excitatory input and motor output. To try to estimate the possible extent of this nonlinearity, we developed a pool of model motoneurons designed to replicate the characteristics of motoneuron input-output properties measured in medial gastrocnemius motoneurons in the decerebrate cat with voltage-clamp and current-clamp techniques. We drove the model pool with a range of synaptic inputs consisting of various mixtures of excitation, inhibition, and neuromodulation. We then looked at the relation between excitatory drive and total pool output. Our results revealed that the PICs not only enhance gain but also induce a strong nonlinearity in the relation between the average firing rate of the motoneuron pool and the level of excitatory input. The relation between the total simulated force output and input was somewhat more linear because of higher force outputs in later-recruited units. We also found that the nonlinearity can be increased by increasing neuromodulatory input and/or balanced inhibitory input and minimized by a reciprocal, push-pull pattern of inhibition. We consider the possibility that a flexible input-output function may allow motor output to be tuned to match the widely varying demands of the normal motor repertoire. NEW & NOTEWORTHY Motoneuron activity is generally considered to reflect the level of excitatory drive. However, the activation of voltage-dependent intrinsic conductances can distort the relation between excitatory drive and the total output of a pool of motoneurons. Using a pool of realistic motoneuron models, we show that pool output can be a highly nonlinear function of synaptic input but linearity can be achieved through adjusting the time course of excitatory and inhibitory synaptic inputs. PMID:28053245

  15. Systems and methods for reconfiguring input devices

    NASA Technical Reports Server (NTRS)

    Lancaster, Jeff (Inventor); De Mers, Robert E. (Inventor)

    2012-01-01

    A system includes an input device having first and second input members configured to be activated by a user. The input device is configured to generate activation signals associated with activation of the first and second input members, and each of the first and second input members are associated with an input function. A processor is coupled to the input device and configured to receive the activation signals. A memory coupled to the processor, and includes a reconfiguration module configured to store the input functions assigned to the first and second input members and, upon execution of the processor, to reconfigure the input functions assigned to the input members when the first input member is inoperable.

  16. Large-Signal Klystron Simulations Using KLSC

    NASA Astrophysics Data System (ADS)

    Carlsten, B. E.; Ferguson, P.

    1997-05-01

    We describe a new, 2-1/2 dimensional, klystron-simulation code, KLSC. This code has a sophisticated input cavity model for calculating the klystron gain with arbitrary input cavity matching and tuning, and is capable of modeling coupled output cavities. We will discuss the input and output cavity models, and present simulation results from a high-power, S-band design. We will use these results to explore tuning issues with coupled output cavities.

  17. Non-linear Membrane Properties in Entorhinal Cortical Stellate Cells Reduce Modulation of Input-Output Responses by Voltage Fluctuations

    PubMed Central

    Fernandez, Fernando R.; Malerba, Paola; White, John A.

    2015-01-01

    The presence of voltage fluctuations arising from synaptic activity is a critical component in models of gain control, neuronal output gating, and spike rate coding. The degree to which individual neuronal input-output functions are modulated by voltage fluctuations, however, is not well established across different cortical areas. Additionally, the extent and mechanisms of input-output modulation through fluctuations have been explored largely in simplified models of spike generation, and with limited consideration for the role of non-linear and voltage-dependent membrane properties. To address these issues, we studied fluctuation-based modulation of input-output responses in medial entorhinal cortical (MEC) stellate cells of rats, which express strong sub-threshold non-linear membrane properties. Using in vitro recordings, dynamic clamp and modeling, we show that the modulation of input-output responses by random voltage fluctuations in stellate cells is significantly limited. In stellate cells, a voltage-dependent increase in membrane resistance at sub-threshold voltages mediated by Na+ conductance activation limits the ability of fluctuations to elicit spikes. Similarly, in exponential leaky integrate-and-fire models using a shallow voltage-dependence for the exponential term that matches stellate cell membrane properties, a low degree of fluctuation-based modulation of input-output responses can be attained. These results demonstrate that fluctuation-based modulation of input-output responses is not a universal feature of neurons and can be significantly limited by subthreshold voltage-gated conductances. PMID:25909971

  18. Non-linear Membrane Properties in Entorhinal Cortical Stellate Cells Reduce Modulation of Input-Output Responses by Voltage Fluctuations.

    PubMed

    Fernandez, Fernando R; Malerba, Paola; White, John A

    2015-04-01

    The presence of voltage fluctuations arising from synaptic activity is a critical component in models of gain control, neuronal output gating, and spike rate coding. The degree to which individual neuronal input-output functions are modulated by voltage fluctuations, however, is not well established across different cortical areas. Additionally, the extent and mechanisms of input-output modulation through fluctuations have been explored largely in simplified models of spike generation, and with limited consideration for the role of non-linear and voltage-dependent membrane properties. To address these issues, we studied fluctuation-based modulation of input-output responses in medial entorhinal cortical (MEC) stellate cells of rats, which express strong sub-threshold non-linear membrane properties. Using in vitro recordings, dynamic clamp and modeling, we show that the modulation of input-output responses by random voltage fluctuations in stellate cells is significantly limited. In stellate cells, a voltage-dependent increase in membrane resistance at sub-threshold voltages mediated by Na+ conductance activation limits the ability of fluctuations to elicit spikes. Similarly, in exponential leaky integrate-and-fire models using a shallow voltage-dependence for the exponential term that matches stellate cell membrane properties, a low degree of fluctuation-based modulation of input-output responses can be attained. These results demonstrate that fluctuation-based modulation of input-output responses is not a universal feature of neurons and can be significantly limited by subthreshold voltage-gated conductances.

  19. Global sensitivity analysis for fuzzy inputs based on the decomposition of fuzzy output entropy

    NASA Astrophysics Data System (ADS)

    Shi, Yan; Lu, Zhenzhou; Zhou, Yicheng

    2018-06-01

    To analyse the component of fuzzy output entropy, a decomposition method of fuzzy output entropy is first presented. After the decomposition of fuzzy output entropy, the total fuzzy output entropy can be expressed as the sum of the component fuzzy entropy contributed by fuzzy inputs. Based on the decomposition of fuzzy output entropy, a new global sensitivity analysis model is established for measuring the effects of uncertainties of fuzzy inputs on the output. The global sensitivity analysis model can not only tell the importance of fuzzy inputs but also simultaneously reflect the structural composition of the response function to a certain degree. Several examples illustrate the validity of the proposed global sensitivity analysis, which is a significant reference in engineering design and optimization of structural systems.

  20. Using quantum theory to simplify input-output processes

    NASA Astrophysics Data System (ADS)

    Thompson, Jayne; Garner, Andrew J. P.; Vedral, Vlatko; Gu, Mile

    2017-02-01

    All natural things process and transform information. They receive environmental information as input, and transform it into appropriate output responses. Much of science is dedicated to building models of such systems: algorithmic abstractions of their input-output behavior that allow us to simulate how such systems can behave in the future, conditioned on what has transpired in the past. Here, we show that classical models cannot avoid inefficiency: storing past information that is unnecessary for correct future simulation. We construct quantum models that mitigate this waste, whenever it is physically possible to do so. This suggests that the complexity of general input-output processes depends fundamentally on what sort of information theory we use to describe them.

  1. Design of vaccination and fumigation on Host-Vector Model by input-output linearization method

    NASA Astrophysics Data System (ADS)

    Nugraha, Edwin Setiawan; Naiborhu, Janson; Nuraini, Nuning

    2017-03-01

    Here, we analyze the Host-Vector Model and propose a design of vaccination and fumigation to control the infectious population by using feedback control, in particular the input-output linearization method. The host population is divided into three compartments: susceptible, infectious, and recovered. The vector population is divided into two compartments: susceptible and infectious. In this system, vaccination and fumigation are treated as the inputs and the infectious population as the output. The design objective is to stabilize the output so that it asymptotically tends to zero. We also present examples to illustrate the design of the model.
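
    For reference, the general single-input form of input-output linearization can be summarized as below; this is the textbook feedback law under the usual relative-degree assumption, not the specific control expressions derived for the Host-Vector Model.

```latex
% For a single-input control-affine system \dot{x} = f(x) + g(x)u with output
% y = h(x) and well-defined relative degree r, the input-output linearizing
% feedback is
\[
  u \;=\; \frac{1}{L_g L_f^{\,r-1} h(x)}\Bigl(-L_f^{\,r} h(x) + v\Bigr),
  \qquad y^{(r)} = v .
\]
% Choosing v = -\sum_{i=0}^{r-1} k_i\, y^{(i)} with Hurwitz coefficients drives
% the output (here, the infectious population) asymptotically to zero.
```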

  2. Sensitivity Analysis of the Integrated Medical Model for ISS Programs

    NASA Technical Reports Server (NTRS)

    Goodenow, D. A.; Myers, J. G.; Arellano, J.; Boley, L.; Garcia, Y.; Saile, L.; Walton, M.; Kerstman, E.; Reyes, D.; Young, M.

    2016-01-01

    Sensitivity analysis estimates the relative contribution of the uncertainty in input values to the uncertainty of model outputs. Partial Rank Correlation Coefficient (PRCC) and Standardized Rank Regression Coefficient (SRRC) are methods of conducting sensitivity analysis on nonlinear simulation models like the Integrated Medical Model (IMM). The PRCC method estimates the sensitivity using partial correlation of the ranks of the generated input values to each generated output value. The partial part is so named because adjustments are made for the linear effects of all the other input values in the calculation of correlation between a particular input and each output. In SRRC, standardized regression-based coefficients measure the sensitivity of each input, adjusted for all the other inputs, on each output. Because the relative ranking of each of the inputs and outputs is used, as opposed to the values themselves, both methods accommodate the nonlinear relationship of the underlying model. As part of the IMM v4.0 validation study, simulations are available that predict 33 person-missions on ISS and 111 person-missions on STS. These simulated data predictions feed the sensitivity analysis procedures. The inputs to the sensitivity procedures include the number of occurrences of each of the one hundred IMM medical conditions generated over the simulations and the associated IMM outputs: total quality time lost (QTL), number of evacuations (EVAC), and number of loss of crew lives (LOCL). The IMM team will report the results of using PRCC and SRRC on IMM v4.0 predictions of the ISS and STS missions created as part of the external validation study. Tornado plots will assist in the visualization of the condition-related input sensitivities to each of the main outcomes. The outcomes of this sensitivity analysis will drive review focus by identifying conditions where changes in uncertainty could drive changes in overall model output uncertainty. These efforts are an integral part of the overall verification, validation, and credibility review of IMM v4.0.
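
    A minimal sketch of the PRCC computation described above is given below (SRRC is not shown). It rank-transforms the inputs and output, removes the linear effect of the other inputs from both, and correlates the residuals; the input matrix and outcome are synthetic stand-ins, not IMM data.

```python
import numpy as np
from scipy.stats import rankdata, pearsonr

def prcc(X, y):
    """Partial rank correlation coefficient of each column of X with y."""
    Xr = np.column_stack([rankdata(X[:, j]) for j in range(X.shape[1])])
    yr = rankdata(y)
    coeffs = []
    for j in range(Xr.shape[1]):
        others = np.delete(Xr, j, axis=1)
        Z = np.column_stack([np.ones(len(yr)), others])
        # residuals of x_j and y after removing linear effects of the other inputs
        rx = Xr[:, j] - Z @ np.linalg.lstsq(Z, Xr[:, j], rcond=None)[0]
        ry = yr       - Z @ np.linalg.lstsq(Z, yr,       rcond=None)[0]
        coeffs.append(pearsonr(rx, ry)[0])
    return np.array(coeffs)

# Synthetic stand-in for condition counts (inputs) and an outcome such as QTL.
rng = np.random.default_rng(0)
X = rng.poisson(lam=[3, 5, 2, 8], size=(500, 4)).astype(float)
y = 2.0 * X[:, 0] + 0.5 * X[:, 2] ** 2 + rng.normal(0, 1, 500)

print("PRCC per input:", np.round(prcc(X, y), 3))
```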

  3. Predicting the synaptic information efficacy in cortical layer 5 pyramidal neurons using a minimal integrate-and-fire model.

    PubMed

    London, Michael; Larkum, Matthew E; Häusser, Michael

    2008-11-01

    Synaptic information efficacy (SIE) is a statistical measure to quantify the efficacy of a synapse. It measures how much information is gained, on the average, about the output spike train of a postsynaptic neuron if the input spike train is known. It is a particularly appropriate measure for assessing the input-output relationship of neurons receiving dynamic stimuli. Here, we compare the SIE of simulated synaptic inputs measured experimentally in layer 5 cortical pyramidal neurons in vitro with the SIE computed from a minimal model constructed to fit the recorded data. We show that even with a simple model that is far from perfect in predicting the precise timing of the output spikes of the real neuron, the SIE can still be accurately predicted. This arises from the ability of the model to predict output spikes influenced by the input more accurately than those driven by the background current. This indicates that in this context, some spikes may be more important than others. Lastly we demonstrate another aspect where using mutual information could be beneficial in evaluating the quality of a model, by measuring the mutual information between the model's output and the neuron's output. The SIE, thus, could be a useful tool for assessing the quality of models of single neurons in preserving input-output relationship, a property that becomes crucial when we start connecting these reduced models to construct complex realistic neuronal networks.

  4. A one-model approach based on relaxed combinations of inputs for evaluating input congestion in DEA

    NASA Astrophysics Data System (ADS)

    Khodabakhshi, Mohammad

    2009-08-01

    This paper provides a one-model approach to input congestion based on the input relaxation model developed in data envelopment analysis (e.g. [G.R. Jahanshahloo, M. Khodabakhshi, Suitable combination of inputs for improving outputs in DEA with determining input congestion -- Considering textile industry of China, Applied Mathematics and Computation (1) (2004) 263-273; G.R. Jahanshahloo, M. Khodabakhshi, Determining assurance interval for non-Archimedean element in the improving outputs model in DEA, Applied Mathematics and Computation 151 (2) (2004) 501-506; M. Khodabakhshi, A super-efficiency model based on improved outputs in data envelopment analysis, Applied Mathematics and Computation 184 (2) (2007) 695-703; M. Khodabakhshi, M. Asgharian, An input relaxation measure of efficiency in stochastic data envelopment analysis, Applied Mathematical Modelling 33 (2009) 2010-2023]). This approach reduces the three problems solved in the two-model approach, introduced in the first of the above-mentioned references, to two problems, which is important from a computational point of view. The model is applied to a set of data extracted from the ISI database to estimate the input congestion of 12 Canadian business schools.

  5. Wrapping Python around MODFLOW/MT3DMS based groundwater models

    NASA Astrophysics Data System (ADS)

    Post, V.

    2008-12-01

    Numerical models that simulate groundwater flow and solute transport require a great amount of input data that is often organized into different files. A large proportion of the input data consists of spatially-distributed model parameters. The model output consists of a variety of data such as heads, fluxes and concentrations. Typically all files have different formats. Consequently, preparing input and managing output is a complex and error-prone task. Proprietary software tools are available that facilitate the preparation of input files and analysis of model outcomes. The use of such software may be limited if it does not support all the features of the groundwater model or when the costs of such tools are prohibitive. Therefore a Python library was developed that contains routines to generate input files and process output files of MODFLOW/MT3DMS based models. The library is freely available and has an open structure so that the routines can be customized and linked into other scripts and libraries. The current set of functions supports the generation of input files for MODFLOW and MT3DMS, including the capability to read spatially-distributed input parameters (e.g. hydraulic conductivity) from PNG files. Both ASCII and binary output files can be read efficiently allowing for visualization of, for example, solute concentration patterns in contour plots with superimposed flow vectors using matplotlib. Series of contour plots are then easily saved as an animation. The subroutines can also be used within scripts to calculate derived quantities such as the mass of a solute within a particular region of the model domain. Using Python as a wrapper around groundwater models provides an efficient and flexible way of processing input and output data, which is not constrained by limitations of third-party products.
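
    The post-processing workflow described above (read a gridded output field, contour it, and overlay flow vectors) can be sketched as follows. The head field here is synthetic so the example runs on its own; it does not use the library's actual reading routines, and the Darcy-style vectors are only a gradient-based approximation.

```python
import numpy as np
import matplotlib.pyplot as plt

# Generic post-processing sketch: contour a gridded head field and overlay
# approximate flow vectors. The head field is synthetic; real MODFLOW binary
# output would be read with a dedicated routine instead.
ny, nx = 50, 80
yy, xx = np.mgrid[0:ny, 0:nx]
heads = 10.0 - 0.05 * xx + 0.5 * np.exp(-((xx - 40) ** 2 + (yy - 25) ** 2) / 200.0)

# Darcy-type flow direction ~ -grad(h); conductivity scaling omitted.
gy, gx = np.gradient(heads)
step = 5                                  # thin the vector field for readability
plt.contourf(heads, levels=20, cmap="viridis")
plt.colorbar(label="head (m)")
plt.quiver(xx[::step, ::step], yy[::step, ::step],
           -gx[::step, ::step], -gy[::step, ::step], color="white")
plt.title("Simulated heads with approximate flow vectors")
plt.savefig("heads_with_vectors.png", dpi=150)
```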

  6. Gaussian functional regression for output prediction: Model assimilation and experimental design

    NASA Astrophysics Data System (ADS)

    Nguyen, N. C.; Peraire, J.

    2016-03-01

    In this paper, we introduce a Gaussian functional regression (GFR) technique that integrates multi-fidelity models with model reduction to efficiently predict the input-output relationship of a high-fidelity model. The GFR method combines the high-fidelity model with a low-fidelity model to provide an estimate of the output of the high-fidelity model in the form of a posterior distribution that can characterize uncertainty in the prediction. A reduced basis approximation is constructed upon the low-fidelity model and incorporated into the GFR method to yield an inexpensive posterior distribution of the output estimate. As this posterior distribution depends crucially on a set of training inputs at which the high-fidelity models are simulated, we develop a greedy sampling algorithm to select the training inputs. Our approach results in an output prediction model that inherits the fidelity of the high-fidelity model and has the computational complexity of the reduced basis approximation. Numerical results are presented to demonstrate the proposed approach.

  7. Including long-range dependence in integrate-and-fire models of the high interspike-interval variability of cortical neurons.

    PubMed

    Jackson, B Scott

    2004-10-01

    Many different types of integrate-and-fire models have been designed in order to explain how it is possible for a cortical neuron to integrate over many independent inputs while still producing highly variable spike trains. Within this context, the variability of spike trains has been almost exclusively measured using the coefficient of variation of interspike intervals. However, another important statistical property that has been found in cortical spike trains and is closely associated with their high firing variability is long-range dependence. We investigate the conditions, if any, under which such models produce output spike trains with both interspike-interval variability and long-range dependence similar to those that have previously been measured from actual cortical neurons. We first show analytically that a large class of high-variability integrate-and-fire models is incapable of producing such outputs based on the fact that their output spike trains are always mathematically equivalent to renewal processes. This class of models subsumes a majority of previously published models, including those that use excitation-inhibition balance, correlated inputs, partial reset, or nonlinear leakage to produce outputs with high variability. Next, we study integrate-and-fire models that have (nonPoissonian) renewal point process inputs instead of the Poisson point process inputs used in the preceding class of models. The confluence of our analytical and simulation results implies that the renewal-input model is capable of producing high variability and long-range dependence comparable to that seen in spike trains recorded from cortical neurons, but only if the interspike intervals of the inputs have infinite variance, a physiologically unrealistic condition. Finally, we suggest a new integrate-and-fire model that does not suffer any of the previously mentioned shortcomings. By analyzing simulation results for this model, we show that it is capable of producing output spike trains with interspike-interval variability and long-range dependence that match empirical data from cortical spike trains. This model is similar to the other models in this study, except that its inputs are fractional-gaussian-noise-driven Poisson processes rather than renewal point processes. In addition to this model's success in producing realistic output spike trains, its inputs have long-range dependence similar to that found in most subcortical neurons in sensory pathways, including the inputs to cortex. Analysis of output spike trains from simulations of this model also shows that a tight balance between the amounts of excitation and inhibition at the inputs to cortical neurons is not necessary for high interspike-interval variability at their outputs. Furthermore, in our analysis of this model, we show that the superposition of many fractional-gaussian-noise-driven Poisson processes does not approximate a Poisson process, which challenges the common assumption that the total effect of a large number of inputs on a neuron is well represented by a Poisson process.
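
    As a point of reference for the integrate-and-fire framework discussed above, the sketch below simulates a minimal leaky integrate-and-fire neuron driven by Poisson excitatory and inhibitory inputs and reports the interspike-interval coefficient of variation. All parameters are invented, and the fractional-Gaussian-noise-driven inputs central to the article's final model are not reproduced.

```python
import numpy as np

# Minimal leaky integrate-and-fire neuron with Poisson excitatory and
# inhibitory input; reports the interspike-interval coefficient of variation.
rng = np.random.default_rng(1)
dt, T = 1e-4, 50.0                    # time step (s) and duration (s)
tau, v_th, v_reset = 20e-3, 1.0, 0.0  # membrane time constant, threshold, reset
n_exc = n_inh = 300                   # number of input fibers
rate = 10.0                           # spikes/s per fiber
w_exc, w_inh = 0.03, -0.02            # membrane jump per input spike

v, t, spike_times = 0.0, 0.0, []
for _ in range(int(T / dt)):
    t += dt
    k_exc = rng.poisson(n_exc * rate * dt)   # excitatory input spikes this bin
    k_inh = rng.poisson(n_inh * rate * dt)   # inhibitory input spikes this bin
    v += -v * dt / tau + w_exc * k_exc + w_inh * k_inh
    if v >= v_th:
        spike_times.append(t)
        v = v_reset

isi = np.diff(spike_times)
print("output spikes:", len(spike_times))
if isi.size > 1:
    print("ISI coefficient of variation:", round(float(isi.std() / isi.mean()), 3))
```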

  8. Input-output model for MACCS nuclear accident impacts estimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Outkin, Alexander V.; Bixler, Nathan E.; Vargas, Vanessa N

    Since the original economic model for MACCS was developed, better quality economic data (as well as the tools to gather and process it) and better computational capabilities have become available. The update of the economic impacts component of the MACCS legacy model will provide improved estimates of business disruptions through the use of Input-Output based economic impact estimation. This paper presents an updated MACCS model, based on Input-Output methodology, in which economic impacts are calculated using the Regional Economic Accounting analysis tool (REAcct) created at Sandia National Laboratories. This new GDP-based model allows quick and consistent estimation of gross domestic product (GDP) losses due to nuclear power plant accidents. This paper outlines the steps taken to combine the REAcct Input-Output-based model with the MACCS code, describes the GDP loss calculation, and discusses the parameters and modeling assumptions necessary for the estimation of long-term effects of nuclear power plant accidents.
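
    The GDP-based loss logic described above amounts, in its simplest form, to scaling each sector's regional GDP contribution by an outage duration and a disruption fraction. The arithmetic sketch below is only illustrative; it is not the REAcct tool, and the sector figures are invented.

```python
# Back-of-the-envelope illustration of a GDP-based regional loss estimate of
# the kind the abstract describes (not the actual REAcct tool or its data).
# Each entry: (annual regional GDP contribution in $M, fraction of activity
# disrupted, outage duration in days).
sectors = {
    "agriculture":   (1200.0, 0.90, 60),
    "manufacturing": (5400.0, 0.40, 30),
    "services":      (9800.0, 0.15, 30),
}

total_loss = 0.0
for name, (gdp_annual, disrupted_frac, days) in sectors.items():
    loss = gdp_annual / 365.0 * days * disrupted_frac
    total_loss += loss
    print(f"{name:14s} estimated GDP loss: ${loss:8.1f}M")

print(f"{'total':14s} estimated GDP loss: ${total_loss:8.1f}M")
```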

  9. Surrogate modelling for the prediction of spatial fields based on simultaneous dimensionality reduction of high-dimensional input/output spaces.

    PubMed

    Crevillén-García, D

    2018-04-01

    Time-consuming numerical simulators for solving groundwater flow and dissolution models of physico-chemical processes in deep aquifers normally require some of the model inputs to be defined in high-dimensional spaces in order to return realistic results. Sometimes, the outputs of interest are spatial fields leading to high-dimensional output spaces. Although Gaussian process emulation has been satisfactorily used for computing faithful and inexpensive approximations of complex simulators, these have been mostly applied to problems defined in low-dimensional input spaces. In this paper, we propose a method for simultaneously reducing the dimensionality of very high-dimensional input and output spaces in Gaussian process emulators for stochastic partial differential equation models while retaining the qualitative features of the original models. This allows us to build a surrogate model for the prediction of spatial fields in such time-consuming simulators. We apply the methodology to a model of convection and dissolution processes occurring during carbon capture and storage.
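
    A common way to realize the idea described above is to project the high-dimensional output field onto a few principal components and emulate each component score with a Gaussian process. The sketch below does exactly that for a synthetic stand-in simulator; it reduces only the output space (the paper also reduces the input space), and all names and values are invented.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Emulate a spatial-field output by projecting it onto a few principal
# components and fitting an independent Gaussian process to each score.
# The "simulator" is a cheap synthetic stand-in for an expensive model.
rng = np.random.default_rng(2)

def simulator(theta, n_cells=400):
    grid = np.linspace(0.0, 1.0, n_cells)
    return np.sin(2 * np.pi * grid * theta[0]) * np.exp(-theta[1] * grid)

X_train = rng.uniform([0.5, 0.1], [3.0, 2.0], size=(40, 2))  # training designs
Y_train = np.array([simulator(t) for t in X_train])          # output fields

pca = PCA(n_components=5).fit(Y_train)
scores = pca.transform(Y_train)
gps = [GaussianProcessRegressor(kernel=RBF(length_scale=1.0), normalize_y=True)
       .fit(X_train, scores[:, k]) for k in range(scores.shape[1])]

x_new = np.array([[1.7, 0.8]])
pred_scores = np.array([[gp.predict(x_new)[0] for gp in gps]])  # shape (1, 5)
field_pred = pca.inverse_transform(pred_scores)[0]

err = np.max(np.abs(field_pred - simulator(x_new[0])))
print("max absolute emulation error at the test input:", round(float(err), 4))
```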

  10. Dynamics of nonlinear feedback control.

    PubMed

    Snippe, H P; van Hateren, J H

    2007-05-01

    Feedback control in neural systems is ubiquitous. Here we study the mathematics of nonlinear feedback control. We compare models in which the input is multiplied by a dynamic gain (multiplicative control) with models in which the input is divided by a dynamic attenuation (divisive control). The gain signal (resp. the attenuation signal) is obtained through a concatenation of an instantaneous nonlinearity and a linear low-pass filter operating on the output of the feedback loop. For input steps, the dynamics of gain and attenuation can be very different, depending on the mathematical form of the nonlinearity and the ordering of the nonlinearity and the filtering in the feedback loop. Further, the dynamics of feedback control can be strongly asymmetrical for increment versus decrement steps of the input. Nevertheless, for each of the models studied, the nonlinearity in the feedback loop can be chosen such that immediately after an input step, the dynamics of feedback control is symmetric with respect to increments versus decrements. Finally, we study the dynamics of the output of the control loops and find conditions under which overshoots and undershoots of the output relative to the steady-state output occur when the models are stimulated with low-pass filtered steps. For small steps at the input, overshoots and undershoots of the output do not occur when the filtering in the control path is faster than the low-pass filtering at the input. For large steps at the input, however, results depend on the model, and for some of the models, multiple overshoots and undershoots can occur even with a fast control path.

  11. Boolean Modeling of Neural Systems with Point-Process Inputs and Outputs. Part I: Theory and Simulations

    PubMed Central

    Marmarelis, Vasilis Z.; Zanos, Theodoros P.; Berger, Theodore W.

    2010-01-01

    This paper presents a new modeling approach for neural systems with point-process (spike) inputs and outputs that utilizes Boolean operators (i.e. modulo 2 multiplication and addition that correspond to the logical AND and OR operations respectively, as well as the AND_NOT logical operation representing inhibitory effects). The form of the employed mathematical models is akin to a “Boolean-Volterra” model that contains the product terms of all relevant input lags in a hierarchical order, where terms of order higher than first represent nonlinear interactions among the various lagged values of each input point-process or among lagged values of various inputs (if multiple inputs exist) as they reflect on the output. The coefficients of this Boolean-Volterra model are also binary variables that indicate the presence or absence of the respective term in each specific model/system. Simulations are used to explore the properties of such models and the feasibility of their accurate estimation from short data-records in the presence of noise (i.e. spurious spikes). The results demonstrate the feasibility of obtaining reliable estimates of such models, with excitatory and inhibitory terms, in the presence of considerable noise (spurious spikes) in the outputs and/or the inputs in a computationally efficient manner. A pilot application of this approach to an actual neural system is presented in the companion paper (Part II). PMID:19517238

  12. The SLH framework for modeling quantum input-output networks

    DOE PAGES

    Combes, Joshua; Kerckhoff, Joseph; Sarovar, Mohan

    2017-09-04

    Here, many emerging quantum technologies demand precise engineering and control over networks consisting of quantum mechanical degrees of freedom connected by propagating electromagnetic fields, or quantum input-output networks. Here we review recent progress in theory and experiment related to such quantum input-output networks, with a focus on the SLH framework, a powerful modeling framework for networked quantum systems that is naturally endowed with properties such as modularity and hierarchy. We begin by explaining the physical approximations required to represent any individual node of a network, e.g. atoms in a cavity or a mechanical oscillator, and its coupling to quantum fields by an operator triple (S, L, H). Then we explain how these nodes can be composed into a network with arbitrary connectivity, including coherent feedback channels, using algebraic rules, and how to derive the dynamics of network components and output fields. The second part of the review discusses several extensions to the basic SLH framework that expand its modeling capabilities, and the prospects for modeling integrated implementations of quantum input-output networks. In addition to summarizing major results and recent literature, we discuss the potential applications and limitations of the SLH framework and quantum input-output networks, with the intention of providing context to a reader unfamiliar with the field.
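
    For orientation, the basic composition rule of the SLH framework for cascading two nodes (the series product reviewed in this literature) takes the following form.

```latex
% Series (cascade) connection of two SLH nodes G_i = (S_i, L_i, H_i), feeding
% the output field of G_1 into the input of G_2:
\[
  G_2 \triangleleft G_1
  = \Bigl( S_2 S_1,\; L_2 + S_2 L_1,\;
           H_1 + H_2 + \operatorname{Im}\bigl\{ L_2^{\dagger} S_2 L_1 \bigr\} \Bigr),
\]
% with \operatorname{Im}\{X\} \equiv (X - X^{\dagger})/(2i).
```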

  13. The SLH framework for modeling quantum input-output networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Combes, Joshua; Kerckhoff, Joseph; Sarovar, Mohan

    Here, many emerging quantum technologies demand precise engineering and control over networks consisting of quantum mechanical degrees of freedom connected by propagating electromagnetic fields, or quantum input-output networks. Here we review recent progress in theory and experiment related to such quantum input-output networks, with a focus on the SLH framework, a powerful modeling framework for networked quantum systems that is naturally endowed with properties such as modularity and hierarchy. We begin by explaining the physical approximations required to represent any individual node of a network, e.g. atoms in a cavity or a mechanical oscillator, and its coupling to quantum fields by an operator triple (S, L, H). Then we explain how these nodes can be composed into a network with arbitrary connectivity, including coherent feedback channels, using algebraic rules, and how to derive the dynamics of network components and output fields. The second part of the review discusses several extensions to the basic SLH framework that expand its modeling capabilities, and the prospects for modeling integrated implementations of quantum input-output networks. In addition to summarizing major results and recent literature, we discuss the potential applications and limitations of the SLH framework and quantum input-output networks, with the intention of providing context to a reader unfamiliar with the field.

  14. Modeling of Aircraft Deicing Fluid Induced Biochemical Oxygen Demand in Subsurface-Flow Constructed Treatment Wetlands

    DTIC Science & Technology

    2009-03-01

    meters. The input and output control structures are modeled as sharp-crested, rectangular weirs one meter in width. The elevation of the input weir is... manipulated by adjusting the width of both the input and output weirs and the crest height of the output weir. All of these adjustments were found to be... reduction of the weir crest height had an effect on the amount of storm water retained during low precipitation conditions, but not on the crest

  15. Self-tuning multivariable pole placement control of a multizone crystal growth furnace

    NASA Technical Reports Server (NTRS)

    Batur, C.; Sharpless, R. B.; Duval, W. M. B.; Rosenthal, B. N.

    1992-01-01

    This paper presents the design and implementation of a multivariable self-tuning temperature controller for the control of lead bromide crystal growth. The crystal grows inside a multizone transparent furnace. There are eight interacting heating zones shaping the axial temperature distribution inside the furnace. A multi-input, multi-output furnace model is identified on-line by a recursive least squares estimation algorithm. A multivariable pole placement controller based on this model is derived and implemented. Comparison between single-input, single-output and multi-input, multi-output self-tuning controllers demonstrates that the zone-to-zone interactions can be minimized better by a multi-input, multi-output controller design. This directly affects the quality of crystal grown.
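
    The on-line identification step mentioned above relies on recursive least squares. The sketch below shows a generic RLS update on a toy first-order ARX plant; it is not the eight-zone furnace model, and the plant coefficients and forgetting factor are invented.

```python
import numpy as np

# Generic recursive least squares (RLS) identification of the kind used for
# on-line model estimation; the first-order ARX "plant" is a toy stand-in.
rng = np.random.default_rng(3)
a_true, b_true = 0.9, 0.5                  # y[k] = a*y[k-1] + b*u[k-1] + noise

theta = np.zeros(2)                        # parameter estimates [a_hat, b_hat]
P = np.eye(2) * 1000.0                     # covariance of the estimates
lam = 0.99                                 # forgetting factor

y_prev, u_prev = 0.0, 0.0
for k in range(500):
    u = rng.normal()                       # persistently exciting input
    y = a_true * y_prev + b_true * u_prev + 0.01 * rng.normal()

    phi = np.array([y_prev, u_prev])       # regressor
    gain = P @ phi / (lam + phi @ P @ phi)
    theta = theta + gain * (y - phi @ theta)
    P = (P - np.outer(gain, phi) @ P) / lam

    y_prev, u_prev = y, u

print("estimated [a, b]:", np.round(theta, 3))
```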

  16. Explicit least squares system parameter identification for exact differential input/output models

    NASA Technical Reports Server (NTRS)

    Pearson, A. E.

    1993-01-01

    The equation error for a class of systems modeled by input/output differential operator equations has the potential to be integrated exactly, given the input/output data on a finite time interval, thereby opening up the possibility of using an explicit least squares estimation technique for system parameter identification. The paper delineates the class of models for which this is possible and shows how the explicit least squares cost function can be obtained in a way that obviates dealing with unknown initial and boundary conditions. The approach is illustrated by two examples: a second order chemical kinetics model and a third order system of Lorenz equations.

  17. L1 Adaptive Control Augmentation System with Application to the X-29 Lateral/Directional Dynamics: A Multi-Input Multi-Output Approach

    NASA Technical Reports Server (NTRS)

    Griffin, Brian Joseph; Burken, John J.; Xargay, Enric

    2010-01-01

    This paper presents an L1 adaptive control augmentation system design for multi-input multi-output nonlinear systems in the presence of unmatched uncertainties which may exhibit significant cross-coupling effects. A piecewise continuous adaptive law is adopted and extended for applicability to multi-input multi-output systems that explicitly compensates for dynamic cross-coupling. In addition, explicit use of high-fidelity actuator models is added to the L1 architecture to reduce uncertainties in the system. The L1 multi-input multi-output adaptive control architecture is applied to the X-29 lateral/directional dynamics and results are evaluated against a similar single-input single-output design approach.

  18. Alpha1 LASSO data bundles Lamont, OK

    DOE Data Explorer

    Gustafson, William Jr; Vogelmann, Andrew; Endo, Satoshi; Toto, Tami; Xiao, Heng; Li, Zhijin; Cheng, Xiaoping; Krishna, Bhargavi (ORCID: 0000-0001-8828-528X)

    2016-08-03

    A data bundle is a unified package consisting of LASSO LES input and output, observations, evaluation diagnostics, and model skill scores. LES input includes model configuration information and forcing data. LES output includes profile statistics and full domain fields of cloud and environmental variables. Model evaluation data consists of LES output and ARM observations co-registered on the same grid and sampling frequency. Model performance is quantified by skill scores and diagnostics in terms of cloud and environmental variables.

  19. Life and reliability models for helicopter transmissions

    NASA Technical Reports Server (NTRS)

    Savage, M.; Knorr, R. J.; Coy, J. J.

    1982-01-01

    Computer models of life and reliability are presented for planetary gear trains with a fixed ring gear, input applied to the sun gear, and output taken from the planet arm. For this transmission the input and output shafts are coaxial, and the input and output torques are assumed to be coaxial with these shafts. Thrust and side loading are neglected. The reliability model is based on the Weibull distributions of the individual reliabilities of the transmission components. The system model is also a Weibull distribution. The load versus life model for the system is a power relationship, as are the models for the individual components. The load-life exponent and basic dynamic capacity are developed as functions of the component capacities. The models are used to compare three- and four-planet, 150 kW (200 hp), 5:1 reduction transmissions with 1500 rpm input speed to illustrate their use.
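
    The reliability composition described above multiplies component Weibull reliabilities and then reads off the system life at a target reliability. The sketch below illustrates this with invented Weibull slopes and characteristic lives; the numbers are not from the reported transmissions.

```python
import numpy as np

# Illustration of Weibull system-reliability composition: component
# reliabilities multiply, and the life at a target system reliability is
# found from the combined distribution. Parameters are invented.
components = {            # name: (Weibull slope beta, characteristic life in hours)
    "sun_gear":    (2.5, 8.0e3),
    "planet_1":    (2.5, 6.0e3),
    "planet_2":    (2.5, 6.0e3),
    "planet_3":    (2.5, 6.0e3),
    "ring_gear":   (2.5, 1.2e4),
    "bearing_set": (1.5, 5.0e3),
}

def system_reliability(t):
    r = 1.0
    for beta, eta in components.values():
        r *= np.exp(-((t / eta) ** beta))
    return r

# Bisection for the system life at 90% reliability (the "L10" life).
lo, hi = 1.0, 1.0e4
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if system_reliability(mid) > 0.90:
        lo = mid
    else:
        hi = mid

print("system L10 life (hours):", round(lo, 1))
print("reliability at 1000 h:", round(float(system_reliability(1000.0)), 4))
```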

  20. Possible effects of depolarizing GABAA conductance on the neuronal input-output relationship: a modeling study.

    PubMed

    Morita, Kenji; Tsumoto, Kunichika; Aihara, Kazuyuki

    2005-06-01

    Recent in vitro experiments revealed that the GABAA reversal potential is about 10 mV higher than the resting potential in mature mammalian neocortical pyramidal cells; thus GABAergic inputs could have facilitatory, rather than inhibitory, effects on action potential generation under certain conditions. However, how the relationship between excitatory input conductances and the output firing rate is modulated by such depolarizing GABAergic inputs under in vivo circumstances has not yet been understood. We examine herewith the input-output relationship in a simple conductance-based model of cortical neurons with the depolarized GABAA reversal potential, and show that a tonic depolarizing GABAergic conductance up to a certain amount does not change the relationship between a tonic glutamatergic driving conductance and the output firing rate, whereas a higher GABAergic conductance prevents spike generation. When the tonic glutamatergic and GABAergic conductances are replaced by in vivo-like highly fluctuating inputs, on the other hand, the effect of depolarizing GABAergic inputs on the input-output relationship critically depends on the degree of coincidence between glutamatergic input events and GABAergic ones. Although a wide range of depolarizing GABAergic inputs hardly changes the firing rate of a neuron driven by noncoincident glutamatergic inputs, a certain range of these inputs considerably decreases the firing rate if a large number of driving glutamatergic inputs are coincident with them. These results raise the possibility that the depolarized GABAA reversal potential is not a paradoxical mystery, but is instead a sophisticated device for discriminative firing rate modulation.

  1. User assessment of smoke-dispersion models for wildland biomass burning.

    Treesearch

    Steve Breyfogle; Sue A. Ferguson

    1996-01-01

    Several smoke-dispersion models, which currently are available for modeling smoke from biomass burns, were evaluated for ease of use, availability of input data, and output data format. The input and output components of all models are listed, and differences in model physics are discussed. Each model was installed and run on a personal computer with a simple-case...

  2. Team performance in the Italian NHS: the role of reflexivity.

    PubMed

    Urbini, Flavio; Callea, Antonino; Chirumbolo, Antonio; Talamo, Alessandra; Ingusci, Emanuela; Ciavolino, Enrico

    2018-04-09

    Purpose: The purpose of this paper is twofold: first, to investigate the goodness of the input-process-output (IPO) model in order to evaluate work team performance within the Italian National Health Care System (NHS); and second, to test the mediating role of reflexivity as an overarching process factor between input and output. Design/methodology/approach: The Italian version of the Aston Team Performance Inventory was administered to 351 employees working in teams in the Italian NHS. Mediation analyses with latent variables were performed via structural equation modeling (SEM); the significance of the total, direct, and indirect effects was tested via bootstrapping. Findings: Underpinned by the IPO framework, the results of the SEM supported the mediational hypotheses. First, the application of the IPO model in the Italian NHS showed adequate fit indices, indicating that the process mediates the relationship between input and output factors. Second, reflexivity mediated the relationship between input and output, influencing some aspects of team performance. Practical implications: The results provide useful information for HRM policies aimed at improving the process dimensions of the IPO model, with reflexivity playing a key mediating role in team performance. Originality/value: This study is one of a limited number of studies that have applied the IPO model in the Italian NHS. Moreover, no study has yet examined the role of reflexivity as a mediator between input and output factors in the IPO model.
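
    For readers unfamiliar with bootstrapped mediation, the sketch below illustrates the idea on synthetic observed variables (the names team_input, reflexivity, and performance are hypothetical): the indirect effect a*b is estimated from two regressions and its confidence interval from a percentile bootstrap. This is a simplified observed-variable analogue, not the latent-variable SEM used in the study.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 351  # same sample size as the study; the data here are synthetic

# Synthetic "input -> reflexivity (process) -> team performance (output)" data
team_input  = rng.normal(size=n)
reflexivity = 0.5 * team_input + rng.normal(scale=0.8, size=n)
performance = 0.4 * reflexivity + 0.2 * team_input + rng.normal(scale=0.8, size=n)

def indirect_effect(x, m, y):
    """a*b indirect effect from two OLS fits: m ~ x and y ~ x + m."""
    a = np.polyfit(x, m, 1)[0]                       # slope of m on x
    X = np.column_stack([np.ones_like(x), x, m])
    b = np.linalg.lstsq(X, y, rcond=None)[0][2]      # coefficient of m in y ~ x + m
    return a * b

# Percentile bootstrap of the indirect effect
boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)
    boot.append(indirect_effect(team_input[idx], reflexivity[idx], performance[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect = {indirect_effect(team_input, reflexivity, performance):.3f}, "
      f"95% bootstrap CI = [{lo:.3f}, {hi:.3f}]")
```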

  3. Model-Free Primitive-Based Iterative Learning Control Approach to Trajectory Tracking of MIMO Systems With Experimental Validation.

    PubMed

    Radac, Mircea-Bogdan; Precup, Radu-Emil; Petriu, Emil M

    2015-11-01

    This paper proposes a novel model-free trajectory tracking approach for multiple-input multiple-output (MIMO) systems based on the combination of iterative learning control (ILC) and primitives. The optimal trajectory tracking solution is obtained in terms of previously learned solutions to simple tasks called primitives. The library of primitives that are stored in memory consists of pairs of reference input/controlled output signals. The reference input primitives are optimized in a model-free ILC framework without using knowledge of the controlled process. The guaranteed convergence of the learning scheme is built upon a model-free virtual reference feedback tuning design of the feedback decoupling controller. Each new complex trajectory to be tracked is decomposed into the output primitives regarded as basis functions. The optimal reference input for the control system to track the desired trajectory is next recomposed from the reference input primitives. This is advantageous because the optimal reference input is computed straightforwardly, without the need to learn from repeated executions of the tracking task. In addition, the optimization problem specific to trajectory tracking of square MIMO systems is decomposed into a set of optimization problems assigned to each separate single-input single-output control channel, which ensures a convenient model-free decoupling. The new model-free primitive-based ILC approach is capable of planning, reasoning, and learning. A case study dealing with model-free control tuning for a nonlinear aerodynamic system is included to validate the new approach. The experimental results are given.

  4. Theory of optimal information transmission in E. coli chemotaxis pathway

    NASA Astrophysics Data System (ADS)

    Micali, Gabriele; Endres, Robert G.

    Bacteria live in complex microenvironments where they need to make critical decisions fast and reliably. These decisions are inherently affected by noise at all levels of the signaling pathway, and cells are often modeled as an input-output device that transmits extracellular stimuli (input) to internal proteins (channel), which determine the final behavior (output). Increasing the amount of transmitted information between input and output allows cells to better infer extracellular stimuli and respond accordingly. However, in contrast to electronic devices, the separation into input, channel, and output is not always clear in biological systems. Output might feed back into the input, and the channel, made by proteins, normally interacts with the input. Furthermore, a biological channel is affected by mutations and can change under evolutionary pressure. Here, we present a novel approach to maximize information transmission: given cell-external and internal noise, we analytically identify both input distributions and input-output relations that optimally transmit information. Using E. coli chemotaxis as an example, we conclude that its pathway is compatible with an optimal information transmission device despite the ultrasensitive rotary motors.

  5. Classification

    NASA Technical Reports Server (NTRS)

    Oza, Nikunj C.

    2011-01-01

    A supervised learning task involves constructing a mapping from input data (normally described by several features) to the appropriate outputs. Within supervised learning, one type of task is a classification learning task, in which each output is one or more classes to which the input belongs. In supervised learning, a set of training examples---examples with known output values---is used by a learning algorithm to generate a model. This model is intended to approximate the mapping between the inputs and outputs. This model can be used to generate predicted outputs for inputs that have not been seen before. For example, we may have data consisting of observations of sunspots. In a classification learning task, our goal may be to learn to classify sunspots into one of several types. Each example may correspond to one candidate sunspot with various measurements or just an image. A learning algorithm would use the supplied examples to generate a model that approximates the mapping between each supplied set of measurements and the type of sunspot. This model can then be used to classify previously unseen sunspots based on the candidate's measurements. This chapter discusses methods to perform machine learning, with examples involving astronomy.
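
    A minimal, hedged example of such a classification learning task, using scikit-learn on synthetic data in place of real sunspot measurements: a model is fitted to training examples with known classes and then predicts classes for previously unseen inputs.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic stand-in for "sunspot measurements -> sunspot type" data
X, y = make_classification(n_samples=500, n_features=8, n_informative=5,
                           n_classes=3, random_state=0)

# Training examples (inputs with known outputs) are used to fit the model ...
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# ... which then predicts classes for inputs it has not seen before.
print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
```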

  6. Peak expiratory flow profiles delivered by pump systems. Limitations due to wave action.

    PubMed

    Miller, M R; Jones, B; Xu, Y; Pedersen, O F; Quanjer, P H

    2000-06-01

    Pump systems are currently used to test the performance of both spirometers and peak expiratory flow (PEF) meters, but for certain flow profiles the input signal (i.e., requested profile) and the output profile can differ. We developed a mathematical model of wave action within a pump and compared the recorded flow profiles with both the input profiles and the output predicted by the model. Three American Thoracic Society (ATS) flow profiles and four artificial flow-versus-time profiles were delivered by a pump, first to a pneumotachograph (PT) on its own, then to the PT with a 32-cm upstream extension tube (which would favor wave action), and lastly with the PT in series with and immediately downstream to a mini-Wright peak flow meter. With the PT on its own, recorded flow for the seven profiles was 2.4 +/- 1.9% (mean +/- SD) higher than the pump's input flow, and similarly was 2.3 +/- 2.3% higher than the pump's output flow as predicted by the model. With the extension tube in place, the recorded flow was 6.6 +/- 6.4% higher than the input flow (range: 0.1 to 18.4%), but was only 1.2 +/- 2.5% higher than the output flow predicted by the model (range: -0.8 to 5.2%). With the mini-Wright meter in series, the flow recorded by the PT was on average 6.1 +/- 9.1% below the input flow (range: -23.8 to 2.5%), but was only 0.6 +/- 3.3% above the pump's output flow predicted by the model (range: -5.5 to 3.9%). The mini-Wright meter's reading (corrected for its nonlinearity) was on average 1.3 +/- 3.6% below the model's predicted output flow (range: -9.0 to 1.5%). The mini-Wright meter would be deemed outside ATS limits for accuracy for three of the seven profiles when compared with the pump's input PEF, but this would be true for only one profile when compared with the pump's output PEF as predicted by the model. Our study shows that the output flow from pump systems can differ from the input waveform depending on the operating configuration. This effect can be predicted with reasonable accuracy using a model based on nonsteady flow analysis that takes account of pressure wave reflections within pump systems.

  7. Analysis of material flow in a utilization technology of low grade manganese ore and sulphur coal complementary

    NASA Astrophysics Data System (ADS)

    Wang, Bo-Zhi; Deng, Biao; Su, Shi-Jun; Ding, Sang-Lan; Sun, Wei-Yi

    2018-03-01

    Electrolytic manganese is conventionally produced by leaching low-grade manganese ore with SO2 generated from the combustion of high-sulfur coal. The coal ash and manganese slag produced by the combustion of the high-sulfur coal and the preparation of electrolytic manganese can subsequently be used as raw ingredients for the preparation of sulphoaluminate cement. In order to realize the `coal-electricity-sulfur-manganese-building material' system of complementary resource utilization, the conditions of material inflow and outflow in each process were determined using material flow analysis. Material flow models for each unit and process were obtained by analyzing the material flows of the new technology, from which an input-output model was derived. Through this model, it is possible to obtain the quantities of all input and output materials when the quantity of any one substance is fixed. Taking one ton of electrolytic manganese as a basis, the quantities of the other input materials and of cement can be determined with the input-output model. The whole system thus achieves a cleaner level of production, and the input-output model can be used for guidance in practical production.
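
    The input-output calculation described above can be illustrated with a standard Leontief quantity model: given a technical-coefficient matrix A and a final demand vector d, total sector outputs solve x = Ax + d. The three-sector matrix below is purely hypothetical and is not taken from the paper.

```python
import numpy as np

# Hypothetical 3-sector technical-coefficient matrix A (rows supply, columns demand);
# sectors: coal & power, manganese leaching/electrolysis, cement from slag and ash.
A = np.array([
    [0.10, 0.30, 0.15],   # coal & power inputs per unit output of each sector
    [0.05, 0.10, 0.20],   # manganese-process inputs per unit output
    [0.02, 0.05, 0.10],   # cement inputs per unit output
])

# Final demand: one ton (one unit) of electrolytic manganese, nothing else
d = np.array([0.0, 1.0, 0.0])

# Leontief quantity model: total outputs x satisfy x = A x + d
x = np.linalg.solve(np.eye(3) - A, d)
print("total sector outputs required:", np.round(x, 3))
```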

  8. The extraction of simple relationships in growth factor-specific multiple-input and multiple-output systems in cell-fate decisions by backward elimination PLS regression.

    PubMed

    Akimoto, Yuki; Yugi, Katsuyuki; Uda, Shinsuke; Kudo, Takamasa; Komori, Yasunori; Kubota, Hiroyuki; Kuroda, Shinya

    2013-01-01

    Cells use common signaling molecules for the selective control of downstream gene expression and cell-fate decisions. The relationship between signaling molecules and downstream gene expression and cellular phenotypes is a multiple-input and multiple-output (MIMO) system and is difficult to understand due to its complexity. For example, it has been reported that, in PC12 cells, different types of growth factors activate MAP kinases (MAPKs) including ERK, JNK, and p38, and CREB, for selective protein expression of immediate early genes (IEGs) such as c-FOS, c-JUN, EGR1, JUNB, and FOSB, leading to cell differentiation, proliferation and cell death; however, how multiple inputs such as MAPKs and CREB regulate multiple outputs such as expression of the IEGs and cellular phenotypes remains unclear. To address this issue, we employed a statistical method called partial least squares (PLS) regression, which involves a reduction of the dimensionality of the inputs and outputs into latent variables and a linear regression between these latent variables. We measured 1,200 data points for MAPKs and CREB as the inputs and 1,900 data points for IEGs and cellular phenotypes as the outputs, and we constructed the PLS model from these data. The PLS model highlighted the complexity of the MIMO system and growth factor-specific input-output relationships of cell-fate decisions in PC12 cells. Furthermore, to reduce the complexity, we applied a backward elimination method to the PLS regression, in which 60 input variables were reduced to 5 variables, including the phosphorylation of ERK at 10 min, CREB at 5 min and 60 min, AKT at 5 min and JNK at 30 min. The simple PLS model with only 5 input variables demonstrated a predictive ability comparable to that of the full PLS model. The 5 input variables effectively extracted the growth factor-specific simple relationships within the MIMO system in cell-fate decisions in PC12 cells.
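
    A hedged sketch of PLS regression with greedy backward elimination on synthetic multi-input, multi-output data, using scikit-learn. The elimination rule (drop the input whose removal least degrades cross-validated R^2) is a simplified stand-in for the authors' procedure, and all data, sizes, and thresholds are illustrative assumptions.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n, p = 120, 12                        # synthetic stand-in for the signaling data
X = rng.normal(size=(n, p))           # "inputs" (e.g., phosphorylation time points)
Y = X[:, :3] @ rng.normal(size=(3, 4)) + 0.3 * rng.normal(size=(n, 4))  # 4 "outputs"

def cv_r2(X, Y):
    """Mean 5-fold cross-validated R^2 of a 2-component PLS regression."""
    return cross_val_score(PLSRegression(n_components=2), X, Y, cv=5, scoring="r2").mean()

# Greedy backward elimination: drop the input whose removal hurts CV R^2 least
kept = list(range(p))
while len(kept) > 3:
    current = cv_r2(X[:, kept], Y)
    trials = [(cv_r2(np.delete(X[:, kept], j, axis=1), Y), j) for j in range(len(kept))]
    best_score, j = max(trials)
    if best_score < current - 0.01:   # stop when any removal costs too much accuracy
        break
    kept.pop(j)

print("retained input columns:", kept, " CV R^2:", round(cv_r2(X[:, kept], Y), 3))
```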

  9. Linear and quadratic models of point process systems: contributions of patterned input to output.

    PubMed

    Lindsay, K A; Rosenberg, J R

    2012-08-01

    In the 1880's Volterra characterised a nonlinear system using a functional series connecting continuous input and continuous output. Norbert Wiener, in the 1940's, circumvented problems associated with the application of Volterra series to physical problems by deriving from it a new series of terms that are mutually uncorrelated with respect to Gaussian processes. Subsequently, Brillinger, in the 1970's, introduced a point-process analogue of Volterra's series connecting point-process inputs to the instantaneous rate of point-process output. We derive here a new series from this analogue in which its terms are mutually uncorrelated with respect to Poisson processes. This new series expresses how patterned input in a spike train, represented by third-order cross-cumulants, is converted into the instantaneous rate of an output point-process. Given experimental records of suitable duration, the contribution of arbitrary patterned input to an output process can, in principle, be determined. Solutions for linear and quadratic point-process models with one and two inputs and a single output are investigated. Our theoretical results are applied to isolated muscle spindle data in which the spike trains from the primary and secondary endings from the same muscle spindle are recorded in response to stimulation of one and then two static fusimotor axons in the absence and presence of a random length change imposed on the parent muscle. For a fixed mean rate of input spikes, the analysis of the experimental data makes explicit which patterns of two input spikes contribute to an output spike. Copyright © 2012 Elsevier Ltd. All rights reserved.

  10. A note on scrap in the 1992 U.S. input-output tables

    USGS Publications Warehouse

    Swisko, George M.

    2000-01-01

    Introduction A key concern of industrial ecology and life cycle analysis is the disposal and recycling of scrap. One might conclude that the U.S. input-output tables are appropriate tools for analyzing scrap flows. Duchin, for instance, has suggested using input-output analysis for industrial ecology, indicating that input-output economics can trace the stocks and flows of energy and other materials from extraction through production and consumption to recycling or disposal. Lave and others use input-output tables to design life cycle assessment models for studying product design, materials use, and recycling strategies, even with the knowledge that these tables suffer from a lack of comprehensive and detailed data that may never be resolved. Although input-output tables can offer general guidance about the interdependence of economic and environmental processes, data reporting by industry and the economic concepts underlying these tables pose problems for rigorous material flow examinations. This is especially true for analyzing the output of scrap and scrap flows in the United States and estimating the amount of scrap that can be recycled. To show how data reporting has affected the values of scrap in recent input-output tables, this paper focuses on metal scrap generated in manufacturing. The paper also briefly discusses scrap that is not included in the input-output tables and some economic concepts that limit the analysis of scrap flows.

  11. The UK waste input-output table: Linking waste generation to the UK economy.

    PubMed

    Salemdeeb, Ramy; Al-Tabbaa, Abir; Reynolds, Christian

    2016-10-01

    In order to achieve a circular economy, there must be a greater understanding of the links between economic activity and waste generation. This study introduces the first version of the UK waste input-output table that could be used to quantify both direct and indirect waste arisings across the supply chain. The proposed waste input-output table features 21 industrial sectors and 34 waste types and is for the 2010 time period. Using the waste input-output table, the study results quantitatively confirm that sectors with a long supply chain (i.e. manufacturing and services sectors) have higher indirect waste generation rates compared with industrial primary sectors (e.g. mining and quarrying) and sectors with a shorter supply chain (e.g. construction). Results also reveal that the construction sector and the mining and quarrying sector have the highest waste generation rates, 742 and 694 tonnes per £1m of final demand, respectively. Owing to the aggregated format of this first version of the waste input-output table, the model does not address the relationship between waste generation and recycling activities. Therefore, an updated version of the waste input-output table is expected to be developed to consider this issue. Consequently, the expanded model would lead to a better understanding of waste and resource flows in the supply chain. © The Author(s) 2016.

  12. Characteristic operator functions for quantum input-plant-output models and coherent control

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gough, John E.

    We introduce the characteristic operator as the generalization of the usual concept of a transfer function of linear input-plant-output systems to arbitrary quantum nonlinear Markovian input-output models. This is intended as a tool in the characterization of quantum feedback control systems that fits in with the general theory of networks. The definition exploits the linearity of noise differentials in both the plant Heisenberg equations of motion and the differential form of the input-output relations. Mathematically, the characteristic operator is a matrix of dimension equal to the number of outputs times the number of inputs (which must coincide), but with entries that are operators of the plant system. In this sense, the characteristic operator retains details of the effective plant dynamical structure and is an essentially quantum object. We illustrate the relevance of the definition to model reduction and simplification by showing that the convergence of the characteristic operator in adiabatic elimination limit models requires the same conditions and assumptions appearing in the work on limit quantum stochastic differential theorems of Bouten and Silberfarb [Commun. Math. Phys. 283, 491-505 (2008)]. This approach also shows in a natural way that the limit coefficients of the quantum stochastic differential equations in adiabatic elimination problems arise algebraically as Schur complements and amounts to a model reduction where the fast degrees of freedom are decoupled from the slow ones and eliminated.

  13. Balancing the stochastic description of uncertainties as a function of hydrologic model complexity

    NASA Astrophysics Data System (ADS)

    Del Giudice, D.; Reichert, P.; Albert, C.; Kalcic, M.; Logsdon Muenich, R.; Scavia, D.; Bosch, N. S.; Michalak, A. M.

    2016-12-01

    Uncertainty analysis is becoming an important component of forecasting water and pollutant fluxes in urban and rural environments. Properly accounting for errors in the modeling process can help to robustly assess the uncertainties associated with the inputs (e.g. precipitation) and outputs (e.g. runoff) of hydrological models. In recent years we have investigated several Bayesian methods to infer the parameters of a mechanistic hydrological model along with those of the stochastic error component. The latter describes the uncertainties of model outputs and possibly inputs. We have adapted our framework to a variety of applications, ranging from predicting floods in small stormwater systems to nutrient loads in large agricultural watersheds. Given practical constraints, we discuss how in general the number of quantities to infer probabilistically varies inversely with the complexity of the mechanistic model. Most often, when evaluating a hydrological model of intermediate complexity, we can infer the parameters of the model as well as of the output error model. Describing the output errors as a first order autoregressive process can realistically capture the "downstream" effect of inaccurate inputs and structure. With simpler runoff models we can additionally quantify input uncertainty by using a stochastic rainfall process. For complex hydrologic transport models, instead, we show that keeping model parameters fixed and just estimating time-dependent output uncertainties could be a viable option. The common goal across all these applications is to create time-dependent prediction intervals which are both reliable (cover the nominal amount of validation data) and precise (are as narrow as possible). In conclusion, we recommend focusing both on the choice of the hydrological model and of the probabilistic error description. The latter can include output uncertainty only, if the model is computationally-expensive, or, with simpler models, it can separately account for different sources of errors like in the inputs and the structure of the model.

  14. Fuzzy logic controller optimization

    DOEpatents

    Sepe, Jr., Raymond B; Miller, John Michael

    2004-03-23

    A method is provided for optimizing a rotating induction machine system fuzzy logic controller. The fuzzy logic controller has at least one input and at least one output. Each input accepts a machine system operating parameter. Each output produces at least one machine system control parameter. The fuzzy logic controller generates each output based on at least one input and on fuzzy logic decision parameters. Optimization begins by obtaining a set of data relating each control parameter to at least one operating parameter for each machine operating region. A model is constructed for each machine operating region based on the machine operating region data obtained. The fuzzy logic controller is simulated with at least one created model in a feedback loop from a fuzzy logic output to a fuzzy logic input. Fuzzy logic decision parameters are optimized based on the simulation.

  15. Robust decentralized controller for minimizing coupling effect in single inductor multiple output DC-DC converter operating in continuous conduction mode.

    PubMed

    Medeiros, Renan Landau Paiva de; Barra, Walter; Bessa, Iury Valente de; Chaves Filho, João Edgar; Ayres, Florindo Antonio de Cavalho; Neves, Cleonor Crescêncio das

    2018-02-01

    This paper describes a novel robust decentralized control design methodology for a single inductor multiple output (SIMO) DC-DC converter. Based on a nominal multiple input multiple output (MIMO) plant model and performance requirements, a pairing input-output analysis is performed to select the suitable input to control each output aiming to attenuate the loop coupling. Thus, the plant uncertainty limits are selected and expressed in interval form with parameter values of the plant model. A single inductor dual output (SIDO) DC-DC buck converter board is developed for experimental tests. The experimental results show that the proposed methodology can maintain a desirable performance even in the presence of parametric uncertainties. Furthermore, the performance indexes calculated from experimental data show that the proposed methodology outperforms classical MIMO control techniques. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.

  16. Theoretic aspects of the identification of the parameters in the optimal control model

    NASA Technical Reports Server (NTRS)

    Vanwijk, R. A.; Kok, J. J.

    1977-01-01

    The identification of the parameters of the optimal control model from input-output data of the human operator is considered. Accepting the basic structure of the model as a cascade of a full-order observer and a feedback law, and suppressing the inherent optimality of the human controller, the parameters to be identified are the feedback matrix, the observer gain matrix, and the intensity matrices of the observation noise and the motor noise. The identification of the parameters is a statistical problem, because the system and output are corrupted by noise, and therefore the solution must be based on the statistics (probability density function) of the input and output data of the human operator. However, based on the statistics of the input-output data of the human operator, no distinction can be made between the observation and the motor noise, which shows that the model suffers from overparameterization.

  17. The potential of different artificial neural network (ANN) techniques in daily global solar radiation modeling based on meteorological data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Behrang, M.A.; Assareh, E.; Ghanbarzadeh, A.

    2010-08-15

    The main objective of the present study is to predict daily global solar radiation (GSR) on a horizontal surface, based on meteorological variables, using different artificial neural network (ANN) techniques. Daily mean air temperature, relative humidity, sunshine hours, evaporation, and wind speed values between 2002 and 2006 for Dezful city in Iran (32°16'N, 48°25'E) are used in this study. In order to consider the effect of each meteorological variable on daily GSR prediction, the following six combinations of input variables are considered: (I) day of the year, daily mean air temperature, and relative humidity as inputs and daily GSR as output; (II) day of the year, daily mean air temperature, and sunshine hours as inputs and daily GSR as output; (III) day of the year, daily mean air temperature, relative humidity, and sunshine hours as inputs and daily GSR as output; (IV) day of the year, daily mean air temperature, relative humidity, sunshine hours, and evaporation as inputs and daily GSR as output; (V) day of the year, daily mean air temperature, relative humidity, sunshine hours, and wind speed as inputs and daily GSR as output; (VI) day of the year, daily mean air temperature, relative humidity, sunshine hours, evaporation, and wind speed as inputs and daily GSR as output. Multi-layer perceptron (MLP) and radial basis function (RBF) neural networks are applied for daily GSR modeling based on the six proposed combinations. The measured data between 2002 and 2005 are used to train the neural networks, while the data for 214 days from 2006 are used as testing data. The comparison of the results obtained from the ANNs and different conventional GSR prediction (CGSRP) models shows very good improvements (i.e., the predicted values of the best ANN model (MLP-V) have a mean absolute percentage error (MAPE) of about 5.21% versus 10.02% for the best CGSRP model (CGSRP 5)).
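
    A minimal sketch of one such input combination, assuming synthetic meteorological data and an MLP regressor from scikit-learn in place of the study's networks; the variables, their generation, and the train/test split are illustrative stand-ins for the Dezful records.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import mean_absolute_percentage_error

rng = np.random.default_rng(2)
n = 1460                                          # roughly four years of synthetic daily records
day  = rng.integers(1, 366, n)
temp = 20 + 15 * np.sin(2 * np.pi * day / 365) + rng.normal(0, 2, n)          # mean air temp [C]
rh   = np.clip(50 - 0.5 * (temp - 20) + rng.normal(0, 8, n), 5, 100)          # relative humidity [%]
sun  = np.clip(7 + 4 * np.sin(2 * np.pi * day / 365) + rng.normal(0, 1, n), 0, 14)  # sunshine [h]
gsr  = 2 + 1.8 * sun + 0.05 * temp - 0.01 * rh + rng.normal(0, 0.5, n)        # target GSR [MJ/m^2]

# Combination (III): day of year, mean temperature, relative humidity, sunshine hours
X = np.column_stack([day, temp, rh, sun])

# Train on the first three "years", test on the rest (mimicking the 2002-2005 / 2006 split)
split = 1095
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000, random_state=0))
model.fit(X[:split], gsr[:split])
mape = mean_absolute_percentage_error(gsr[split:], model.predict(X[split:]))
print(f"test MAPE: {100 * mape:.2f}%")
```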

  18. Integrated controls design optimization

    DOEpatents

    Lou, Xinsheng; Neuschaefer, Carl H.

    2015-09-01

    A control system (207) for optimizing a chemical looping process of a power plant includes an optimizer (420), an income algorithm (230), a cost algorithm (225), and chemical looping process models. The process models are used to predict the process outputs from process input variables. Some of the process input and output variables are related to the income of the plant, and some others are related to the cost of the plant operations. The income algorithm (230) provides an income input to the optimizer (420) based on a plurality of input parameters (215) of the power plant. The cost algorithm (225) provides a cost input to the optimizer (420) based on a plurality of output parameters (220) of the power plant. The optimizer (420) determines an optimized operating parameter solution based on at least one of the income input and the cost input, and supplies the optimized operating parameter solution to the power plant.

  19. Fusion of Hard and Soft Information in Nonparametric Density Estimation

    DTIC Science & Technology

    2015-06-10

    Density estimation is needed for the generation of input densities to simulation and stochastic optimization models, in the analysis of simulation output, and when instantiating probability models. We adopt a constrained maximum...

  20. Educational Resource Multipliers for Use in Local Public Finance: An Input-Output Approach.

    ERIC Educational Resources Information Center

    Boardman, A. E.; Schinnar, A. P.

    1982-01-01

    Develops an input-output model, with related multipliers, showing how changes in earmarked and discretionary educational funds (whether local, state, or federal) affect all of a state's districts and educational programs. Illustrates the model with Pennsylvania data and relates it to the usual educational finance approach, which uses demand…

  1. Using model order tests to determine sensory inputs in a motion study

    NASA Technical Reports Server (NTRS)

    Repperger, D. W.; Junker, A. M.

    1977-01-01

    In the study of motion effects on tracking performance, a problem of interest is the determination of what sensory inputs a human uses in controlling his tracking task. In the approach presented here a simple canonical model (PID, or proportional, integral, derivative structure) is used to model the human's input-output time series. A study of significant changes in reduction of the output error loss functional is conducted as different permutations of parameters are considered. Since this canonical model includes parameters which are related to inputs to the human (such as the error signal, its derivatives and integration), the study of model order is equivalent to the study of which sensory inputs are being used by the tracker. The parameters that have the greatest effect on significantly reducing the loss function are thereby obtained. In this manner the identification procedure converts the problem of testing for model order into the problem of determining sensory inputs.

  2. A new interpretation and validation of variance based importance measures for models with correlated inputs

    NASA Astrophysics Data System (ADS)

    Hao, Wenrui; Lu, Zhenzhou; Li, Luyi

    2013-05-01

    In order to explore the contributions of correlated input variables to the variance of the output, a novel interpretation framework of importance measure indices is proposed for models with correlated inputs, comprising indices of the total correlated contribution and the total uncorrelated contribution. The proposed indices accurately describe the contributions of a correlated input to the variance of the output, and they can be viewed as a complement to, and correction of, the interpretation of correlated input contributions presented in "Estimation of global sensitivity indices for models with dependent variables, Computer Physics Communications, 183 (2012) 937-946". Both indices contain the independent contribution of an individual input. Taking the general quadratic polynomial as an illustration, the total correlated contribution and the independent contribution of an individual input are derived analytically, from which the components of both contributions of a correlated input, and their origins, can be clarified without ambiguity. In the special case where no square term is included in the quadratic polynomial model, the total correlated contribution of an input can be further decomposed into the variance contribution related to the correlation of the input with other inputs and the independent contribution of the input itself, and the total uncorrelated contribution can be further decomposed into an independent part arising from interactions between the input and the others and an independent part from the input itself. Numerical examples demonstrate that the derived analytical expressions of the variance-based importance measure are correct, and that the clarification of correlated input contributions to model output by analytical derivation is important for extending the theory and solutions for uncorrelated inputs to the correlated case.

  3. The economic impacts of Lake States forestry: an input-output study.

    Treesearch

    Larry Pedersen; Daniel E. Chappelle; David C. Lothner

    1989-01-01

    The report describes 1985 and 1995 levels of forest-related economic activity in the three-state area of Michigan, Minnesota, and Wisconsin, and their impacts on other economic sectors based on a regional input-output model.

  4. Using a Polytope to Estimate Efficient Production Functions of Joint Product Processes.

    ERIC Educational Resources Information Center

    Simpson, William A.

    In the last decade, a modeling technique has been developed to handle complex input/output analyses where outputs involve joint products and there are no known mathematical relationships linking the outputs or inputs. The technique uses the geometrical concept of a six-dimensional shape called a polytope to analyze the efficiency of each…

  5. Quantitative methods to direct exploration based on hydrogeologic information

    USGS Publications Warehouse

    Graettinger, A.J.; Lee, J.; Reeves, H.W.; Dethan, D.

    2006-01-01

    Quantitatively Directed Exploration (QDE) approaches based on information such as model sensitivity, input data covariance, and model output covariance are presented. Seven approaches for directing exploration are developed, applied, and evaluated on a synthetic hydrogeologic site. The QDE approaches evaluate input information uncertainty, subsurface model sensitivity and, most importantly, output covariance to identify the next location to sample. Spatial input parameter values and covariances are calculated with the multivariate conditional probability calculation from a limited number of samples. A variogram structure is used during data extrapolation to describe the spatial continuity, or correlation, of subsurface information. Model sensitivity can be determined by perturbing input data and evaluating output response or, as in this work, sensitivities can be programmed directly into an analysis model. Output covariance is calculated by the First-Order Second Moment (FOSM) method, which combines the covariance of input information with model sensitivity. A groundwater flow example, modeled in MODFLOW-2000, is chosen to demonstrate the seven QDE approaches. MODFLOW-2000 is used to obtain the piezometric head and the model sensitivity simultaneously. The seven QDE approaches are evaluated based on the accuracy of the modeled piezometric head after information from a QDE sample is added. For the synthetic site used in this study, the QDE approach that identifies the location of hydraulic conductivity that contributes the most to the overall piezometric head variance proved to be the best method to quantitatively direct exploration. © IWA Publishing 2006.
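
    The FOSM step described above can be sketched in a few lines: a Jacobian estimated by finite differences is combined with an input covariance to give the output covariance, Sigma_y = J Sigma_x J^T. The toy two-parameter head model below is an assumption for illustration, not the MODFLOW-2000 model used in the paper.

```python
import numpy as np

def model(p):
    """Toy head model: heads at two wells as a function of conductivity K and recharge R."""
    K, R = p
    return np.array([100.0 + 5.0 * R / K, 100.0 + 12.0 * R / K])

p0 = np.array([10.0, 2.0])             # mean input values (K, R)
Sigma_p = np.array([[4.0, 0.2],        # input covariance (including a K-R correlation)
                    [0.2, 0.25]])

# Sensitivity (Jacobian) by central finite differences
eps = 1e-4 * p0
J = np.zeros((2, 2))
for j in range(2):
    dp = np.zeros(2)
    dp[j] = eps[j]
    J[:, j] = (model(p0 + dp) - model(p0 - dp)) / (2 * eps[j])

# First-Order Second Moment propagation: Sigma_h = J Sigma_p J^T
Sigma_h = J @ Sigma_p @ J.T
print("predicted head covariance:\n", np.round(Sigma_h, 4))
print("head standard deviations:", np.round(np.sqrt(np.diag(Sigma_h)), 3))
```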

  6. Model-free adaptive control of supercritical circulating fluidized-bed boilers

    DOEpatents

    Cheng, George Shu-Xing; Mulkey, Steven L

    2014-12-16

    A novel 3-Input-3-Output (3×3) Fuel-Air Ratio Model-Free Adaptive (MFA) controller is introduced, which can effectively control key process variables including Bed Temperature, Excess O2, and Furnace Negative Pressure of combustion processes of advanced boilers. A novel 7-input-7-output (7×7) MFA control system is also described for controlling a combined 3-Input-3-Output (3×3) process of Boiler-Turbine-Generator (BTG) units and a 5×5 CFB combustion process of advanced boilers. Those boilers include Circulating Fluidized-Bed (CFB) Boilers and Once-Through Supercritical Circulating Fluidized-Bed (OTSC CFB) Boilers.

  7. MIMO system identification using frequency response data

    NASA Technical Reports Server (NTRS)

    Medina, Enrique A.; Irwin, R. D.; Mitchell, Jerrel R.; Bukley, Angelia P.

    1992-01-01

    A solution to the problem of obtaining a multi-input, multi-output state-space model of a system from its individual input/output frequency responses is presented. The Residue Identification Algorithm (RID) identifies the system poles from a transfer function model of the determinant of the frequency response data matrix. Next, the residue matrices of the modes are computed, guaranteeing that each input/output frequency response is fitted in the least squares sense. Finally, a realization of the system is computed. Results of the application of RID to experimental frequency responses of a large space structure ground test facility are presented and compared to those obtained via the Eigensystem Realization Algorithm.

  8. Mathematical models of the simplest fuzzy PI/PD controllers with skewed input and output fuzzy sets.

    PubMed

    Mohan, B M; Sinha, Arpita

    2008-07-01

    This paper unveils mathematical models for fuzzy PI/PD controllers which employ two skewed fuzzy sets for each of the two-input variables and three skewed fuzzy sets for the output variable. The basic constituents of these models are Gamma-type and L-type membership functions for each input, trapezoidal/triangular membership functions for output, intersection/algebraic product triangular norm, maximum/drastic sum triangular conorm, Mamdani minimum/Larsen product/drastic product inference method, and center of sums defuzzification method. The existing simplest fuzzy PI/PD controller structures derived via symmetrical fuzzy sets become special cases of the mathematical models revealed in this paper. Finally, a numerical example along with its simulation results are included to demonstrate the effectiveness of the simplest fuzzy PI controllers.

  9. Smad Signaling Dynamics: Insights from a Parsimonious Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wiley, H. S.; Shankaran, Harish

    2008-09-09

    The molecular mechanisms that transmit information from cell surface receptors to the nucleus are exceedingly complex; thus, much effort has been expended in developing computational models to understand these processes. A recent study on modeling the nuclear-cytoplasmic shuttling of Smad2-Smad4 complexes in response to transforming growth factor β (TGF-β) receptor activation has provided substantial insight into how this signaling network translates the degree of TGF-β receptor activation (input) into the amount of nuclear Smad2-Smad4 complexes (output). The study addressed this question by combining a simple, mechanistic model with targeted experiments, an approach that proved particularly powerful for exploring the fundamental properties of a complex signaling network. The mathematical model revealed that Smad nuclear-cytoplasmic dynamics enables a proportional, but time-delayed coupling between the input and the output. As a result, the output can faithfully track gradual changes in the input, while the rapid input fluctuations that constitute signaling noise are dampened out.

  10. Fallon, Nevada FORGE Thermal-Hydrological-Mechanical Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blankenship, Doug; Sonnenthal, Eric

    Archive contains thermal-mechanical simulation input/output files. Included are files which fall into the following categories: (1) spreadsheets with various input parameter calculations; (2) final simulation inputs; (3) native-state thermal-hydrological model input file folders; (4) native-state thermal-hydrological-mechanical model input files; (5) THM model stimulation cases. See the 'File Descriptions.xlsx' resource below for additional information on individual files.

  11. Approximation of Quantum Stochastic Differential Equations for Input-Output Model Reduction

    DTIC Science & Technology

    2016-02-25

    We have completed a short program of theoretical research on dimensional reduction and approximation of models based on quantum stochastic differential equations. Our primary results lie in the area of... Keywords: quantum probability, quantum stochastic differential equations.

  12. Identifiability Results for Several Classes of Linear Compartment Models.

    PubMed

    Meshkat, Nicolette; Sullivant, Seth; Eisenberg, Marisa

    2015-08-01

    Identifiability concerns finding which unknown parameters of a model can be estimated, uniquely or otherwise, from given input-output data. If some subset of the parameters of a model cannot be determined given input-output data, then we say the model is unidentifiable. In this work, we study linear compartment models, which are a class of biological models commonly used in pharmacokinetics, physiology, and ecology. In past work, we used commutative algebra and graph theory to identify a class of linear compartment models that we call identifiable cycle models, which are unidentifiable but have the simplest possible identifiable functions (so-called monomial cycles). Here we show how to modify identifiable cycle models by adding inputs, adding outputs, or removing leaks, in such a way that we obtain an identifiable model. We also prove a constructive result on how to combine identifiable models, each corresponding to strongly connected graphs, into a larger identifiable model. We apply these theoretical results to several real-world biological models from physiology, cell biology, and ecology.

  13. Logic models to predict continuous outputs based on binary inputs with an application to personalized cancer therapy.

    PubMed

    Knijnenburg, Theo A; Klau, Gunnar W; Iorio, Francesco; Garnett, Mathew J; McDermott, Ultan; Shmulevich, Ilya; Wessels, Lodewyk F A

    2016-11-23

    Mining large datasets using machine learning approaches often leads to models that are hard to interpret and not amenable to the generation of hypotheses that can be experimentally tested. We present 'Logic Optimization for Binary Input to Continuous Output' (LOBICO), a computational approach that infers small and easily interpretable logic models of binary input features that explain a continuous output variable. Applying LOBICO to a large cancer cell line panel, we find that logic combinations of multiple mutations are more predictive of drug response than single gene predictors. Importantly, we show that the use of the continuous information leads to robust and more accurate logic models. LOBICO implements the ability to uncover logic models around predefined operating points in terms of sensitivity and specificity. As such, it represents an important step towards practical application of interpretable logic models.

  14. Further results on open-loop compensation of rate-dependent hysteresis in a magnetostrictive actuator with the Prandtl-Ishlinskii model

    NASA Astrophysics Data System (ADS)

    Al Janaideh, Mohammad; Aljanaideh, Omar

    2018-05-01

    Apart from output-input hysteresis loops, magnetostrictive actuators also exhibit asymmetry and saturation, particularly under moderate- to large-magnitude inputs and at relatively high frequencies. Such nonlinear input-output characteristics can be effectively characterized by a rate-dependent Prandtl-Ishlinskii model in conjunction with a function of deadband operators. In this study, an inverse model is formulated to seek real-time compensation of the rate-dependent and asymmetric hysteresis nonlinearities of a Terfenol-D magnetostrictive actuator. The inverse model combines the inverse of the rate-dependent Prandtl-Ishlinskii model, satisfying the threshold dilation condition, with the inverse of the deadband function. It was first applied to the hysteresis model and then, as a feedforward compensator, to the actuator hardware to study its potential for compensating rate-dependent and asymmetric hysteresis loops. The experimental results, obtained under harmonic and complex harmonic inputs, further revealed that the inverse compensator can substantially suppress the hysteresis and output-asymmetry nonlinearities over the entire frequency range considered in the study.

  15. A Hierarchical multi-input and output Bi-GRU Model for Sentiment Analysis on Customer Reviews

    NASA Astrophysics Data System (ADS)

    Zhang, Liujie; Zhou, Yanquan; Duan, Xiuyu; Chen, Ruiqi

    2018-03-01

    Multi-label sentiment classification of customer reviews is a practical and challenging task in Natural Language Processing. In this paper, we propose a hierarchical multi-input and multi-output model based on a bi-directional recurrent neural network, which considers both the semantic and the lexical information of emotional expression. Our model applies two independent Bi-GRU layers to generate part-of-speech and sentence representations. The lexical information is then incorporated via attention over the softmax output of the part-of-speech representation. In addition, we combine the probabilities of auxiliary labels as features with the hidden layer to capture crucial correlations between output labels. The experimental results show that our model is computationally efficient and achieves breakthrough improvements on a customer reviews dataset.

  16. Toward Scientific Numerical Modeling

    NASA Technical Reports Server (NTRS)

    Kleb, Bil

    2007-01-01

    Ultimately, scientific numerical models need quantified output uncertainties so that modeling can evolve to better match reality. Documenting model input uncertainties and verifying that numerical models are translated into code correctly, however, are necessary first steps toward that goal. Without known input parameter uncertainties, model sensitivities are all one can determine, and without code verification, output uncertainties are simply not reliable. To address these two shortcomings, two proposals are offered: (1) an unobtrusive mechanism to document input parameter uncertainties in situ and (2) an adaptation of the Scientific Method to numerical model development and deployment. Because these two steps require changes in the computational simulation community to bear fruit, they are presented in terms of the Beckhard-Harris-Gleicher change model.

  17. Compartmental and Data-Based Modeling of Cerebral Hemodynamics: Linear Analysis.

    PubMed

    Henley, B C; Shin, D C; Zhang, R; Marmarelis, V Z

    Compartmental and data-based modeling of cerebral hemodynamics are alternative approaches that utilize distinct model forms and have been employed in the quantitative study of cerebral hemodynamics. This paper examines the relation between a compartmental equivalent-circuit and a data-based input-output model of dynamic cerebral autoregulation (DCA) and CO2-vasomotor reactivity (DVR). The compartmental model is constructed as an equivalent-circuit utilizing putative first principles and previously proposed hypothesis-based models. The linear input-output dynamics of this compartmental model are compared with data-based estimates of the DCA-DVR process. This comparative study indicates that there are some qualitative similarities between the two-input compartmental model and experimental results.

  18. Analysis of inter-country input-output table based on bibliographic coupling network: How industrial sectors on the GVC compete for production resources

    NASA Astrophysics Data System (ADS)

    Guan, Jun; Xu, Xiaoyu; Xing, Lizhi

    2018-03-01

    The input-output table is comprehensive and detailed in describing national economic systems, with an abundance of economic relationships depicting supply and demand information among industrial sectors. This paper focuses on how to quantify the degree of competition on the global value chain (GVC) from the perspective of econophysics. Global Industrial Strongest Relevant Network models are established by extracting the strongest and most immediate industrial relevance in the global economic system from inter-country input-output (ICIO) tables; these are then transformed into Global Industrial Resource Competition Network models to analyze competitive relationships based on a bibliographic coupling approach. Three indicators well suited to weighted and undirected networks with self-loops are introduced here: unit weight for competitive power, disparity in the weight for competitive amplitude, and weighted clustering coefficient for competitive intensity. Finally, these models and indicators are further applied empirically to analyze the function of industrial sectors on the basis of the latest World Input-Output Database (WIOD), in order to reveal inter-sector competitive status during economic globalization.

  19. Predicting Time Series Outputs and Time-to-Failure for an Aircraft Controller Using Bayesian Modeling

    NASA Technical Reports Server (NTRS)

    He, Yuning

    2015-01-01

    Safety of unmanned aerial systems (UAS) is paramount, but the large number of dynamically changing controller parameters makes it hard to determine if the system is currently stable, and the time before loss of control if not. We propose a hierarchical statistical model using Treed Gaussian Processes to predict (i) whether a flight will be stable (success) or become unstable (failure), (ii) the time-to-failure if unstable, and (iii) time series outputs for flight variables. We first classify the current flight input into success or failure types, and then use separate models for each class to predict the time-to-failure and time series outputs. As different inputs may cause failures at different times, we have to model variable length output curves. We use a basis representation for curves and learn the mappings from input to basis coefficients. We demonstrate the effectiveness of our prediction methods on a NASA neuro-adaptive flight control system.

  20. Sensitivity analysis of a short distance atmospheric dispersion model applied to the Fukushima disaster

    NASA Astrophysics Data System (ADS)

    Périllat, Raphaël; Girard, Sylvain; Korsakissok, Irène; Mallet, Vinien

    2015-04-01

    In a previous study, the sensitivity of a long distance model was analyzed on the Fukushima Daiichi disaster case with the Morris screening method. It showed that a few variables, such as the horizontal diffusion coefficient or the cloud thickness, have a weak influence on most of the chosen outputs. The purpose of the present study is to apply a similar methodology to the IRSN's operational short distance atmospheric dispersion model, called pX. Atmospheric dispersion models are very useful in the case of accidental releases of pollutant, to minimize the population exposure during the accident and to obtain an accurate assessment of the short and long term environmental and sanitary impact. Long range models are mostly used for consequence assessment, while short range models are more adapted to the early phases of the crisis and are used to make prognoses. The Morris screening method was used to estimate the sensitivity of a set of outputs and to rank the inputs by their influence. The input ranking is highly dependent on the considered output, but a few variables seem to have a weak influence on most of them. This first step revealed that interactions and non-linearity are much more pronounced with the short range model than with the long range one. Afterward, the Sobol method was used to obtain more quantitative results on the same set of outputs. Using this method was possible for the short range model because it is far less computationally demanding than the long range model. The study also confronts two parameterizations, Doury's and Pasquill's models, to contrast their behavior. Doury's model seems to excessively inflate the influence of some inputs compared to Pasquill's model, such as the altitude of emission and the air stability, which do not have the same role in the two models. The outputs of the long range model were dominated by only a few inputs. On the contrary, in this study the influence is shared more evenly between the inputs.
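
    A crude sketch of Morris-style screening under stated assumptions: random one-at-a-time perturbations (not the full trajectory design) yield elementary effects per input, summarized by mu* and sigma. The toy plume function and its input ranges are hypothetical and unrelated to pX or the IRSN models.

```python
import numpy as np

def morris_screening(f, bounds, r=50, delta=0.25, seed=0):
    """Crude Morris screening: mean absolute elementary effect (mu*) and the
    standard deviation of the elementary effects for each input, using random
    one-at-a-time perturbations in the unit hypercube."""
    rng = np.random.default_rng(seed)
    bounds = np.asarray(bounds, float)
    k = len(bounds)
    ee = np.zeros((r, k))
    for i in range(r):
        x = rng.uniform(0.0, 1.0 - delta, size=k)           # base point in unit cube
        y0 = f(bounds[:, 0] + x * (bounds[:, 1] - bounds[:, 0]))
        for j in range(k):
            xp = x.copy()
            xp[j] += delta                                   # move one input at a time
            y1 = f(bounds[:, 0] + xp * (bounds[:, 1] - bounds[:, 0]))
            ee[i, j] = (y1 - y0) / delta
    return np.abs(ee).mean(axis=0), ee.std(axis=0)

# Toy Gaussian-plume-like output: ground concentration at a fixed receptor.
# Inputs (all hypothetical): wind speed u, release height h, dispersion widths sy, sz.
def plume(p):
    u, h, sy, sz = p
    return 1.0 / (2 * np.pi * u * sy * sz) * np.exp(-h**2 / (2 * sz**2))

mu_star, sigma = morris_screening(plume, bounds=[(1, 10), (10, 100), (20, 200), (10, 100)])
for name, m, s in zip(["u", "h", "sigma_y", "sigma_z"], mu_star, sigma):
    print(f"{name:8s}  mu* = {m:.3e}   sigma = {s:.3e}")
```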

  1. Random Predictor Models for Rigorous Uncertainty Quantification: Part 2

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.

    2015-01-01

    This and a companion paper propose techniques for constructing parametric mathematical models describing key features of the distribution of an output variable given input-output data. By contrast to standard models, which yield a single output value at each value of the input, Random Predictor Models (RPMs) yield a random variable at each value of the input. Optimization-based strategies for calculating RPMs having a polynomial dependency on the input and a linear dependency on the parameters are proposed. These formulations yield RPMs having various levels of fidelity in which the mean, the variance, and the range of the model's parameters, and thus of the output, are prescribed. As such they encompass all RPMs conforming to these prescriptions. The RPMs are optimal in the sense that they yield the tightest predictions for which all (or, depending on the formulation, most) of the observations are less than a fixed number of standard deviations from the mean prediction. When the data satisfies mild stochastic assumptions, and the optimization problem(s) used to calculate the RPM is convex (or, when its solution coincides with the solution to an auxiliary convex problem), the model's reliability, which is the probability that a future observation would be within the predicted ranges, is bounded rigorously.

  2. Random Predictor Models for Rigorous Uncertainty Quantification: Part 1

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.

    2015-01-01

    This and a companion paper propose techniques for constructing parametric mathematical models describing key features of the distribution of an output variable given input-output data. By contrast to standard models, which yield a single output value at each value of the input, Random Predictor Models (RPMs) yield a random variable at each value of the input. Optimization-based strategies for calculating RPMs having a polynomial dependency on the input and a linear dependency on the parameters are proposed. These formulations yield RPMs having various levels of fidelity in which the mean and the variance of the model's parameters, and thus of the predicted output, are prescribed. As such they encompass all RPMs conforming to these prescriptions. The RPMs are optimal in the sense that they yield the tightest predictions for which all (or, depending on the formulation, most) of the observations are less than a fixed number of standard deviations from the mean prediction. When the data satisfies mild stochastic assumptions, and the optimization problem(s) used to calculate the RPM is convex (or, when its solution coincides with the solution to an auxiliary convex problem), the model's reliability, which is the probability that a future observation would be within the predicted ranges, can be bounded tightly and rigorously.
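
    The "tightest prediction" idea behind both parts can be illustrated with a much simpler construction: a constant-width interval predictor fitted by linear programming so that every observation lies inside the band. This is a sketch of the general flavor only, not the RPM formulations of these papers; the data and polynomial degree are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(3)
x = np.sort(rng.uniform(-1, 1, 60))
y = 1.0 + 2.0 * x - 1.5 * x**2 + rng.uniform(-0.3, 0.3, 60)   # bounded observation noise

degree = 2
Phi = np.vander(x, degree + 1, increasing=True)      # polynomial basis [1, x, x^2]
m, p = Phi.shape

# Decision variables z = [c_0 .. c_degree, w]; minimize the half-width w subject to
#   |y_i - Phi_i c| <= w  for every observation (all data inside the band).
c_obj = np.r_[np.zeros(p), 1.0]
A_ub = np.vstack([np.hstack([ Phi, -np.ones((m, 1))]),
                  np.hstack([-Phi, -np.ones((m, 1))])])
b_ub = np.r_[y, -y]
res = linprog(c_obj, A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None)] * p + [(0, None)])

coeffs, half_width = res.x[:p], res.x[p]
print("center polynomial coefficients:", np.round(coeffs, 3))
print("band half-width:", round(half_width, 3))
```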

  3. Input-output characterization of an ultrasonic testing system by digital signal analysis

    NASA Technical Reports Server (NTRS)

    Karaguelle, H.; Lee, S. S.; Williams, J., Jr.

    1984-01-01

    The input/output characteristics of an ultrasonic testing system used for stress wave factor measurements were studied. The fundamentals of digital signal processing are summarized. The inputs and outputs are digitized and processed in a microcomputer using digital signal processing techniques. The entire ultrasonic test system, including transducers and all electronic components, is modeled as a discrete-time linear shift-invariant system. Then the impulse response and frequency response of the continuous time ultrasonic test system are estimated by interpolating the defining points in the unit sample response and frequency response of the discrete time system. It is found that the ultrasonic test system behaves as a linear phase bandpass filter. Good results were obtained for rectangular pulse inputs of various amplitudes and durations and for tone burst inputs whose center frequencies are within the passband of the test system and for single cycle inputs of various amplitudes. The input/output limits on the linearity of the system are determined.
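
    A hedged sketch of the empirical input-output characterization step, assuming a synthetic band-pass system in place of the ultrasonic test chain: the frequency response is estimated from input/output records via Welch cross- and auto-spectra (the H1 estimator) and compared with the known analytic response.

```python
import numpy as np
from scipy import signal

fs = 10_000                                   # sampling rate [Hz]
t = np.arange(0, 1.0, 1 / fs)

# Treat the whole test chain as an unknown LTI "black box"; here a band-pass
# Butterworth filter stands in for the transducer/electronics path.
b, a = signal.butter(4, [200, 2000], btype="bandpass", fs=fs)

# Broadband input (white noise) excites all frequencies of interest
rng = np.random.default_rng(4)
x = rng.normal(size=t.size)
y = signal.lfilter(b, a, x)

# Empirical frequency response from input/output records (H1 = Pxy / Pxx)
f, Pxy = signal.csd(x, y, fs=fs, nperseg=1024)
_, Pxx = signal.welch(x, fs=fs, nperseg=1024)
H_est = Pxy / Pxx

# Compare against the known analytic response of the stand-in filter
_, H_true = signal.freqz(b, a, worN=f, fs=fs)
print("max |H| estimate:", round(np.abs(H_est).max(), 3),
      " vs analytic:", round(np.abs(H_true).max(), 3))
```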

  4. Multi input single output model predictive control of non-linear bio-polymerization process

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arumugasamy, Senthil Kumar; Ahmad, Z.

    This paper focuses on Multi Input Single Output (MISO) Model Predictive Control of a bio-polymerization process, in which a mechanistic model is developed and linked with a feedforward neural network model to obtain a hybrid model (Mechanistic-FANN) of the lipase-catalyzed ring-opening polymerization of ε-caprolactone (ε-CL) for poly(ε-caprolactone) production. In this research, a state space model was used, in which the inputs to the model were the reactor temperatures and reactor impeller speeds and the outputs were the molecular weight of the polymer (Mn) and the polymer polydispersity index. The state space model was created using the System Identification Toolbox of MATLAB™ and is used in the MISO MPC. Model predictive control (MPC) has been applied to predict, and consequently control, the molecular weight of the biopolymer. The results show that MPC is able to track the reference trajectory and gives optimum movement of the manipulated variable.

  5. A nonlinear autoregressive Volterra model of the Hodgkin-Huxley equations.

    PubMed

    Eikenberry, Steffen E; Marmarelis, Vasilis Z

    2013-02-01

    We propose a new variant of the Volterra-type model with a nonlinear auto-regressive (NAR) component that is a suitable framework for describing the process of AP generation by the neuron membrane potential, and we apply it to input-output data generated by the Hodgkin-Huxley (H-H) equations. Volterra models use a functional series expansion to describe the input-output relation for most nonlinear dynamic systems, and are applicable to a wide range of physiologic systems. It is difficult, however, to apply the Volterra methodology to the H-H model because it is characterized by distinct subthreshold and suprathreshold dynamics. When threshold is crossed, an autonomous action potential (AP) is generated, the output becomes temporarily decoupled from the input, and the standard Volterra model fails. Therefore, in our framework, whenever membrane potential exceeds some threshold, it is taken as a second input to a dual-input Volterra model. This model correctly predicts membrane voltage deflection both within the subthreshold region and during APs. Moreover, the model naturally generates a post-AP afterpotential and refractory period. It is known that the H-H model converges to a limit cycle in response to a constant current injection. This behavior is correctly predicted by the proposed model, while the standard Volterra model is incapable of generating such limit cycle behavior. The inclusion of cross-kernels, which describe the nonlinear interactions between the exogenous and autoregressive inputs, is found to be absolutely necessary. The proposed model is general, non-parametric, and data-derived.
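
    To make the model class concrete, here is a minimal least-squares sketch of a discretized second-order Volterra/NARX predictor with an optional thresholded autoregressive channel; it illustrates the structure described above but not the paper's Laguerre-expansion estimation or its principled choice of threshold.

    ```python
    import numpy as np

    def fit_second_order_narx(u, y, L=5, threshold=None):
        """Least-squares fit of a discretized second-order Volterra/NARX predictor:
        y[n] is regressed on the last L inputs, their pairwise products, and
        (optionally) a thresholded copy of the last L outputs acting as a second,
        autoregressive input channel."""
        rows, targets = [], []
        for k in range(L, len(y)):
            lag_u = u[k - L:k][::-1]                        # u[k-1], ..., u[k-L]
            feats = [1.0, *lag_u]
            feats += [lag_u[i] * lag_u[j] for i in range(L) for j in range(i, L)]
            if threshold is not None:
                lag_y = y[k - L:k][::-1]
                feats += list(np.where(lag_y > threshold, lag_y, 0.0))
            rows.append(feats)
            targets.append(y[k])
        Phi = np.array(rows)
        theta, *_ = np.linalg.lstsq(Phi, np.array(targets), rcond=None)
        return theta
    ```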

  6. Documentation of model input and output values for simulation of pumping effects in Paradise Valley, a basin tributary to the Humboldt River, Humboldt County, Nevada

    USGS Publications Warehouse

    Carey, A.E.; Prudic, David E.

    1996-01-01

    Documentation is provided of model input and sample output used in a previous report for analysis of ground-water flow and simulated pumping scenarios in Paradise Valley, Humboldt County, Nevada. Documentation includes files containing input values and listings of sample output. The files, in American Standard Code for Information Interchange (ASCII) or binary format, are compressed and put on a 3-1/2-inch diskette. The decompressed files require approximately 8.4 megabytes of disk space on an International Business Machines (IBM)-compatible microcomputer using the Microsoft Disk Operating System (MS-DOS) version 5.0 or greater.

  7. Identification and modeling of the electrohydraulic systems of the main gun of a main battle tank

    NASA Astrophysics Data System (ADS)

    Campos, Luiz C. A.; Menegaldo, Luciano L.

    2012-11-01

    The black-box mathematical models of the electrohydraulic systems responsible for driving the two degrees of freedom (elevation and azimuth) of the main gun of a main battle tank (MBT) were identified. Such systems respond to the gunner's inputs while acquiring and tracking targets. Identification experiments were designed to collect simultaneous data from two inertial measurement units (IMUs) installed at the gunner's handle (input) and at the center of rotation of the turret (output), for the identification of the azimuth system. For the elevation system, IMUs were installed at the gunner's handle (input) and at the breech of the gun (output). Linear accelerations and angular rates were collected for both input and output. Several black-box model architectures were investigated. As a result, a second-order nonlinear autoregressive with exogenous variables (NARX) model and a fourth-order nonlinear finite impulse response (NFIR) model were found to best fit the experimental data, with low computational costs. The derived models are being employed in broader research aiming to reproduce such systems in a laboratory virtual main gun simulator.

  8. Emulation and Sensitivity Analysis of the Community Multiscale Air Quality Model for a UK Ozone Pollution Episode.

    PubMed

    Beddows, Andrew V; Kitwiroon, Nutthida; Williams, Martin L; Beevers, Sean D

    2017-06-06

    Gaussian process emulation techniques have been used with the Community Multiscale Air Quality model, simulating the effects of input uncertainties on ozone and NO2 output, to allow robust global sensitivity analysis (SA). A screening process ranked the effect of perturbations in 223 inputs, isolating the 30 most influential from emissions, boundary conditions (BCs), and reaction rates. Community Multiscale Air Quality (CMAQ) simulations of a July 2006 ozone pollution episode in the UK were made with input values for these variables plus ozone dry deposition velocity chosen according to a 576 point Latin hypercube design. Emulators trained on the output of these runs were used in variance-based SA of the model output to input uncertainties. Performing these analyses for every hour of a 21 day period spanning the episode and several days on either side allowed the results to be presented as a time series of sensitivity coefficients, showing how the influence of different input uncertainties changed during the episode. This is one of the most complex models to which these methods have been applied, and here, they reveal detailed spatiotemporal patterns of model sensitivities, with NO and isoprene emissions, NO2 photolysis, ozone BCs, and deposition velocity being among the most influential input uncertainties.
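
    A minimal sketch of the variance-based step (first-order Sobol indices via the Saltelli 2010 estimator) is shown below; in the study the expensive CMAQ runs are replaced by Gaussian process emulators, which this sketch does not include. The `model` callable, bounds, and sample size are placeholders.

    ```python
    import numpy as np

    def first_order_sobol(model, bounds, n=4096, seed=0):
        """Estimate first-order Sobol indices with the Saltelli (2010) estimator.
        `model` maps an (n, d) array of inputs to an (n,) array of outputs and
        stands in for the emulator; `bounds` is a list of (low, high) pairs."""
        d = len(bounds)
        rng = np.random.default_rng(seed)
        lo = np.array([b[0] for b in bounds])
        hi = np.array([b[1] for b in bounds])
        A = lo + (hi - lo) * rng.random((n, d))
        B = lo + (hi - lo) * rng.random((n, d))
        yA, yB = model(A), model(B)
        var_y = np.var(np.concatenate([yA, yB]))
        S1 = np.empty(d)
        for i in range(d):
            ABi = A.copy()
            ABi[:, i] = B[:, i]                    # resample only input i
            S1[i] = np.mean(yB * (model(ABi) - yA)) / var_y
        return S1

    # Illustrative usage with a cheap stand-in model
    S1 = first_order_sobol(lambda X: np.sin(X[:, 0]) + 0.1 * X[:, 1] ** 2,
                           bounds=[(0.0, np.pi), (0.0, 1.0)])
    ```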

  9. A Hippocampal Cognitive Prosthesis: Multi-Input, Multi-Output Nonlinear Modeling and VLSI Implementation

    PubMed Central

    Berger, Theodore W.; Song, Dong; Chan, Rosa H. M.; Marmarelis, Vasilis Z.; LaCoss, Jeff; Wills, Jack; Hampson, Robert E.; Deadwyler, Sam A.; Granacki, John J.

    2012-01-01

    This paper describes the development of a cognitive prosthesis designed to restore the ability to form new long-term memories typically lost after damage to the hippocampus. The animal model used is delayed nonmatch-to-sample (DNMS) behavior in the rat, and the “core” of the prosthesis is a biomimetic multi-input/multi-output (MIMO) nonlinear model that provides the capability for predicting spatio-temporal spike train output of hippocampus (CA1) based on spatio-temporal spike train inputs recorded presynaptically to CA1 (e.g., CA3). We demonstrate the capability of the MIMO model for highly accurate predictions of CA1 coded memories that can be made on a single-trial basis and in real-time. When hippocampal CA1 function is blocked and long-term memory formation is lost, successful DNMS behavior also is abolished. However, when MIMO model predictions are used to reinstate CA1 memory-related activity by driving spatio-temporal electrical stimulation of hippocampal output to mimic the patterns of activity observed in control conditions, successful DNMS behavior is restored. We also outline the design in very-large-scale integration for a hardware implementation of a 16-input, 16-output MIMO model, along with spike sorting, amplification, and other functions necessary for a total system, when coupled together with electrode arrays to record extracellularly from populations of hippocampal neurons, that can serve as a cognitive prosthesis in behaving animals. PMID:22438335

  10. Finding identifiable parameter combinations in nonlinear ODE models and the rational reparameterization of their input-output equations.

    PubMed

    Meshkat, Nicolette; Anderson, Chris; Distefano, Joseph J

    2011-09-01

    When examining the structural identifiability properties of dynamic system models, some parameters can take on an infinite number of values and yet yield identical input-output data. These parameters and the model are then said to be unidentifiable. Finding identifiable combinations of parameters with which to reparameterize the model provides a means for quantitatively analyzing the model and computing solutions in terms of the combinations. In this paper, we revisit and explore the properties of an algorithm for finding identifiable parameter combinations using Gröbner Bases and prove useful theoretical properties of these parameter combinations. We prove that a set of M algebraically independent identifiable parameter combinations can be found using this algorithm and that there exists a unique rational reparameterization of the input-output equations over these parameter combinations. We also demonstrate application of the procedure to a nonlinear biomodel. Copyright © 2011 Elsevier Inc. All rights reserved.
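
    A toy illustration of the Gröbner-basis idea (not the paper's algorithm or proofs): if the observable input-output coefficients c1, c2 depend on the unidentifiable parameters k1, k2 only through k1 + k2 and k1*k2, a lexicographic Gröbner basis of the corresponding ideal exposes those identifiable combinations. The symbols and relations below are hypothetical.

    ```python
    import sympy as sp

    # Hypothetical example: the input-output coefficients c1, c2 are known
    # functions of the individual (unidentifiable) parameters k1, k2.
    k1, k2, c1, c2 = sp.symbols('k1 k2 c1 c2')
    relations = [k1 + k2 - c1,      # c1 = k1 + k2
                 k1 * k2 - c2]      # c2 = k1 * k2
    # A lexicographic order eliminating k1, k2 shows that only the symmetric
    # combinations k1 + k2 and k1 * k2 are determined by the data.
    G = sp.groebner(relations, k1, k2, order='lex')
    print(G.exprs)
    ```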

  11. System/observer/controller identification toolbox

    NASA Technical Reports Server (NTRS)

    Juang, Jer-Nan; Horta, Lucas G.; Phan, Minh

    1992-01-01

    System identification is the process of constructing a mathematical model from input and output data for a system under testing, and characterizing the system uncertainties and measurement noises. The mathematical model structure can take various forms depending upon the intended use. The SYSTEM/OBSERVER/CONTROLLER IDENTIFICATION TOOLBOX (SOCIT) is a collection of functions, written in MATLAB language and expressed in M-files, that implements a variety of modern system identification techniques. For an open-loop system, the central features of the SOCIT are functions for identification of a system model and its corresponding forward and backward observers directly from input and output data. The system and observers are represented by a discrete model. The identified model and observers may be used for controller design of linear systems as well as identification of modal parameters such as damping, frequencies, and mode shapes. For a closed-loop system, an observer and its corresponding controller gain are identified directly from input and output data.

  12. The input and output management of solid waste using DEA models: A case study at Jengka, Pahang

    NASA Astrophysics Data System (ADS)

    Mohamed, Siti Rosiah; Ghazali, Nur Fadzrina Mohd; Mohd, Ainun Hafizah

    2017-08-01

    Data Envelopment Analysis (DEA), as a tool for obtaining performance indices, has been used extensively in many organizational sectors. Improving the efficiency of Decision Making Units (DMUs) can be impractical because some inputs and outputs are uncontrollable, and in certain situations this produces weak efficiency scores that often reflect the impact of the operating environment. Based on data from Alam Flora Sdn. Bhd. Jengka, this study determines the efficiency of solid waste management (SWM) in the town of Jengka, Pahang, using the CCR input-oriented (CCRI) and CCR output-oriented (CCRO) models of DEA and a duality formulation with vector average input and output. Three input variables (collection length in meters, collection time per week in hours, and number of garbage trucks) and two output variables (collection frequency and total solid waste collected in kilograms) are analyzed. In conclusion, only three of the 23 roads are efficient, achieving an efficiency score of 1, while the other 20 roads are managed inefficiently.
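
    A minimal sketch of the input-oriented CCR efficiency computation in envelopment form, solved as a linear program; the matrix shapes and the omission of the output-oriented and dual variants are simplifying assumptions relative to the CCRI/CCRO analysis described above.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    def ccr_input_efficiency(X, Y, o):
        """Input-oriented CCR efficiency of DMU o (envelopment form):
            min theta  s.t.  X @ lam <= theta * X[:, o],  Y @ lam >= Y[:, o],  lam >= 0
        X is (inputs x DMUs), Y is (outputs x DMUs)."""
        m, n = X.shape
        s = Y.shape[0]
        c = np.zeros(1 + n)
        c[0] = 1.0                                        # minimize theta
        A_in = np.hstack([-X[:, [o]], X])                 # X @ lam - theta * x_o <= 0
        A_out = np.hstack([np.zeros((s, 1)), -Y])         # -Y @ lam <= -y_o
        res = linprog(c,
                      A_ub=np.vstack([A_in, A_out]),
                      b_ub=np.concatenate([np.zeros(m), -Y[:, o]]),
                      bounds=[(None, None)] + [(0.0, None)] * n)
        return res.fun                                    # efficiency score in (0, 1]

    # Illustrative usage: 4 DMUs, 2 inputs, 1 output
    X = np.array([[2.0, 4.0, 3.0, 5.0],
                  [3.0, 1.0, 2.0, 4.0]])
    Y = np.array([[1.0, 1.0, 1.5, 2.0]])
    scores = [ccr_input_efficiency(X, Y, o) for o in range(X.shape[1])]
    ```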

  13. Dynamic adjustment in agricultural practices to economic incentives aiming to decrease fertilizer application.

    PubMed

    Sun, Shanxia; Delgado, Michael S; Sesmero, Juan P

    2016-07-15

    Input- and output-based economic policies designed to reduce water pollution from fertilizer runoff by adjusting management practices are theoretically justified and well-understood. Yet, in practice, adjustment in fertilizer application or land allocation may be sluggish. We provide practical guidance for policymakers regarding the relative magnitude and speed of adjustment of input- and output-based policies. Through a dynamic dual model of corn production that takes fertilizer as one of several production inputs, we measure the short- and long-term effects of policies that affect the relative prices of inputs and outputs through the short- and long-term price elasticities of fertilizer application, and also the total time required for different policies to affect fertilizer application through the adjustment rates of capital and land. These estimates allow us to compare input- and output-based policies based on their relative cost-effectiveness. Using data from Indiana and Illinois, we find that input-based policies are more cost-effective than their output-based counterparts in achieving a target reduction in fertilizer application. We show that input- and output-based policies yield adjustment in fertilizer application at the same speed, and that most of the adjustment takes place in the short-term. Copyright © 2016 Elsevier Ltd. All rights reserved.

  14. Modifications of the U.S. Geological Survey modular, finite-difference, ground-water flow model to read and write geographic information system files

    USGS Publications Warehouse

    Orzol, Leonard L.; McGrath, Timothy S.

    1992-01-01

    This report documents modifications to the U.S. Geological Survey modular, three-dimensional, finite-difference, ground-water flow model, commonly called MODFLOW, so that it can read and write files used by a geographic information system (GIS). The modified model program is called MODFLOWARC. Simulation programs such as MODFLOW generally require large amounts of input data and produce large amounts of output data. Viewing data graphically, generating head contours, and creating or editing model data arrays such as hydraulic conductivity are examples of tasks that currently are performed either by the use of independent software packages or by tedious manual editing, manipulating, and transferring data. Programs such as GIS programs are commonly used to facilitate preparation of the model input data and analyze model output data; however, auxiliary programs are frequently required to translate data between programs. Data translations are required when different programs use different data formats. Thus, the user might use GIS techniques to create model input data, run a translation program to convert input data into a format compatible with the ground-water flow model, run the model, run a translation program to convert the model output into the correct format for GIS, and use GIS to display and analyze this output. MODFLOWARC avoids the two translation steps and transfers data directly to and from the ground-water flow model. This report documents the design and use of MODFLOWARC and includes instructions for data input/output of the Basic, Block-centered flow, River, Recharge, Well, Drain, Evapotranspiration, General-head boundary, and Streamflow-routing packages. The modifications to MODFLOW and the Streamflow-Routing package were minimized. Flow charts and computer-program code describe the modifications to the original computer codes for each of these packages. Appendix A contains a discussion on the operation of MODFLOWARC using a sample problem.

  15. User Guide and Documentation for Five MODFLOW Ground-Water Modeling Utility Programs

    USGS Publications Warehouse

    Banta, Edward R.; Paschke, Suzanne S.; Litke, David W.

    2008-01-01

    This report documents five utility programs designed for use in conjunction with ground-water flow models developed with the U.S. Geological Survey's MODFLOW ground-water modeling program. One program extracts calculated flow values from one model for use as input to another model. The other four programs extract model input or output arrays from one model and make them available in a form that can be used to generate an ArcGIS raster data set. The resulting raster data sets may be useful for visual display of the data or for further geographic data processing. The utility program GRID2GRIDFLOW reads a MODFLOW binary output file of cell-by-cell flow terms for one (source) model grid and converts the flow values to input flow values for a different (target) model grid. The spatial and temporal discretization of the two models may differ. The four other utilities extract selected 2-dimensional data arrays in MODFLOW input and output files and write them to text files that can be imported into an ArcGIS geographic information system raster format. These four utilities require that the model cells be square and aligned with the projected coordinate system in which the model grid is defined. The four raster-conversion utilities are * CBC2RASTER, which extracts selected stress-package flow data from a MODFLOW binary output file of cell-by-cell flows; * DIS2RASTER, which extracts cell-elevation data from a MODFLOW Discretization file; * MFBIN2RASTER, which extracts array data from a MODFLOW binary output file of head or drawdown; and * MULT2RASTER, which extracts array data from a MODFLOW Multiplier file.

  16. Estimating the Uncertain Mathematical Structure of Hydrological Model via Bayesian Data Assimilation

    NASA Astrophysics Data System (ADS)

    Bulygina, N.; Gupta, H.; O'Donell, G.; Wheater, H.

    2008-12-01

    The structure of a hydrological model at the macro scale (e.g., watershed) is inherently uncertain due to many factors, including the lack of a robust hydrological theory at that scale. In this work, we assume that a suitable conceptual model for the hydrologic system has already been determined - i.e., the system boundaries have been specified, the important state variables and input and output fluxes to be included have been selected, and the major hydrological processes and geometries of their interconnections have been identified. The structural identification problem then is to specify the mathematical form of the relationships between the inputs, state variables and outputs, so that a computational model can be constructed for making simulations and/or predictions of system input-state-output behaviour. We show how Bayesian data assimilation can be used to merge prior beliefs in the form of pre-assumed model equations with information derived from the data to construct a posterior model. The approach, entitled Bayesian Estimation of Structure (BESt), is used to estimate a hydrological model for a small basin in England, at hourly time scales, conditioned on the assumption of a three-state conceptual model structure (soil moisture storage, and fast and slow flow stores). Inputs to the system are precipitation and potential evapotranspiration, and outputs are actual evapotranspiration and streamflow discharge. Results show the difference between the prior and posterior mathematical structures and provide prediction confidence intervals that reflect three types of uncertainty: due to initial conditions, due to inputs, and due to mathematical structure.

  17. User guide for MODPATH version 6 - A particle-tracking model for MODFLOW

    USGS Publications Warehouse

    Pollock, David W.

    2012-01-01

    MODPATH is a particle-tracking post-processing model that computes three-dimensional flow paths using output from groundwater flow simulations based on MODFLOW, the U.S. Geological Survey (USGS) finite-difference groundwater flow model. This report documents MODPATH version 6. Previous versions were documented in USGS Open-File Reports 89-381 and 94-464. The program uses a semianalytical particle-tracking scheme that allows an analytical expression of a particle's flow path to be obtained within each finite-difference grid cell. A particle's path is computed by tracking the particle from one cell to the next until it reaches a boundary, an internal sink/source, or satisfies another termination criterion. Data input to MODPATH consists of a combination of MODFLOW input data files, MODFLOW head and flow output files, and other input files specific to MODPATH. Output from MODPATH consists of several output files, including a number of particle coordinate output files intended to serve as input data for other programs that process, analyze, and display the results in various ways. MODPATH is written in FORTRAN and can be compiled by any FORTRAN compiler that fully supports FORTRAN-2003 or by most commercially available FORTRAN-95 compilers that support the major FORTRAN-2003 language extensions.
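
    The semi-analytical scheme can be illustrated in one dimension: with a linear velocity variation across a cell, the particle's exit time toward a face has a closed form. The sketch below mirrors the idea MODPATH applies per coordinate direction; the variable names and the handling of stagnation cases are simplified assumptions.

    ```python
    import numpy as np

    def pollock_exit_time_1d(xp, x1, x2, v1, v2):
        """Semi-analytical (Pollock-type) travel time for a particle at xp in a cell
        [x1, x2] with face velocities v1, v2 and a linear velocity variation.
        Returns (exit_time, exit_face); np.inf means no exit in this direction."""
        A = (v2 - v1) / (x2 - x1)             # velocity gradient across the cell
        vp = v1 + A * (xp - x1)               # velocity at the particle position
        if np.isclose(A, 0.0):                # uniform velocity: linear travel time
            if vp > 0.0:
                return (x2 - xp) / vp, 'x2'
            if vp < 0.0:
                return (x1 - xp) / vp, 'x1'
            return np.inf, None
        v_exit = v2 if vp > 0.0 else v1
        if v_exit * vp <= 0.0:                # stagnation point blocks the exit face
            return np.inf, None
        t = np.log(v_exit / vp) / A           # analytic exit time
        return t, ('x2' if vp > 0.0 else 'x1')
    ```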

  18. Uncertainty analysis in geospatial merit matrix–based hydropower resource assessment

    DOE PAGES

    Pasha, M. Fayzul K.; Yeasmin, Dilruba; Saetern, Sen; ...

    2016-03-30

    Hydraulic head and mean annual streamflow, two main input parameters in hydropower resource assessment, are not measured at every point along the stream. Translation and interpolation are used to derive these parameters, resulting in uncertainties. This study estimates the uncertainties and their effects on model output parameters: the total potential power and the number of potential locations (stream-reaches). These parameters are quantified through Monte Carlo Simulation (MCS) linked with a geospatial merit matrix-based hydropower resource assessment (GMM-HRA) model. The methodology is applied to flat, mild, and steep terrains. Results show that the uncertainty associated with the hydraulic head is within 20% for mild and steep terrains, and the uncertainty associated with streamflow is around 16% for all three terrains. Output uncertainty increases as input uncertainty increases. However, output uncertainty is around 10% to 20% of the input uncertainty, demonstrating the robustness of the GMM-HRA model. Hydraulic head is more sensitive to output parameters in steep terrain than in flat and mild terrains. Furthermore, mean annual streamflow is more sensitive to output parameters in flat terrain.
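
    A minimal Monte Carlo sketch of this kind of uncertainty propagation, pushing head and streamflow uncertainty through the hydropower relation P = ρ g Q H; the nominal values and the lognormal error model are illustrative assumptions, not the GMM-HRA formulation.

    ```python
    import numpy as np

    # Push head and streamflow uncertainty through P = rho * g * Q * H (efficiency
    # omitted). Nominal values and the lognormal error model are illustrative; the
    # ~20% and ~16% relative uncertainties echo the magnitudes quoted above.
    rng = np.random.default_rng(1)
    n = 100_000
    H_nom, Q_nom = 10.0, 5.0                         # hypothetical head [m], flow [m3/s]
    H = H_nom * rng.lognormal(sigma=0.20, size=n)    # ~20% head uncertainty
    Q = Q_nom * rng.lognormal(sigma=0.16, size=n)    # ~16% streamflow uncertainty
    P = 1000.0 * 9.81 * Q * H / 1.0e6                # potential power [MW]
    print(P.mean(), P.std() / P.mean())              # mean and relative output spread
    ```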

  19. Can Simulation Credibility Be Improved Using Sensitivity Analysis to Understand Input Data Effects on Model Outcome?

    NASA Technical Reports Server (NTRS)

    Myers, Jerry G.; Young, M.; Goodenow, Debra A.; Keenan, A.; Walton, M.; Boley, L.

    2015-01-01

    Model and simulation (MS) credibility is defined as the quality to elicit belief or trust in MS results. NASA-STD-7009 [1] delineates eight components (Verification, Validation, Input Pedigree, Results Uncertainty, Results Robustness, Use History, MS Management, People Qualifications) that address quantifying model credibility, and provides guidance to the model developers, analysts, and end users for assessing the MS credibility. Of the eight characteristics, input pedigree, or the quality of the data used to develop model input parameters, governing functions, or initial conditions, can vary significantly. These data quality differences have varying consequences across the range of MS application. NASA-STD-7009 requires that the lowest input data quality be used to represent the entire set of input data when scoring the input pedigree credibility of the model. This requirement provides a conservative assessment of model inputs and maximizes the communication of the potential level of risk of using model outputs. Unfortunately, in practice, this may result in overly pessimistic communication of the MS output, undermining the credibility of simulation predictions to decision makers. This presentation proposes an alternative assessment mechanism, utilizing results parameter robustness, also known as model input sensitivity, to improve the credibility scoring process for specific simulations.

  20. Uncertainty importance analysis using parametric moment ratio functions.

    PubMed

    Wei, Pengfei; Lu, Zhenzhou; Song, Jingwen

    2014-02-01

    This article presents a new importance analysis framework, called the parametric moment ratio function, for measuring the reduction of model output uncertainty when the distribution parameters of inputs are changed; the emphasis is put on the mean and variance ratio functions with respect to the variances of model inputs. The proposed concepts efficiently guide the analyst to achieve a targeted reduction of the model output mean and variance by operating on the variances of model inputs. Unbiased and progressively unbiased Monte Carlo estimators are also derived for the parametric mean and variance ratio functions, respectively. Only a single set of samples is needed to implement the proposed importance analysis with these estimators; thus, the computational cost is independent of the input dimensionality. An analytical test example with highly nonlinear behavior is introduced to illustrate the engineering significance of the proposed importance analysis technique and to verify the efficiency and convergence of the derived Monte Carlo estimators. Finally, the moment ratio function is applied to a planar 10-bar structure to achieve a targeted 50% reduction of the model output variance. © 2013 Society for Risk Analysis.

  1. From Spiking Neuron Models to Linear-Nonlinear Models

    PubMed Central

    Ostojic, Srdjan; Brunel, Nicolas

    2011-01-01

    Neurons transform time-varying inputs into action potentials emitted stochastically at a time dependent rate. The mapping from current input to output firing rate is often represented with the help of phenomenological models such as the linear-nonlinear (LN) cascade, in which the output firing rate is estimated by applying to the input successively a linear temporal filter and a static non-linear transformation. These simplified models leave out the biophysical details of action potential generation. It is not a priori clear to which extent the input-output mapping of biophysically more realistic, spiking neuron models can be reduced to a simple linear-nonlinear cascade. Here we investigate this question for the leaky integrate-and-fire (LIF), exponential integrate-and-fire (EIF) and conductance-based Wang-Buzsáki models in presence of background synaptic activity. We exploit available analytic results for these models to determine the corresponding linear filter and static non-linearity in a parameter-free form. We show that the obtained functions are identical to the linear filter and static non-linearity determined using standard reverse correlation analysis. We then quantitatively compare the output of the corresponding linear-nonlinear cascade with numerical simulations of spiking neurons, systematically varying the parameters of input signal and background noise. We find that the LN cascade provides accurate estimates of the firing rates of spiking neurons in most of parameter space. For the EIF and Wang-Buzsáki models, we show that the LN cascade can be reduced to a firing rate model, the timescale of which we determine analytically. Finally we introduce an adaptive timescale rate model in which the timescale of the linear filter depends on the instantaneous firing rate. This model leads to highly accurate estimates of instantaneous firing rates. PMID:21283777

  2. From spiking neuron models to linear-nonlinear models.

    PubMed

    Ostojic, Srdjan; Brunel, Nicolas

    2011-01-20

    Neurons transform time-varying inputs into action potentials emitted stochastically at a time dependent rate. The mapping from current input to output firing rate is often represented with the help of phenomenological models such as the linear-nonlinear (LN) cascade, in which the output firing rate is estimated by applying to the input successively a linear temporal filter and a static non-linear transformation. These simplified models leave out the biophysical details of action potential generation. It is not a priori clear to which extent the input-output mapping of biophysically more realistic, spiking neuron models can be reduced to a simple linear-nonlinear cascade. Here we investigate this question for the leaky integrate-and-fire (LIF), exponential integrate-and-fire (EIF) and conductance-based Wang-Buzsáki models in presence of background synaptic activity. We exploit available analytic results for these models to determine the corresponding linear filter and static non-linearity in a parameter-free form. We show that the obtained functions are identical to the linear filter and static non-linearity determined using standard reverse correlation analysis. We then quantitatively compare the output of the corresponding linear-nonlinear cascade with numerical simulations of spiking neurons, systematically varying the parameters of input signal and background noise. We find that the LN cascade provides accurate estimates of the firing rates of spiking neurons in most of parameter space. For the EIF and Wang-Buzsáki models, we show that the LN cascade can be reduced to a firing rate model, the timescale of which we determine analytically. Finally we introduce an adaptive timescale rate model in which the timescale of the linear filter depends on the instantaneous firing rate. This model leads to highly accurate estimates of instantaneous firing rates.
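
    For concreteness, a minimal linear-nonlinear cascade evaluation is sketched below with an assumed exponential filter and rectifying nonlinearity; the paper instead derives the filter and static nonlinearity analytically (parameter-free) for the LIF, EIF, and Wang-Buzsáki models.

    ```python
    import numpy as np

    def ln_cascade_rate(current, dt, tau=0.02, gain=50.0, threshold=1.0):
        """Apply a linear temporal filter (exponential kernel) followed by a static
        rectifying nonlinearity to obtain an instantaneous firing rate in Hz.
        Kernel shape and nonlinearity are illustrative placeholders."""
        t = np.arange(0.0, 5.0 * tau, dt)
        kernel = np.exp(-t / tau)
        kernel /= kernel.sum()                               # unit-area filter
        filtered = np.convolve(current, kernel)[:len(current)]
        return gain * np.maximum(filtered - threshold, 0.0)  # static nonlinearity

    # Illustrative usage: noisy sinusoidal input current
    dt = 1e-3
    tt = np.arange(0.0, 2.0, dt)
    rng = np.random.default_rng(2)
    current = 1.0 + 0.5 * np.sin(2 * np.pi * 2.0 * tt) + 0.2 * rng.standard_normal(tt.size)
    rate = ln_cascade_rate(current, dt)
    ```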

  3. Stiffness modeling of compliant parallel mechanisms and applications in the performance analysis of a decoupled parallel compliant stage

    NASA Astrophysics Data System (ADS)

    Jiang, Yao; Li, Tie-Min; Wang, Li-Ping

    2015-09-01

    This paper investigates the stiffness modeling of compliant parallel mechanism (CPM) based on the matrix method. First, the general compliance matrix of a serial flexure chain is derived. The stiffness modeling of CPMs is next discussed in detail, considering the relative positions of the applied load and the selected displacement output point. The derived stiffness models have simple and explicit forms, and the input, output, and coupling stiffness matrices of the CPM can easily be obtained. The proposed analytical model is applied to the stiffness modeling and performance analysis of an XY parallel compliant stage with input and output decoupling characteristics. Then, the key geometrical parameters of the stage are optimized to obtain the minimum input decoupling degree. Finally, a prototype of the compliant stage is developed and its input axial stiffness, coupling characteristics, positioning resolution, and circular contouring performance are tested. The results demonstrate the excellent performance of the compliant stage and verify the effectiveness of the proposed theoretical model. The general stiffness models provided in this paper will be helpful for performance analysis, especially in determining coupling characteristics, and the structure optimization of the CPM.

  4. User Manual for SAHM package for VisTrails

    USGS Publications Warehouse

    Talbert, C.B.; Talbert, M.K.

    2012-01-01

    The Software for Assisted Habitat Modeling (SAHM) has been created to both expedite habitat modeling and help maintain a record of the various input data, pre- and post-processing steps, and modeling options incorporated in the construction of a species distribution model. The four main advantages of using the combined VisTrails:SAHM package for species distribution modeling are: 1. formalization and tractable recording of the entire modeling process; 2. easier collaboration through a common modeling framework; 3. a user-friendly graphical interface to manage file input, model runs, and output; and 4. extensibility to incorporate future and additional modeling routines and tools. This user manual provides detailed information on each module within the SAHM package, including its input, output, common connections, optional arguments, and default settings. This information can also be accessed for individual modules by right-clicking on the documentation button for any module in VisTrails, or by right-clicking on any input or output for a module and selecting view documentation. This user manual is intended to accompany the user guide, which provides detailed instructions on how to install the SAHM package within VisTrails and then presents information on the use of the package.

  5. Two models for identification and predicting behaviour of an induction motor system

    NASA Astrophysics Data System (ADS)

    Kuo, Chien-Hsun

    2018-01-01

    System identification or modelling is the process of building mathematical models of dynamical systems based on the available input and output data from the systems. This paper introduces system identification by using ARX (Auto Regressive with eXogeneous input) and ARMAX (Auto Regressive Moving Average with eXogeneous input) models. Through the identified system model, the predicted output could be compared with the measured one to help prevent the motor faults from developing into a catastrophic machine failure and avoid unnecessary costs and delays caused by the need to carry out unscheduled repairs. The induction motor system is illustrated as an example. Numerical and experimental results are shown for the identified induction motor system.
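
    A minimal least-squares ARX fit is sketched below to make the model structure concrete (the study used MATLAB's System Identification Toolbox; ARMAX additionally models the disturbance as a moving average, which this sketch omits).

    ```python
    import numpy as np

    def fit_arx(u, y, na=2, nb=2):
        """Least-squares fit of an ARX(na, nb) model
            y[k] = -a1*y[k-1] - ... - a_na*y[k-na] + b1*u[k-1] + ... + b_nb*u[k-nb] + e[k]
        Returns (a, b)."""
        n0 = max(na, nb)
        rows = []
        for k in range(n0, len(y)):
            rows.append(np.concatenate([-y[k - na:k][::-1],    # -y[k-1], ..., -y[k-na]
                                        u[k - nb:k][::-1]]))   #  u[k-1], ..., u[k-nb]
        Phi = np.array(rows)
        theta, *_ = np.linalg.lstsq(Phi, y[n0:], rcond=None)
        return theta[:na], theta[na:]

    # Illustrative usage: identify a simulated first-order system from input/output data
    rng = np.random.default_rng(3)
    u = rng.standard_normal(500)
    y = np.zeros(500)
    for k in range(1, 500):
        y[k] = 0.9 * y[k - 1] + 0.5 * u[k - 1] + 0.01 * rng.standard_normal()
    a, b = fit_arx(u, y, na=1, nb=1)        # expect a ~ [-0.9], b ~ [0.5]
    ```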

  6. User's guide for a large signal computer model of the helical traveling wave tube

    NASA Technical Reports Server (NTRS)

    Palmer, Raymond W.

    1992-01-01

    The use of a successful large-signal, two-dimensional (axisymmetric), deformable-disk computer model of the helical traveling wave tube amplifier is described; this is an extensively revised and operationally simplified version. We also discuss program input and output and the auxiliary files necessary for operation. A sample problem with its input data and output results is included. Interested parties may now obtain from the author the FORTRAN source code, auxiliary files, and sample input data on a standard floppy diskette, the contents of which are described herein.

  7. Integrated Geothermal-CO2 Storage Reservoirs: FY1 Final Report

    DOE Data Explorer

    Buscheck, Thomas A.

    2012-01-01

    The purpose of phase 1 is to determine the feasibility of integrating geologic CO2 storage (GCS) with geothermal energy production. Phase 1 includes reservoir analyses to determine injector/producer well schemes that balance the generation of economically useful flow rates at the producers with the need to manage reservoir overpressure, reducing the risks associated with overpressure such as induced seismicity and CO2 leakage to overlying aquifers. This submittal contains the input and output files of the reservoir model analyses. A reservoir-model "index-html" file was sent in a previous submittal to organize the reservoir-model input and output files according to the sections of the FY1 Final Report to which they pertain. The recipient should save the file Reservoir-models-inputs-outputs-index.html in the same directory as the Section2.1.*.tar.gz files.

  8. Competition model for aperiodic stochastic resonance in a Fitzhugh-Nagumo model of cardiac sensory neurons.

    PubMed

    Kember, G C; Fenton, G A; Armour, J A; Kalyaniwalla, N

    2001-04-01

    Regional cardiac control depends upon feedback of the status of the heart from afferent neurons responding to chemical and mechanical stimuli as transduced by an array of sensory neurites. Emerging experimental evidence shows that neural control in the heart may be partially exerted using subthreshold inputs that are amplified by noisy mechanical fluctuations. This amplification is known as aperiodic stochastic resonance (ASR). Neural control in the noisy, subthreshold regime is difficult to see since there is a near absence of any correlation between input and the output, the latter being the average firing (spiking) rate of the neuron. This lack of correlation is unresolved by traditional energy models of ASR since these models are unsuitable for identifying "cause and effect" between such inputs and outputs. In this paper, the "competition between averages" model is used to determine what portion of a noisy, subthreshold input is responsible, on average, for the output of sensory neurons as represented by the Fitzhugh-Nagumo equations. A physiologically relevant conclusion of this analysis is that a nearly constant amount of input is responsible for a spike, on average, and this amount is approximately independent of the firing rate. Hence, correlation measures are generally reduced as the firing rate is lowered even though neural control under this model is actually unaffected.
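
    A minimal simulation of the setting described above (a FitzHugh-Nagumo unit driven by a subthreshold input plus noise, so that spikes appear only because noise lifts the input across threshold); the parameter values are common textbook choices rather than those of the cited cardiac sensory-neuron model.

    ```python
    import numpy as np

    def simulate_fhn(I_sub, noise_sd, dt=0.01, T=500.0, a=0.7, b=0.8, eps=0.08, seed=4):
        """Euler-Maruyama simulation of a FitzHugh-Nagumo unit driven by a constant
        subthreshold input plus additive noise; returns the voltage trace and the
        number of upward threshold crossings (spikes)."""
        rng = np.random.default_rng(seed)
        n = int(T / dt)
        v = np.empty(n)
        w = np.empty(n)
        v[0], w[0] = -1.2, -0.6
        for k in range(n - 1):
            dv = v[k] - v[k] ** 3 / 3.0 - w[k] + I_sub
            dw = eps * (v[k] + a - b * w[k])
            v[k + 1] = v[k] + dt * dv + noise_sd * np.sqrt(dt) * rng.standard_normal()
            w[k + 1] = w[k] + dt * dw
        spikes = np.count_nonzero((v[1:] > 1.0) & (v[:-1] <= 1.0))
        return v, spikes

    # Illustrative usage: the same subthreshold drive with and without noise
    _, quiet = simulate_fhn(I_sub=0.2, noise_sd=0.0)    # few or no spikes
    _, noisy = simulate_fhn(I_sub=0.2, noise_sd=0.3)    # noise-assisted firing
    ```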

  9. User's manual for a parameter identification technique. [with options for model simulation for fixed input forcing functions and identification from wind tunnel and flight measurements

    NASA Technical Reports Server (NTRS)

    Kanning, G.

    1975-01-01

    A digital computer program written in FORTRAN is presented that implements the system identification theory for deterministic systems using input-output measurements. The user supplies programs simulating the mathematical model of the physical plant whose parameters are to be identified. The user may choose any one of three options. The first option allows for a complete model simulation for fixed input forcing functions. The second option identifies up to 36 parameters of the model from wind tunnel or flight measurements. The third option performs a sensitivity analysis for up to 36 parameters. The use of each option is illustrated with an example using input-output measurements for a helicopter rotor tested in a wind tunnel.

  10. An artificial neural network model for periodic trajectory generation

    NASA Astrophysics Data System (ADS)

    Shankar, S.; Gander, R. E.; Wood, H. C.

    A neural network model based on biological systems was developed for potential robotic application. The model consists of three interconnected layers of artificial neurons or units: an input layer subdivided into state and plan units, an output layer, and a hidden layer between the two outer layers which serves to implement nonlinear mappings between the input and output activation vectors. Weighted connections are created between the three layers, and learning is effected by modifying these weights. Feedback connections between the output and the input state serve to make the network operate as a finite state machine. The activation vector of the plan units of the input layer emulates the supraspinal commands in biological central pattern generators in that different plan activation vectors correspond to different sequences or trajectories being recalled, even with different frequencies. Three trajectories were chosen for implementation, and learning was accomplished in 10,000 trials. The fault tolerant behavior, adaptiveness, and phase maintenance of the implemented network are discussed.

  11. Assessing the importance of rainfall uncertainty on hydrological models with different spatial and temporal scale

    NASA Astrophysics Data System (ADS)

    Nossent, Jiri; Pereira, Fernando; Bauwens, Willy

    2015-04-01

    Precipitation is one of the key inputs for hydrological models. As long as the values of the hydrological model parameters are fixed, a variation of the rainfall input is expected to induce a change in the model output. Given the increased awareness of uncertainty on rainfall records, it becomes more important to understand the impact of this input - output dynamic. Yet, modellers often still have the intention to mimic the observed flow, whatever the deviation of the employed records from the actual rainfall might be, by recklessly adapting the model parameter values. But is it actually possible to vary the model parameter values in such a way that a certain (observed) model output can be generated based on inaccurate rainfall inputs? Thus, how important is the rainfall uncertainty for the model output with respect to the model parameter importance? To address this question, we apply the Sobol' sensitivity analysis method to assess and compare the importance of the rainfall uncertainty and the model parameters on the output of the hydrological model. In order to be able to treat the regular model parameters and input uncertainty in the same way, and to allow a comparison of their influence, a possible approach is to represent the rainfall uncertainty by a parameter. To tackle the latter issue, we apply so called rainfall multipliers on hydrological independent storm events, as a probabilistic parameter representation of the possible rainfall variation. As available rainfall records are very often point measurements at a discrete time step (hourly, daily, monthly,…), they contain uncertainty due to a latent lack of spatial and temporal variability. The influence of the latter variability can also be different for hydrological models with different spatial and temporal scale. Therefore, we perform the sensitivity analyses on a semi-distributed model (SWAT) and a lumped model (NAM). The assessment and comparison of the importance of the rainfall uncertainty and the model parameters is achieved by considering different scenarios for the included parameters and the state of the models.

  12. The Flow Engine Framework: A Cognitive Model of Optimal Human Experience

    PubMed Central

    Šimleša, Milija; Guegan, Jérôme; Blanchard, Edouard; Tarpin-Bernard, Franck; Buisine, Stéphanie

    2018-01-01

    Flow is a well-known concept in the fields of positive and applied psychology. Examination of a large body of flow literature suggests there is a need for a conceptual model rooted in a cognitive approach to explain how this psychological phenomenon works. In this paper, we propose the Flow Engine Framework, a theoretical model explaining dynamic interactions between rearranged flow components and fundamental cognitive processes. Using an IPO framework (Inputs – Processes – Outputs) including a feedback process, we organize flow characteristics into three logically related categories describing the flow process: inputs (requirements for flow), mediating and moderating cognitive processes (attentional and motivational mechanisms), and outputs (subjective and objective outcomes). Comparing flow with an engine, inputs are depicted as flow-fuel, core processes as cylinder strokes, and outputs as the power created to provide motion. PMID:29899807

  13. Analysis of inter-country input-output table based on citation network: How to measure the competition and collaboration between industrial sectors on the global value chain

    PubMed Central

    2017-01-01

    The input-output table is comprehensive and detailed in describing the national economic system with complex economic relationships, which embodies information of supply and demand among industrial sectors. This paper aims to scale the degree of competition/collaboration on the global value chain from the perspective of econophysics. Global Industrial Strongest Relevant Network models were established by extracting the strongest and most immediate industrial relevance in the global economic system with inter-country input-output tables and then transformed into Global Industrial Resource Competition Network/Global Industrial Production Collaboration Network models embodying the competitive/collaborative relationships based on bibliographic coupling/co-citation approach. Three indicators well suited for these two kinds of weighted and non-directed networks with self-loops were introduced, including unit weight for competitive/collaborative power, disparity in the weight for competitive/collaborative amplitude and weighted clustering coefficient for competitive/collaborative intensity. Finally, these models and indicators were further applied to empirically analyze the function of sectors in the latest World Input-Output Database, to reveal inter-sector competitive/collaborative status during the economic globalization. PMID:28873432

  14. Robust DEA under discrete uncertain data: a case study of Iranian electricity distribution companies

    NASA Astrophysics Data System (ADS)

    Hafezalkotob, Ashkan; Haji-Sami, Elham; Omrani, Hashem

    2015-06-01

    Crisp input and output data are fundamentally indispensable in traditional data envelopment analysis (DEA). However, real-world problems often deal with imprecise or ambiguous data. In this paper, we propose a novel robust data envelopment analysis (RDEA) model to investigate the efficiencies of decision-making units (DMUs) when there are discrete uncertain input and output data. The method is based upon the discrete robust optimization approaches proposed by Mulvey et al. (1995), which utilize probable scenarios to capture the effect of ambiguous data in the case study. Our primary concern in this research is evaluating electricity distribution companies under uncertainty about input/output data. To illustrate the ability of the proposed model, a numerical example of 38 Iranian electricity distribution companies is investigated. There is a large amount of ambiguous data about these companies, and some electricity distribution companies may not report clear and accurate statistics to the government. Thus, a robust approach is needed to deal with this uncertainty. The results reveal that the RDEA model is suitable and reliable for target setting based on decision makers' (DMs') preferences when there are uncertain input/output data.

  15. Analysis of inter-country input-output table based on citation network: How to measure the competition and collaboration between industrial sectors on the global value chain.

    PubMed

    Xing, Lizhi

    2017-01-01

    The input-output table is comprehensive and detailed in describing the national economic system with complex economic relationships, which embodies information of supply and demand among industrial sectors. This paper aims to scale the degree of competition/collaboration on the global value chain from the perspective of econophysics. Global Industrial Strongest Relevant Network models were established by extracting the strongest and most immediate industrial relevance in the global economic system with inter-country input-output tables and then transformed into Global Industrial Resource Competition Network/Global Industrial Production Collaboration Network models embodying the competitive/collaborative relationships based on bibliographic coupling/co-citation approach. Three indicators well suited for these two kinds of weighted and non-directed networks with self-loops were introduced, including unit weight for competitive/collaborative power, disparity in the weight for competitive/collaborative amplitude and weighted clustering coefficient for competitive/collaborative intensity. Finally, these models and indicators were further applied to empirically analyze the function of sectors in the latest World Input-Output Database, to reveal inter-sector competitive/collaborative status during the economic globalization.

  16. Neural network system for purposeful behavior based on foveal visual preprocessor

    NASA Astrophysics Data System (ADS)

    Golovan, Alexander V.; Shevtsova, Natalia A.; Klepatch, Arkadi A.

    1996-10-01

    A biologically plausible model of a system with adaptive behavior in an a priori environment and resistance to impairment has been developed. The system consists of input, learning, and output subsystems. The first subsystem classifies input patterns, presented as n-dimensional vectors, in accordance with an associative rule. The second one, a neural network, determines adaptive responses of the system to input patterns. Arranged neural groups coding possible input patterns and appropriate output responses are formed during learning by means of negative reinforcement. The output subsystem maps neural network activity into the system's behavior in the environment. The system developed has been studied by computer simulation imitating the collision-free motion of a mobile robot. After some learning period the system 'moves' along a road without collisions. It is shown that, in spite of impairment of some neural network elements, the system functions reliably after relearning. A foveal visual preprocessor model developed earlier has been tested to form a kind of visual input to the system.

  17. Alternative to Ritt's pseudodivision for finding the input-output equations of multi-output models.

    PubMed

    Meshkat, Nicolette; Anderson, Chris; DiStefano, Joseph J

    2012-09-01

    Differential algebra approaches to structural identifiability analysis of a dynamic system model in many instances heavily depend upon Ritt's pseudodivision at an early step in analysis. The pseudodivision algorithm is used to find the characteristic set, of which a subset, the input-output equations, is used for identifiability analysis. A simpler algorithm is proposed for this step, using Gröbner Bases, along with a proof of the method that includes a reduced upper bound on derivative requirements. Efficacy of the new algorithm is illustrated with several biosystem model examples. Copyright © 2012 Elsevier Inc. All rights reserved.

  18. Alpha 2 LASSO Data Bundles

    DOE Data Explorer

    Gustafson, William Jr; Vogelmann, Andrew; Endo, Satoshi; Toto, Tami; Xiao, Heng; Li, Zhijin; Cheng, Xiaoping; Kim, Jinwon; Krishna, Bhargavi

    2015-08-31

    The Alpha 2 release is the second release from the LASSO Pilot Phase that builds upon the Alpha 1 release. Alpha 2 contains additional diagnostics in the data bundles and focuses on cases from spring-summer 2016. A data bundle is a unified package consisting of LASSO LES input and output, observations, evaluation diagnostics, and model skill scores. LES input include model configuration information and forcing data. LES output includes profile statistics and full domain fields of cloud and environmental variables. Model evaluation data consists of LES output and ARM observations co-registered on the same grid and sampling frequency. Model performance is quantified by skill scores and diagnostics in terms of cloud and environmental variables.

  19. A computer program for simulating geohydrologic systems in three dimensions

    USGS Publications Warehouse

    Posson, D.R.; Hearne, G.A.; Tracy, J.V.; Frenzel, P.F.

    1980-01-01

    This document is directed toward individuals who wish to use a computer program to simulate ground-water flow in three dimensions. The strongly implicit procedure (SIP) numerical method is used to solve the set of simultaneous equations. New data processing techniques and program input and output options are emphasized. The aquifer system to be modeled may be heterogeneous and anisotropic, and may include both artesian and water-table conditions. Systems which consist of well-defined alternating layers of highly permeable and poorly permeable material may be represented by a sequence of equations for two-dimensional flow in each of the highly permeable units. Boundaries where head or flux is user-specified may be irregularly shaped. The program also allows the user to represent streams as limited-source boundaries when the streamflow is small in relation to the hydraulic stress on the system. The data-processing techniques relating to 'cube' input and output, to swapping of layers, to restarting of simulation, to free-format NAMELIST input, to the details of each subroutine's logic, and to the overlay program structure are discussed. The program is capable of processing large models that might overflow computer memories with conventional programs. Detailed instructions for selecting program options, for initializing the data arrays, for defining 'cube' output lists and maps, and for plotting hydrographs of calculated and observed heads and/or drawdowns are provided. Output may be restricted to those nodes of particular interest, thereby reducing the volumes of printout for modelers, which may be critical when working at remote terminals. 'Cube' input commands allow the modeler to set aquifer parameters and initialize the model with very few input records. Appendixes provide instructions to compile the program, definitions and cross-references for program variables, a summary of the FLECS structured FORTRAN programming language, listings of the FLECS and FORTRAN source code, and samples of input and output for example simulations. (USGS)

  20. Model Based Predictive Control of Multivariable Hammerstein Processes with Fuzzy Logic Hypercube Interpolated Models

    PubMed Central

    Coelho, Antonio Augusto Rodrigues

    2016-01-01

    This paper introduces the Fuzzy Logic Hypercube Interpolator (FLHI) and demonstrates applications in control of multiple-input single-output (MISO) and multiple-input multiple-output (MIMO) processes with Hammerstein nonlinearities. FLHI consists of a Takagi-Sugeno fuzzy inference system where membership functions act as kernel functions of an interpolator. Conjunction of membership functions in a unitary hypercube space enables N-dimensional multivariable interpolation. Membership functions act as interpolation kernels, such that the choice of membership functions determines interpolation characteristics, allowing FLHI to behave as a nearest-neighbor, linear, cubic, spline or Lanczos interpolator, to name a few. The proposed interpolator is presented as a solution to the modeling problem of static nonlinearities since it is capable of modeling both a function and its inverse function. Three study cases from the literature are presented: a single-input single-output (SISO) system, a MISO and a MIMO system. Good results are obtained regarding performance metrics such as set-point tracking, control variation and robustness. Results demonstrate applicability of the proposed method in modeling Hammerstein nonlinearities and their inverse functions for implementation of an output compensator with Model Based Predictive Control (MBPC), in particular Dynamic Matrix Control (DMC). PMID:27657723

  1. The application of Global Sensitivity Analysis to quantify the dominant input factors for hydraulic model simulations

    NASA Astrophysics Data System (ADS)

    Savage, James; Pianosi, Francesca; Bates, Paul; Freer, Jim; Wagener, Thorsten

    2015-04-01

    Predicting flood inundation extents using hydraulic models is subject to a number of critical uncertainties. For a specific event, these uncertainties are known to have a large influence on model outputs and any subsequent analyses made by risk managers. Hydraulic modellers often approach such problems by applying uncertainty analysis techniques such as the Generalised Likelihood Uncertainty Estimation (GLUE) methodology. However, these methods do not allow one to attribute which source of uncertainty has the most influence on the various model outputs that inform flood risk decision making. Another issue facing modellers is the amount of computational resource that is available to spend on modelling flood inundations that are 'fit for purpose' to the modelling objectives. Therefore a balance needs to be struck between computation time, realism and spatial resolution, and effectively characterising the uncertainty spread of predictions (for example from boundary conditions and model parameterisations). However, it is not fully understood how much of an impact each factor has on model performance, for example how much influence changing the spatial resolution of a model has on inundation predictions in comparison to other uncertainties inherent in the modelling process. Furthermore, when resampling fine scale topographic data in the form of a Digital Elevation Model (DEM) to coarser resolutions, there are a number of possible coarser DEMs that can be produced. Deciding which DEM is then chosen to represent the surface elevations in the model could also influence model performance. In this study we model a flood event using the hydraulic model LISFLOOD-FP and apply Sobol' Sensitivity Analysis to estimate which input factor, among the uncertainty in model boundary conditions, uncertain model parameters, the spatial resolution of the DEM, and the choice of resampled DEM, has the most influence on a range of model outputs. These outputs include whole domain maximum inundation indicators and flood wave travel time in addition to temporally and spatially variable indicators. This enables us to assess whether the sensitivity of the model to various input factors is stationary in both time and space. Furthermore, competing models are assessed against observations of water depths from a historical flood event. Consequently we are able to determine which of the input factors has the most influence on model performance. Initial findings suggest the sensitivity of the model to different input factors varies depending on the type of model output assessed and at what stage during the flood hydrograph the model output is assessed. We have also found that initial decisions regarding the characterisation of the input factors, for example defining the upper and lower bounds of the parameter sample space, can be significant in influencing the implied sensitivities.

  2. Regularized maximum pure-state input-output fidelity of a quantum channel

    NASA Astrophysics Data System (ADS)

    Ernst, Moritz F.; Klesse, Rochus

    2017-12-01

    As a toy model for the capacity problem in quantum information theory we investigate finite and asymptotic regularizations of the maximum pure-state input-output fidelity F(N) of a general quantum channel N. We show that the asymptotic regularization F̃(N) is lower bounded by the maximum output ∞-norm ν∞(N) of the channel. For N being a Pauli channel, we find that both quantities are equal.

  3. Blind Deconvolution for Distributed Parameter Systems with Unbounded Input and Output and Determining Blood Alcohol Concentration from Transdermal Biosensor Data.

    PubMed

    Rosen, I G; Luczak, Susan E; Weiss, Jordan

    2014-03-15

    We develop a blind deconvolution scheme for input-output systems described by distributed parameter systems with boundary input and output. An abstract functional analytic theory based on results for the linear quadratic control of infinite dimensional systems with unbounded input and output operators is presented. The blind deconvolution problem is then reformulated as a series of constrained linear and nonlinear optimization problems involving infinite dimensional dynamical systems. A finite dimensional approximation and convergence theory is developed. The theory is applied to the problem of estimating blood or breath alcohol concentration (respectively, BAC or BrAC) from biosensor-measured transdermal alcohol concentration (TAC) in the field. A distributed parameter model with boundary input and output is proposed for the transdermal transport of ethanol from the blood through the skin to the sensor. The problem of estimating BAC or BrAC from the TAC data is formulated as a blind deconvolution problem. A scheme to identify distinct drinking episodes in TAC data based on a Hodrick-Prescott filter is discussed. Numerical results involving actual patient data are presented.
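
    The episode-screening step mentioned above can be sketched with the Hodrick-Prescott filter available in statsmodels; the synthetic TAC trace, smoothing parameter and threshold below are illustrative assumptions rather than values from the study.

```python
import numpy as np
from statsmodels.tsa.filters.hp_filter import hpfilter

# Synthetic transdermal alcohol concentration (TAC) trace with two episodes
t = np.arange(0, 48, 0.25)                        # hours, 15-minute samples
tac = (np.exp(-((t - 10) / 2.5) ** 2) + 0.7 * np.exp(-((t - 30) / 3.0) ** 2)
       + 0.02 * np.random.default_rng(0).standard_normal(t.size))

cycle, trend = hpfilter(tac, lamb=400)            # smoothing parameter is illustrative
episodes = trend > 0.1                            # flag samples belonging to an episode
print("fraction of samples flagged:", episodes.mean().round(3))
```

    The smooth trend separates slowly varying episode-level structure from sensor noise, so a simple threshold on the trend is enough to segment candidate drinking episodes before the deconvolution step is applied to each one.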

  4. Networked iterative learning control design for discrete-time systems with stochastic communication delay in input and output channels

    NASA Astrophysics Data System (ADS)

    Liu, Jian; Ruan, Xiaoe

    2017-07-01

    This paper develops two kinds of derivative-type networked iterative learning control (NILC) schemes for repetitive discrete-time systems with stochastic communication delays that occur in the input and output channels and are modelled as 0-1 Bernoulli-type stochastic variables. In both schemes, the delayed signal of the current control input is replaced by the synchronous input utilised at the previous iteration, whilst for the delayed signal of the system output, one scheme substitutes the synchronous predetermined desired trajectory and the other uses the synchronous output from the previous operation. Using mathematical expectation, the tracking performance is analysed, showing that for both linear time-invariant and nonlinear affine systems the two kinds of NILC are convergent under the assumptions that the probabilities of communication delays are adequately constrained and the product of the input-output coupling matrices is of full column rank. Finally, two illustrative examples are presented to demonstrate the effectiveness and validity of the proposed NILC schemes.
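
    A minimal sketch of the first scheme for a scalar plant: a derivative-type ILC update in which output samples affected by a Bernoulli-distributed delay are replaced by the synchronous desired trajectory. The plant, learning gain and delay probability are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
a, b, c = 0.9, 1.0, 1.0                  # simple SISO plant (illustrative)
T, K = 50, 30                            # trial length, number of iterations
yd = np.sin(np.linspace(0, 2 * np.pi, T + 1))   # desired trajectory
gamma, p_delay = 0.5, 0.2                # learning gain, output-delay probability

u = np.zeros(T)
for k in range(K):
    x, y = 0.0, np.zeros(T + 1)
    for t in range(T):                   # run one trial of the plant
        x = a * x + b * u[t]
        y[t + 1] = c * x
    # Scheme 1: a delayed output sample is replaced by the desired trajectory
    delayed = rng.random(T + 1) < p_delay
    y_used = np.where(delayed, yd, y)
    e = yd - y_used
    u = u + gamma * e[1:]                # D-type update using e_k(t+1)

print("final max tracking error:", np.max(np.abs(yd[1:] - y[1:])).round(4))
```

    Because the substituted samples produce zero error, the update simply pauses at those time instants for that iteration; on average the learning still contracts as long as the delay probability and learning gain satisfy the kind of condition stated in the abstract.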

  5. Acute Radiation Risk and BRYNTRN Organ Dose Projection Graphical User Interface

    NASA Technical Reports Server (NTRS)

    Cucinotta, Francis A.; Hu, Shaowen; Nounu, Hateni N.; Kim, Myung-Hee

    2011-01-01

    The integration of human space applications risk projection models of organ dose and acute radiation risk has been a key problem. NASA has developed an organ dose projection model using the BRYNTRN and SUM DOSE computer codes, and a probabilistic model of Acute Radiation Risk (ARR). BRYNTRN is a baryon transport code and SUM DOSE is an output data processing code. The risk projection models of organ doses and ARR take the output from BRYNTRN as an input to their calculations. BRYNTRN code operation requires extensive input preparation, and a graphical user interface (GUI) to handle input and output for BRYNTRN allows the response models to be connected to BRYNTRN easily and correctly. The GUI for the ARR and BRYNTRN Organ Dose (ARRBOD) projection code provides seamless integration of the input and output manipulations required for operation of the ARRBOD modules (BRYNTRN, SLMDOSE, and the ARR probabilistic response model) in assessing the acute risk and the organ doses from significant Solar Particle Events (SPEs). The ARRBOD GUI is intended for mission planners, radiation shield designers, space operations in the mission operations directorate (MOD), and space biophysics researchers. The assessment of astronauts' radiation risk from SPEs supports mission design and operational planning to manage radiation risks in future space missions. The ARRBOD GUI can identify proper shielding solutions using gender-specific organ dose assessments in order to avoid ARR symptoms and to stay within the current NASA short-term dose limits. Quantified evaluation of ARR severity for any given shielding configuration and a specified EVA or other mission scenario can be used to guide alternative solutions for attaining the objectives set by mission planners. The ARRBOD GUI estimates the whole-body effective dose, organ doses, and acute radiation sickness symptoms for astronauts, from which operational strategies and capabilities can be developed for the protection of astronauts from SPEs in the planning of future lunar surface scenarios, exploration of near-Earth objects, and missions to Mars.

  6. Adaptive Identification of Fluid-Dynamic Systems

    DTIC Science & Technology

    2001-06-14

    [Text recovered from figure and equation fragments] Fig. 1 depicts modeling of a SISO unknown system using an adaptive filter: the input u drives both the unknown system (desired output d) and the adaptive filter (output y), and the error is e = d − y. The cost function is J = E[e²(n)] (Eq. 12), where E[·] is the expectation operator and e(n) = d(n) − y(n) is the error between the desired system output and the filter output; the input vector is U(n) = [u(n), u(n−1), …, u(n−N+1)]ᵀ.
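
    A minimal sketch of the adaptive-filter identification setup recovered above: an LMS update that adapts FIR filter weights to minimize J = E[e²(n)] with e(n) = d(n) − y(n). The unknown system, filter length and step size are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 8                                     # adaptive filter length
h_true = 0.5 * rng.standard_normal(N)     # unknown SISO system (FIR, illustrative)
w = np.zeros(N)                           # adaptive filter weights
mu = 0.05                                 # LMS step size

u = rng.standard_normal(5000)             # input signal
for n in range(N, u.size):
    U = u[n - N + 1:n + 1][::-1]          # input vector [u(n), u(n-1), ..., u(n-N+1)]
    d = h_true @ U                        # desired output from the unknown system
    y = w @ U                             # adaptive filter output
    e = d - y                             # error e(n) = d(n) - y(n)
    w = w + 2 * mu * e * U                # LMS update minimizing J = E[e^2(n)]

print("weight error norm:", np.linalg.norm(w - h_true).round(6))
```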

  7. Coordination of heterogeneous nonlinear multi-agent systems with prescribed behaviours

    NASA Astrophysics Data System (ADS)

    Tang, Yutao

    2017-10-01

    In this paper, we consider a coordination problem for a class of heterogeneous nonlinear multi-agent systems with a prescribed input-output behaviour that is represented by another input-driven system. In contrast to most existing multi-agent coordination results with an autonomous (virtual) leader, this formulation takes possible control inputs of the leader into consideration. First, coordination is achieved by utilising a group of distributed observers under conventional assumptions of the model matching problem. Then, a fully distributed adaptive extension is proposed that does not use the input of this input-output behaviour. An example is given to verify the effectiveness of both designs.

  8. Optimized System Identification

    NASA Technical Reports Server (NTRS)

    Juang, Jer-Nan; Longman, Richard W.

    1999-01-01

    In system identification, one usually cares most about finding a model whose outputs are as close as possible to the true system outputs when the same input is applied to both. However, most system identification algorithms do not minimize this output error. Often they minimize model equation error instead, as in typical least-squares fits using a finite-difference model, and it is seen here that this distinction is significant. Here, we develop a set of system identification algorithms that minimize output error for multi-input/multi-output and multi-input/single-output systems. This is done with sequential quadratic programming iterations on the nonlinear least-squares problems, with an eigendecomposition to handle indefinite second partials. This optimization minimizes a nonlinear function of many variables, and hence can converge to local minima. To handle this problem, we start the iterations from the OKID (Observer/Kalman Identification) algorithm result. Not only has OKID proved very effective in practice, it minimizes the output error of an observer which has the property that, as the data set gets large, it converges to minimizing the criterion of interest here. Hence, it is a particularly good starting point for the nonlinear iterations. Examples show that the methods developed here eliminate the bias, often observed with other system identification methods, of either over-estimating or under-estimating the damping of vibration modes in lightly damped structures.

  9. The economic effect of a physician assistant or nurse practitioner in rural America.

    PubMed

    Eilrich, Fred C

    2016-10-01

    Revenues generated by physician assistants (PAs) and NPs in clinics and hospitals create employment opportunities and wages, salaries, and benefits for staff, which in turn are circulated throughout the local economy. An input-output model was used to estimate the direct and secondary effects of a rural primary care PA or NP on the community and surrounding area. This type of model explains how input/output from one sector of industry can be the output/input for another sector. Given two example scenarios, a rural PA or NP can have an employment effect of 4.4 local jobs and labor income of $280,476 from the clinic. The total effect to a community with a hospital increases to 18.5 local jobs and $940,892 of labor income.
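
    The direct and secondary (multiplier) effects described above follow the standard Leontief input-output calculation, sketched below with a purely illustrative three-sector technical coefficient matrix and demand change; none of the numbers are from the study.

```python
import numpy as np

# Illustrative 3-sector technical coefficient matrix A (not actual data)
A = np.array([
    [0.10, 0.05, 0.02],   # health care
    [0.15, 0.20, 0.10],   # retail / services
    [0.05, 0.10, 0.15],   # all other local industry
])
leontief_inverse = np.linalg.inv(np.eye(3) - A)

# Direct final-demand change from a new rural clinic (illustrative, $ thousands)
delta_demand = np.array([500.0, 0.0, 0.0])
total_output_change = leontief_inverse @ delta_demand   # direct + secondary effects
print("output change by sector:", total_output_change.round(1))
print("output multiplier for the clinic sector:",
      (total_output_change.sum() / delta_demand.sum()).round(2))
```

    The same x = (I − A)⁻¹ Δd calculation, applied to employment and labor-income coefficients instead of gross output, is what produces figures like the 4.4 local jobs and $280,476 of labor income reported above.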

  10. Nonlinear Modeling of Causal Interrelationships in Neuronal Ensembles

    PubMed Central

    Zanos, Theodoros P.; Courellis, Spiros H.; Berger, Theodore W.; Hampson, Robert E.; Deadwyler, Sam A.; Marmarelis, Vasilis Z.

    2009-01-01

    The increasing availability of multiunit recordings gives new urgency to the need for effective analysis of “multidimensional” time-series data that are derived from the recorded activity of neuronal ensembles in the form of multiple sequences of action potentials—treated mathematically as point-processes and computationally as spike-trains. Whether in conditions of spontaneous activity or under conditions of external stimulation, the objective is the identification and quantification of possible causal links among the neurons generating the observed binary signals. A multiple-input/multiple-output (MIMO) modeling methodology is presented that can be used to quantify the neuronal dynamics of causal interrelationships in neuronal ensembles using spike-train data recorded from individual neurons. These causal interrelationships are modeled as transformations of spike-trains recorded from a set of neurons designated as the “inputs” into spike-trains recorded from another set of neurons designated as the “outputs.” The MIMO model is composed of a set of multi-input/single-output (MISO) modules, one for each output. Each module is the cascade of a MISO Volterra model and a threshold operator generating the output spikes. The Laguerre expansion approach is used to estimate the Volterra kernels of each MISO module from the respective input–output data using the least-squares method. The predictive performance of the model is evaluated with the use of the receiver operating characteristic (ROC) curve, from which the optimum threshold is also selected. The Mann–Whitney statistic is used to select the significant inputs for each output by examining the statistical significance of improvements in the predictive accuracy of the model when the respective input is included. Illustrative examples are presented for a simulated system and for an actual application using multiunit data recordings from the hippocampus of a behaving rat. PMID:18701382

  11. State-Space System Realization with Input- and Output-Data Correlation

    NASA Technical Reports Server (NTRS)

    Juang, Jer-Nan

    1997-01-01

    This paper introduces a general version of the information matrix consisting of the autocorrelation and cross-correlation matrices of the shifted input and output data. Based on the concept of data correlation, a new system realization algorithm is developed to create a model directly from input and output data. The algorithm starts by computing a special type of correlation matrix derived from the information matrix. The special correlation matrix provides information on the system-observability matrix and the state-vector correlation. A system model is then developed from the observability matrix in conjunction with other algebraic manipulations. This approach leads to several different algorithms for computing system matrices for use in representing the system model. The relationship of the new algorithms with other realization algorithms in the time and frequency domains is established with matrix factorization of the information matrix. Several examples are given to illustrate the validity and usefulness of these new algorithms.

  12. Development of a Distributed Parallel Computing Framework to Facilitate Regional/Global Gridded Crop Modeling with Various Scenarios

    NASA Astrophysics Data System (ADS)

    Jang, W.; Engda, T. A.; Neff, J. C.; Herrick, J.

    2017-12-01

    Many crop models are increasingly used to evaluate crop yields at regional and global scales. However, implementation of these models across large areas using fine-scale grids is limited by computational time requirements. In order to facilitate global gridded crop modeling with various scenarios (i.e., different crop, management schedule, fertilizer, and irrigation) using the Environmental Policy Integrated Climate (EPIC) model, we developed a distributed parallel computing framework in Python. Our local desktop with 14 cores (28 threads) was used to test the distributed parallel computing framework in Iringa, Tanzania, which has 406,839 grid cells. High-resolution soil data, SoilGrids (250 x 250 m), and climate data, AgMERRA (0.25 x 0.25 deg), were also used as input data for the gridded EPIC model. The framework includes a master file for parallel computing, an input database, input data formatters, EPIC model execution, and output analyzers. Through the master file for parallel computing, the user-defined number of CPU threads divides the EPIC simulation into jobs. Using the EPIC input data formatters, the raw database is formatted into EPIC input data, and the formatted data moves into the EPIC simulation jobs. Next, 28 EPIC jobs run simultaneously and only the result files of interest are parsed and moved into the output analyzers. We applied various scenarios with seven different slopes and twenty-four fertilizer ranges. Parallelized input generators create the different scenarios as a list for distributed parallel computing. After all simulations are completed, parallelized output analyzers are used to analyze all outputs according to the different scenarios. This saves significant computing time and resources, making it possible to conduct gridded modeling at regional to global scales with high-resolution data. For example, serial processing for the Iringa test case would require 113 hours, while using the framework developed in this study requires only approximately 6 hours, a nearly 95% reduction in computing time.
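
    A minimal sketch of the job-distribution idea in the framework described above, using Python's multiprocessing module; the placeholder function stands in for formatting inputs, running EPIC for one grid cell and scenario, and parsing its outputs, and the cell and scenario lists are illustrative.

```python
from multiprocessing import Pool
import itertools

def run_epic_job(job):
    """Placeholder for formatting inputs, running EPIC for one grid cell / scenario,
    and parsing the output files of interest."""
    cell_id, slope, fertilizer = job
    simulated_yield = 2.0 + 0.01 * fertilizer - 0.05 * slope   # stand-in result
    return cell_id, slope, fertilizer, simulated_yield

if __name__ == "__main__":
    cells = range(1000)                                # subset of the 406,839 grid cells
    slopes = [0, 2, 5, 8, 12, 16, 30]                  # seven slope scenarios (illustrative)
    fertilizer = range(0, 240, 10)                     # twenty-four fertilizer rates (illustrative)
    jobs = list(itertools.product(cells, slopes, fertilizer))

    with Pool(processes=28) as pool:                   # one worker per available thread
        results = pool.map(run_epic_job, jobs, chunksize=200)
    print("completed", len(results), "EPIC simulations")
```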

  13. A controls engineering approach for analyzing airplane input-output characteristics

    NASA Technical Reports Server (NTRS)

    Arbuckle, P. Douglas

    1991-01-01

    An engineering approach for analyzing airplane control and output characteristics is presented. State-space matrix equations describing the linear perturbation dynamics are transformed from physical coordinates into scaled coordinates. The scaling is accomplished by applying various transformations to the system to employ prior engineering knowledge of the airplane physics. Two different analysis techniques are then explained. Modal analysis techniques calculate the influence of each system input on each fundamental mode of motion and the distribution of each mode among the system outputs. The optimal steady state response technique computes the blending of steady state control inputs that optimize the steady state response of selected system outputs. Analysis of an example airplane model is presented to demonstrate the described engineering approach.
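
    The modal analysis step described above can be sketched with an eigendecomposition of the state matrix: the left eigenvectors weight how strongly each input excites each mode, and projecting the output matrix onto the right eigenvectors shows how each mode is distributed among the outputs. The state-space matrices below are illustrative, not the airplane model from the paper.

```python
import numpy as np

# Illustrative lateral-directional style state-space model (not the paper's data)
A = np.array([[-0.5,  1.0,  0.0],
              [-2.0, -0.8,  0.5],
              [ 0.0, -0.3, -1.5]])
B = np.array([[0.0, 0.2],
              [1.0, 0.0],
              [0.1, 0.8]])          # two control inputs
C = np.array([[1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0]])     # two measured outputs

eigvals, V = np.linalg.eig(A)       # right eigenvectors: mode shapes
W = np.linalg.inv(V)                # rows are left eigenvectors

mode_input_influence = np.abs(W @ B)   # rows: modes, cols: inputs
mode_output_content = np.abs(C @ V)    # rows: outputs, cols: modes
print("eigenvalues:", np.round(eigvals, 3))
print("input -> mode influence:\n", np.round(mode_input_influence, 3))
print("mode -> output distribution:\n", np.round(mode_output_content, 3))
```

    In the paper's approach these matrices would first be scaled into physically meaningful coordinates, so that the magnitudes in the two tables can be compared directly across inputs and outputs.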

  14. Feedforward Inhibition Allows Input Summation to Vary in Recurrent Cortical Networks

    PubMed Central

    2018-01-01

    Abstract Brain computations depend on how neurons transform inputs to spike outputs. Here, to understand input-output transformations in cortical networks, we recorded spiking responses from visual cortex (V1) of awake mice of either sex while pairing sensory stimuli with optogenetic perturbation of excitatory and parvalbumin-positive inhibitory neurons. We found that V1 neurons’ average responses were primarily additive (linear). We used a recurrent cortical network model to determine whether these data, as well as past observations of nonlinearity, could be described by a common circuit architecture. Simulations showed that cortical input-output transformations can be changed from linear to sublinear with moderate (∼20%) strengthening of connections between inhibitory neurons, but this change away from linear scaling depends on the presence of feedforward inhibition. Simulating a variety of recurrent connection strengths showed that, compared with when input arrives only to excitatory neurons, networks produce a wider range of output spiking responses in the presence of feedforward inhibition. PMID:29682603

  15. Input-output characterization of fiber reinforced composites by P waves

    NASA Technical Reports Server (NTRS)

    Renneisen, John D.; Williams, James H., Jr.

    1990-01-01

    Input-output characterization of fiber composites is studied theoretically by tracing P waves in the media. A new path motion to aid in the tracing of P and the reflection generated SV wave paths in the continuum plate is developed. A theoretical output voltage from the receiving transducer is calculated for a tone burst. The study enhances the quantitative and qualitative understanding of the nondestructive evaluation of fiber composites which can be modeled as transversely isotropic media.

  16. Functional expansion representations of artificial neural networks

    NASA Technical Reports Server (NTRS)

    Gray, W. Steven

    1992-01-01

    In the past few years, significant interest has developed in using artificial neural networks to model and control nonlinear dynamical systems. While there exist many proposed schemes for accomplishing this and a wealth of supporting empirical results, most approaches to date tend to be ad hoc in nature and rely mainly on heuristic justifications. The purpose of this project was to further develop some analytical tools for representing nonlinear discrete-time input-output systems, which when applied to neural networks would give insight on architecture selection, pruning strategies, and learning algorithms. A long term goal is to determine in what sense, if any, a neural network can be used as a universal approximator for nonlinear input-output maps with memory (i.e., realized by a dynamical system). This property is well known for the case of static or memoryless input-output maps. The general architecture under consideration in this project was a single-input, single-output recurrent feedforward network.

  17. Emulating RRTMG Radiation with Deep Neural Networks for the Accelerated Model for Climate and Energy

    NASA Astrophysics Data System (ADS)

    Pal, A.; Norman, M. R.

    2017-12-01

    The RRTMG radiation scheme in the Accelerated Model for Climate and Energy Multi-scale Model Framework (ACME-MMF) is a bottleneck and consumes approximately 50% of the computational time. Simulating a case with the RRTMG radiation scheme in ACME-MMF at high throughput and high resolution will therefore require a speed-up of this calculation while retaining physical fidelity. In this study, RRTMG radiation is emulated with Deep Neural Networks (DNNs). The first step towards this goal is to run a case with ACME-MMF and generate input data sets for the DNNs. A principal component analysis of these input data sets is carried out. Artificial data sets are created from the previous data sets to cover a wider input space. These artificial data sets are used in a standalone RRTMG radiation scheme to generate outputs in a cost-effective manner. The resulting input-output pairs are used to train DNNs with multiple architectures (DNN 1). Another DNN (DNN 2) is trained on the inputs to predict the error. A reverse emulation is trained to map the output back to the input. An error-controlled code is developed with the two DNNs and determines when/if the original parameterization needs to be used.

  18. Model predictive control of the solid oxide fuel cell stack temperature with models based on experimental data

    NASA Astrophysics Data System (ADS)

    Pohjoranta, Antti; Halinen, Matias; Pennanen, Jari; Kiviaho, Jari

    2015-03-01

    Generalized predictive control (GPC) is applied to control the maximum temperature in a solid oxide fuel cell (SOFC) stack and the temperature difference over the stack. GPC is a model predictive control method and the models utilized in this work are ARX-type (autoregressive with extra input), multiple input-multiple output, polynomial models that were identified from experimental data obtained from experiments with a complete SOFC system. The proposed control is evaluated by simulation with various input-output combinations, with and without constraints. A comparison with conventional proportional-integral-derivative (PID) control is also made. It is shown that if only the stack maximum temperature is controlled, a standard PID controller can be used to obtain output performance comparable to that obtained with the significantly more complex model predictive controller. However, in order to control the temperature difference over the stack, both the stack minimum and the maximum temperature need to be controlled and this cannot be done with a single PID controller. In such a case the model predictive controller provides a feasible and effective solution.
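
    A minimal sketch of identifying an ARX-type model from experimental input-output data by linear least squares, the kind of model the predictive controller above relies on; the first-order plant and noise level are illustrative assumptions, not SOFC data.

```python
import numpy as np

rng = np.random.default_rng(2)
# Simulate data from an illustrative first-order ARX process:
# y(t) = a*y(t-1) + b*u(t-1) + noise
a_true, b_true = 0.85, 0.4
u = rng.standard_normal(500)
y = np.zeros(500)
for t in range(1, 500):
    y[t] = a_true * y[t - 1] + b_true * u[t - 1] + 0.01 * rng.standard_normal()

# Least-squares ARX fit: regress y(t) on [y(t-1), u(t-1)]
Phi = np.column_stack([y[:-1], u[:-1]])
theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
print("estimated (a, b):", np.round(theta, 3))   # close to (0.85, 0.4)
```

    The multiple-input, multiple-output polynomial models used in the paper are identified the same way, with the regressor matrix extended to include lagged values of every input and output.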

  19. The reservoir model: a differential equation model of psychological regulation.

    PubMed

    Deboeck, Pascal R; Bergeman, C S

    2013-06-01

    Differential equation models can be used to describe the relationships between the current state of a system of constructs (e.g., stress) and how those constructs are changing (e.g., based on variable-like experiences). The following article describes a differential equation model based on the concept of a reservoir. With a physical reservoir, such as one for water, the level of the liquid in the reservoir at any time depends on the contributions to the reservoir (inputs) and the amount of liquid removed from the reservoir (outputs). This reservoir model might be useful for constructs such as stress, where events might "add up" over time (e.g., life stressors, inputs), but individuals simultaneously take action to "blow off steam" (e.g., engage coping resources, outputs). The reservoir model can provide descriptive statistics of the inputs that contribute to the "height" (level) of a construct and a parameter that describes a person's ability to dissipate the construct. After discussing the model, we describe a method of fitting the model as a structural equation model using latent differential equation modeling and latent distribution modeling. A simulation study is presented to examine recovery of the input distribution and output parameter. The model is then applied to the daily self-reports of negative affect and stress from a sample of older adults from the Notre Dame Longitudinal Study on Aging. (PsycINFO Database Record (c) 2013 APA, all rights reserved).
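
    A minimal sketch of the reservoir idea, assuming the simple form dx/dt = input(t) − γ·x, where inputs accumulate and γ describes the person's ability to dissipate the construct; the input distribution and parameter value are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
days = 100
gamma = 0.3                                       # dissipation ("blowing off steam") parameter
inputs = rng.exponential(scale=0.5, size=days)    # daily stressor inputs (illustrative)

level = np.zeros(days)                            # reservoir height, e.g. perceived stress
for t in range(1, days):
    # Euler step of dx/dt = input(t) - gamma * x
    level[t] = level[t - 1] + inputs[t] - gamma * level[t - 1]

print("mean level:", level.mean().round(2),
      "| long-run expectation ~ mean input / gamma =", (inputs.mean() / gamma).round(2))
```

    Fitting the model to data, as described above, amounts to estimating the input distribution and the dissipation parameter γ from observed trajectories rather than simulating them forward.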

  20. The Reservoir Model: A Differential Equation Model of Psychological Regulation

    PubMed Central

    Deboeck, Pascal R.; Bergeman, C. S.

    2017-01-01

    Differential equation models can be used to describe the relationships between the current state of a system of constructs (e.g., stress) and how those constructs are changing (e.g., based on variable-like experiences). The following article describes a differential equation model based on the concept of a reservoir. With a physical reservoir, such as one for water, the level of the liquid in the reservoir at any time depends on the contributions to the reservoir (inputs) and the amount of liquid removed from the reservoir (outputs). This reservoir model might be useful for constructs such as stress, where events might “add up” over time (e.g., life stressors, inputs), but individuals simultaneously take action to “blow off steam” (e.g., engage coping resources, outputs). The reservoir model can provide descriptive statistics of the inputs that contribute to the “height” (level) of a construct and a parameter that describes a person's ability to dissipate the construct. After discussing the model, we describe a method of fitting the model as a structural equation model using latent differential equation modeling and latent distribution modeling. A simulation study is presented to examine recovery of the input distribution and output parameter. The model is then applied to the daily self-reports of negative affect and stress from a sample of older adults from the Notre Dame Longitudinal Study on Aging. PMID:23527605

  1. The Relationship between Creative Personality Composition, Innovative Team Climate, and Team Innovativeness: An Input-Process-Output Perspective

    ERIC Educational Resources Information Center

    Mathisen, Gro Ellen; Martinsen, Oyvind; Einarsen, Stale

    2008-01-01

    This study investigates the relationship between creative personality composition, innovative team climate, and team innovation based on an input-process-output model. We measured personality with the Creative Person Profile, team climate with the Team Climate Inventory, and team innovation through team-member and supervisor reports of team…

  2. INDES User's guide multistep input design with nonlinear rotorcraft modeling

    NASA Technical Reports Server (NTRS)

    1979-01-01

    The INDES computer program, a multistep input design program used as part of a data processing technique for rotorcraft systems identification, is described. Flight test inputs based on INDES improve the accuracy of parameter estimates. The input design algorithm, program input, and program output are presented.

  3. Active Management of Integrated Geothermal-CO2 Storage Reservoirs in Sedimentary Formations

    DOE Data Explorer

    Buscheck, Thomas A.

    2012-01-01

    Active Management of Integrated Geothermal–CO2 Storage Reservoirs in Sedimentary Formations: An Approach to Improve Energy Recovery and Mitigate Risk : FY1 Final Report The purpose of phase 1 is to determine the feasibility of integrating geologic CO2 storage (GCS) with geothermal energy production. Phase 1 includes reservoir analyses to determine injector/producer well schemes that balance the generation of economically useful flow rates at the producers with the need to manage reservoir overpressure to reduce the risks associated with overpressure, such as induced seismicity and CO2 leakage to overlying aquifers. This submittal contains input and output files of the reservoir model analyses. A reservoir-model "index-html" file was sent in a previous submittal to organize the reservoir-model input and output files according to sections of the FY1 Final Report to which they pertain. The recipient should save the file: Reservoir-models-inputs-outputs-index.html in the same directory that the files: Section2.1.*.tar.gz files are saved in.

  4. Active Management of Integrated Geothermal-CO2 Storage Reservoirs in Sedimentary Formations

    DOE Data Explorer

    Buscheck, Thomas A.

    2000-01-01

    Active Management of Integrated Geothermal–CO2 Storage Reservoirs in Sedimentary Formations: An Approach to Improve Energy Recovery and Mitigate Risk: FY1 Final Report The purpose of phase 1 is to determine the feasibility of integrating geologic CO2 storage (GCS) with geothermal energy production. Phase 1 includes reservoir analyses to determine injector/producer well schemes that balance the generation of economically useful flow rates at the producers with the need to manage reservoir overpressure to reduce the risks associated with overpressure, such as induced seismicity and CO2 leakage to overlying aquifers. This submittal contains input and output files of the reservoir model analyses. A reservoir-model "index-html" file was sent in a previous submittal to organize the reservoir-model input and output files according to sections of the FY1 Final Report to which they pertain. The recipient should save the file: Reservoir-models-inputs-outputs-index.html in the same directory that the files: Section2.1.*.tar.gz files are saved in.

  5. Dynamic modeling and parameter estimation of a radial and loop type distribution system network

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jun Qui; Heng Chen; Girgis, A.A.

    1993-05-01

    This paper presents a new identification approach to three-phase power system modeling and model reduction that treats the power system network as a multi-input, multi-output (MIMO) process. The model estimate can be obtained in discrete-time input-output form, discrete- or continuous-time state-space variable form, or frequency-domain impedance transfer function matrix form. An algorithm for determining the model structure of this MIMO process is described. The effect of measurement noise on the approach is also discussed. The approach has been applied to a sample system, and simulation results are also presented in this paper.

  6. Adaptive Control via Neural Output Feedback for a Class of Nonlinear Discrete-Time Systems in a Nested Interconnected Form.

    PubMed

    Li, Dong-Juan; Li, Da-Peng

    2017-09-14

    In this paper, an adaptive output feedback control is framed for uncertain nonlinear discrete-time systems. The considered systems are a class of multi-input multi-output nonaffine nonlinear systems, and they are in the nested lower triangular form. Furthermore, the unknown dead-zone inputs are nonlinearly embedded into the systems. These properties of the systems make it very difficult and challenging to construct a stable controller. By introducing a new diffeomorphism coordinate transformation, the controlled system is first transformed into a state-output model. By introducing a group of new variables, an input-output model is finally obtained. Based on the transformed model, the implicit function theorem is used to determine the existence of the ideal controllers, and approximators are employed to approximate the ideal controllers. By using the mean value theorem, the nonaffine functions of the systems can be given an affine structure, although nonaffine terms still exist. The adaptation auxiliary terms are skillfully designed to cancel the effect of the dead-zone input. Based on the Lyapunov difference theorem, the boundedness of all the signals in the closed-loop system can be ensured and the tracking errors are kept in a bounded compact set. The effectiveness of the proposed technique is checked by a simulation study.

  7. Optimum systems design with random input and output applied to solar water heating

    NASA Astrophysics Data System (ADS)

    Abdel-Malek, L. L.

    1980-03-01

    Solar water heating systems are evaluated. Models were developed to estimate the percentage of energy supplied from the Sun to a household. Since solar water heating systems have random input and output, queueing theory and birth-and-death processes were the major tools in developing the evaluation models. Microeconomic methods help in determining the optimum values of the solar water heating system design parameters, i.e., the water tank volume and the collector area.

  8. Fuzzy rule based estimation of agricultural diffuse pollution concentration in streams.

    PubMed

    Singh, Raj Mohan

    2008-04-01

    Outflow from agricultural fields carries diffuse pollutants such as nutrients, pesticides, and herbicides and transports them into nearby streams. This is a matter of serious concern for water managers and environmental researchers. The application of chemicals in agricultural fields and the transport of these chemicals into streams are both uncertain, which complicates reliable stream quality prediction. The characteristics of the applied chemical and the percentage of area under chemical application are some of the main inputs that determine the resulting pollution concentration in streams. Each of these inputs and outputs may contain measurement errors. A fuzzy rule-based model built on fuzzy sets is well suited to addressing uncertainties in the inputs by incorporating overlapping membership functions for each input, even in situations of limited data availability. In this study, the ability of fuzzy sets to address uncertainty in the input-output relationship is utilized to estimate the concentration of a herbicide, atrazine, in a stream. Data from the White River basin, part of the Mississippi River system, are used to develop the fuzzy rule-based models. The performance of the developed methodology is found to be encouraging.

  9. Emulation and Sobol' sensitivity analysis of an atmospheric dispersion model applied to the Fukushima nuclear accident

    NASA Astrophysics Data System (ADS)

    Girard, Sylvain; Mallet, Vivien; Korsakissok, Irène; Mathieu, Anne

    2016-04-01

    Simulations of the atmospheric dispersion of radionuclides involve large uncertainties originating from the limited knowledge of meteorological input data, the composition, amount and timing of emissions, and some model parameters. The estimation of these uncertainties is an essential complement to modeling for decision making in case of an accidental release. We have studied the relative influence of a set of uncertain inputs on several outputs from the Eulerian model Polyphemus/Polair3D for the Fukushima case. We chose to use the variance-based sensitivity analysis method of Sobol'. This method requires a large number of model evaluations, which is not achievable directly due to the high computational cost of Polyphemus/Polair3D. To circumvent this issue, we built a mathematical approximation of the model using Gaussian process emulation. We observed that aggregated outputs are mainly driven by the amount of emitted radionuclides, while local outputs are mostly sensitive to wind perturbations. The release height is notably influential, but only in the vicinity of the source. Finally, averaging either spatially or temporally tends to cancel out interactions between uncertain inputs.
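
    A minimal sketch of the Gaussian-process emulation step described above, using scikit-learn; a cheap analytic function stands in for Polyphemus/Polair3D, and the two input factors and design size are illustrative assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(4)

def expensive_dispersion_model(x):
    """Stand-in for the dispersion simulator: maps (emission amount, wind
    perturbation) to an aggregated output such as total deposition."""
    emission, wind = x
    return emission * np.exp(-0.5 * wind ** 2)

X_train = rng.uniform([0.5, -2.0], [2.0, 2.0], size=(40, 2))   # small design of runs
y_train = np.apply_along_axis(expensive_dispersion_model, 1, X_train)

gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(length_scale=[1.0, 1.0]),
                              normalize_y=True)
gp.fit(X_train, y_train)

# The emulator now replaces the simulator for the many evaluations Sobol' analysis needs
X_big = rng.uniform([0.5, -2.0], [2.0, 2.0], size=(10000, 2))
y_emulated, y_std = gp.predict(X_big, return_std=True)
print("mean emulator predictive std:", y_std.mean().round(4))
```

    The predictive standard deviation returned by the emulator is what quantifies the additional uncertainty introduced by replacing the full model, which can then be checked against the sensitivity indices it is used to compute.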

  10. QFT Multi-Input, Multi-Output Design with Non-Diagonal, Non-Square Compensation Matrices

    NASA Technical Reports Server (NTRS)

    Hess, R. A.; Henderson, D. K.

    1996-01-01

    A technique for obtaining a non-diagonal compensator for the control of a multi-input, multi-output plant is presented. The technique, which uses Quantitative Feedback Theory, provides guaranteed stability and performance robustness in the presence of parametric uncertainty. An example is given involving the lateral-directional control of an uncertain model of a high-performance fighter aircraft in which redundant control effectors are in evidence, i.e. more control effectors than output variables are used.

  11. Gradient-based controllers for timed continuous Petri nets

    NASA Astrophysics Data System (ADS)

    Lefebvre, Dimitri; Leclercq, Edouard; Druaux, Fabrice; Thomas, Philippe

    2015-07-01

    This paper is about control design for timed continuous Petri nets that are described as piecewise affine systems. In this context, the marking vector is considered as the state space vector, weighted markings of place subsets are defined as the model outputs, and the model inputs correspond to multiplicative control actions that slow down the firing rates of some controllable transitions. Structural and functional sensitivity of the outputs with respect to the inputs is discussed in terms of Petri nets. Then, gradient-based controllers (GBC) are developed in order to adapt the control actions of the controllable transitions according to desired trajectories of the outputs.

  12. Adaptive model predictive process control using neural networks

    DOEpatents

    Buescher, K.L.; Baum, C.C.; Jones, R.D.

    1997-08-19

    A control system for controlling the output of at least one plant process output parameter is implemented by adaptive model predictive control using a neural network. An improved method and apparatus provides for sampling plant output and control input at a first sampling rate to provide control inputs at the fast rate. The MPC system is, however, provided with a network state vector that is constructed at a second, slower rate so that the input control values used by the MPC system are averaged over a gapped time period. Another improvement is a provision for on-line training that may include difference training, curvature training, and basis center adjustment to maintain the weights and basis centers of the neural network in an updated state that can follow changes in the plant operation apart from initial off-line training data. 46 figs.

  13. Adaptive model predictive process control using neural networks

    DOEpatents

    Buescher, Kevin L.; Baum, Christopher C.; Jones, Roger D.

    1997-01-01

    A control system for controlling the output of at least one plant process output parameter is implemented by adaptive model predictive control using a neural network. An improved method and apparatus provides for sampling plant output and control input at a first sampling rate to provide control inputs at the fast rate. The MPC system is, however, provided with a network state vector that is constructed at a second, slower rate so that the input control values used by the MPC system are averaged over a gapped time period. Another improvement is a provision for on-line training that may include difference training, curvature training, and basis center adjustment to maintain the weights and basis centers of the neural network in an updated state that can follow changes in the plant operation apart from initial off-line training data.

  14. FEMFLOW3D; a finite-element program for the simulation of three-dimensional aquifers; version 1.0

    USGS Publications Warehouse

    Durbin, Timothy J.; Bond, Linda D.

    1998-01-01

    This document also includes model validation, source code, and example input and output files. Model validation was performed using four test problems. For each test problem, the results of a model simulation with FEMFLOW3D were compared with either an analytic solution or the results of an independent numerical approach. The source code, written in the ANSI x3.9-1978 FORTRAN standard, and the complete input and output of an example problem are listed in the appendixes.

  15. Life and reliability modeling of bevel gear reductions

    NASA Technical Reports Server (NTRS)

    Savage, M.; Brikmanis, C. K.; Lewicki, D. G.; Coy, J. J.

    1985-01-01

    A reliability model is presented for bevel gear reductions with either a single input pinion or dual input pinions of equal size. The dual pinions may or may not have the same power applied for the analysis. The gears may be straddle mounted or supported in a bearing quill. The reliability model is based on the Weibull distribution. The reduction's basic dynamic capacity is defined as the output torque which may be applied for one million output rotations of the bevel gear with a 90 percent probability of reduction survival.

  16. Data-driven model reference control of MIMO vertical tank systems with model-free VRFT and Q-Learning.

    PubMed

    Radac, Mircea-Bogdan; Precup, Radu-Emil; Roman, Raul-Cristian

    2018-02-01

    This paper proposes a combined Virtual Reference Feedback Tuning-Q-learning model-free control approach, which tunes nonlinear static state feedback controllers to achieve output model reference tracking in an optimal control framework. The novel iterative Batch Fitted Q-learning strategy uses two neural networks to represent the value function (critic) and the controller (actor), and it is referred to as a mixed Virtual Reference Feedback Tuning-Batch Fitted Q-learning approach. Learning convergence of the Q-learning schemes generally depends, among other settings, on the efficient exploration of the state-action space. Handcrafting test signals for efficient exploration is difficult even for input-output stable unknown processes. Virtual Reference Feedback Tuning can ensure an initial stabilizing controller to be learned from few input-output data, and it can next be used to collect substantially more input-state data in a controlled mode, in a constrained environment, by compensating the process dynamics. This data is used to learn significantly superior nonlinear state feedback neural network controllers for model reference tracking, using the proposed Batch Fitted Q-learning iterative tuning strategy, motivating the original combination of the two techniques. The mixed Virtual Reference Feedback Tuning-Batch Fitted Q-learning approach is experimentally validated for water level control of a multi-input multi-output nonlinear constrained coupled two-tank system. Discussions on the observed control behavior are offered. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.

  17. Modeling the control of the central nervous system over the cardiovascular system using support vector machines.

    PubMed

    Díaz, José; Acosta, Jesús; González, Rafael; Cota, Juan; Sifuentes, Ernesto; Nebot, Àngela

    2018-02-01

    The control of the central nervous system (CNS) over the cardiovascular system (CS) has been modeled using different techniques, such as fuzzy inductive reasoning, genetic fuzzy systems, neural networks, and nonlinear autoregressive techniques; the results obtained so far have been significant, but not solid enough to describe the control response of the CNS over the CS. In this research, support vector machines (SVMs) are used to predict the response of a branch of the CNS, specifically, the one that controls an important part of the cardiovascular system. To do this, five models are developed to emulate the output response of five controllers for the same input signal, the carotid sinus blood pressure (CSBP). These controllers regulate parameters such as heart rate, myocardial contractility, peripheral and coronary resistance, and venous tone. The models are trained using a known set of input-output responses for each controller; a set of six input-output signals is also used for testing each proposed model. The input signals are processed using an all-pass filter, and the accuracy of the control models is evaluated using the percentage value of the normalized mean square error (MSE). Experimental results reveal that SVM models achieve a better estimation of the dynamical behavior of the CNS control compared to other modeling systems. The main results show that the best case is the peripheral resistance controller, with an MSE of 1.20e-4%, while the worst case is the heart rate controller, with an MSE of 1.80e-3%. These novel models show great reliability in fitting the output response of the CNS and can be used as an input to hemodynamic system models in order to predict the behavior of the heart and blood vessels in response to blood pressure variations. Copyright © 2017 Elsevier Ltd. All rights reserved.
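
    A minimal sketch of fitting one such controller model with a support vector machine (scikit-learn's SVR) and scoring it with a normalized MSE; the synthetic baroreflex-like curve and hyperparameters are illustrative assumptions, not the study's data.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(5)
# Synthetic stand-in for the carotid sinus blood pressure -> controller output mapping
csbp = np.sort(rng.uniform(40, 180, 300)).reshape(-1, 1)      # mmHg, illustrative
heart_rate = (160 / (1 + np.exp((csbp.ravel() - 110) / 12)) + 40
              + rng.normal(0, 1.0, 300))                       # sigmoidal reflex-like curve

model = SVR(kernel="rbf", C=100.0, epsilon=0.5)
model.fit(csbp, heart_rate)

pred = model.predict(csbp)
nmse = mean_squared_error(heart_rate, pred) / np.var(heart_rate)
print(f"normalized MSE: {100 * nmse:.3f}%")
```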

  18. Role of intraglomerular circuits in shaping temporally structured responses to naturalistic inhalation-driven sensory input to the olfactory bulb

    PubMed Central

    Carey, Ryan M.; Sherwood, William Erik; Shipley, Michael T.; Borisyuk, Alla

    2015-01-01

    Olfaction in mammals is a dynamic process driven by the inhalation of air through the nasal cavity. Inhalation determines the temporal structure of sensory neuron responses and shapes the neural dynamics underlying central olfactory processing. Inhalation-linked bursts of activity among olfactory bulb (OB) output neurons [mitral/tufted cells (MCs)] are temporally transformed relative to those of sensory neurons. We investigated how OB circuits shape inhalation-driven dynamics in MCs using a modeling approach that was highly constrained by experimental results. First, we constructed models of canonical OB circuits that included mono- and disynaptic feedforward excitation, recurrent inhibition and feedforward inhibition of the MC. We then used experimental data to drive inputs to the models and to tune parameters; inputs were derived from sensory neuron responses during natural odorant sampling (sniffing) in awake rats, and model output was compared with recordings of MC responses to odorants sampled with the same sniff waveforms. This approach allowed us to identify OB circuit features underlying the temporal transformation of sensory inputs into inhalation-linked patterns of MC spike output. We found that realistic input-output transformations can be achieved independently by multiple circuits, including feedforward inhibition with slow onset and decay kinetics and parallel feedforward MC excitation mediated by external tufted cells. We also found that recurrent and feedforward inhibition had differential impacts on MC firing rates and on inhalation-linked response dynamics. These results highlight the importance of investigating neural circuits in a naturalistic context and provide a framework for further explorations of signal processing by OB networks. PMID:25717156

  19. Real Time Trajectory Planning for Groups of Unmanned Vehicles

    DTIC Science & Technology

    2005-08-22

    [Fragmentary excerpt] ...positions of the robots, and a third set determines the local relations of the neighboring robots. A nonholonomic kinematic model is used for the vehicles; note that the vehicles are not physically coupled in any way. A feedback control law for the control inputs F2 and T2 must be determined to control the outputs. The goal is to find a control law for the two inputs [F2, T2] that stabilizes the outputs; the input-output equations are then derived.

  20. A Sensitivity Analysis Method to Study the Behavior of Complex Process-based Models

    NASA Astrophysics Data System (ADS)

    Brugnach, M.; Neilson, R.; Bolte, J.

    2001-12-01

    The use of process-based models as a tool for scientific inquiry is becoming increasingly relevant in ecosystem studies. Process-based models are artificial constructs that simulate the system by mechanistically mimicking the functioning of its component processes. Structurally, a process-based model can be characterized in terms of its processes and the relationships established among them. Each process comprises a set of functional relationships among several model components (e.g., state variables, parameters and input data). While not encoded explicitly, the dynamics of the model emerge from this set of components and interactions organized in terms of processes. It is the task of the modeler to guarantee that the dynamics generated are appropriate and semantically equivalent to the phenomena being modeled. Despite the availability of techniques to characterize and understand model behavior, they do not suffice to completely and easily understand how a complex process-based model operates. For example, sensitivity analysis studies model behavior by determining the rate of change in model output as parameters or input data are varied. One of the problems with this approach is that it considers the model as a "black box", and it focuses on explaining model behavior by analyzing the input-output relationship. Since these models have a high degree of non-linearity, understanding how an input affects an output can be an extremely difficult task. Operationally, the application of this technique may constitute a challenging task because complex process-based models are generally characterized by a large parameter space. In order to overcome some of these difficulties, we propose a method of sensitivity analysis applicable to complex process-based models. This method focuses sensitivity analysis at the process level, and it aims to determine how sensitive the model output is to variations in the processes. Once the processes that exert the major influence on the output are identified, the causes of their variability can be found. Some of the advantages of this approach are that it reduces the dimensionality of the search space, it facilitates the interpretation of the results, and it provides information that allows exploration of uncertainty at the process level and how it might affect model output. We present an example using the vegetation model BIOME-BGC.

  1. Input and output constraints-based stabilisation of switched nonlinear systems with unstable subsystems and its application

    NASA Astrophysics Data System (ADS)

    Chen, Chao; Liu, Qian; Zhao, Jun

    2018-01-01

    This paper studies the problem of stabilisation of switched nonlinear systems with output and input constraints. We propose a recursive approach to solve this issue. None of the subsystems is assumed to be stabilisable, yet the switched system is stabilised by the dual design of controllers for the subsystems and a switching law. When only dealing with bounded input, we provide nested switching controllers using an extended backstepping procedure. If both input and output constraints are taken into consideration, a Barrier Lyapunov Function is employed during operation to construct multiple Lyapunov functions for the switched nonlinear system in the backstepping procedure. As a practical example, the control design for an equilibrium manifold expansion model of an aero-engine is given to demonstrate the effectiveness of the proposed design method.

  2. The Economic Impact of Higher Education Institutions in Ireland: Evidence from Disaggregated Input-Output Tables

    ERIC Educational Resources Information Center

    Zhang, Qiantao; Larkin, Charles; Lucey, Brian M.

    2017-01-01

    While there has been a long history of modelling the economic impact of higher education institutions (HEIs), little research has been undertaken in the context of Ireland. This paper provides, for the first time, a disaggregated input-output table for Ireland's higher education sector. The picture painted overall is a higher education sector that…

  3. Multi-level emulation of complex climate model responses to boundary forcing data

    NASA Astrophysics Data System (ADS)

    Tran, Giang T.; Oliver, Kevin I. C.; Holden, Philip B.; Edwards, Neil R.; Sóbester, András; Challenor, Peter

    2018-04-01

    Climate model components involve both high-dimensional input and output fields. It is desirable to efficiently generate spatio-temporal outputs of these models for applications in integrated assessment modelling or to assess the statistical relationship between such sets of inputs and outputs, for example, uncertainty analysis. However, the need for efficiency often compromises the fidelity of output through the use of low complexity models. Here, we develop a technique which combines statistical emulation with a dimensionality reduction technique to emulate a wide range of outputs from an atmospheric general circulation model, PLASIM, as functions of the boundary forcing prescribed by the ocean component of a lower complexity climate model, GENIE-1. Although accurate and detailed spatial information on atmospheric variables such as precipitation and wind speed is well beyond the capability of GENIE-1's energy-moisture balance model of the atmosphere, this study demonstrates that the output of this model is useful in predicting PLASIM's spatio-temporal fields through multi-level emulation. Meaningful information from the fast model, GENIE-1 was extracted by utilising the correlation between variables of the same type in the two models and between variables of different types in PLASIM. We present here the construction and validation of several PLASIM variable emulators and discuss their potential use in developing a hybrid model with statistical components.

  4. Emulation for probabilistic weather forecasting

    NASA Astrophysics Data System (ADS)

    Cornford, Dan; Barillec, Remi

    2010-05-01

    Numerical weather prediction models are typically very expensive to run due to their complexity and resolution. Characterising the sensitivity of the model to its initial condition and/or to its parameters requires numerous runs of the model, which is impractical for all but the simplest models. To produce probabilistic forecasts requires knowledge of the distribution of the model outputs, given the distribution over the inputs, where the inputs include the initial conditions, boundary conditions and model parameters. Such uncertainty analysis for complex weather prediction models seems a long way off, given current computing power, with ensembles providing only a partial answer. One possible way forward that we develop in this work is the use of statistical emulators. Emulators provide an efficient statistical approximation to the model (or simulator) while quantifying the uncertainty introduced. In the emulator framework, a Gaussian process is fitted to the simulator response as a function of the simulator inputs using some training data. The emulator is essentially an interpolator of the simulator output and the response in unobserved areas is dictated by the choice of covariance structure and parameters in the Gaussian process. Suitable parameters are inferred from the data in a maximum likelihood, or Bayesian framework. Once trained, the emulator allows operations such as sensitivity analysis or uncertainty analysis to be performed at a much lower computational cost. The efficiency of emulators can be further improved by exploiting the redundancy in the simulator output through appropriate dimension reduction techniques. We demonstrate this using both Principal Component Analysis on the model output and a new reduced-rank emulator in which an optimal linear projection operator is estimated jointly with other parameters, in the context of simple low order models, such as the Lorenz 40D system. We present the application of emulators to probabilistic weather forecasting, where the construction of the emulator training set replaces the traditional ensemble model runs. Thus the actual forecast distributions are computed using the emulator conditioned on the ‘ensemble runs' which are chosen to explore the plausible input space using relatively crude experimental design methods. One benefit here is that the ensemble does not need to be a sample from the true distribution of the input space, rather it should cover that input space in some sense. The probabilistic forecasts are computed using Monte Carlo methods sampling from the input distribution and using the emulator to produce the output distribution. Finally we discuss the limitations of this approach and briefly mention how we might use similar methods to learn the model error within a framework that incorporates a data assimilation like aspect, using emulators and learning complex model error representations. We suggest future directions for research in the area that will be necessary to apply the method to more realistic numerical weather prediction models.

  5. Local Sensitivity of Predicted CO 2 Injectivity and Plume Extent to Model Inputs for the FutureGen 2.0 site

    DOE PAGES

    Zhang, Z. Fred; White, Signe K.; Bonneville, Alain; ...

    2014-12-31

    Numerical simulations have been used for estimating CO2 injectivity, CO2 plume extent, pressure distribution, and Area of Review (AoR), and for the design of CO2 injection operations and the monitoring network for the FutureGen project. The simulation results are affected by uncertainties associated with numerous input parameters, the conceptual model, initial and boundary conditions, and factors related to injection operations. Furthermore, the uncertainties in the simulation results also vary in space and time. The key need is to identify those uncertainties that critically impact the simulation results and to quantify their impacts. We introduce an approach to determine the local sensitivity coefficient (LSC), defined as the response of the output in percent, to rank the importance of model inputs on outputs. The uncertainty of an input with higher sensitivity has a larger impact on the output. The LSC is scalable by the error of an input parameter. The composite sensitivity of an output to a subset of inputs can be calculated by summing the individual LSC values. We propose a local sensitivity coefficient method and apply it to the FutureGen 2.0 Site in Morgan County, Illinois, USA, to investigate the sensitivity of input parameters and initial conditions. The conceptual model for the site consists of 31 layers, each of which has a unique set of input parameters. The sensitivity of 11 parameters for each layer and 7 inputs as initial conditions is then investigated. For CO2 injectivity and plume size, about half of the uncertainty is due to only 4 or 5 of the 348 inputs and 3/4 of the uncertainty is due to about 15 of the inputs. The initial conditions and the properties of the injection layer and its neighbour layers contribute most of the sensitivity. Overall, the simulation outputs are very sensitive to only a small fraction of the inputs. However, the parameters that are important for controlling CO2 injectivity are not the same as those controlling the plume size. The three most sensitive inputs for injectivity were the horizontal permeability of Mt Simon 11 (the injection layer), the initial fracture-pressure gradient, and the residual aqueous saturation of Mt Simon 11, while those for the plume area were the initial salt concentration, the initial pressure, and the initial fracture-pressure gradient. The advantages of requiring only a single set of simulation results, scalability to the proper parameter errors, and easy calculation of the composite sensitivities make this approach very cost-effective for estimating AoR uncertainty and guiding cost-effective site characterization, injection well design, and monitoring network design for CO2 storage projects.
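
    A minimal sketch of computing local sensitivity coefficients by one-at-a-time perturbation, interpreted here as the percent change in an output per one-percent change in an input; the stand-in model, baseline values and parameter names are illustrative assumptions.

```python
import numpy as np

def model(params):
    """Stand-in for the reservoir simulator: maps inputs to an output such as
    CO2 plume area (purely illustrative functional form)."""
    perm, init_pressure, frac_grad = params
    return perm ** 0.6 * init_pressure / frac_grad

base = np.array([1e-13, 20.0e6, 0.018])        # baseline inputs (illustrative values)
names = ["horizontal_permeability", "initial_pressure", "fracture_pressure_gradient"]
y0 = model(base)

lsc = []
for i in range(base.size):
    perturbed = base.copy()
    perturbed[i] *= 1.01                       # +1% perturbation of one input at a time
    lsc.append(100.0 * (model(perturbed) - y0) / y0)   # output response in percent

for name, s in zip(names, lsc):
    print(f"{name}: {s:+.3f}% output change per +1% input change")
```

    Scaling each coefficient by the estimated error of its input, and summing coefficients over a subset of inputs, gives the scaled and composite sensitivities described in the abstract.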

  6. Model input and output files for the simulation of time of arrival of landfill leachate at the water table, Municipal Solid Waste Landfill Facility, U.S. Army Air Defense Artillery Center and Fort Bliss, El Paso County, Texas

    USGS Publications Warehouse

    Abeyta, Cynthia G.; Frenzel, Peter F.

    1999-01-01

    This report contains listings of model input and output files for the simulation of the time of arrival of landfill leachate at the water table from the Municipal Solid Waste Landfill Facility (MSWLF), about 10 miles northeast of downtown El Paso, Texas. This simulation was done by the U.S. Geological Survey in cooperation with the U.S. Department of the Army, U.S. Army Air Defense Artillery Center and Fort Bliss, El Paso, Texas. The U.S. Environmental Protection Agency-developed Hydrologic Evaluation of Landfill Performance (HELP) and Multimedia Exposure Assessment (MULTIMED) computer models were used to simulate the production of leachate by a landfill and transport of landfill leachate to the water table. Model input data files used with and output files generated by the HELP and MULTIMED models are provided in ASCII format on a 3.5-inch 1.44-megabyte IBM-PC compatible floppy disk.

  7. Evaluation of Supply Chain Efficiency Based on a Novel Network of Data Envelopment Analysis Model

    NASA Astrophysics Data System (ADS)

    Fu, Li Fang; Meng, Jun; Liu, Ying

    2015-12-01

    Performance evaluation of a supply chain (SC) is a vital topic in SC management and an inherently complex problem, with multilayered internal linkages and activities of multiple entities. Recently, various Network Data Envelopment Analysis (NDEA) models, which opened the “black box” of conventional DEA, were developed and applied to evaluate complex SCs with a multilayer network structure. However, most of them are input- or output-oriented models, which cannot account for nonproportional changes of inputs and outputs simultaneously. This paper extends the slacks-based measure (SBM) model to a nonradial, nonoriented network model, named U-NSBM, that accommodates undesirable outputs in the SC. A numerical example is presented to demonstrate the applicability of the model in quantifying efficiency and ranking supply chain performance. By comparing with the CCR and U-SBM models, it is shown that the proposed model has higher distinguishing ability and gives feasible solutions in the presence of undesirable outputs. Meanwhile, it provides more insight for decision makers about the sources of inefficiency, as well as guidance for improving SC performance.
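
    The U-NSBM formulation is not reproduced here, but the kind of linear program that underlies DEA efficiency scores can be sketched with the classical input-oriented CCR envelopment model, which is one of the baselines the paper compares against. The decision-making units and their input/output values below are hypothetical.

```python
# Illustrative input-oriented CCR DEA model (the classical radial model the
# paper compares against), solved as a linear program with SciPy. The U-NSBM
# formulation itself is more involved and is not reproduced here.
import numpy as np
from scipy.optimize import linprog

# Toy data: 4 decision-making units, 2 inputs, 1 output (hypothetical values).
X = np.array([[4.0, 3.0], [7.0, 3.0], [8.0, 1.0], [4.0, 2.0]])  # inputs
Y = np.array([[1.0], [1.0], [1.0], [1.0]])                      # outputs
n, m = X.shape
s = Y.shape[1]

def ccr_efficiency(k):
    # Variables: [theta, lambda_1, ..., lambda_n]; minimize theta.
    c = np.concatenate(([1.0], np.zeros(n)))
    # Input constraints:  sum_j lambda_j * x_ij - theta * x_ik <= 0
    A_in = np.hstack((-X[k].reshape(-1, 1), X.T))
    b_in = np.zeros(m)
    # Output constraints: -sum_j lambda_j * y_rj <= -y_rk
    A_out = np.hstack((np.zeros((s, 1)), -Y.T))
    b_out = -Y[k]
    res = linprog(c, A_ub=np.vstack((A_in, A_out)),
                  b_ub=np.concatenate((b_in, b_out)),
                  bounds=[(0, None)] * (n + 1))
    return res.fun

for k in range(n):
    print(f"DMU {k}: CCR efficiency = {ccr_efficiency(k):.3f}")
```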

  8. Interval Predictor Models with a Formal Characterization of Uncertainty and Reliability

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Giesy, Daniel P.; Kenny, Sean P.

    2014-01-01

    This paper develops techniques for constructing empirical predictor models based on observations. In contrast to standard models, which yield a single predicted output at each value of the model's inputs, Interval Predictor Models (IPMs) yield an interval into which the unobserved output is predicted to fall. The proposed IPMs prescribe the output as an interval-valued function of the model's inputs and render a formal description of both the uncertainty in the model's parameters and the spread in the predicted output. Uncertainty is prescribed as a hyper-rectangular set in the space of the model's parameters. The propagation of this set through the empirical model yields a range of outputs of minimal spread containing all (or, depending on the formulation, most) of the observations. Optimization-based strategies for calculating IPMs and eliminating the effects of outliers are proposed. Outliers are identified by evaluating the extent to which they degrade the tightness of the prediction. This evaluation can be carried out while the IPM is calculated. When the data satisfy mild stochastic assumptions, and the optimization program used for calculating the IPM is convex (or when its solution coincides with the solution to an auxiliary convex program), the model's reliability (that is, the probability that a future observation would be within the predicted range of outputs) can be bounded rigorously by a non-asymptotic formula.
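
    A minimal illustration of the interval-predictor idea, under simplifying assumptions: an affine model whose symmetric interval has the smallest half-width that still contains every observation, found by linear programming. The paper's IPM formulation (hyper-rectangular parameter sets, outlier elimination, and the reliability bound) is considerably richer than this sketch, and the synthetic data below are assumptions.

```python
# Minimal illustration of an interval predictor: an affine model y ~ a*x + b
# with the smallest half-width w such that |y_i - a*x_i - b| <= w for every
# observation. This is a linear program; the paper's IPM formulation is richer.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
x = np.linspace(0, 1, 40)
y = 2.0 * x + 0.5 + rng.normal(0, 0.1, size=x.size)   # synthetic data

# Decision variables z = [a, b, w]; minimize w.
c = np.array([0.0, 0.0, 1.0])
A_ub = np.vstack((np.column_stack((-x, -np.ones_like(x), -np.ones_like(x))),
                  np.column_stack(( x,  np.ones_like(x), -np.ones_like(x)))))
b_ub = np.concatenate((-y, y))
res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None), (None, None), (0, None)])
a, b, w = res.x
print(f"predicted interval at x=0.5: [{a*0.5 + b - w:.3f}, {a*0.5 + b + w:.3f}]")
```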

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Weimar, Mark R.; Daly, Don S.; Wood, Thomas W.

    Both nuclear power and nuclear weapons programs should have (related) economic signatures which are detectable at some scale. We evaluated this premise in a series of studies using national economic input/output (IO) data. Statistical discrimination models using economic IO tables predict with a high probability whether a country with an unknown predilection for nuclear weapons proliferation is in fact engaged in nuclear power development or nuclear weapons proliferation. We analyzed 93 IO tables, spanning the years 1993 to 2005 for 37 countries that are either members or associates of the Organization for Economic Cooperation and Development (OECD). The 2009 OECD input/output tables featured 48 industrial sectors based on International Standard Industrial Classification (ISIC) Revision 3, and described the respective economies in current country-of-origin valued currency. We converted and transformed these reported values to US 2005 dollars using appropriate exchange rates and implicit price deflators, and addressed discrepancies in reported industrial sectors across tables. We then classified countries with Random Forest using either the adjusted or industry-normalized values. Random Forest, a classification tree technique, separates and categorizes countries using a very small, select subset of the 2304 individual cells in the IO table. A nation’s effort in nuclear power, be it for electricity or nuclear weapons, is an enterprise with a large economic footprint -- an effort so large that it should discernibly perturb coarse country-level economics data such as that found in yearly input-output economic tables. The neoclassical economic input-output model describes a country’s or region’s economy in terms of the requirements of industries to produce the current level of economic output. An IO table row shows the distribution of an industry’s output to the industrial sectors while a table column shows the input required of each industrial sector by a given industry.
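
    The classification step can be sketched generically as follows; the OECD tables and the actual country labels are not reproduced, so synthetic feature vectors stand in for the flattened IO-table cells and the labels are random placeholders.

```python
# Generic sketch of the classification step: a random forest trained on
# flattened input-output table cells (synthetic stand-in data here, not the
# OECD tables), with feature importances indicating which IO cells drive
# the discrimination.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_tables, n_cells = 93, 2304           # 48 x 48 sectors flattened per table
X = rng.lognormal(mean=0.0, sigma=1.0, size=(n_tables, n_cells))
y = rng.integers(0, 2, size=n_tables)  # placeholder labels (synthetic)

clf = RandomForestClassifier(n_estimators=500, oob_score=True, random_state=0)
clf.fit(X, y)

print("out-of-bag accuracy:", round(clf.oob_score_, 3))
top = np.argsort(clf.feature_importances_)[::-1][:5]
print("most informative IO-table cells (flattened indices):", top)
```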

  10. Metamodels for Ozone: Comparison of Three Estimation Techniques

    EPA Science Inventory

    A metamodel for ozone is a mathematical relationship between the inputs and outputs of an air quality modeling experiment, permitting calculation of outputs for scenarios of interest without having to run the model again. In this study we compare three metamodel estimation techn...

  11. An Artificial Intelligence Approach for Modeling and Prediction of Water Diffusion Inside a Carbon Nanotube.

    PubMed

    Ahadian, Samad; Kawazoe, Yoshiyuki

    2009-06-04

    Modeling of water flow in carbon nanotubes is still a challenge for the classic models of fluid dynamics. In this investigation, an adaptive-network-based fuzzy inference system (ANFIS) is presented to solve this problem. The proposed ANFIS approach can construct an input-output mapping based on both human knowledge in the form of fuzzy if-then rules and stipulated input-output data pairs. Good performance of the designed ANFIS ensures its capability as a promising tool for modeling and prediction of fluid flow at nanoscale where the continuum models of fluid dynamics tend to break down.

  12. A framework for linking cybersecurity metrics to the modeling of macroeconomic interdependencies.

    PubMed

    Santos, Joost R; Haimes, Yacov Y; Lian, Chenyang

    2007-10-01

    Hierarchical decision making is a multidimensional process involving management of multiple objectives (with associated metrics and tradeoffs in terms of costs, benefits, and risks), which span various levels of a large-scale system. The nation is a hierarchical system as it consists of multiple classes of decisionmakers and stakeholders, ranging from national policymakers to operators of specific critical infrastructure subsystems. Critical infrastructures (e.g., transportation, telecommunications, power, banking, etc.) are highly complex and interconnected. These interconnections take the form of flows of information, shared security, and physical flows of commodities, among others. In recent years, economic and infrastructure sectors have become increasingly dependent on networked information systems for efficient operations and timely delivery of products and services. In order to ensure the stability, sustainability, and operability of our critical economic and infrastructure sectors, it is imperative to understand their inherent physical and economic linkages, in addition to their cyber interdependencies. An interdependency model based on a transformation of the Leontief input-output (I-O) model can be used for modeling: (1) the steady-state economic effects triggered by a consumption shift in a given sector (or set of sectors); and (2) the resulting ripple effects to other sectors. The inoperability metric is calculated for each sector; this is achieved by converting the economic impact (typically in monetary units) into a percentage value relative to the size of the sector. Disruptive events such as terrorist attacks, natural disasters, and large-scale accidents have historically shown cascading effects on both consumption and production. Hence, a dynamic model extension is necessary to demonstrate the interplay between combined demand and supply effects. The result is a foundational framework for modeling cybersecurity scenarios for the oil and gas sector. A hypothetical case study examines a cyber attack that causes a 5-week shortfall in the crude oil supply in the Gulf Coast area.
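
    The Leontief-based interdependency calculation referred to above can be sketched directly. In the inoperability input-output formulation, sector inoperability q satisfies q = A*q + c*, so the equilibrium is q = (I − A*)^(−1) c*, and a simple discrete-time recursion with resilience coefficients illustrates the dynamic extension. The three-sector matrix, perturbation, and resilience values below are hypothetical.

```python
# Sketch of the static inoperability input-output model (IIM) and a simple
# DIIM-style dynamic extension, with a hypothetical 3-sector example.
# q = A* q + c*  =>  q = (I - A*)^{-1} c*, where q is sector inoperability,
# A* the normalized interdependency matrix and c* the demand perturbation.
import numpy as np

A_star = np.array([[0.10, 0.20, 0.05],    # hypothetical interdependency matrix
                   [0.15, 0.05, 0.10],
                   [0.05, 0.10, 0.20]])
c_star = np.array([0.05, 0.00, 0.00])     # 5% demand perturbation in sector 0

# Static IIM: equilibrium inoperability.
q_eq = np.linalg.solve(np.eye(3) - A_star, c_star)
print("equilibrium inoperability:", np.round(q_eq, 4))

# Dynamic extension: q(t+1) = q(t) + K [A* q(t) + c* - q(t)],
# where K is a diagonal matrix of sector resilience coefficients.
K = np.diag([0.3, 0.5, 0.4])              # hypothetical resilience coefficients
q = np.zeros(3)
for t in range(30):
    q = q + K @ (A_star @ q + c_star - q)
print("inoperability after 30 steps:", np.round(q, 4))
```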

  13. Control design methods for floating wind turbines for optimal disturbance rejection

    NASA Astrophysics Data System (ADS)

    Lemmer, Frank; Schlipf, David; Cheng, Po Wen

    2016-09-01

    An analysis of the floating wind turbine as a multi-input-multi-output system investigating the effect of the control inputs on the system outputs is shown. These effects are compared to the ones of the disturbances from wind and waves in order to give insights for the selection of the control layout. The frequencies with the largest impact on the outputs due to limited effect of the controlled variables are identified. Finally, an optimal controller is designed as a benchmark and compared to a conventional PI-controller using only the rotor speed as input. Here, the previously found system properties, especially the difficulties to damp responses to wave excitation, are confirmed and verified through a spectral analysis with realistic environmental conditions. This comparison also assesses the quality of the employed simplified linear simulation model compared to the nonlinear model and shows that such an efficient frequency-domain evaluation for control design is feasible.

  14. Precision absolute-value amplifier for a precision voltmeter

    DOEpatents

    Hearn, W.E.; Rondeau, D.J.

    1982-10-19

    Bipolar inputs are afforded by the plus inputs of first and second differential input amplifiers. A first gain determining resistor is connected between the minus inputs of the differential amplifiers. First and second diodes are connected between the respective minus inputs and the respective outputs of the differential amplifiers. First and second FETs have their gates connected to the outputs of the amplifiers, while their respective source and drain circuits are connected between the respective minus inputs and an output lead extending to a load resistor. The output current through the load resistor is proportional to the absolute value of the input voltage difference between the bipolar input terminals. A third differential amplifier has its plus input terminal connected to the load resistor. A second gain determining resistor is connected between the minus input of the third differential amplifier and a voltage source. A third FET has its gate connected to the output of the third amplifier. The source and drain circuit of the third transistor is connected between the minus input of the third amplifier and a voltage-frequency converter, constituting an output device. A polarity detector is also provided, comprising a pair of transistors having their inputs connected to the outputs of the first and second differential amplifiers. The outputs of the polarity detector are connected to gates which switch the output of the voltage-frequency converter between up and down counting outputs.

  15. Precision absolute value amplifier for a precision voltmeter

    DOEpatents

    Hearn, William E.; Rondeau, Donald J.

    1985-01-01

    Bipolar inputs are afforded by the plus inputs of first and second differential input amplifiers. A first gain determining resistor is connected between the minus inputs of the differential amplifiers. First and second diodes are connected between the respective minus inputs and the respective outputs of the differential amplifiers. First and second FETs have their gates connected to the outputs of the amplifiers, while their respective source and drain circuits are connected between the respective minus inputs and an output lead extending to a load resistor. The output current through the load resistor is proportional to the absolute value of the input voltage difference between the bipolar input terminals. A third differential amplifier has its plus input terminal connected to the load resistor. A second gain determining resistor is connected between the minus input of the third differential amplifier and a voltage source. A third FET has its gate connected to the output of the third amplifier. The source and drain circuit of the third transistor is connected between the minus input of the third amplifier and a voltage-frequency converter, constituting an output device. A polarity detector is also provided, comprising a pair of transistors having their inputs connected to the outputs of the first and second differential amplifiers. The outputs of the polarity detector are connected to gates which switch the output of the voltage-frequency converter between up and down counting outputs.

  16. The Role of Agriculture in the Economic Development of West Virginia: An Input-Output Analysis. Miscellaneous Publication No. 20.

    ERIC Educational Resources Information Center

    D'Souza, Gerard E.; And Others

    This study deals with the structural interrelationships among agricultural sub-sectors, and between the agricultural and non-agricultural sectors of the West Virginia economy. The study is intended to offer information on which to base sound economic development decisions. An input-output economic model is used in order to focus on the interaction…

  17. Comprehensive School Reform Instructional Practices throughout a School Year: The Role of Subject Matter, Grade Level, and Time of Year

    ERIC Educational Resources Information Center

    Wiley, Caroline H.; Good, Thomas L.; McCaslin, Mary

    2008-01-01

    Background/Context: The achievement effects of Comprehensive School Reform (CSR) programs have been studied through the use of input-output models, in which type of CSR program is the input and student achievement is the output. Although specific programs have been found to be more effective and evaluated more than others, teaching practices in…

  18. Adaptive model reduction for continuous systems via recursive rational interpolation

    NASA Technical Reports Server (NTRS)

    Lilly, John H.

    1994-01-01

    A method for adaptive identification of reduced-order models for continuous stable SISO and MIMO plants is presented. The method recursively finds a model whose transfer function (matrix) matches that of the plant on a set of frequencies chosen by the designer. The algorithm utilizes the Moving Discrete Fourier Transform (MDFT) to continuously monitor the frequency-domain profile of the system input and output signals. The MDFT is an efficient method of monitoring discrete points in the frequency domain of an evolving function of time. The model parameters are estimated from MDFT data using standard recursive parameter estimation techniques. The algorithm has been shown in simulations to be quite robust to additive noise in the inputs and outputs. A significant advantage of the method is that it enables a type of on-line model validation. This is accomplished by simultaneously identifying a number of models and comparing each with the plant in the frequency domain. Simulations of the method applied to an 8th-order SISO plant and a 10-state 2-input 2-output plant are presented. An example of on-line model validation applied to the SISO plant is also presented.
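
    The moving-DFT idea can be illustrated with the standard sliding-DFT recurrence, in which the DFT bin of a length-N window is updated as each new sample arrives; this is a generic formulation rather than the authors' implementation, and the signal and bin choice below are arbitrary.

```python
# Sketch of a sliding (moving) DFT: the DFT bin of a length-N window is
# updated recursively as each new sample arrives, which is the kind of
# frequency monitoring the MDFT provides. Verified against a direct FFT.
import numpy as np

N = 64                       # window length
k = 5                        # monitored frequency bin
twiddle = np.exp(2j * np.pi * k / N)

rng = np.random.default_rng(0)
x = rng.normal(size=300)

window = np.zeros(N)
Xk = 0.0 + 0.0j
for sample in x:
    oldest = window[0]
    window = np.roll(window, -1)
    window[-1] = sample
    # Sliding-DFT update: add the new sample, drop the oldest, rotate.
    Xk = (Xk + sample - oldest) * twiddle

# Compare with a direct DFT of the final window.
direct = np.fft.fft(window)[k]
print("recursive:", np.round(Xk, 6), " direct:", np.round(direct, 6))
```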

  19. Reconstruction of neuronal input through modeling single-neuron dynamics and computations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Qin, Qing; Wang, Jiang; Yu, Haitao

    Mathematical models provide a mathematical description of neuron activity, which can better understand and quantify neural computations and corresponding biophysical mechanisms evoked by stimulus. In this paper, based on the output spike train evoked by the acupuncture mechanical stimulus, we present two different levels of models to describe the input-output system to achieve the reconstruction of neuronal input. The reconstruction process is divided into two steps: First, considering the neuronal spiking event as a Gamma stochastic process. The scale parameter and the shape parameter of Gamma process are, respectively, defined as two spiking characteristics, which are estimated by a state-space method. Then, leaky integrate-and-fire (LIF) model is used to mimic the response system and the estimated spiking characteristics are transformed into two temporal input parameters of LIF model, through two conversion formulas. We test this reconstruction method by three different groups of simulation data. All three groups of estimates reconstruct input parameters with fairly high accuracy. We then use this reconstruction method to estimate the non-measurable acupuncture input parameters. Results show that under three different frequencies of acupuncture stimulus conditions, estimated input parameters have an obvious difference. The higher the frequency of the acupuncture stimulus is, the higher the accuracy of reconstruction is.

  20. Reconstruction of neuronal input through modeling single-neuron dynamics and computations

    NASA Astrophysics Data System (ADS)

    Qin, Qing; Wang, Jiang; Yu, Haitao; Deng, Bin; Chan, Wai-lok

    2016-06-01

    Mathematical models provide a mathematical description of neuron activity, which can better understand and quantify neural computations and corresponding biophysical mechanisms evoked by stimulus. In this paper, based on the output spike train evoked by the acupuncture mechanical stimulus, we present two different levels of models to describe the input-output system to achieve the reconstruction of neuronal input. The reconstruction process is divided into two steps: First, considering the neuronal spiking event as a Gamma stochastic process. The scale parameter and the shape parameter of Gamma process are, respectively, defined as two spiking characteristics, which are estimated by a state-space method. Then, leaky integrate-and-fire (LIF) model is used to mimic the response system and the estimated spiking characteristics are transformed into two temporal input parameters of LIF model, through two conversion formulas. We test this reconstruction method by three different groups of simulation data. All three groups of estimates reconstruct input parameters with fairly high accuracy. We then use this reconstruction method to estimate the non-measurable acupuncture input parameters. Results show that under three different frequencies of acupuncture stimulus conditions, estimated input parameters have an obvious difference. The higher the frequency of the acupuncture stimulus is, the higher the accuracy of reconstruction is.
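
    The response model used in the two records above, the leaky integrate-and-fire neuron, can be sketched in a few lines. The two input parameters here (a constant drive and a noise amplitude) are hypothetical stand-ins for the temporal input parameters estimated from the spiking characteristics.

```python
# Minimal leaky integrate-and-fire (LIF) simulation of the kind used as the
# response model above. The drive and noise values are hypothetical.
import numpy as np

def lif_spike_times(drive, noise, T=2.0, dt=1e-4, tau=0.02,
                    v_rest=0.0, v_th=1.0, v_reset=0.0, seed=0):
    rng = np.random.default_rng(seed)
    v = v_rest
    spikes = []
    for i in range(int(T / dt)):
        # Euler step of tau dV/dt = -(V - v_rest) + I(t), with a noisy drive.
        current = drive + noise * rng.normal()
        v += dt / tau * (-(v - v_rest) + current)
        if v >= v_th:                      # threshold crossing: emit a spike
            spikes.append(i * dt)
            v = v_reset                    # reset the membrane potential
    return np.array(spikes)

spikes = lif_spike_times(drive=1.2, noise=0.3)
print(f"{spikes.size} spikes, mean rate {spikes.size / 2.0:.1f} Hz")
```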

  1. Logic models to predict continuous outputs based on binary inputs with an application to personalized cancer therapy

    PubMed Central

    Knijnenburg, Theo A.; Klau, Gunnar W.; Iorio, Francesco; Garnett, Mathew J.; McDermott, Ultan; Shmulevich, Ilya; Wessels, Lodewyk F. A.

    2016-01-01

    Mining large datasets using machine learning approaches often leads to models that are hard to interpret and not amenable to the generation of hypotheses that can be experimentally tested. We present ‘Logic Optimization for Binary Input to Continuous Output’ (LOBICO), a computational approach that infers small and easily interpretable logic models of binary input features that explain a continuous output variable. Applying LOBICO to a large cancer cell line panel, we find that logic combinations of multiple mutations are more predictive of drug response than single gene predictors. Importantly, we show that the use of the continuous information leads to robust and more accurate logic models. LOBICO implements the ability to uncover logic models around predefined operating points in terms of sensitivity and specificity. As such, it represents an important step towards practical application of interpretable logic models. PMID:27876821

  2. Associative memory model with spontaneous neural activity

    NASA Astrophysics Data System (ADS)

    Kurikawa, Tomoki; Kaneko, Kunihiko

    2012-05-01

    We propose a novel associative memory model wherein the neural activity without an input (i.e., spontaneous activity) is modified by an input to generate a target response that is memorized for recall upon the same input. Suitable design of synaptic connections enables the model to memorize input/output (I/O) mappings equaling 70% of the total number of neurons, where the evoked activity distinguishes a target pattern from others. Spontaneous neural activity without an input shows chaotic dynamics but keeps some similarity with evoked activities, as reported in recent experimental studies.

  3. Do Women with Inoperable Breast Cancer Have a Psychological Profile?

    ERIC Educational Resources Information Center

    Gilbar, Ora; Florian, Victor

    1991-01-01

    Compared psychological variables of 30 patients with inoperable breast cancer to matched group of 30 operable breast cancer patients. Found that women with inoperable breast cancer had higher scores in denial, exhaustion, hopelessness, worthlessness, depression, somatization, hostility, and psychoticism. Profile of inoperable cancer patients did…

  4. Bayesian History Matching of Complex Infectious Disease Models Using Emulation: A Tutorial and a Case Study on HIV in Uganda

    PubMed Central

    Andrianakis, Ioannis; Vernon, Ian R.; McCreesh, Nicky; McKinley, Trevelyan J.; Oakley, Jeremy E.; Nsubuga, Rebecca N.; Goldstein, Michael; White, Richard G.

    2015-01-01

    Advances in scientific computing have allowed the development of complex models that are being routinely applied to problems in disease epidemiology, public health and decision making. The utility of these models depends in part on how well they can reproduce empirical data. However, fitting such models to real world data is greatly hindered both by large numbers of input and output parameters, and by long run times, such that many modelling studies lack a formal calibration methodology. We present a novel method that has the potential to improve the calibration of complex infectious disease models (hereafter called simulators). We present this in the form of a tutorial and a case study where we history match a dynamic, event-driven, individual-based stochastic HIV simulator, using extensive demographic, behavioural and epidemiological data available from Uganda. The tutorial describes history matching and emulation. History matching is an iterative procedure that reduces the simulator's input space by identifying and discarding areas that are unlikely to provide a good match to the empirical data. History matching relies on the computational efficiency of a Bayesian representation of the simulator, known as an emulator. Emulators mimic the simulator's behaviour, but are often several orders of magnitude faster to evaluate. In the case study, we use a 22 input simulator, fitting its 18 outputs simultaneously. After 9 iterations of history matching, a non-implausible region of the simulator input space was identified that was times smaller than the original input space. Simulator evaluations made within this region were found to have a 65% probability of fitting all 18 outputs. History matching and emulation are useful additions to the toolbox of infectious disease modellers. Further research is required to explicitly address the stochastic nature of the simulator as well as to account for correlations between outputs. PMID:25569850
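
    The core history-matching calculation can be sketched with the standard implausibility measure: for each candidate input, the distance between an observation and the emulator mean is compared with the combined observation, model-discrepancy, and emulator variances, and inputs whose implausibility exceeds a cutoff (commonly 3) are discarded. The numbers below are purely illustrative.

```python
# Core history-matching calculation: the implausibility of an input x for one
# output, I(x) = |z - E[f(x)]| / sqrt(V_obs + V_disc + V_em(x)), where the
# emulator supplies E[f(x)] and V_em(x). Inputs with I(x) above a cutoff
# (commonly 3) for any output are discarded. Numbers here are illustrative.
import numpy as np

def implausibility(z, em_mean, em_var, obs_var, disc_var):
    return np.abs(z - em_mean) / np.sqrt(obs_var + disc_var + em_var)

z = 0.12                                  # observed value of one output
em_mean = np.array([0.10, 0.18, 0.125])   # emulator means at 3 candidate inputs
em_var = np.array([1e-4, 4e-4, 1e-4])     # emulator variances at those inputs
I = implausibility(z, em_mean, em_var, obs_var=1e-4, disc_var=2e-4)

cutoff = 3.0
print("implausibility:", np.round(I, 2))
print("non-implausible candidates:", np.where(I < cutoff)[0])
```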

  5. Classification

    NASA Astrophysics Data System (ADS)

    Oza, Nikunj

    2012-03-01

    A supervised learning task involves constructing a mapping from input data (normally described by several features) to the appropriate outputs. A set of training examples—examples with known output values—is used by a learning algorithm to generate a model. This model is intended to approximate the mapping between the inputs and outputs. This model can be used to generate predicted outputs for inputs that have not been seen before. Within supervised learning, one type of task is a classification learning task, in which each output is one or more classes to which the input belongs. For example, we may have data consisting of observations of sunspots. In a classification learning task, our goal may be to learn to classify sunspots into one of several types. Each example may correspond to one candidate sunspot with various measurements or just an image. A learning algorithm would use the supplied examples to generate a model that approximates the mapping between each supplied set of measurements and the type of sunspot. This model can then be used to classify previously unseen sunspots based on the candidate’s measurements. The generalization performance of a learned model (how closely the target outputs and the model’s predicted outputs agree for patterns that have not been presented to the learning algorithm) would provide an indication of how well the model has learned the desired mapping. More formally, a classification learning algorithm L takes a training set T as its input. The training set consists of |T| examples or instances. It is assumed that there is a probability distribution D from which all training examples are drawn independently—that is, all the training examples are independently and identically distributed (i.i.d.). The ith training example is of the form (x_i, y_i), where x_i is a vector of values of several features and y_i represents the class to be predicted. In the sunspot classification example given above, each training example would represent one sunspot’s classification (y_i) and the corresponding set of measurements (x_i). The output of a supervised learning algorithm is a model h that approximates the unknown mapping from the inputs to the outputs. In our example, h would map from the sunspot measurements to the type of sunspot. We may have a test set S—a set of examples not used in training that we use to test how well the model h predicts the outputs on new examples. Just as with the examples in T, the examples in S are assumed to be independent and identically distributed (i.i.d.) draws from the distribution D. We measure the error of h on the test set as the proportion of test cases that h misclassifies: (1/|S|) Σ_{(x,y) ∈ S} I(h(x) ≠ y), where I(v) is the indicator function—it returns 1 if v is true and 0 otherwise. In our sunspot classification example, we would identify additional examples of sunspots that were not used in generating the model, and use these to determine how accurate the model is—the fraction of the test samples that the model classifies correctly. An example of a classification model is the decision tree shown in Figure 23.1. We will discuss the decision tree learning algorithm in more detail later—for now, we assume that, given a training set with examples of sunspots, this decision tree is derived. This can be used to classify previously unseen examples of sunspots.
For example, if a new sunspot’s inputs indicate that its "Group Length" is in the range 10-15, then the decision tree would classify the sunspot as being of type “E,” whereas if the "Group Length" is "NULL," the "Magnetic Type" is "bipolar," and the "Penumbra" is "rudimentary," then it would be classified as type "C." In this chapter, we will add to the above description of classification problems. We will discuss decision trees and several other classification models. In particular, we will discuss the learning algorithms that generate these classification models, how to use them to classify new examples, and the strengths and weaknesses of these models. We will end with pointers to further reading on classification methods applied to astronomy data.
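
    The workflow described in this chapter summary can be sketched end to end: fit a decision tree on a training set, then report the test-set error as the fraction of misclassified examples. Synthetic features stand in for the sunspot measurements, and the tree depth is an arbitrary choice.

```python
# Sketch of the classification workflow described above: fit a decision tree
# on a training set, then measure the test-set error as the fraction of
# misclassified examples. Synthetic features stand in for sunspot measurements.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=6, n_informative=4,
                           n_classes=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                    random_state=0)

h = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_train, y_train)

# Test error: (1/|S|) * sum over (x, y) in S of I(h(x) != y)
test_error = np.mean(h.predict(X_test) != y_test)
print(f"test error: {test_error:.3f}")
```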

  6. Active structural acoustic control of helicopter interior multifrequency noise using input-output-based hybrid control

    NASA Astrophysics Data System (ADS)

    Ma, Xunjun; Lu, Yang; Wang, Fengjiao

    2017-09-01

    This paper presents recent advances in the reduction of multifrequency noise inside a helicopter cabin using an active structural acoustic control system based on the active gearbox strut approach. To attenuate the multifrequency gearbox vibrations and the resulting noise, a new scheme of discrete model predictive sliding mode control is proposed, based on a controlled auto-regressive moving average model. Its implementation needs only input/output data, hence a broader frequency range of the controlled system is modelled and the burden of state observer design is relieved. Furthermore, a new iterative form of the algorithm is designed, improving development efficiency and run speed. To verify the algorithm's effectiveness and self-adaptability, real-time active control experiments are performed on a newly developed helicopter model system. The helicopter model can generate gear meshing vibration/noise similar to a real helicopter, with a specially designed gearbox and active struts. The algorithm's control abilities are checked progressively by single-input single-output and multiple-input multiple-output experiments with different feedback strategies: (1) controlling gear meshing noise by attenuating vibrations at key points on the transmission path, and (2) directly controlling the gear meshing noise in the cabin using the actuators. Results confirm that the active control system is practical for cancelling multifrequency helicopter interior noise and also weakens the frequency modulation of the tones. In many cases, the attenuation of the measured noise exceeds 15 dB, with the maximum reduction reaching 31 dB. The control process is also demonstrated to be smoother and faster.

  7. A reporting protocol for thermochronologic modeling illustrated with data from the Grand Canyon

    NASA Astrophysics Data System (ADS)

    Flowers, Rebecca M.; Farley, Kenneth A.; Ketcham, Richard A.

    2015-12-01

    Apatite (U-Th)/He and fission-track dates, as well as 4He/3He and fission-track length data, provide rich thermal history information. However, numerous choices and assumptions are required on the long road from raw data and observations to potentially complex geologic interpretations. This paper outlines a conceptual framework for this path, with the aim of promoting a broader understanding of how thermochronologic conclusions are derived. The tiered structure consists of thermal history model inputs at Level 1, thermal history model outputs at Level 2, and geologic interpretations at Level 3. Because inverse thermal history modeling is at the heart of converting thermochronologic data to interpretation, for others to evaluate and reproduce conclusions derived from thermochronologic results it is necessary to publish all data required for modeling, report all model inputs, and clearly and completely depict model outputs. Here we suggest a generalized template for a model input table with which to arrange, report and explain the choice of inputs to thermal history models. Model inputs include the thermochronologic data, additional geologic information, and system- and model-specific parameters. As an example we show how the origin of discrepant thermochronologic interpretations in the Grand Canyon can be better understood by using this disciplined approach.

  8. Uncertainty Analysis and Parameter Estimation For Nearshore Hydrodynamic Models

    NASA Astrophysics Data System (ADS)

    Ardani, S.; Kaihatu, J. M.

    2012-12-01

    Numerical models represent deterministic approaches used to model the relevant physical processes in the nearshore. The complexity of the physics of the model and the uncertainty involved in the model inputs compel us to apply a stochastic approach to analyze the robustness of the model. The Bayesian inverse problem is one powerful way to estimate the important input model parameters (determined by a priori sensitivity analysis) and can be used for uncertainty analysis of the outputs. Bayesian techniques can be used to find the range of most probable parameters based on the probability of the observed data and the residual errors. In this study, the effect of input data involving lateral (Neumann) boundary conditions, bathymetry and off-shore wave conditions on nearshore numerical models is considered. Monte Carlo simulation is applied to a deterministic numerical model (the Delft3D modeling suite for coupled waves and flow) for the resulting uncertainty analysis of the outputs (wave height, flow velocity, mean sea level, etc.). Uncertainty analysis of outputs is performed by random sampling from the input probability distribution functions and running the model as required until convergence to consistent results is achieved. The case study used in this analysis is the Duck94 experiment, which was conducted at the U.S. Army Field Research Facility at Duck, North Carolina, USA, in the fall of 1994. The joint probability of model parameters relevant for the Duck94 experiments will be found using the Bayesian approach. We will further show that, by using Bayesian techniques to estimate the optimized model parameters as inputs and applying them for uncertainty analysis, we can obtain more consistent results than by using the prior information for the input data, which means that the variation of the uncertain parameters will be decreased and the probability of the observed data will improve as well. Keywords: Monte Carlo Simulation, Delft3D, uncertainty analysis, Bayesian techniques, MCMC

  9. New multirate sampled-data control law structure and synthesis algorithm

    NASA Technical Reports Server (NTRS)

    Berg, Martin C.; Mason, Gregory S.; Yang, Gen-Sheng

    1992-01-01

    A new multirate sampled-data control law structure is defined and a new parameter-optimization-based synthesis algorithm for that structure is introduced. The synthesis algorithm can be applied to multirate, multiple-input/multiple-output, sampled-data control laws having a prescribed dynamic order and structure, and a priori specified sampling/update rates for all sensors, processor states, and control inputs. The synthesis algorithm is applied to design two-input, two-output tip position controllers of various dynamic orders for a sixth-order, two-link robot arm model.

  10. Interregional migration in an extended input-output model.

    PubMed

    Madden, M; Trigg, A B

    1990-01-01

    "This article develops a two-region version of an extended input-output model that disaggregates consumption among employed, unemployed, and inmigrant households, and which explicitly models the influx into a region of migrants to take up a proportion of any jobs created in the regional economy. The model is empirically tested using real data for the Scotland (UK) regions of Strathclyde and Rest-of-Scotland. Sets of interregional economic, demographic, demo-economic, and econo-demographic multipliers are developed and discussed, and the effects of a range of economic and demographic impacts are modeled. The circumstances under which Hawkins-Simon conditions for non-negativity are breached are identified, and the limits of the model discussed." excerpt

  11. Modeling of a resonant heat engine

    NASA Astrophysics Data System (ADS)

    Preetham, B. S.; Anderson, M.; Richards, C.

    2012-12-01

    A resonant heat engine in which the piston assembly is replaced by a sealed elastic cavity is modeled and analyzed. A nondimensional lumped-parameter model is derived and used to investigate the factors that control the performance of the engine. The thermal efficiency predicted by the model agrees with that predicted from the relation for the Otto cycle based on compression ratio. The predictions show that for a fixed mechanical load, increasing the heat input results in increased efficiency. The output power and power density are shown to depend on the loading for a given heat input. The loading condition for maximum output power is different from that required for maximum power density.

  12. Software Validation via Model Animation

    NASA Technical Reports Server (NTRS)

    Dutle, Aaron M.; Munoz, Cesar A.; Narkawicz, Anthony J.; Butler, Ricky W.

    2015-01-01

    This paper explores a new approach to validating software implementations that have been produced from formally-verified algorithms. Although visual inspection gives some confidence that the implementations faithfully reflect the formal models, it does not provide complete assurance that the software is correct. The proposed approach, which is based on animation of formal specifications, compares the outputs computed by the software implementations on a given suite of input values to the outputs computed by the formal models on the same inputs, and determines if they are equal up to a given tolerance. The approach is illustrated on a prototype air traffic management system that computes simple kinematic trajectories for aircraft. Proofs for the mathematical models of the system's algorithms are carried out in the Prototype Verification System (PVS). The animation tool PVSio is used to evaluate the formal models on a set of randomly generated test cases. Output values computed by PVSio are compared against output values computed by the actual software. This comparison improves the assurance that the translation from formal models to code is faithful and that, for example, floating point errors do not greatly affect correctness and safety properties.

  13. Adaptive control using neural networks and approximate models.

    PubMed

    Narendra, K S; Mukhopadhyay, S

    1997-01-01

    The NARMA model is an exact representation of the input-output behavior of finite-dimensional nonlinear discrete-time dynamical systems in a neighborhood of the equilibrium state. However, it is not convenient for purposes of adaptive control using neural networks due to its nonlinear dependence on the control input. Hence, quite often, approximate methods are used for realizing the neural controllers to overcome computational complexity. In this paper, we introduce two classes of models which are approximations to the NARMA model, and which are linear in the control input. The latter fact substantially simplifies both the theoretical analysis as well as the practical implementation of the controller. Extensive simulation studies have shown that the neural controllers designed using the proposed approximate models perform very well, and in many cases even better than an approximate controller designed using the exact NARMA model. In view of their mathematical tractability as well as their success in simulation studies, a case is made in this paper that such approximate input-output models warrant a detailed study in their own right.
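
    The approximate models discussed above are realized with neural networks in the paper; the sketch below illustrates only the control-affine structure they share (the predicted output is linear in the current control input), with hand-coded functions standing in for trained networks and a toy plant used to exercise the resulting controller.

```python
# Sketch of the control-affine approximation idea: an approximate model of the
# form y(k+1) ~ f(.) + g(.) * u(k), which makes the control computation a
# simple algebraic step. Here f_hat and g_hat are hand-coded stand-ins for
# trained neural networks, and the "plant" is a toy nonlinear difference
# equation, not a system from the paper.
import numpy as np

def plant(y, y_prev, u):
    # Toy nonlinear plant used only to exercise the controller.
    return 0.6 * y + 0.1 * y_prev + np.tanh(u) + 0.2 * np.sin(y)

# Assumed (hypothetical) approximators, linear in the current control input.
def f_hat(y, y_prev):
    return 0.6 * y + 0.1 * y_prev + 0.2 * np.sin(y)

def g_hat(y, y_prev):
    return 1.0   # assumed local gain of the control input

y, y_prev = 0.0, 0.0
reference = 1.0
for k in range(20):
    # Control law from the control-affine model: u = (r - f_hat) / g_hat,
    # saturated so that tanh(u) ~ u remains a reasonable approximation.
    u = np.clip((reference - f_hat(y, y_prev)) / g_hat(y, y_prev), -1.0, 1.0)
    y, y_prev = plant(y, y_prev, u), y
print(f"output after 20 steps: {y:.3f} (reference {reference})")
```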

  14. Modal Parameter Identification of a Flexible Arm System

    NASA Technical Reports Server (NTRS)

    Barrington, Jason; Lew, Jiann-Shiun; Korbieh, Edward; Wade, Montanez; Tantaris, Richard

    1998-01-01

    In this paper an experiment is designed for the modal parameter identification of a flexible arm system. This experiment uses a function generator to provide input signal and an oscilloscope to save input and output response data. For each vibrational mode, many sets of sine-wave inputs with frequencies close to the natural frequency of the arm system are used to excite the vibration of this mode. Then a least-squares technique is used to analyze the experimental input/output data to obtain the identified parameters for this mode. The identified results are compared with the analytical model obtained by applying finite element analysis.
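
    The least-squares step can be illustrated generically: at a known excitation frequency, the measured response is fitted to a sine, a cosine, and an offset, from which the response amplitude and phase follow. The synthetic "measurement" below is an assumption standing in for the oscilloscope data.

```python
# Generic version of the least-squares step: fit y(t) ~ A sin(w t) + B cos(w t) + C
# to measured response data at a known excitation frequency, then recover the
# response amplitude and phase. The "measurement" here is synthetic.
import numpy as np

w = 2 * np.pi * 3.0                      # excitation frequency (rad/s)
t = np.linspace(0, 2, 400)
rng = np.random.default_rng(0)
y = 0.8 * np.sin(w * t + 0.4) + 0.05 + 0.02 * rng.normal(size=t.size)

# Linear least squares in the parameters [A, B, C].
G = np.column_stack((np.sin(w * t), np.cos(w * t), np.ones_like(t)))
(A, B, C), *_ = np.linalg.lstsq(G, y, rcond=None)

amplitude = np.hypot(A, B)
phase = np.arctan2(B, A)                 # y ~ amplitude * sin(w t + phase) + C
print(f"amplitude {amplitude:.3f}, phase {phase:.3f} rad, offset {C:.3f}")
```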

  15. Inverter ratio failure detector

    NASA Technical Reports Server (NTRS)

    Wagner, A. P.; Ebersole, T. J.; Andrews, R. E. (Inventor)

    1974-01-01

    A failure detector which detects the failure of a dc to ac inverter is disclosed. The inverter under failureless conditions is characterized by a known linear relationship of its input and output voltages and by a known linear relationship of its input and output currents. The detector includes circuitry which is responsive to the detector's input and output voltages and which provides a failure-indicating signal only when the monitored output voltage is less by a selected factor, than the expected output voltage for the monitored input voltage, based on the known voltages' relationship. Similarly, the detector includes circuitry which is responsive to the input and output currents and provides a failure-indicating signal only when the input current exceeds by a selected factor the expected input current for the monitored output current based on the known currents' relationship.

  16. TWINTAN: A program for transonic wall interference assessment in two-dimensional wind tunnels

    NASA Technical Reports Server (NTRS)

    Kemp, W. B., Jr.

    1980-01-01

    A method for assessing the wall interference in transonic two-dimensional wind tunnel tests was developed and implemented in a computer program. The method involves three successive solutions of the transonic small disturbance potential equation to define the wind tunnel flow, the perturbation attributable to the model, and the equivalent free air flow around the model. Input includes pressure distributions on the model and along the top and bottom tunnel walls which are used as boundary conditions for the wind tunnel flow. The wall-induced perturbation field is determined as the difference between the perturbation in the tunnel flow solution and the perturbation attributable to the model. The methodology used in the program is described, and detailed descriptions of the computer program input and output are presented. Input and output for a sample case are given.

  17. General methodology for nonlinear modeling of neural systems with Poisson point-process inputs.

    PubMed

    Marmarelis, V Z; Berger, T W

    2005-07-01

    This paper presents a general methodological framework for the practical modeling of neural systems with point-process inputs (sequences of action potentials or, more broadly, identical events) based on the Volterra and Wiener theories of functional expansions and system identification. The paper clarifies the distinctions between Volterra and Wiener kernels obtained from Poisson point-process inputs. It shows that only the Wiener kernels can be estimated via cross-correlation, but must be defined as zero along the diagonals. The Volterra kernels can be estimated far more accurately (and from shorter data-records) by use of the Laguerre expansion technique adapted to point-process inputs, and they are independent of the mean rate of stimulation (unlike their P-W counterparts that depend on it). The Volterra kernels can also be estimated for broadband point-process inputs that are not Poisson. Useful applications of this modeling approach include cases where we seek to determine (model) the transfer characteristics between one neuronal axon (a point-process 'input') and another axon (a point-process 'output') or some other measure of neuronal activity (a continuous 'output', such as population activity) with which a causal link exists.

  18. Replicating receptive fields of simple and complex cells in primary visual cortex in a neuronal network model with temporal and population sparseness and reliability.

    PubMed

    Tanaka, Takuma; Aoyagi, Toshio; Kaneko, Takeshi

    2012-10-01

    We propose a new principle for replicating receptive field properties of neurons in the primary visual cortex. We derive a learning rule for a feedforward network, which maintains a low firing rate for the output neurons (resulting in temporal sparseness) and allows only a small subset of the neurons in the network to fire at any given time (resulting in population sparseness). Our learning rule also sets the firing rates of the output neurons at each time step to near-maximum or near-minimum levels, resulting in neuronal reliability. The learning rule is simple enough to be written in spatially and temporally local forms. After the learning stage is performed using input image patches of natural scenes, output neurons in the model network are found to exhibit simple-cell-like receptive field properties. When the output of these simple-cell-like neurons are input to another model layer using the same learning rule, the second-layer output neurons after learning become less sensitive to the phase of gratings than the simple-cell-like input neurons. In particular, some of the second-layer output neurons become completely phase invariant, owing to the convergence of the connections from first-layer neurons with similar orientation selectivity to second-layer neurons in the model network. We examine the parameter dependencies of the receptive field properties of the model neurons after learning and discuss their biological implications. We also show that the localized learning rule is consistent with experimental results concerning neuronal plasticity and can replicate the receptive fields of simple and complex cells.

  19. Top-down methodology for human factors research

    NASA Technical Reports Server (NTRS)

    Sibert, J.

    1983-01-01

    User-computer interaction as a conversation is discussed. The design of user interfaces, which depends on viewing communication between a user and the computer as a conversation, is presented. This conversation includes inputs to the computer (outputs from the user), outputs from the computer (inputs to the user), and the sequencing in both time and space of those outputs and inputs. The conversation is viewed from the user's side of the conversation. Two languages are modeled: the one with which the user communicates with the computer and the language where communication flows from the computer to the user. Both languages exist on three levels: the semantic, syntactic, and lexical. It is suggested that natural languages can also be considered in these terms.

  20. Convolutional neural network for road extraction

    NASA Astrophysics Data System (ADS)

    Li, Junping; Ding, Yazhou; Feng, Fajie; Xiong, Baoyu; Cui, Weihong

    2017-11-01

    In this paper, a convolutional neural network with large-block input and small-block output was used to extract roads. To reflect the complex road characteristics in the study area, a deep convolutional neural network, VGG19, was used for road extraction. Based on an analysis of different input block sizes, output block sizes, and the resulting extraction quality, the votes of the deep convolutional neural networks were used as the final road prediction. The study image was from GF-2 panchromatic and multi-spectral fusion in Yinchuan. The precision of road extraction was 91%. The experiments showed that model averaging can improve the accuracy to some extent. At the same time, this paper gives some guidance on the choice of input and output block sizes.

  1. Homeostasis, singularities, and networks.

    PubMed

    Golubitsky, Martin; Stewart, Ian

    2017-01-01

    Homeostasis occurs in a biological or chemical system when some output variable remains approximately constant as an input parameter [Formula: see text] varies over some interval. We discuss two main aspects of homeostasis, both related to the effect of coordinate changes on the input-output map. The first is a reformulation of homeostasis in the context of singularity theory, achieved by replacing 'approximately constant over an interval' by 'zero derivative of the output with respect to the input at a point'. Unfolding theory then classifies all small perturbations of the input-output function. In particular, the 'chair' singularity, which is especially important in applications, is discussed in detail. Its normal form and universal unfolding [Formula: see text] is derived and the region of approximate homeostasis is deduced. The results are motivated by data on thermoregulation in two species of opossum and the spiny rat. We give a formula for finding chair points in mathematical models by implicit differentiation and apply it to a model of lateral inhibition. The second asks when homeostasis is invariant under appropriate coordinate changes. This is false in general, but for network dynamics there is a natural class of coordinate changes: those that preserve the network structure. We characterize those nodes of a given network for which homeostasis is invariant under such changes. This characterization is determined combinatorially by the network topology.

  2. Stochastic multi-objective auto-optimization for resource allocation decision-making in fixed-input health systems.

    PubMed

    Bastian, Nathaniel D; Ekin, Tahir; Kang, Hyojung; Griffin, Paul M; Fulton, Lawrence V; Grannan, Benjamin C

    2017-06-01

    The management of hospitals within fixed-input health systems such as the U.S. Military Health System (MHS) can be challenging due to the large number of hospitals, as well as the uncertainty in input resources and achievable outputs. This paper introduces a stochastic multi-objective auto-optimization model (SMAOM) for resource allocation decision-making in fixed-input health systems. The model can automatically identify where to re-allocate system input resources at the hospital level in order to optimize overall system performance, while considering uncertainty in the model parameters. The model is applied to 128 hospitals in the three services (Air Force, Army, and Navy) in the MHS using hospital-level data from 2009 - 2013. The results are compared to the traditional input-oriented variable returns-to-scale Data Envelopment Analysis (DEA) model. The application of SMAOM to the MHS increases the expected system-wide technical efficiency by 18 % over the DEA model while also accounting for uncertainty of health system inputs and outputs. The developed method is useful for decision-makers in the Defense Health Agency (DHA), who have a strategic level objective of integrating clinical and business processes through better sharing of resources across the MHS and through system-wide standardization across the services. It is also less sensitive to data outliers or sampling errors than traditional DEA methods.

  3. Applications of information theory, genetic algorithms, and neural models to predict oil flow

    NASA Astrophysics Data System (ADS)

    Ludwig, Oswaldo; Nunes, Urbano; Araújo, Rui; Schnitman, Leizer; Lepikson, Herman Augusto

    2009-07-01

    This work introduces a new information-theoretic methodology for choosing variables and their time lags in a prediction setting, particularly when neural networks are used in non-linear modeling. The first contribution of this work is the Cross Entropy Function (XEF), proposed to select input variables and their lags in order to compose the input vector of black-box prediction models. The proposed XEF method is more appropriate than the usually applied Cross Correlation Function (XCF) when the relationship among the input and output signals comes from a non-linear dynamic system. The second contribution is a method that minimizes the Joint Conditional Entropy (JCE) between the input and output variables by means of a Genetic Algorithm (GA). The aim is to take into account the dependence among the input variables when selecting the most appropriate set of inputs for a prediction problem. In short, these methods can be used to assist in selecting input training data that contain the information necessary to predict the target data. The proposed methods are applied to a petroleum engineering problem: predicting oil production. Experimental results obtained with a real-world dataset are presented, demonstrating the feasibility and effectiveness of the method.

  4. Input-output mapping reconstruction of spike trains at dorsal horn evoked by manual acupuncture

    NASA Astrophysics Data System (ADS)

    Wei, Xile; Shi, Dingtian; Yu, Haitao; Deng, Bin; Lu, Meili; Han, Chunxiao; Wang, Jiang

    2016-12-01

    In this study, a generalized linear model (GLM) is used to reconstruct the mapping from acupuncture stimulation to spike trains, driven by action potential data. The electrical signals are recorded in the spinal dorsal horn after manual acupuncture (MA) manipulations with different frequencies applied at the “Zusanli” point of experimental rats. A maximum-likelihood method is adopted to estimate the parameters of the GLM and the quantified value of the assumed model input. Through validating the accuracy of firings generated from the established GLM, it is found that the input-output mapping of spike trains evoked by acupuncture can be successfully reconstructed for different frequencies. Furthermore, via comparing the performance of several GLMs based on distinct inputs, it is suggested that an input in the form of a half-sine with noise can well describe the generator potential induced by the acupuncture mechanical action. In particular, the comparison of reproducing the experimental spikes for five selected inputs is in accordance with the phenomenon found in Hodgkin-Huxley (H-H) model simulations, which indicates that the mapping from the half-sine-with-noise input to the experimental spikes meets the real encoding scheme to some extent. These studies provide new insight into the coding processes and information transfer of acupuncture.
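
    A generic sketch of maximum-likelihood GLM fitting for spike data is given below: binned spike counts are regressed on a stimulus covariate under a Poisson (log-link) model, with a half-sine-plus-noise input standing in for the assumed generator potential. This is not the paper's exact estimation procedure, and all signal parameters are assumptions.

```python
# Generic sketch of fitting a spiking GLM by maximum likelihood: binned spike
# counts are regressed on a stimulus covariate under a Poisson (log-link)
# model. The half-sine-with-noise input is a stand-in for the assumed
# generator potential; this is not the paper's exact estimation procedure.
import numpy as np
from sklearn.linear_model import PoissonRegressor

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 500)                       # 2 ms bins over 1 s
stimulus = np.clip(np.sin(2 * np.pi * 2 * t), 0, None) + 0.1 * rng.normal(size=t.size)

# Simulate spike counts from a known log-linear rate, then try to recover it.
true_rate = np.exp(-1.0 + 2.0 * stimulus)
counts = rng.poisson(true_rate)

glm = PoissonRegressor(alpha=0.0).fit(stimulus.reshape(-1, 1), counts)
print(f"estimated intercept {glm.intercept_:.2f} (true -1.0), "
      f"coefficient {glm.coef_[0]:.2f} (true 2.0)")
```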

  5. Overview of Graphical User Interface for ARRBOD (Acute Radiation Risk and BRYNTRN Organ Dose Projection)

    NASA Technical Reports Server (NTRS)

    Kim, Myung-Hee Y.; Hu, Shaowen; Nounu, Hatem N.; Cucinotta, Francis A.

    2010-01-01

    Solar particle events (SPEs) pose the risk of acute radiation sickness (ARS) to astronauts, because organ doses from large SPEs may reach critical levels during extravehicular activities (EVAs) or in lightly shielded spacecraft. NASA has developed an organ dose projection model of the Baryon transport code (BRYNTRN) with an output data processing module, SUMDOSE, and a probabilistic model of acute radiation risk (ARR). BRYNTRN code operation requires extensive input preparation, and the risk projection models of organ doses and ARR take the output from BRYNTRN as an input to their calculations. With a graphical user interface (GUI) to handle input and output for BRYNTRN, these response models can be connected easily and correctly to BRYNTRN in a user-friendly way. The GUI for the Acute Radiation Risk and BRYNTRN Organ Dose (ARRBOD) projection code provides seamless integration of input and output manipulations required for operations of the ARRBOD modules: BRYNTRN, SUMDOSE, and the ARR probabilistic response model. The ARRBOD GUI is intended for mission planners, radiation shield designers, space operations in the mission operations directorate (MOD), and space biophysics researchers. Assessment of astronauts' organ doses and ARS from exposure to historically large SPEs is in support of mission design and operation planning to avoid ARS and stay within the current NASA short-term dose limits. The ARRBOD GUI will serve as a proof-of-concept for future integration of other risk projection models for human space applications. We present an overview of the ARRBOD GUI product, which is a new self-contained product, for the major components of the overall system, subsystem interconnections, and external interfaces.

  6. Prediction model of sinoatrial node field potential using high order partial least squares.

    PubMed

    Feng, Yu; Cao, Hui; Zhang, Yanbin

    2015-01-01

    High order partial least squares (HOPLS) is a novel data processing method that is well suited to building prediction models with tensor inputs and outputs. The objective of this study is to use HOPLS to build a prediction model of the relationship between the sinoatrial node field potential and high glucose. The three sub-signals of the sinoatrial node field potential make up the model's input, and the concentration and actuation duration of high glucose make up the model's output. The results show that, when predicting two-dimensional variables, HOPLS has the same predictive ability as partial least squares (PLS) with a lower dispersion degree.
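
    HOPLS itself is not part of common Python libraries; as a hedged sketch, the code below shows only the standard PLS baseline that the study compares against, applied to unfolded (flattened) tensor data with made-up shapes and values.

      # Hedged sketch: the PLS baseline the study compares against, applied to
      # unfolded (flattened) tensor inputs and a two-dimensional output.
      import numpy as np
      from sklearn.cross_decomposition import PLSRegression

      rng = np.random.default_rng(1)
      n_trials = 60
      # Input: three sub-signals of 200 samples each, unfolded to one row per trial.
      X = rng.standard_normal((n_trials, 3 * 200))
      # Output: glucose concentration and actuation duration per trial (synthetic proxies).
      Y = np.column_stack([
          X[:, :200].mean(axis=1) * 5.0 + 10.0,
          X[:, 200:400].std(axis=1) * 30.0,
      ]) + 0.1 * rng.standard_normal((n_trials, 2))

      pls = PLSRegression(n_components=4)
      pls.fit(X, Y)
      print(pls.score(X, Y))   # R^2 of the two-dimensional prediction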

  7. Replacing Fortran Namelists with JSON

    NASA Astrophysics Data System (ADS)

    Robinson, T. E., Jr.

    2017-12-01

    Maintaining a log of input parameters for a climate model is important for understanding potential causes of answer changes during the development stages. Additionally, since modern Fortran is now interoperable with C, a more modern software infrastructure that includes code written in C is needed. Merging these two separate facets of climate modeling requires quality control for monitoring changes to input parameters and model defaults that can work with both Fortran and C. JSON will soon replace namelists as the preferred key/value pair input in the GFDL model. By adding a JSON parser written in C into the model, the input can be used by all functions and subroutines in the model, errors can be handled by the model instead of by the internal namelist parser, and the values can be output into a single file that is easily parsable by readily available tools. Input JSON files can handle all of the functionality of a namelist while being portable between C and Fortran. Fortran wrappers using unlimited polymorphism are crucial to allow for simple and compact code that avoids the need for many subroutines contained in an interface. Errors can be handled with more detail by providing information about the location of syntax errors or typos. The output JSON provides a ground truth for the values that the model actually uses by providing not only the values loaded through the input JSON, but also any default values that were not included. This kind of quality control on model input is crucial for maintaining reproducibility and understanding any answer changes resulting from changes in the input.
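
    As a rough Python illustration of the key/value idea (not the GFDL implementation, and with invented parameter names), the sketch below reads an input JSON, falls back to defaults for missing keys, and writes a single output JSON recording the values the model actually used.

      # Hedged sketch: read input JSON, apply defaults for missing keys, and emit an
      # output JSON that records the values actually used (parameter names are invented).
      import json

      defaults = {"dt_atmos": 1800, "use_new_physics": False, "albedo": 0.3}

      input_text = '{"dt_atmos": 900, "use_new_physics": true}'   # contents of an input file
      user_values = json.loads(input_text)

      actual = {**defaults, **user_values}          # defaults overridden by user input
      with open("model_input_used.json", "w") as f:
          json.dump(actual, f, indent=2)            # ground truth of what the model ran with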

  8. Principal Dynamic Mode Analysis of the Hodgkin–Huxley Equations

    PubMed Central

    Eikenberry, Steffen E.; Marmarelis, Vasilis Z.

    2015-01-01

    We develop an autoregressive model framework based on the concept of Principal Dynamic Modes (PDMs) for the process of action potential (AP) generation in the excitable neuronal membrane described by the Hodgkin–Huxley (H–H) equations. The model's exogenous input is injected current, and whenever the membrane potential output exceeds a specified threshold, it is fed back as a second input. The PDMs are estimated from the previously developed Nonlinear Autoregressive Volterra (NARV) model, and represent an efficient functional basis for Volterra kernel expansion. The PDM-based model admits a modular representation, consisting of the forward and feedback PDM bases as linear filterbanks for the exogenous and autoregressive inputs, respectively, whose outputs are then fed to a static nonlinearity composed of polynomials operating on the PDM outputs and cross-terms of pair-products of PDM outputs. A two-step procedure for model reduction is performed: first, influential subsets of the forward and feedback PDM bases are identified and selected as the reduced PDM bases. Second, the terms of the static nonlinearity are pruned. The first step reduces model complexity from a total of 65 coefficients to 27, while the second further reduces the model coefficients to only eight. It is demonstrated that the performance cost of model reduction in terms of out-of-sample prediction accuracy is minimal. Unlike the full model, the eight coefficient pruned model can be easily visualized to reveal the essential system components, and thus the data-derived PDM model can yield insight into the underlying system structure and function. PMID:25630480

  9. HAWQS Beta Flyer

    EPA Pesticide Factsheets

    HAWQS is a web-based interactive water quantity and quality modeling system that provides users with interactive web interfaces and maps; pre-loaded input data; outputs that include tables, charts, graphs and raw output data; and a user guide.

  10. HAWQS beta flyer

    EPA Pesticide Factsheets

    HAWQS is a web-based interactive water quantity and quality modeling system that provides users with interactive web interfaces and maps; pre-loaded input data; outputs that include tables, charts, graphs and raw output data; and a user guide.

  11. Life and dynamic capacity modeling for aircraft transmissions

    NASA Technical Reports Server (NTRS)

    Savage, Michael

    1991-01-01

    A computer program to simulate the dynamic capacity and life of parallel shaft aircraft transmissions is presented. Five basic configurations can be analyzed: single mesh, compound, parallel, reverted, and single plane reductions. In execution, the program prompts the user for the data file prefix name, takes input from an ASCII file, and writes its output to a second ASCII file with the same prefix name. The input data file includes the transmission configuration, the input shaft torque and speed, and descriptions of the transmission geometry and the component gears and bearings. The program output file describes the transmission, its components, their capabilities, locations, and loads. It also lists the dynamic capability, ninety percent reliability, and mean life of each component and the transmission as a system. The program, its input and output files, and the theory behind its operation are described.

  12. Method and system for detecting a failure or performance degradation in a dynamic system such as a flight vehicle

    NASA Technical Reports Server (NTRS)

    Miller, Robert H. (Inventor); Ribbens, William B. (Inventor)

    2003-01-01

    A method and system for detecting a failure or performance degradation in a dynamic system having sensors for measuring state variables and providing corresponding output signals in response to one or more system input signals are provided. The method includes calculating estimated gains of a filter and selecting an appropriate linear model for processing the output signals based on the input signals. The step of calculating utilizes one or more models of the dynamic system to obtain estimated signals. The method further includes calculating output error residuals based on the output signals and the estimated signals. The method also includes detecting one or more hypothesized failures or performance degradations of a component or subsystem of the dynamic system based on the error residuals. The step of calculating the estimated values is performed optimally with respect to one or more of noise, uncertainty in the model parameters, and un-modeled dynamics of the dynamic system, which may be a flight vehicle, a financial market, or a modeled financial system.

  13. USEEIO: a New and Transparent United States ...

    EPA Pesticide Factsheets

    National-scope environmental life cycle models of goods and services may be used for many purposes, including quantifying the impacts of production and consumption of nations, assessing organization-wide impacts, identifying purchasing hot spots, analyzing the environmental impacts of policies, and performing streamlined life cycle assessment. USEEIO is a new environmentally extended input-output model of the United States fit for such purposes and other sustainable materials management applications. USEEIO melds data on economic transactions between 389 industry sectors with environmental data for these sectors covering land, water, energy and mineral usage and emissions of greenhouse gases, criteria air pollutants, nutrients and toxics, to build a life cycle model of 385 US goods and services. In comparison with existing US input-output models, USEEIO is more current, with most data representing year 2013; more extensive in its coverage of resources and emissions; more deliberate and detailed in its interpretation and combination of data sources; and includes formal data quality evaluation and description. USEEIO was assembled with a new Python module called the IO Model Builder capable of assembling and calculating results of user-defined input-output models and exporting the models into LCA software. The model and data quality evaluation capabilities are demonstrated with an analysis of the environmental performance of an average hospital in the US. All USEEIO f
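
    The core calculation of an environmentally extended input-output model can be sketched in a few lines of Python; the 3-sector matrices below are toy assumptions, not USEEIO data, and the code is only meant to show the Leontief-inverse step and the environmental satellite multiplication.

      # Hedged sketch of an environmentally extended input-output calculation
      # (illustrative 3-sector numbers, not USEEIO data).
      import numpy as np

      A = np.array([[0.10, 0.05, 0.00],     # direct requirements: inputs per dollar of output
                    [0.20, 0.10, 0.10],
                    [0.05, 0.30, 0.05]])
      B = np.array([[0.8, 0.3, 1.2]])       # satellite row: kg CO2e emitted per dollar of output

      y = np.array([1.0e6, 0.0, 0.0])       # final demand: $1M of sector-1 goods

      L = np.linalg.inv(np.eye(3) - A)      # Leontief inverse
      x = L @ y                             # total (direct + indirect) sector output
      impacts = B @ x                       # life cycle emissions attributable to the demand
      print(x, impacts)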

  14. A review of surrogate models and their application to groundwater modeling

    NASA Astrophysics Data System (ADS)

    Asher, M. J.; Croke, B. F. W.; Jakeman, A. J.; Peeters, L. J. M.

    2015-08-01

    The spatially and temporally variable parameters and inputs to complex groundwater models typically result in long runtimes which hinder comprehensive calibration, sensitivity, and uncertainty analysis. Surrogate modeling aims to provide a simpler, and hence faster, model which emulates the specified output of a more complex model as a function of its inputs and parameters. In this review paper, we summarize surrogate modeling techniques in three categories: data-driven, projection, and hierarchical-based approaches. Data-driven surrogates approximate a groundwater model through an empirical model that captures the input-output mapping of the original model. Projection-based models reduce the dimensionality of the parameter space by projecting the governing equations onto a basis of orthonormal vectors. In hierarchical or multifidelity methods the surrogate is created by simplifying the representation of the physical system, such as by ignoring certain processes or reducing the numerical resolution. In discussing the application of these methods to groundwater modeling, we note several imbalances in the existing literature: a large body of work on data-driven approaches seemingly ignores major drawbacks to the methods; only a fraction of the literature focuses on creating surrogates to reproduce outputs of fully distributed groundwater models, despite these being ubiquitous in practice; and a number of the more advanced surrogate modeling methods are yet to be fully applied in a groundwater modeling context.
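
    As a hedged sketch of the data-driven category, the Python code below fits a Gaussian process surrogate to a handful of runs of a stand-in "expensive" model; the function, sample sizes, and kernel are illustrative assumptions rather than a recommendation from the review.

      # Hedged sketch of a data-driven surrogate: a Gaussian process emulating the
      # input-output mapping of an expensive model (a cheap stand-in function here).
      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF, ConstantKernel

      def expensive_model(theta):
          # Stand-in for a long groundwater model run: head at an observation well.
          return np.sin(3.0 * theta[0]) + 0.5 * theta[1] ** 2

      rng = np.random.default_rng(2)
      thetas = rng.uniform(0.0, 1.0, size=(30, 2))      # 30 training runs of the full model
      heads = np.array([expensive_model(t) for t in thetas])

      surrogate = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(), normalize_y=True)
      surrogate.fit(thetas, heads)

      # Fast approximate predictions (with uncertainty) for calibration or sensitivity analysis.
      mean, std = surrogate.predict(rng.uniform(0.0, 1.0, size=(5, 2)), return_std=True)
      print(mean, std)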

  15. Logic gate system with three outputs and three inputs based on switchable electrocatalysis of glucose by glucose oxidase entrapped in chitosan films.

    PubMed

    Liu, Shuang; Wang, Lei; Lian, Wenjing; Liu, Hongyun; Li, Chen-Zhong

    2015-01-01

    A logic-gate system with three outputs and three inputs was developed based on the bioelectrocatalysis of glucose by glucose oxidase (GOx) entrapped in chitosan films on the electrode surface by means of ferrocenedicarboxylic acid (Fc(COOH)2). Cyclic voltammetric (CV) signals of Fc(COOH)2 exhibited pH-triggered on/off behavior owing to electrostatic interactions between the film and the probe at different pH levels. The addition of glucose greatly increased the oxidation peak current (Ipa) through the electrocatalytic reaction. pH and glucose were selected as two inputs. As a reversible inhibitor of GOx, Cu(2+) was chosen as the third input. The combination of three inputs led to Ipa with different values according to different mechanisms, which were defined as three outputs with two thresholds. The logic gate with three outputs by using one type of enzyme provided a novel model to build logic circuits based on biomacromolecules, which might be applied to the intelligent medical diagnostics as smart biosensors in the future. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  16. Sensitivity analysis of radionuclides atmospheric dispersion following the Fukushima accident

    NASA Astrophysics Data System (ADS)

    Girard, Sylvain; Korsakissok, Irène; Mallet, Vivien

    2014-05-01

    Atmospheric dispersion models are used in response to accidental releases with two purposes: minimising the population exposure during the accident, and complementing field measurements for the assessment of short- and long-term environmental and sanitary impacts. The predictions of these models are subject to considerable uncertainties of various origins. Notably, input data, such as meteorological fields or estimates of the emitted quantities as a function of time, are highly uncertain. The case studied here is the atmospheric release of radionuclides following the Fukushima Daiichi disaster. The model used in this study is Polyphemus/Polair3D, from which IRSN's operational long-distance atmospheric dispersion model ldX is derived. A sensitivity analysis was conducted in order to estimate the relative importance of a set of identified uncertainty sources. The complexity of this task was increased by four characteristics shared by most environmental models: high-dimensional inputs; correlated inputs or inputs with complex structures; high-dimensional output; and a multiplicity of purposes that require sophisticated and non-systematic post-processing of the output. The sensitivities of a set of outputs were estimated with the Morris screening method. The input ranking was highly dependent on the considered output. Yet a few variables, such as the horizontal diffusion coefficient or cloud thickness, were found to have a weak influence on most outputs and could be discarded from further studies. The sensitivity analysis procedure was also applied to indicators of model performance computed on a set of gamma dose rate observations. This original approach is of particular interest since the observations could later be used to calibrate the probability distributions of the input variables. Indeed, only the variables that are influential on performance scores are likely to allow for calibration. An indicator based on the time matching of emission peaks was elaborated in order to complement classical statistical scores, which were dominated by deposit dose rates and almost insensitive to lower-atmosphere dose rates. The substantial sensitivity of these performance indicators is auspicious for future calibration attempts and indicates that the simple perturbations used here may be sufficient to represent an essential part of the overall uncertainty.
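
    The Morris screening step can be illustrated with the SALib Python package; the sketch below is an assumption-laden toy (three invented inputs and a one-line stand-in for the dispersion model), not the actual Polyphemus/Polair3D setup.

      # Hedged sketch of Morris screening with SALib; the inputs, bounds, and the toy
      # output function are invented stand-ins for the full dispersion model.
      import numpy as np
      from SALib.sample import morris as morris_sample
      from SALib.analyze import morris as morris_analyze

      problem = {
          "num_vars": 3,
          "names": ["horizontal_diffusion", "cloud_thickness", "emission_scaling"],
          "bounds": [[0.5, 2.0], [0.5, 2.0], [0.1, 10.0]],
      }

      X = morris_sample.sample(problem, N=50, num_levels=4)

      def toy_output(row):
          kh, hc, q = row
          return q * np.exp(-0.1 * kh) / hc       # stand-in for an aggregated dose output

      Y = np.apply_along_axis(toy_output, 1, X)

      Si = morris_analyze.analyze(problem, X, Y, num_levels=4)
      print(Si["mu_star"], Si["sigma"])           # elementary-effect screening measures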

  17. A modified NARMAX model-based self-tuner with fault tolerance for unknown nonlinear stochastic hybrid systems with an input-output direct feed-through term.

    PubMed

    Tsai, Jason S-H; Hsu, Wen-Teng; Lin, Long-Guei; Guo, Shu-Mei; Tann, Joseph W

    2014-01-01

    A modified nonlinear autoregressive moving average with exogenous inputs (NARMAX) model-based state-space self-tuner with fault tolerance is proposed in this paper for the unknown nonlinear stochastic hybrid system with a direct transmission matrix from input to output. Through the off-line observer/Kalman filter identification method, one has a good initial guess of the modified NARMAX model to reduce the on-line system identification process time. Then, based on the modified NARMAX-based system identification, a corresponding adaptive digital control scheme is presented for the unknown continuous-time nonlinear system, with an input-output direct transmission term, which also has measurement and system noises and inaccessible system states. In addition, an effective state-space self-tuner with a fault tolerance scheme is presented for the unknown multivariable stochastic system. A quantitative criterion is suggested by comparing the innovation process error estimated by the Kalman filter estimation algorithm, so that a weighting matrix resetting technique, which adjusts and resets the covariance matrices of the parameter estimates obtained by the Kalman filter estimation algorithm, is utilized to achieve parameter estimation for faulty system recovery. Consequently, the proposed method can effectively cope with partially abrupt and/or gradual system faults and input failures through fault detection. Copyright © 2013 ISA. Published by Elsevier Ltd. All rights reserved.

  18. COSP for Windows: Strategies for Rapid Analyses of Cyclic Oxidation Behavior

    NASA Technical Reports Server (NTRS)

    Smialek, James L.; Auping, Judith V.

    2002-01-01

    COSP is a publicly available computer program that models the cyclic oxidation weight gain and spallation process. Inputs to the model include the selection of an oxidation growth law and a spalling geometry, plus oxide phase, growth rate, spall constant, and cycle duration parameters. Output includes weight change, the amounts of retained and spalled oxide, the total oxygen and metal consumed, and the terminal rates of weight loss and metal consumption. The present version is Windows-based and can accordingly be operated conveniently while other applications remain open for importing experimental weight change data, storing model output data, or plotting model curves. Point-and-click operating features include multiple drop-down menus for input parameters, data importing, and quick, on-screen plots showing one selection of the six output parameters for up to 10 models. A run summary text lists various characteristic parameters that are helpful in describing cyclic behavior, such as the maximum weight change, the number of cycles to reach the maximum weight gain or zero weight change, the ratio of these, and the final rate of weight loss. The program includes save and print options as well as a help file. Families of model curves readily show the sensitivity to various input parameters. The cyclic behaviors of nickel aluminide (NiAl) and a complex superalloy are shown to be properly fitted by model curves. However, caution is always advised regarding the uniqueness claimed for any specific set of input parameters.

  19. Interval Predictor Models for Data with Measurement Uncertainty

    NASA Technical Reports Server (NTRS)

    Lacerda, Marcio J.; Crespo, Luis G.

    2017-01-01

    An interval predictor model (IPM) is a computational model that predicts the range of an output variable given input-output data. This paper proposes strategies for constructing IPMs based on semidefinite programming and sum of squares (SOS). The models are optimal in the sense that they yield an interval valued function of minimal spread containing all the observations. Two different scenarios are considered. The first one is applicable to situations where the data is measured precisely whereas the second one is applicable to data subject to known biases and measurement error. In the latter case, the IPMs are designed to fully contain regions in the input-output space where the data is expected to fall. Moreover, we propose a strategy for reducing the computational cost associated with generating IPMs as well as means to simulate them. Numerical examples illustrate the usage and performance of the proposed formulations.
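
    The sketch below is a deliberately simplified linear-programming variant of the idea, not the semidefinite/SOS formulation of the paper: it fits lower and upper bounding polynomials of minimal average spread that contain all observations, using scipy.

      # Hedged sketch: a simplified LP interval predictor with a polynomial basis
      # (the paper's SOS/semidefinite formulation is not reproduced here).
      import numpy as np
      from scipy.optimize import linprog

      rng = np.random.default_rng(3)
      x = rng.uniform(-1.0, 1.0, 80)
      y = np.sin(2.0 * x) + 0.2 * rng.standard_normal(80)        # noisy observations

      Phi = np.column_stack([np.ones_like(x), x, x**2, x**3])    # basis functions
      n, d = Phi.shape

      # Variables: [theta_lower, theta_upper]; minimize the average spread subject to
      # every observation lying between the lower and upper bounding polynomials.
      c = np.concatenate([-Phi.mean(axis=0), Phi.mean(axis=0)])
      A_ub = np.block([[Phi, np.zeros((n, d))],                  # lower(x_i) <= y_i
                       [np.zeros((n, d)), -Phi]])                # upper(x_i) >= y_i
      b_ub = np.concatenate([y, -y])

      res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * (2 * d))
      theta_l, theta_u = res.x[:d], res.x[d:]
      print(np.all(Phi @ theta_l <= y + 1e-6), np.all(y <= Phi @ theta_u + 1e-6))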

  20. Calculating distributed glacier mass balance for the Swiss Alps from RCM output: Development and testing of downscaling and validation methods

    NASA Astrophysics Data System (ADS)

    Machguth, H.; Paul, F.; Kotlarski, S.; Hoelzle, M.

    2009-04-01

    Climate model output has been applied in several studies on glacier mass balance calculation. To date, the computation of mass balance has mostly been performed at the native resolution of the climate model output, or data from individual cells were selected and statistically downscaled. Little attention has been given to the issue of downscaling entire fields of climate model output to a resolution fine enough to compute glacier mass balance in rugged high-mountain terrain. In this study we explore the use of gridded output from a regional climate model (RCM) to drive a distributed mass balance model for the perimeter of the Swiss Alps and the time frame 1979-2003. Our focus lies on the development and testing of downscaling and validation methods. The mass balance model runs at daily steps and 100 m spatial resolution while the RCM REMO provides daily grids (approx. 18 km resolution) of dynamically downscaled re-analysis data. Interpolation techniques and sub-grid parametrizations are combined to bridge the gap in spatial resolution and to obtain daily input fields of air temperature, global radiation and precipitation. The meteorological input fields are compared to measurements at 14 high-elevation weather stations. Computed mass balances are compared to various sets of direct measurements, including stake readings and mass balances for entire glaciers. The validation procedure is performed separately for annual, winter and summer balances. Time series of mass balances for entire glaciers obtained from the model run agree well with observed time series. On the one hand, summer melt measured at stakes on several glaciers is well reproduced by the model; on the other hand, observed accumulation is either over- or underestimated. It is shown that these shifts are systematic and correlated to regional biases in the meteorological input fields. We conclude that the gap in spatial resolution is not a large drawback, while biases in RCM output are a major limitation to model performance. The development and testing of methods to reduce regionally variable biases in entire fields of RCM output should be a focus of future studies.

  1. Hierarchical resilience with lightweight threads.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wheeler, Kyle Bruce

    2011-10-01

    This paper proposes a methodology for providing robustness and resilience for a highly threaded distributed- and shared-memory environment based on well-defined inputs and outputs to lightweight tasks. These inputs and outputs form a failure 'barrier', allowing tasks to be restarted or duplicated as necessary. These barriers must be expanded based on task behavior, such as communication between tasks, but do not prohibit any given behavior. One trend in high-performance computing codes seems to be toward self-contained functions that mimic functional programming. Software designers are trending toward a model of software design where their core functions are specified in side-effect-free or low-side-effect ways, wherein the inputs and outputs of the functions are well defined. This provides the ability to copy the inputs to wherever they need to be - whether that's the other side of the PCI bus or the other side of the network - do work on that input using local memory, and then copy the outputs back (as needed). This design pattern is popular among new distributed threading environment designs. Such designs include the Barcelona STARS system, distributed OpenMP systems, the Habanero-C and Habanero-Java systems from Vivek Sarkar at Rice University, the HPX/ParalleX model from LSU, as well as our own Scalable Parallel Runtime effort (SPR) and the Trilinos stateless kernels. This design pattern is also shared by CUDA and several OpenMP extensions for GPU-type accelerators (e.g. the PGI OpenMP extensions).

  2. Life cycle assessment modelling of waste-to-energy incineration in Spain and Portugal.

    PubMed

    Margallo, M; Aldaco, R; Irabien, A; Carrillo, V; Fischer, M; Bala, A; Fullana, P

    2014-06-01

    In recent years, waste management systems have been evaluated using a life cycle assessment (LCA) approach. A main shortcoming of prior studies was the focus on a mixture of waste with different characteristics. The estimation of emissions and consumptions associated with each waste fraction in these studies presented allocation problems. Waste-to-energy (WTE) incineration is a clear example in which municipal solid waste (MSW), comprising many types of materials, is processed to produce several outputs. This paper investigates an approach to better understand incineration processes in Spain and Portugal by applying a multi-input/output allocation model. The application of this model enabled predictions of WTE inputs and outputs, including the consumption of ancillary materials and combustibles, air emissions, solid wastes, and the energy produced during the combustion of each waste fraction. © The Author(s) 2014.

  3. Evaluating Multi-Input/Multi-Output Digital Control Systems

    NASA Technical Reports Server (NTRS)

    Pototzky, Anthony S.; Wieseman, Carol D.; Hoadley, Sherwood T.; Mukhopadhyay, Vivek

    1994-01-01

    A controller-performance-evaluation (CPE) methodology for multi-input/multi-output (MIMO) digital control systems was developed. The procedures identify potentially destabilizing controllers and confirm the satisfactory performance of stabilizing ones. The methodology is generic and can be used in many types of multi-loop digital-controller applications, including digital flight-control systems, digitally controlled spacecraft structures, and actively controlled wind-tunnel models. It is also applicable to other complex, highly dynamic digital controllers, such as those in high-performance robot systems.

  4. User's manual for master: Modeling of aerodynamic surfaces by 3-dimensional explicit representation. [input to three dimensional computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Gibson, S. G.

    1983-01-01

    A system of computer programs was developed to model general three dimensional surfaces. Surfaces are modeled as sets of parametric bicubic patches. There are also capabilities to transform coordinates, to compute mesh/surface intersection normals, and to format input data for a transonic potential flow analysis. A graphical display of surface models and intersection normals is available. There are additional capabilities to regulate point spacing on input curves and to compute surface/surface intersection curves. Input and output data formats are described; detailed suggestions are given for user input. Instructions for execution are given, and examples are shown.

  5. Optimal fixed-finite-dimensional compensator for Burgers' equation with unbounded input/output operators

    NASA Technical Reports Server (NTRS)

    Burns, John A.; Marrekchi, Hamadi

    1993-01-01

    The problem of using reduced order dynamic compensators to control a class of nonlinear parabolic distributed parameter systems was considered. Concentration was on a system with unbounded input and output operators governed by Burgers' equation. A linearized model was used to compute low-order-finite-dimensional control laws by minimizing certain energy functionals. Then these laws were applied to the nonlinear model. Standard approaches to this problem employ model/controller reduction techniques in conjunction with linear quadratic Gaussian (LQG) theory. The approach used is based on the finite dimensional Bernstein/Hyland optimal projection theory which yields a fixed-finite-order controller.

  6. Computing the modal mass from the state space model in combined experimental-operational modal analysis

    NASA Astrophysics Data System (ADS)

    Cara, Javier

    2016-05-01

    Modal parameters comprise natural frequencies, damping ratios, modal vectors and modal masses. In a theoretical framework, these parameters are the basis for the solution of vibration problems using the theory of modal superposition. In practice, they can be computed from input-output vibration data: the usual procedure is to estimate a mathematical model from the data and then to compute the modal parameters from the estimated model. The most popular models for input-output data are based on the frequency response function, but in recent years the state space model in the time domain has become popular among researchers and practitioners of modal analysis with experimental data. In this work, the equations to compute the modal parameters from the state space model when input and output data are available (as in combined experimental-operational modal analysis) are derived in detail using invariants of the state space model: the equations needed to compute natural frequencies, damping ratios and modal vectors are well known in the operational modal analysis framework, but the equation needed to compute the modal masses has not received much attention in the technical literature. These equations are applied to both a numerical simulation and an experimental study in the last part of the work.
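
    As a small illustration of the well-known part of these equations (not the paper's modal-mass derivation), the Python sketch below extracts natural frequencies and damping ratios from the eigenvalues of the continuous-time state matrix of an assumed two-degree-of-freedom system.

      # Hedged sketch: natural frequencies and damping ratios from the eigenvalues of a
      # continuous-time state matrix; modal masses need the full input/output equations
      # and are not computed here.
      import numpy as np

      # Assumed 2-DOF mass-spring-damper in state-space form x_dot = A x + B u.
      M = np.diag([1.0, 2.0])
      K = np.array([[300.0, -100.0], [-100.0, 100.0]])
      C = 0.02 * K
      A = np.block([[np.zeros((2, 2)), np.eye(2)],
                    [-np.linalg.solve(M, K), -np.linalg.solve(M, C)]])

      lam = np.linalg.eigvals(A)
      lam = lam[np.imag(lam) > 0]                   # keep one eigenvalue per mode
      wn = np.abs(lam)                              # natural frequencies (rad/s)
      zeta = -np.real(lam) / np.abs(lam)            # damping ratios
      print(wn / (2.0 * np.pi), zeta)               # frequencies in Hz and damping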

  7. Solid rocket booster thermal radiation model. Volume 2: User's manual

    NASA Technical Reports Server (NTRS)

    Lee, A. L.

    1976-01-01

    A user's manual was prepared for the computer program of a solid rocket booster (SRB) thermal radiation model. The following information was included: (1) structure of the program, (2) input information required, (3) examples of input cards and output printout, (4) program characteristics, and (5) program listing.

  8. Analysis performance of proton exchange membrane fuel cell (PEMFC)

    NASA Astrophysics Data System (ADS)

    Mubin, A. N. A.; Bahrom, M. H.; Azri, M.; Ibrahim, Z.; Rahim, N. A.; Raihan, S. R. S.

    2017-06-01

    Recently, the proton exchange membrane fuel cell (PEMFC) has gained much attention in renewable energy technology because it is a mechanically ideal and zero-emission power source. PEMFC performance depends on the surroundings, such as temperature and pressure. This paper presents an analysis of the performance of the PEMFC by developing a mathematical thermodynamic model in Matlab/Simulink. The differential equations of the thermodynamic model of the PEMFC are used to explain the contribution of heat to the output voltage performance of the PEMFC. In addition, the partial pressure equation for hydrogen is included in the PEMFC mathematical model to study the voltage behaviour related to the input hydrogen pressure. The efficiency of the model is 33.8%, which is calculated by applying the energy conversion device equations to the thermal efficiency. The PEMFC's output voltage performance is increased by increasing the hydrogen input pressure and temperature.

  9. TWINTN4: A program for transonic four-wall interference assessment in two-dimensional wind tunnels

    NASA Technical Reports Server (NTRS)

    Kemp, W. B., Jr.

    1984-01-01

    A method for assessing the wall interference in transonic two-dimensional wind tunnel tests including the effects of the tunnel sidewall boundary layer was developed and implemented in a computer program named TWINTN4. The method involves three successive solutions of the transonic small disturbance potential equation to define the wind tunnel flow, the equivalent free air flow around the model, and the perturbation attributable to the model. Required input includes pressure distributions on the model and along the top and bottom tunnel walls which are used as boundary conditions for the wind tunnel flow. The wall-induced perturbation field is determined as the difference between the perturbation in the tunnel flow solution and the perturbation attributable to the model. The methodology used in the program is described and detailed descriptions of the computer program input and output are presented. Input and output for a sample case are given.

  10. The impacts of final demand changes on total output of Indonesian ICT sectors: An analysis using input-output approach

    NASA Astrophysics Data System (ADS)

    Zuhdi, Ubaidillah

    2014-06-01

    The purpose of this study is to analyze the impacts of final demand changes on the total output of Indonesian Information and Communication Technology (ICT) sectors. This study employs Input-Output (IO) analysis as the tool of analysis; more specifically, the demand-pull IO quantity model is applied in order to achieve this objective. "Whole sector change" and "pure change" conditions are considered in this study. The calculation results show that, in both conditions, the biggest positive impact on the total output of the sectors comes from the change in household consumption, while the change in imports has a negative impact. One of the recommendations from this study is to construct an import restriction policy for ICT products.

  11. Automated Knowledge Discovery From Simulators

    NASA Technical Reports Server (NTRS)

    Burl, Michael; DeCoste, Dennis; Mazzoni, Dominic; Scharenbroich, Lucas; Enke, Brian; Merline, William

    2007-01-01

    A computational method, SimLearn, has been devised to facilitate efficient knowledge discovery from simulators. Simulators are complex computer programs used in science and engineering to model diverse phenomena such as fluid flow, gravitational interactions, coupled mechanical systems, and nuclear, chemical, and biological processes. SimLearn uses active-learning techniques to efficiently address the "landscape characterization problem." In particular, SimLearn tries to determine which regions in "input space" lead to a given output from the simulator, where "input space" refers to an abstraction of all the variables going into the simulator, e.g., initial conditions, parameters, and interaction equations. Landscape characterization can be viewed as an attempt to invert the forward mapping of the simulator and recover the inputs that produce a particular output. Given that a single simulation run can take days or weeks to complete even on a large computing cluster, SimLearn attempts to reduce costs by reducing the number of simulations needed to effect discoveries. Unlike conventional data-mining methods that are applied to static predefined datasets, SimLearn involves an iterative process in which a most informative dataset is constructed dynamically by using the simulator as an oracle. On each iteration, the algorithm models the knowledge it has gained through previous simulation trials and then chooses which simulation trials to run next. Running these trials through the simulator produces new data in the form of input-output pairs. The overall process is embodied in an algorithm that combines support vector machines (SVMs) with active learning. SVMs use learning from examples (the examples are the input-output pairs generated by running the simulator) and a principle called maximum margin to derive predictors that generalize well to new inputs. In SimLearn, the SVM plays the role of modeling the knowledge that has been gained through previous simulation trials. Active learning is used to determine which new input points would be most informative if their output were known. The selected input points are run through the simulator to generate new information that can be used to refine the SVM. The process is then repeated. SimLearn carefully balances exploration (semi-randomly searching around the input space) versus exploitation (using the current state of knowledge to conduct a tightly focused search). During each iteration, SimLearn uses not one, but an ensemble of SVMs. Each SVM in the ensemble is characterized by different hyper-parameters that control various aspects of the learned predictor - for example, whether the predictor is constrained to be very smooth (nearby points in input space lead to similar output predictions) or whether the predictor is allowed to be "bumpy." The various SVMs will have different preferences about which input points they would like to run through the simulator next. SimLearn includes a formal mechanism for balancing the ensemble SVM preferences so that a single choice can be made for the next set of trials.
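
    The active-learning loop can be sketched in Python; the code below is an assumption-heavy toy (a cheap stand-in "simulator", a small SVM ensemble, and vote-based disagreement sampling), not the SimLearn implementation.

      # Hedged sketch of the active-learning idea: an SVM ensemble picks the candidate
      # inputs it disagrees on most, and only those are run through the "simulator".
      import numpy as np
      from sklearn.svm import SVC

      def simulator(x):
          # Stand-in for an expensive simulation: does this input region give the outcome?
          return int(x[0] ** 2 + 0.5 * x[1] ** 2 < 1.0)

      rng = np.random.default_rng(4)
      X_train = np.vstack([[[0.0, 0.0], [1.9, 1.9]],              # seed both outcome classes
                           rng.uniform(-2.0, 2.0, size=(18, 2))])
      y_train = np.array([simulator(x) for x in X_train])

      for it in range(5):                                         # active-learning iterations
          ensemble = [SVC(C=c, gamma=g).fit(X_train, y_train)
                      for c in (0.5, 5.0) for g in (0.2, 1.0)]    # different hyper-parameters
          candidates = rng.uniform(-2.0, 2.0, size=(500, 2))
          votes = np.mean([m.predict(candidates) for m in ensemble], axis=0)
          chosen = candidates[np.argsort(np.abs(votes - 0.5))[:10]]   # most informative trials
          X_train = np.vstack([X_train, chosen])
          y_train = np.concatenate([y_train, [simulator(x) for x in chosen]])

      print(len(y_train), "simulator runs used")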

  12. Emulation of simulations of atmospheric dispersion at Fukushima for Sobol' sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Girard, Sylvain; Korsakissok, Irène; Mallet, Vivien

    2015-04-01

    Polyphemus/Polair3D, from which IRSN's operational model ldX is derived, was used to simulate the atmospheric dispersion of radionuclides at the Japan scale after the Fukushima disaster. A previous study with the screening method of Morris had shown that the sensitivities depend strongly on the considered output; that only a few of the inputs are non-influential on all considered outputs; and that most influential inputs have either non-linear effects or are interacting. These preliminary results called for a more detailed sensitivity analysis, especially regarding the characterization of interactions. The method of Sobol' allows for a precise evaluation of interactions but requires large simulation samples. Gaussian process emulators for each considered output were built in order to relieve this computational burden. Globally aggregated outputs proved easy to emulate with high accuracy, and the associated Sobol' indices are in broad agreement with previous results obtained with the Morris method. More localized outputs, such as temporal averages of gamma dose rates at measurement stations, resulted in poorer emulator performance: test simulations could not be satisfactorily reproduced by some emulators. These outputs are of special interest because they can be compared to available observations, for instance for calibration purposes. A thorough inspection of prediction residuals hinted that the model response to wind perturbations often behaved in very distinct regimes relative to some thresholds. Complementing the initial sample with wind perturbations set to the extreme values allowed for a sensible improvement of some of the emulators, while others remained too unreliable to be used in a sensitivity analysis. Adaptive sampling or regime-wise emulation could be tried to circumvent this issue. Sobol' indices for local outputs revealed interesting patterns, mostly dominated by the winds, with very high interactions. The emulators will be useful for subsequent studies. Indeed, our goal is to characterize the model output uncertainty, but too little information is available about input uncertainties. Hence, calibration of the input distributions with observations and a Bayesian approach seems necessary. This would probably involve methods such as MCMC, which would be intractable without emulators.
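
    The emulate-then-analyze workflow can be sketched as follows; everything here (the two inputs, the toy model, the Gaussian process settings, and the use of the SALib package for Sobol' indices) is an illustrative assumption rather than the study's actual configuration.

      # Hedged sketch: train a Gaussian process emulator on a small simulation sample,
      # then estimate Sobol' indices cheaply on the emulator (SALib assumed available).
      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from SALib.sample import saltelli
      from SALib.analyze import sobol

      problem = {"num_vars": 2,
                 "names": ["wind_perturbation", "emission_scaling"],
                 "bounds": [[-1.0, 1.0], [0.1, 10.0]]}

      def dispersion_model(u, q):
          # Stand-in for one costly dispersion run, reduced to an aggregated output.
          return q * np.exp(-2.0 * u) + 0.1 * q * u

      rng = np.random.default_rng(5)
      X_train = np.column_stack([rng.uniform(-1, 1, 100), rng.uniform(0.1, 10, 100)])
      y_train = dispersion_model(X_train[:, 0], X_train[:, 1])
      emulator = GaussianProcessRegressor(normalize_y=True).fit(X_train, y_train)

      X = saltelli.sample(problem, 1024)          # large sample, cheap thanks to the emulator
      Y = emulator.predict(X)
      Si = sobol.analyze(problem, Y)
      print(Si["S1"], Si["ST"])                   # first-order and total-order indices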

  13. Propagation of economic shocks in input-output networks: A cross-country analysis

    NASA Astrophysics Data System (ADS)

    Contreras, Martha G. Alatriste; Fagiolo, Giorgio

    2014-12-01

    This paper investigates how economic shocks propagate and amplify through the input-output network connecting industrial sectors in developed economies. We study alternative models of diffusion on networks and we calibrate them using input-output data on real-world inter-sectoral dependencies for several European countries before the Great Recession. We show that the impact of economic shocks strongly depends on the nature of the shock and on country size. Shocks that impact final demand without changing production and the technological relationships between sectors have on average a large but very homogeneous impact on the economy. Conversely, when shocks also change the magnitudes of input-output across-sector interdependencies (and possibly sector production), the economy is subject to predominantly large but more heterogeneous avalanche sizes. In this case, we also find that (i) the more a sector is globally central in the country network, the larger its impact; and (ii) the largest European countries, such as those constituting the core of the European Union's economy, typically experience the largest avalanches, signaling their intrinsically higher vulnerability to economic shocks.

  14. Identification of an internal combustion engine model by nonlinear multi-input multi-output system identification. Ph.D. Thesis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Luh, G.C.

    1994-01-01

    This thesis presents the application of advanced modeling techniques to construct nonlinear forward and inverse models of internal combustion engines for the detection and isolation of incipient faults. The NARMAX (Nonlinear Auto-Regressive Moving Average modeling with eXogenous inputs) technique of system identification proposed by Leontaritis and Billings was used to derive the nonlinear model of an internal combustion engine over operating conditions corresponding to the I/M240 cycle. The I/M240 cycle is a standard proposed by the United States Environmental Protection Agency to measure tailpipe emissions in inspection and maintenance programs and consists of a driving schedule developed for the purpose of testing compliance with federal vehicle emission standards for carbon monoxide, unburned hydrocarbons, and nitrogen oxides. The experimental work for model identification and validation was performed on a 3.0 liter V6 engine installed in an engine test cell at the Center for Automotive Research at The Ohio State University. In this thesis, different types of model structures were proposed to obtain multi-input multi-output (MIMO) nonlinear NARX models. A modification of the algorithm proposed by He and Asada was used to estimate the robust orders of the derived MIMO nonlinear models. A methodology for the analysis of the inverse NARX model was developed. Two methods were proposed to derive the inverse NARX model: (1) inversion from the forward NARX model; and (2) direct identification of the inverse model from the output-input data set. In this thesis, invertibility, the minimum-phase characteristic of the zero dynamics, and stability analysis of the NARX forward model are also discussed. Stability in the sense of Lyapunov is also investigated to check the stability of the identified forward and inverse models. This application of the inverse problem leads to the estimation of unknown inputs and to actuator fault diagnosis.

  15. Evaluation of habitat suitability index models by global sensitivity and uncertainty analyses: a case study for submerged aquatic vegetation

    USGS Publications Warehouse

    Zajac, Zuzanna; Stith, Bradley M.; Bowling, Andrea C.; Langtimm, Catherine A.; Swain, Eric D.

    2015-01-01

    Habitat suitability index (HSI) models are commonly used to predict habitat quality and species distributions and are used to develop biological surveys, assess reserve and management priorities, and anticipate possible change under different management or climate change scenarios. Important management decisions may be based on model results, often without a clear understanding of the level of uncertainty associated with model outputs. We present an integrated methodology to assess the propagation of uncertainty from both inputs and structure of the HSI models on model outputs (uncertainty analysis: UA) and relative importance of uncertain model inputs and their interactions on the model output uncertainty (global sensitivity analysis: GSA). We illustrate the GSA/UA framework using simulated hydrology input data from a hydrodynamic model representing sea level changes and HSI models for two species of submerged aquatic vegetation (SAV) in southwest Everglades National Park: Vallisneria americana (tape grass) and Halodule wrightii (shoal grass). We found considerable spatial variation in uncertainty for both species, but distributions of HSI scores still allowed discrimination of sites with good versus poor conditions. Ranking of input parameter sensitivities also varied spatially for both species, with high habitat quality sites showing higher sensitivity to different parameters than low-quality sites. HSI models may be especially useful when species distribution data are unavailable, providing means of exploiting widely available environmental datasets to model past, current, and future habitat conditions. The GSA/UA approach provides a general method for better understanding HSI model dynamics, the spatial and temporal variation in uncertainties, and the parameters that contribute most to model uncertainty. Including an uncertainty and sensitivity analysis in modeling efforts as part of the decision-making framework will result in better-informed, more robust decisions.

  16. Greenland Regional and Ice Sheet-wide Geometry Sensitivity to Boundary and Initial conditions

    NASA Astrophysics Data System (ADS)

    Logan, L. C.; Narayanan, S. H. K.; Greve, R.; Heimbach, P.

    2017-12-01

    Ice sheet and glacier model outputs depend on initial and boundary conditions and other parameters that are uncertainly known. Conservation and constitutive equations formalize the relationship between model inputs and outputs, and the sensitivity of model-derived quantities of interest (e.g., ice sheet volume above floatation) to model variables can be obtained via the adjoint model of an ice sheet. We show how one particular ice sheet model, SICOPOLIS (SImulation COde for POLythermal Ice Sheets), depends on these inputs through comprehensive adjoint-based sensitivity analyses. SICOPOLIS discretizes the shallow-ice and shallow-shelf approximations for ice flow, and is well suited for paleo-studies of Greenland and Antarctica, among other computational domains. The adjoint model of SICOPOLIS was developed via algorithmic differentiation, facilitated by the source transformation tool OpenAD (developed at Argonne National Lab). While model sensitivity to various inputs can be computed by costly methods involving input perturbation simulations, the time-dependent adjoint model of SICOPOLIS delivers model sensitivities to initial and boundary conditions throughout time at lower cost. Here, we explore the sensitivities of the Greenland Ice Sheet's total and regional volumes to initial ice thickness, precipitation, basal sliding, and geothermal flux over the Holocene epoch. Sensitivity studies such as those described here are now accessible to the modeling community, based on the latest version of SICOPOLIS, which has been adapted for OpenAD to generate correct and efficient adjoint code.

  17. Performance Analysis of Triple Asymmetrical Optical Micro Ring Resonator with 2 × 2 Input-Output Bus Waveguide

    NASA Astrophysics Data System (ADS)

    Ranjan, Suman; Mandal, Sanjoy

    2017-12-01

    The modeling of a triple asymmetrical optical micro ring resonator (TAOMRR) in the z-domain as a 2 × 2 input-output system, with a detailed design of its waveguide configuration using the finite-difference time-domain (FDTD) method, is presented. The transfer function of the proposed TAOMRR in the z-domain is determined for different input and output ports using a delay-line signal processing technique. The frequency response analysis is carried out using MATLAB software. Group delay and dispersion characteristics are also determined in MATLAB. The electric field analysis is done using FDTD. A new methodology is proposed for designing and drawing multiple configurations of coupled ring resonators having multiple input and output ports. Various important parameters such as coupling coefficients and FSR are also determined.

  18. Performance Analysis of Triple Asymmetrical Optical Micro Ring Resonator with 2 × 2 Input-Output Bus Waveguide

    NASA Astrophysics Data System (ADS)

    Ranjan, Suman; Mandal, Sanjoy

    2018-02-01

    The modeling of a triple asymmetrical optical micro ring resonator (TAOMRR) in the z-domain as a 2 × 2 input-output system, with a detailed design of its waveguide configuration using the finite-difference time-domain (FDTD) method, is presented. The transfer function of the proposed TAOMRR in the z-domain is determined for different input and output ports using a delay-line signal processing technique. The frequency response analysis is carried out using MATLAB software. Group delay and dispersion characteristics are also determined in MATLAB. The electric field analysis is done using FDTD. A new methodology is proposed for designing and drawing multiple configurations of coupled ring resonators having multiple input and output ports. Various important parameters such as coupling coefficients and FSR are also determined.

  19. Continuous-Time Bilinear System Identification

    NASA Technical Reports Server (NTRS)

    Juang, Jer-Nan

    2003-01-01

    The objective of this paper is to describe a new method for identification of a continuous-time multi-input and multi-output bilinear system. The approach is to make judicious use of the linear-model properties of the bilinear system when subjected to a constant input. Two steps are required in the identification process. The first step is to use a set of pulse responses resulting from a constant input of one sample period to identify the state matrix, the output matrix, and the direct transmission matrix. The second step is to use another set of pulse responses with the same constant input over multiple sample periods to identify the input matrix and the coefficient matrices associated with the coupling terms between the state and the inputs. Numerical examples are given to illustrate the concept and the computational algorithm for the identification method.

  20. System and methods for reducing harmonic distortion in electrical converters

    DOEpatents

    Kajouke, Lateef A; Perisic, Milun; Ransom, Ray M

    2013-12-03

    Systems and methods are provided for delivering energy using an energy conversion module. An exemplary method for delivering energy from an input interface to an output interface using an energy conversion module coupled between the input interface and the output interface comprises the steps of determining an input voltage reference for the input interface based on a desired output voltage and a measured voltage at the output interface, determining a duty cycle control value based on a ratio of the input voltage reference and the measured voltage, and operating one or more switching elements of the energy conversion module to deliver energy from the input interface to the output interface with a duty cycle influenced by the duty cycle control value.

  1. An efficient approach to ARMA modeling of biological systems with multiple inputs and delays

    NASA Technical Reports Server (NTRS)

    Perrott, M. H.; Cohen, R. J.

    1996-01-01

    This paper presents a new approach to AutoRegressive Moving Average (ARMA or ARX) modeling which automatically seeks the best model order to represent investigated linear, time-invariant systems using their input/output data. The algorithm seeks the ARMA parameterization which accounts for variability in the output of the system due to input activity and contains the fewest parameters required to do so. The unique characteristics of the proposed system identification algorithm are its simplicity and efficiency in handling systems with delays and multiple inputs. We present results of applying the algorithm to simulated data and experimental biological data. In addition, a technique for assessing the error associated with the impulse responses calculated from estimated ARMA parameterizations is presented. The mapping from ARMA coefficients to impulse response estimates is nonlinear, which complicates any effort to construct confidence bounds for the obtained impulse responses. Here a method for obtaining a linearization of this mapping is derived, which leads to a simple procedure for approximating the confidence bounds.
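
    A minimal version of ARX fitting can be written with ordinary least squares; the sketch below (not the paper's algorithm, which also selects the model order automatically) fits a single-output ARX model with two inputs and a known input delay on simulated data.

      # Hedged sketch: least-squares fit of a single-output ARX model with two inputs
      # and an input delay, on simulated data (model order assumed known here).
      import numpy as np

      rng = np.random.default_rng(6)
      N = 500
      u1, u2 = rng.standard_normal(N), rng.standard_normal(N)

      # Simulate y[t] = 0.6 y[t-1] + 0.8 u1[t-2] - 0.4 u2[t-1] + noise (delay of 2 on u1).
      y = np.zeros(N)
      for t in range(2, N):
          y[t] = 0.6 * y[t - 1] + 0.8 * u1[t - 2] - 0.4 * u2[t - 1] \
                 + 0.05 * rng.standard_normal()

      # Regression matrix of lagged output and delayed inputs, solved by least squares.
      Phi = np.column_stack([y[1:N - 1], u1[0:N - 2], u2[1:N - 1]])
      theta, *_ = np.linalg.lstsq(Phi, y[2:], rcond=None)
      print(theta)     # estimates close to [0.6, 0.8, -0.4]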

  2. Fuzzy logic control and optimization system

    DOEpatents

    Lou, Xinsheng [West Hartford, CT

    2012-04-17

    A control system (300) for optimizing a power plant includes a chemical loop having an input for receiving an input signal (369) and an output for outputting an output signal (367), and a hierarchical fuzzy control system (400) operably connected to the chemical loop. The hierarchical fuzzy control system (400) includes a plurality of fuzzy controllers (330). The hierarchical fuzzy control system (400) receives the output signal (367), optimizes the input signal (369) based on the received output signal (367), and outputs an optimized input signal (369) to the input of the chemical loop to control a process of the chemical loop in an optimized manner.

  3. A flatness-based control approach to drug infusion for cardiac function regulation

    NASA Astrophysics Data System (ADS)

    Rigatos, Gerasimos; Zervos, Nikolaos; Melkikh, Alexey

    2016-12-01

    A new control method based on differential flatness theory is developed in this article, aiming at solving the problem of regulating haemodynamic parameters. Control of the cardiac output (the volume of blood pumped out by the heart per unit of time) and of the arterial blood pressure is achieved through the administered infusion of cardiovascular drugs, such as dopamine and sodium nitroprusside. Time delays between the control inputs and the system's outputs are taken into account. Using the principle of dynamic extension, that is, by considering certain control inputs and their derivatives as additional state variables, a state-space description of the heart's function is obtained. It is proven that the dynamic model of the heart is differentially flat. This enables its transformation into a linear, canonical, and decoupled form, for which the design of a stabilizing feedback controller becomes possible. The proposed feedback controller is of proven stability and assures fast and accurate tracking of the reference setpoints by the outputs of the heart's dynamic model. Moreover, by using a Kalman filter-based disturbance estimator, it becomes possible to estimate in real time, and compensate for, model uncertainty and external perturbation inputs that affect the heart's model.

  4. Investigation on the Efficiency of Financial Companies in Malaysia with Data Envelopment Analysis Model

    NASA Astrophysics Data System (ADS)

    Weng Siew, Lam; Kah Fai, Liew; Weng Hoe, Lam

    2018-04-01

    Financial ratios and risk are important financial indicators for evaluating the financial performance or efficiency of companies. Therefore, financial ratios and risk factors need to be taken into consideration when evaluating the efficiency of companies with the Data Envelopment Analysis (DEA) model. In the DEA model, the efficiency of a company is measured as the ratio of sum-weighted outputs to sum-weighted inputs. The objective of this paper is to propose a DEA model that incorporates financial ratios and risk factors in evaluating and comparing the efficiency of financial companies in Malaysia. In this study, the listed financial companies in Malaysia from year 2004 until 2015 are investigated. The results of this study show that AFFIN, ALLIANZ, APEX, BURSA, HLCAP, HLFG, INSAS, LPI, MNRB, OSK, PBBANK, RCECAP and TA are ranked as efficient companies. This implies that these efficient companies have utilized their resources or inputs optimally to generate the maximum outputs. This study is significant because it helps to identify the efficient financial companies as well as determine the optimal input and output weights that maximize the efficiency of financial companies in Malaysia.
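
    A basic DEA efficiency score can be computed as a linear program; the Python sketch below uses an input-oriented CCR formulation with toy inputs and outputs (not the paper's data, financial ratios, or exact model).

      # Hedged sketch: input-oriented CCR DEA efficiency per company via linear
      # programming (toy figures, not the paper's data or weighting scheme).
      import numpy as np
      from scipy.optimize import linprog

      # Rows = companies (DMUs); toy inputs = [operating cost, risk], outputs = [revenue, profit].
      X = np.array([[5.0, 0.8], [8.0, 0.5], [6.0, 1.2], [9.0, 0.9]])
      Y = np.array([[10.0, 2.0], [12.0, 1.5], [9.0, 2.5], [11.0, 1.8]])
      n, m = X.shape
      s = Y.shape[1]

      def ccr_efficiency(o):
          # Variables: [theta, lambda_1..lambda_n]; minimize theta.
          c = np.concatenate([[1.0], np.zeros(n)])
          A_in = np.column_stack([-X[o], X.T])          # sum_j lambda_j x_ij <= theta x_io
          A_out = np.column_stack([np.zeros(s), -Y.T])  # sum_j lambda_j y_rj >= y_ro
          A_ub = np.vstack([A_in, A_out])
          b_ub = np.concatenate([np.zeros(m), -Y[o]])
          bounds = [(None, None)] + [(0.0, None)] * n
          return linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds).x[0]

      for o in range(n):
          print("company", o, "efficiency:", round(ccr_efficiency(o), 3))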

  5. Extension of the PC version of VEPFIT with input and output routines running under Windows

    NASA Astrophysics Data System (ADS)

    Schut, H.; van Veen, A.

    1995-01-01

    The fitting program VEPFIT has been extended with applications running under the Microsoft-Windows environment facilitating the input and output of the VEPFIT fitting module. We have exploited the Microsoft-Windows graphical users interface by making use of dialog windows, scrollbars, command buttons, etc. The user communicates with the program simply by clicking and dragging with the mouse pointing device. Keyboard actions are limited to a minimum. Upon changing one or more input parameters the results of the modeling of the S-parameter and Ps fractions versus positron implantation energy are updated and displayed. This action can be considered as the first step in the fitting procedure upon which the user can decide to further adapt the input parameters or to forward these parameters as initial values to the fitting routine. The modeling step has proven to be helpful for designing positron beam experiments.

  6. Music Signal Processing Using Vector Product Neural Networks

    NASA Astrophysics Data System (ADS)

    Fan, Z. C.; Chan, T. S.; Yang, Y. H.; Jang, J. S. R.

    2017-05-01

    We propose a novel neural network model for music signal processing using vector product neurons and dimensionality transformations. Here, the inputs are first mapped from real values into three-dimensional vectors and then fed into a three-dimensional vector product neural network where the inputs, outputs, and weights are all three-dimensional values. Next, the final outputs are mapped back to the reals. Two methods for dimensionality transformation are proposed, one via context windows and the other via spectral coloring. Experimental results on the iKala dataset for blind singing voice separation confirm the efficacy of our model.

  7. A space transportation system operations model

    NASA Technical Reports Server (NTRS)

    Morris, W. Douglas; White, Nancy H.

    1987-01-01

    Presented is a description of a computer program which permits assessment of the operational support requirements of space transportation systems functioning in both a ground- and space-based environment. The scenario depicted provides for the delivery of payloads from Earth to a space station and beyond using upper stages based at the station. Model results are scenario dependent and rely on the input definitions of delivery requirements, task times, and available resources. Output is in terms of flight rate capabilities, resource requirements, and facility utilization. A general program description, program listing, input requirements, and sample output are included.

  8. State estimation of spatio-temporal phenomena

    NASA Astrophysics Data System (ADS)

    Yu, Dan

    This dissertation addresses the state estimation problem of spatio-temporal phenomena which can be modeled by partial differential equations (PDEs), such as pollutant dispersion in the atmosphere. After discretizing the PDE, the dynamical system has a large number of degrees of freedom (DOF). State estimation using a Kalman Filter (KF) is computationally intractable, and hence, a reduced order model (ROM) needs to be constructed first. Moreover, the nonlinear terms, external disturbances or unknown boundary conditions can be modeled as unknown inputs, which leads to an unknown input filtering problem. Furthermore, the performance of the KF could be improved by placing sensors at feasible locations. Therefore, the sensor scheduling problem of placing multiple mobile sensors is of interest. The first part of the dissertation focuses on model reduction for large scale systems with a large number of inputs/outputs. A commonly used model reduction algorithm, the balanced proper orthogonal decomposition (BPOD) algorithm, is not computationally tractable for large systems with a large number of inputs/outputs. Inspired by the BPOD and randomized algorithms, we propose a randomized proper orthogonal decomposition (RPOD) algorithm and a computationally optimal RPOD (RPOD*) algorithm, which construct an ROM to capture the input-output behaviour of the full order model while reducing the computational cost of BPOD by orders of magnitude. It is demonstrated that the proposed RPOD* algorithm can construct the ROM in real time, and the performance of the proposed algorithms is demonstrated on different advection-diffusion equations. Next, we consider the state estimation problem of linear discrete-time systems with unknown inputs which can be treated as a wide-sense stationary process with rational power spectral density, while no other prior information needs to be known. We propose an autoregressive (AR) model based unknown input realization technique which allows us to recover the input statistics from the output data by solving an appropriate least squares problem, then fit an AR model to the recovered input statistics and construct an innovations model of the unknown inputs using the eigensystem realization algorithm. The proposed algorithm is shown in several examples to outperform the augmented two-stage Kalman Filter (ASKF) and the unbiased minimum-variance (UMV) algorithm. Finally, we propose a framework to place multiple mobile sensors to optimize the long-term performance of the KF in the estimation of the state of a PDE. The major challenges are that placing multiple sensors is an NP-hard problem, and the optimization problem is non-convex in general. In this dissertation, first, we construct an ROM using the RPOD* algorithm, and then reduce the feasible sensor locations to a subset using the ROM. The Information Space Receding Horizon Control (I-RHC) approach and a modified Monte Carlo Tree Search (MCTS) approach are applied to solve the sensor scheduling problem using the subset. Various applications have been provided to demonstrate the performance of the proposed approach.
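
    For readers unfamiliar with projection-based model reduction, the following is a minimal plain POD/Galerkin sketch on a toy diffusion model. It is a generic stand-in that conveys the idea of an ROM capturing input-output behaviour, not an implementation of the BPOD, RPOD, or RPOD* algorithms discussed in the dissertation.

      import numpy as np

      def pod_galerkin_rom(A, B, C, snapshots, r):
          """Build a rank-r ROM of x' = A x + B u, y = C x from state snapshots."""
          U, _, _ = np.linalg.svd(snapshots, full_matrices=False)
          Phi = U[:, :r]                      # POD basis
          return Phi.T @ A @ Phi, Phi.T @ B, C @ Phi, Phi

      # Toy 1-D diffusion system (finite differences), purely illustrative.
      n = 200
      A = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
      B = np.zeros((n, 1)); B[0, 0] = 1.0
      C = np.zeros((1, n)); C[0, -1] = 1.0

      x = np.zeros((n, 1)); snaps = []
      dt = 0.01
      for k in range(300):                    # collect impulse-response snapshots
          x = x + dt * (A @ x + B * (1.0 if k == 0 else 0.0))
          snaps.append(x.copy())

      Ar, Br, Cr, Phi = pod_galerkin_rom(A, B, C, np.hstack(snaps), r=10)
      print(Ar.shape, Br.shape, Cr.shape)     # (10, 10) (10, 1) (1, 10)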

  9. Optimal inverse functions created via population-based optimization.

    PubMed

    Jennings, Alan L; Ordóñez, Raúl

    2014-06-01

    Finding optimal inputs for a multiple-input, single-output system is taxing for a system operator. Population-based optimization is used to create sets of functions that produce a locally optimal input based on a desired output. An operator or higher level planner could use one of the functions in real time. For the optimization, each agent in the population uses the cost and output gradients to take steps lowering the cost while maintaining its current output. When an agent reaches an optimal input for its current output, additional agents are generated in the output gradient directions. The new agents then settle to the local optima for the new output values. The set of associated optimal points forms an inverse function, via spline interpolation, from a desired output to an optimal input. In this manner, multiple locally optimal functions can be created. These functions are naturally clustered in the input and output spaces, allowing for a continuous inverse function. The operator selects the best cluster over the anticipated range of desired outputs and adjusts the set point (desired output) while maintaining optimality. This reduces the demand from controlling multiple inputs to controlling a single set point, with no loss in performance. Results are demonstrated on a sample set of functions and on a robot control problem.

  10. Determining A Purely Symbolic Transfer Function from Symbol Streams: Theory and Algorithms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Griffin, Christopher H

    Transfer function modeling is a standard technique in classical Linear Time Invariant and Statistical Process Control. The work of Box and Jenkins was seminal in developing methods for identifying parameters associated with classical (r, s, k) transfer functions. Discrete event systems are often used for modeling hybrid control structures and high-level decision problems. Examples include discrete time, discrete strategy repeated games. For these games, a discrete transfer function in the form of an accurate hidden Markov model of input-output relations could be used to derive optimal response strategies. In this paper, we develop an algorithm for creating probabilistic Mealy machines that act as transfer function models for discrete event dynamic systems (DEDS). Our models are defined by three parameters (l1, l2, k), just as the Box-Jenkins transfer function models. Here l1 is the maximal input history length to consider, l2 is the maximal output history length to consider, and k is the response lag. Using related results, we show that our Mealy machine transfer functions are optimal in the sense that they maximize the mutual information between the current known state of the DEDS and the next observed input/output pair.
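
    A hedged sketch of the general idea follows: a probabilistic Mealy-machine-style input/output map estimated from paired symbol streams by conditioning on the last l1 inputs and l2 outputs with response lag k. The empirical-frequency estimator and the toy streams are illustrative assumptions, not the algorithm developed in the paper.

      from collections import defaultdict, Counter

      def fit_symbolic_transfer_function(inputs, outputs, l1=2, l2=1, k=1):
          """Return P(next output | last l1 inputs, last l2 outputs) as nested dicts."""
          counts = defaultdict(Counter)
          for t in range(max(l1, l2), len(outputs) - k):
              state = (tuple(inputs[t - l1:t]), tuple(outputs[t - l2:t]))
              counts[state][outputs[t + k - 1]] += 1
          return {s: {o: c / sum(ctr.values()) for o, c in ctr.items()}
                  for s, ctr in counts.items()}

      # Toy streams over the alphabet {0, 1}: the output echoes the input with a one-step lag.
      ins  = [0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0, 0, 1]
      outs = [0, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0, 0]
      model = fit_symbolic_transfer_function(ins, outs, l1=2, l2=1, k=1)
      for state, dist in list(model.items())[:3]:
          print(state, dist)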

  11. An Advanced Computational Approach to System of Systems Analysis & Architecting Using Agent-Based Behavioral Model

    DTIC Science & Technology

    2013-03-29

    Assessor that is in the SoS agent. Figure 31 (Fuzzy Assessor for the SoS Agent for Assessment of SoS Architecture) shows a fuzzy rule block with Affordability, Flexibility, Performance, and Robustness as inputs and Architecture Quality as its output.

  12. The Lake Tahoe Basin Land Use Simulation Model

    USGS Publications Warehouse

    Forney, William M.; Oldham, I. Benson

    2011-01-01

    This U.S. Geological Survey Open-File Report describes the final modeling product for the Tahoe Decision Support System project for the Lake Tahoe Basin funded by the Southern Nevada Public Land Management Act and the U.S. Geological Survey's Geographic Analysis and Monitoring Program. This research was conducted by the U.S. Geological Survey Western Geographic Science Center. The purpose of this report is to describe the basic elements of the novel Lake Tahoe Basin Land Use Simulation Model, publish samples of the data inputs, basic outputs of the model, and the details of the Python code. The results of this report include a basic description of the Land Use Simulation Model, descriptions and summary statistics of model inputs, two figures showing the graphical user interface from the web-based tool, samples of the two input files, seven tables of basic output results from the web-based tool and descriptions of their parameters, and the fully functional Python code.

  13. Quantifying the importance of spatial resolution and other factors through global sensitivity analysis of a flood inundation model

    NASA Astrophysics Data System (ADS)

    Thomas Steven Savage, James; Pianosi, Francesca; Bates, Paul; Freer, Jim; Wagener, Thorsten

    2016-11-01

    Where high-resolution topographic data are available, modelers are faced with the decision of whether it is better to spend computational resource on resolving topography at finer resolutions or on running more simulations to account for various uncertain input factors (e.g., model parameters). In this paper we apply global sensitivity analysis to explore how influential the choice of spatial resolution is when compared to uncertainties in the Manning's friction coefficient parameters, the inflow hydrograph, and those stemming from the coarsening of topographic data used to produce Digital Elevation Models (DEMs). We apply the hydraulic model LISFLOOD-FP to produce several temporally and spatially variable model outputs that represent different aspects of flood inundation processes, including flood extent, water depth, and time of inundation. We find that the most influential input factor for flood extent predictions changes during the flood event, starting with the inflow hydrograph during the rising limb before switching to the channel friction parameter during peak flood inundation, and finally to the floodplain friction parameter during the drying phase of the flood event. Spatial resolution and uncertainty introduced by resampling topographic data to coarser resolutions are much more important for water depth predictions, which are also sensitive to different input factors spatially and temporally. Our findings indicate that the sensitivity of LISFLOOD-FP predictions is more complex than previously thought. Consequently, the input factors that modelers should prioritize will differ depending on the model output assessed, and the location and time of when and where this output is most relevant.
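
    To make the notion of the "most influential input factor" concrete, here is a generic variance-based (first-order, Sobol'-style) sensitivity sketch using conditional-mean binning of Monte Carlo samples. The toy response function, factor names, and parameter ranges are invented stand-ins with no connection to LISFLOOD-FP or the study's actual factors.

      import numpy as np

      def first_order_indices(X, y, n_bins=20):
          """Estimate S_i = Var(E[y | x_i]) / Var(y) for each column of X by binning."""
          S, var_y = [], y.var()
          for i in range(X.shape[1]):
              edges = np.quantile(X[:, i], np.linspace(0, 1, n_bins + 1))
              idx = np.clip(np.searchsorted(edges, X[:, i], side="right") - 1, 0, n_bins - 1)
              cond_means = np.array([y[idx == b].mean() for b in range(n_bins)])
              weights = np.array([(idx == b).mean() for b in range(n_bins)])
              S.append(np.sum(weights * (cond_means - y.mean()) ** 2) / var_y)
          return np.array(S)

      rng = np.random.default_rng(1)
      N = 20_000
      n_channel = rng.uniform(0.02, 0.08, N)     # hypothetical channel friction
      n_floodplain = rng.uniform(0.03, 0.12, N)  # hypothetical floodplain friction
      inflow = rng.uniform(0.8, 1.2, N)          # hypothetical inflow scaling
      dem_error = rng.normal(0.0, 0.3, N)        # hypothetical topographic resampling error
      y = 3.0 * inflow + 1.5 * n_channel * inflow + 0.5 * dem_error ** 2   # toy "flood extent"
      X = np.column_stack([n_channel, n_floodplain, inflow, dem_error])
      print(dict(zip(["n_channel", "n_floodplain", "inflow", "dem_error"],
                     first_order_indices(X, y).round(3))))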

  14. A programmable power processor for high power space applications

    NASA Technical Reports Server (NTRS)

    Lanier, J. R., Jr.; Graves, J. R.; Kapustka, R. E.; Bush, J. R., Jr.

    1982-01-01

    A Programmable Power Processor (P3) has been developed for application in future large space power systems. The P3 is capable of operation over a wide range of input voltage (26 to 375 Vdc) and output voltage (24 to 180 Vdc). The peak output power capability is 18 kW (180 V at 100 A). The output characteristics of the P3 can be programmed to any voltage and/or current level within the limits of the processor and may be controlled as a function of internal or external parameters. Seven breadboard P3s and one 'flight-type' engineering model P3 have been built and tested both individually and in electrical power systems. The programmable feature allows the P3 to be used in a variety of applications by changing the output characteristics. Test results, including efficiency at various input/output combinations, transient response, and output impedance, are presented.

  15. The effect of output-input isolation on the scaling and energy consumption of all-spin logic devices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hu, Jiaxi; Haratipour, Nazila; Koester, Steven J., E-mail: skoester@umn.edu

    All-spin logic (ASL) is a novel approach for digital logic applications wherein spin is used as the state variable instead of charge. One of the challenges in realizing a practical ASL system is the need to ensure non-reciprocity, meaning that information flows from input to output, not vice versa. One approach described previously is to introduce an asymmetric ground contact, and while this approach was shown to be effective, it remains unclear what the optimal approach for achieving non-reciprocity in ASL is. In this study, we quantitatively analyze techniques to achieve non-reciprocity in ASL devices, and we specifically compare the effect of using asymmetric ground position and dipole-coupled output/input isolation. For this analysis, we simulate the switching dynamics of multiple-stage logic devices with FePt and FePd perpendicular magnetic anisotropy materials using a combination of a matrix-based spin circuit model coupled to the Landau–Lifshitz–Gilbert equation. The dipole field is included in this model and can act as both a desirable means of coupling magnets and a source of noise. The dynamic energy consumption has been calculated for these schemes as a function of input/output magnet separation, and the results show that using a scheme that electrically isolates logic stages produces superior non-reciprocity, thus allowing both improved scaling and reduced energy consumption.

  16. Input-output modeling for urban energy consumption in Beijing: dynamics and comparison.

    PubMed

    Zhang, Lixiao; Hu, Qiuhong; Zhang, Fan

    2014-01-01

    Input-output analysis has been proven to be a powerful instrument for estimating embodied (direct plus indirect) energy usage through economic sectors. Using 9 economic input-output tables for the years 1987, 1990, 1992, 1995, 1997, 2000, 2002, 2005, and 2007, this paper analyzes energy flows for the entire city of Beijing and its 30 economic sectors, respectively. Results show that the embodied energy consumption of Beijing increased from 38.85 million tonnes of coal equivalent (Mtce) to 206.2 Mtce over the past twenty years of rapid urbanization; the share of indirect energy consumption in total energy consumption increased from 48% to 76%, suggesting the transition of Beijing from a production-based and manufacturing-dominated economy to a consumption-based and service-dominated economy. Real estate development has been shown to be a major driving factor of the growth in indirect energy consumption. The boom and bust of construction activities have been strongly correlated with the increase and decrease of system-wide indirect energy consumption. Traditional heavy industries remain the most energy-intensive sectors in the economy. However, the transportation and service sectors have contributed most to the rapid increase in overall energy consumption. The analyses in this paper demonstrate that a system-wide approach such as one based on an input-output model can be a useful tool for robust energy policy making.
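
    The embodied (direct plus indirect) energy accounting referred to above is conventionally computed by propagating direct energy intensities through the Leontief inverse. A minimal three-sector numerical sketch is given below; all coefficients are invented for illustration and are not taken from the Beijing tables.

      import numpy as np

      A = np.array([[0.10, 0.20, 0.05],      # technical coefficients (inputs per unit of output)
                    [0.15, 0.10, 0.10],
                    [0.05, 0.25, 0.15]])
      f = np.array([200.0, 150.0, 300.0])    # final demand by sector (monetary units)
      e_direct = np.array([0.8, 0.3, 0.1])   # direct energy use per unit of output (e.g. tce/unit)

      L = np.linalg.inv(np.eye(3) - A)       # Leontief inverse
      x = L @ f                              # gross output needed to satisfy final demand
      epsilon = e_direct @ L                 # embodied energy intensity of each sector's final demand
      embodied = epsilon * f                 # embodied energy driven by each sector's final demand

      print("gross output:", x.round(1))
      print("embodied energy intensities:", epsilon.round(3))
      # Consistency check: energy embodied in final demand equals total direct energy used.
      print("total embodied:", embodied.sum().round(1), "total direct:", (e_direct * x).sum().round(1))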

  17. Input-Output Modeling for Urban Energy Consumption in Beijing: Dynamics and Comparison

    PubMed Central

    Zhang, Lixiao; Hu, Qiuhong; Zhang, Fan

    2014-01-01

    Input-output analysis has been proven to be a powerful instrument for estimating embodied (direct plus indirect) energy usage through economic sectors. Using 9 economic input-output tables for the years 1987, 1990, 1992, 1995, 1997, 2000, 2002, 2005, and 2007, this paper analyzes energy flows for the entire city of Beijing and its 30 economic sectors, respectively. Results show that the embodied energy consumption of Beijing increased from 38.85 million tonnes of coal equivalent (Mtce) to 206.2 Mtce over the past twenty years of rapid urbanization; the share of indirect energy consumption in total energy consumption increased from 48% to 76%, suggesting the transition of Beijing from a production-based and manufacturing-dominated economy to a consumption-based and service-dominated economy. Real estate development has been shown to be a major driving factor of the growth in indirect energy consumption. The boom and bust of construction activities have been strongly correlated with the increase and decrease of system-wide indirect energy consumption. Traditional heavy industries remain the most energy-intensive sectors in the economy. However, the transportation and service sectors have contributed most to the rapid increase in overall energy consumption. The analyses in this paper demonstrate that a system-wide approach such as one based on an input-output model can be a useful tool for robust energy policy making. PMID:24595199

  18. Application of Artificial Neural Network to Optical Fluid Analyzer

    NASA Astrophysics Data System (ADS)

    Kimura, Makoto; Nishida, Katsuhiko

    1994-04-01

    A three-layer artificial neural network has been applied to the presentation of optical fluid analyzer (OFA) raw data, and the accuracy of oil fraction determination has been significantly improved compared to previous approaches. To apply the artificial neural network approach to solving a problem, the first step is training to determine the appropriate weight set for calculating the target values. This involves using a series of data sets (each comprising a set of input values and an associated set of output values that the artificial neural network is required to determine) to tune artificial neural network weighting parameters so that the output of the neural network to the given set of input values is as close as possible to the required output. The physical model used to generate the series of learning data sets was the effective flow stream model, developed for OFA data presentation. The effectiveness of the training was verified by reprocessing the same input data as were used to determine the weighting parameters and then by comparing the results of the artificial neural network to the expected output values. The standard deviation of the expected and obtained values was approximately 10% (two sigma).
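
    For orientation, a generic three-layer feed-forward network of the kind described, with one hidden sigmoid layer trained by gradient descent on input/output pairs, can be sketched as follows. The synthetic "optical channel to oil fraction" data and the network sizes are assumptions for illustration, not the OFA model or its training set.

      import numpy as np

      rng = np.random.default_rng(0)
      X = rng.uniform(0, 1, (500, 6))                                      # 6 toy optical channel values
      y = (0.5 * X[:, 0] + 0.3 * X[:, 2] ** 2 + 0.2 * X[:, 5])[:, None]    # toy oil fraction target

      n_in, n_hidden, n_out, lr = 6, 10, 1, 0.1
      W1 = rng.standard_normal((n_in, n_hidden)) * 0.3; b1 = np.zeros(n_hidden)
      W2 = rng.standard_normal((n_hidden, n_out)) * 0.3; b2 = np.zeros(n_out)
      sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

      for epoch in range(2000):
          H = sigmoid(X @ W1 + b1)                      # hidden layer
          out = H @ W2 + b2                             # linear output layer
          err = out - y
          # Backpropagation of the mean-squared error.
          dW2 = H.T @ err / len(X); db2 = err.mean(axis=0)
          dH = err @ W2.T * H * (1 - H)
          dW1 = X.T @ dH / len(X); db1 = dH.mean(axis=0)
          W2 -= lr * dW2; b2 -= lr * db2; W1 -= lr * dW1; b1 -= lr * db1

      print("RMS error:", float(np.sqrt(np.mean((out - y) ** 2))))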

  19. Development of Graphical User Interface for ARRBOD (Acute Radiation Risk and BRYNTRN Organ Dose Projection)

    NASA Technical Reports Server (NTRS)

    Kim, Myung-Hee; Hu, Shaowen; Nounu, Hatem N.; Cucinotta, Francis A.

    2010-01-01

    The space radiation environment, particularly solar particle events (SPEs), poses the risk of acute radiation sickness (ARS) to humans; and organ doses from SPE exposure may reach critical levels during extravehicular activities (EVAs) or within lightly shielded spacecraft. NASA has developed an organ dose projection model using the BRYNTRN with SUMDOSE computer codes, and a probabilistic model of Acute Radiation Risk (ARR). The codes BRYNTRN and SUMDOSE, written in FORTRAN, are a baryon transport code and an output data processing code, respectively. The ARR code is written in C. The risk projection models of organ doses and ARR take the output from BRYNTRN as an input to their calculations. BRYNTRN code operation requires extensive input preparation. With a graphical user interface (GUI) to handle input and output for BRYNTRN, the response models can be connected easily and correctly to BRYNTRN in a user-friendly way. A GUI for the Acute Radiation Risk and BRYNTRN Organ Dose (ARRBOD) projection code provides seamless integration of input and output manipulations, which are required for operations of the ARRBOD modules: BRYNTRN, SUMDOSE, and the ARR probabilistic response model. The ARRBOD GUI is intended for mission planners, radiation shield designers, space operations in the mission operations directorate (MOD), and space biophysics researchers. The ARRBOD GUI will serve as a proof-of-concept example for future integration of other human space applications risk projection models. The current version of the ARRBOD GUI is a new self-contained product and will have follow-on versions, as options are added: 1) human geometries of MAX/FAX in addition to CAM/CAF; 2) shielding distributions for spacecraft, Mars surface and atmosphere; 3) various space environmental and biophysical models; and 4) other response models to be connected to the BRYNTRN. The major components of the overall system, the subsystem interconnections, and external interfaces are described in this report; and the ARRBOD GUI product is explained step by step in order to serve as a tutorial.

  20. Structural equation modeling in environmental risk assessment.

    PubMed

    Buncher, C R; Succop, P A; Dietrich, K N

    1991-01-01

    Environmental epidemiology requires effective models that take individual observations of environmental factors and connect them into meaningful patterns. Single-factor relationships have given way to multivariable analyses; simple additive models have been augmented by multiplicative (logistic) models. Each of these steps has produced greater enlightenment and understanding. Models that allow for factors causing outputs that can affect later outputs with putative causation working at several different time points (e.g., linkage) are not commonly used in the environmental literature. Structural equation models are a class of covariance structure models that have been used extensively in economics/business and social science but are still little used in the realm of biostatistics. Path analysis in genetic studies is one simplified form of this class of models. We have been using these models in a study of the health and development of infants who have been exposed to lead in utero and in the postnatal home environment. These models require as input the directionality of the relationship and then produce fitted models for multiple inputs causing each factor and the opportunity to have outputs serve as input variables into the next phase of the simultaneously fitted model. Some examples of these models from our research are presented to increase familiarity with this class of models. Use of these models can provide insight into the effect of changing an environmental factor when assessing risk. The usual cautions concerning believing a model, believing causation has been proven, and the assumptions that are required for each model are operative.

  1. Trade Agreements: Impact on the U.S. Economy

    DTIC Science & Technology

    2009-11-10

    model is consistent with the Ricardian and Heckscher-Ohlin models. An important drawback of the model is that it can estimate only the aggregate... Now known as the Michigan Brown-Deardorff-Stern Model, the Michigan Model of World Production and Trade includes data on 29... economy in the model. Input-output accounts trace the flow of input commodities into the production processes of industries, the flow of intermediate

  2. Analyzing the impacts of final demand changes on total output using input-output approach: The case of Japanese ICT sectors

    NASA Astrophysics Data System (ADS)

    Zuhdi, Ubaidillah

    2014-03-01

    The purpose of this study is to analyze the impacts of final demand changes on the total output of Japanese Information and Communication Technologies (ICT) sectors in future time. To achieve this purpose, the study employs one of the analysis tools of Input-Output (IO) analysis, the demand-pull IO quantity model. Three final demand changes are considered, namely (1) export, (2) import, and (3) outside households consumption changes. This study focuses on the "pure change" condition, that is, the condition in which final demand changes appear only in the analyzed sectors. The results show that export and outside households consumption changes have a positive impact, while the opposite impact is observed for import changes.
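
    The demand-pull IO quantity model rests on the standard Leontief relation delta_x = (I - A)^(-1) delta_f, where A is the technical coefficient matrix and delta_f the change in final demand. A minimal two-sector numerical sketch is shown below, with an invented coefficient matrix and an invented "pure" export shock to an ICT-like sector, purely for illustration.

      import numpy as np

      A = np.array([[0.15, 0.10],            # technical coefficient matrix
                    [0.20, 0.05]])           # (sector 0 = "ICT", sector 1 = "other"; invented values)
      delta_f = np.array([100.0, 0.0])       # a "pure change": extra export demand for sector 0 only

      L = np.linalg.inv(np.eye(2) - A)       # Leontief inverse
      delta_x = L @ delta_f                  # induced change in total output of both sectors

      print("Leontief inverse:\n", L.round(3))
      print("change in total output:", delta_x.round(1))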

  3. A Model of Self-Organizing Head-Centered Visual Responses in Primate Parietal Areas

    PubMed Central

    Mender, Bedeho M. W.; Stringer, Simon M.

    2013-01-01

    We present a hypothesis for how head-centered visual representations in primate parietal areas could self-organize through visually-guided learning, and test this hypothesis using a neural network model. The model consists of a competitive output layer of neurons that receives afferent synaptic connections from a population of input neurons with eye-position gain-modulated retinal receptive fields. The synaptic connections in the model are trained with an associative trace learning rule which has the effect of encouraging output neurons to learn to respond to subsets of input patterns that tend to occur close together in time. This network architecture and synaptic learning rule are hypothesized to promote the development of head-centered output neurons during periods of time when the head remains fixed while the eyes move. This hypothesis is demonstrated to be feasible, and each of the core model components described is tested and found to be individually necessary for successful self-organization. PMID:24349064

  4. Nonlinear predictive control of a boiler-turbine unit: A state-space approach with successive on-line model linearisation and quadratic optimisation.

    PubMed

    Ławryńczuk, Maciej

    2017-03-01

    This paper details development of a Model Predictive Control (MPC) algorithm for a boiler-turbine unit, which is a nonlinear multiple-input multiple-output process. The control objective is to follow set-point changes imposed on two state (output) variables and to satisfy constraints imposed on three inputs and one output. In order to obtain a computationally efficient control scheme, the state-space model is successively linearised on-line for the current operating point and used for prediction. In consequence, the future control policy is easily calculated from a quadratic optimisation problem. For state estimation the extended Kalman filter is used. It is demonstrated that the MPC strategy based on constant linear models does not work satisfactorily for the boiler-turbine unit whereas the discussed algorithm with on-line successive model linearisation gives practically the same trajectories as the truly nonlinear MPC controller with nonlinear optimisation repeated at each sampling instant. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
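
    To illustrate the successive-linearisation idea in the simplest possible setting, the sketch below linearises a toy nonlinear model around the current operating point by finite differences and computes a one-step control move from the resulting linear prediction. The model, objective, and constraint handling are assumptions for illustration and are far simpler than the constrained multi-step quadratic programme used for the boiler-turbine unit in the paper.

      import numpy as np

      def f(x, u):
          """Toy nonlinear plant: 2 states, 1 input (discrete time)."""
          x1, x2 = x
          return np.array([0.9 * x1 + 0.1 * x2 ** 2 + 0.2 * u,
                           0.8 * x2 + 0.1 * np.tanh(x1) + 0.1 * u])

      def linearise(f, x, u, eps=1e-6):
          """Finite-difference Jacobians A = df/dx, B = df/du at the current (x, u)."""
          fx = f(x, u)
          A = np.column_stack([(f(x + eps * e, u) - fx) / eps for e in np.eye(len(x))])
          B = ((f(x, u + eps) - fx) / eps)[:, None]
          return A, B, fx

      x, u = np.array([0.5, -0.2]), 0.0
      x_ref = np.array([1.0, 0.0])                       # set-point for the two states
      for k in range(30):
          A, B, fx = linearise(f, x, u)
          # One-step prediction x+ ~ fx + B*du; pick du minimising ||x+ - x_ref||^2 (least squares).
          du = np.linalg.lstsq(B, x_ref - fx, rcond=None)[0][0]
          u = np.clip(u + du, -2.0, 2.0)                 # crude input constraint
          x = f(x, u)                                    # apply the move and re-linearise next step
      print("final state:", x.round(3), "input:", round(float(u), 3))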

  5. On-Chip Power-Combining for High-Power Schottky Diode Based Frequency Multipliers

    NASA Technical Reports Server (NTRS)

    Siles Perez, Jose Vicente (Inventor); Chattopadhyay, Goutam (Inventor); Lee, Choonsup (Inventor); Schlecht, Erich T. (Inventor); Jung-Kubiak, Cecile D. (Inventor); Mehdi, Imran (Inventor)

    2015-01-01

    A novel MMIC on-chip power-combined frequency multiplier device and a method of fabricating the same, comprising two or more multiplying structures integrated on a single chip, wherein each of the integrated multiplying structures is electrically identical and each of the multiplying structures includes one input antenna (E-probe) for receiving an input signal in the millimeter-wave, submillimeter-wave or terahertz frequency range inputted on the chip, a stripline based input matching network electrically connecting the input antennas to two or more Schottky diodes in a balanced configuration, two or more Schottky diodes that are used as nonlinear semiconductor devices to generate harmonics out of the input signal and produce the multiplied output signal, stripline based output matching networks for transmitting the output signal from the Schottky diodes to an output antenna, and an output antenna (E-probe) for transmitting the output signal off the chip into the output waveguide transmission line.

  6. Community models for wildlife impact assessment: a review of concepts and approaches

    USGS Publications Warehouse

    Schroeder, Richard L.

    1987-01-01

    The first two sections of this paper are concerned with defining and bounding communities, and describing those attributes of the community that are quantifiable and suitable for wildlife impact assessment purposes. Prior to the development or use of a community model, it is important to have a clear understanding of the concept of a community and a knowledge of the types of community attributes that can serve as outputs for the development of models. Clearly defined, unambiguous model outputs are essential for three reasons: (1) to ensure that the measured community attributes relate to the wildlife resource objectives of the study; (2) to allow testing of the outputs in experimental studies, to determine accuracy, and to allow for improvements based on such testing; and (3) to enable others to clearly understand the community attribute that has been measured. The third section of this paper describes input variables that may be used to predict various community attributes. These input variables do not include direct measures of wildlife populations. Most impact assessments involve projects that result in drastic changes in habitat, such as changes in land use, vegetation, or available area. Therefore, the model input variables described in this section deal primarily with habitat-related features. Several existing community models are described in the fourth section of this paper. A general description of each model is provided, including the nature of the input variables and the model output. The logic and assumptions of each model are discussed, along with data requirements needed to use the model. The fifth section provides guidance on the selection and development of community models. Identification of the community attribute that is of concern will determine the type of model most suitable for a particular application. This section provides guidelines on selecting an existing model, as well as a discussion of the major steps to be followed in modifying an existing model or developing a new model. Considerations associated with the use of community models with the Habitat Evaluation Procedures are also discussed. The final section of the paper summarizes major findings of interest to field biologists and provides recommendations concerning the implementation of selected concepts in wildlife community analyses.

  7. Multi-mode horn

    NASA Technical Reports Server (NTRS)

    Neilson, Jeffrey M. (Inventor)

    2002-01-01

    A horn has an input aperture and an output aperture, and comprises a conductive inner surface formed by rotating a curve about a central axis. The curve comprises a first arc having an input aperture end and a transition end, and a second arc having a transition end and an output aperture end. When rotated about the central axis, the first arc input aperture end forms an input aperture, and the second arc output aperture end forms an output aperture. The curve is then optimized to provide a mode conversion which maximizes the power transfer of input energy to the Gaussian mode at the output aperture.

  8. The Impact of Input and Output Prices on The Household Economic Behavior of Rice-Livestock Integrated Farming System (Rlifs) and Non Rlifs Farmers

    NASA Astrophysics Data System (ADS)

    Lindawati, L.; Kusnadi, N.; Kuntjoro, S. U.; Swastika, D. K. S.

    2018-02-01

    An integrated farming system is a system that emphasizes linkages and synergies in the utilization of farming-unit waste. The objective of the study was to analyze the impact of input and output prices on both Rice-Livestock Integrated Farming System (RLIFS) and non-RLIFS farmers. The study used an econometric model in the form of a simultaneous-equations system consisting of 36 equations (18 behavioral and 18 identity equations). The impact of changes in selected variables was obtained through simulation of input and output prices on the simultaneous equations. The results showed that increases in the prices of seed, SP-36, urea, medication/vitamins, manure, bran, and straw had a negative impact on the production of rice, cattle, manure, bran, and straw and on household income. The decreases in rice and cattle production, production input usage, family labor allocation, and rice and cattle business income were greater for RLIFS than for non-RLIFS farmers. The impact of rising rice and cattle prices on the two groups of farmers did not differ much because (1) farm waste was not used effectively and (2) manure and straw accounted for only a small proportion of production costs. The increase in input and output prices did not affect production costs and household expenditures for RLIFS farmers.

  9. A comparative study of linear and nonlinear MIMO feedback configurations

    NASA Technical Reports Server (NTRS)

    Desoer, C. A.; Lin, C. A.

    1984-01-01

    In this paper, a comparison is conducted of several feedback configurations which have appeared in the literature (e.g. unity-feedback, model-reference, etc.). The linear time-invariant multi-input multi-output case is considered. For each configuration, the stability conditions are specified, the relation between achievable I/O maps and the achievable disturbance-to-output maps is examined, and the effect of various subsystem perturbations on the system performance is studied. In terms of these considerations, it is demonstrated that one of the configurations considered is better than all the others. The results are then extended to the nonlinear multi-input multi-output case.

  10. MEMS Based Acoustic Array

    NASA Technical Reports Server (NTRS)

    Sheplak, Mark (Inventor); Nishida, Toshikaza (Inventor); Humphreys, William M. (Inventor); Arnold, David P. (Inventor)

    2006-01-01

    Embodiments of the present invention described and shown in the specification and drawings include a combination responsive to an acoustic wave that can be utilized as a dynamic pressure sensor. In one embodiment of the present invention, the combination has a substrate having a first surface and an opposite second surface, a microphone positioned on the first surface of the substrate and having an input and a first output and a second output, wherein the input receives a bias voltage, and the microphone generates an output signal responsive to the acoustic wave between the first output and the second output. The combination further has an amplifier positioned on the first surface of the substrate and having a first input and a second input and an output, wherein the first input of the amplifier is electrically coupled to the first output of the microphone and the second input of the amplifier is electrically coupled to the second output of the microphone for receiving the output signal from the microphone. The amplifier is spaced from the microphone with a separation smaller than 0.5 mm.

  11. Behavioral Implications of Piezoelectric Stack Actuators for Control of Micromanipulation

    NASA Technical Reports Server (NTRS)

    Goldfarb, Michael; Celanovic, Nikola

    1996-01-01

    A lumped-parameter model of a piezoelectric stack actuator has been developed to describe actuator behavior for purposes of control system analysis and design, and in particular for microrobotic applications requiring accurate position and/or force control. In addition to describing the input-output dynamic behavior, the proposed model explains aspects of non-intuitive behavioral phenomena evinced by piezoelectric actuators, such as the input-output rate-independent hysteresis and the change in mechanical stiffness that results from altering electrical load. The authors incorporate a generalized Maxwell resistive capacitor as a lumped-parameter causal representation of rate-independent hysteresis. Model formulation is validated by comparing results of numerical simulations to experimental data.
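
    A minimal sketch of the generalized Maxwell-slip (Maxwell resistive capacitor) idea is given below: a parallel bank of elasto-slide elements whose combined force traces a rate-independent hysteresis loop. The element parameters and the drive signal are invented for illustration, not identified from a real piezoelectric stack.

      import numpy as np

      k = np.array([1.0, 0.8, 0.6, 0.4])       # element stiffnesses
      w = np.array([0.2, 0.5, 1.0, 2.0])       # breakaway (saturation) forces
      z = np.zeros_like(k)                      # element deflections (internal states)

      # Drive the bank with a triangular input; the force-vs-input curve traces a
      # hysteresis loop that does not depend on how fast the input is swept.
      x = np.concatenate([np.linspace(0, 3, 200), np.linspace(3, -3, 400), np.linspace(-3, 3, 400)])
      forces = []
      for dx in np.diff(x, prepend=0.0):
          z = np.clip(z + dx, -w / k, w / k)    # sticking elements deflect, slipping ones saturate
          forces.append(float(np.sum(k * z)))
      print("force range over the loop:", round(min(forces), 3), "to", round(max(forces), 3))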

  12. Using uncertainty and sensitivity analyses in socioecological agent-based models to improve their analytical performance and policy relevance.

    PubMed

    Ligmann-Zielinska, Arika; Kramer, Daniel B; Spence Cheruvelil, Kendra; Soranno, Patricia A

    2014-01-01

    Agent-based models (ABMs) have been widely used to study socioecological systems. They are useful for studying such systems because of their ability to incorporate micro-level behaviors among interacting agents, and to understand emergent phenomena due to these interactions. However, ABMs are inherently stochastic and require proper handling of uncertainty. We propose a simulation framework based on quantitative uncertainty and sensitivity analyses to build parsimonious ABMs that serve two purposes: exploration of the outcome space to simulate low-probability but high-consequence events that may have significant policy implications, and explanation of model behavior to describe the system with higher accuracy. The proposed framework is applied to the problem of modeling farmland conservation resulting in land use change. We employ output variance decomposition based on quasi-random sampling of the input space and perform three computational experiments. First, we perform uncertainty analysis to improve model legitimacy, where the distribution of results informs us about the expected value that can be validated against independent data, and provides information on the variance around this mean as well as the extreme results. In our last two computational experiments, we employ sensitivity analysis to produce two simpler versions of the ABM. First, input space is reduced only to inputs that produced the variance of the initial ABM, resulting in a model with output distribution similar to the initial model. Second, we refine the value of the most influential input, producing a model that maintains the mean of the output of initial ABM but with less spread. These simplifications can be used to 1) efficiently explore model outcomes, including outliers that may be important considerations in the design of robust policies, and 2) conduct explanatory analysis that exposes the smallest number of inputs influencing the steady state of the modeled system.

  13. Using Uncertainty and Sensitivity Analyses in Socioecological Agent-Based Models to Improve Their Analytical Performance and Policy Relevance

    PubMed Central

    Ligmann-Zielinska, Arika; Kramer, Daniel B.; Spence Cheruvelil, Kendra; Soranno, Patricia A.

    2014-01-01

    Agent-based models (ABMs) have been widely used to study socioecological systems. They are useful for studying such systems because of their ability to incorporate micro-level behaviors among interacting agents, and to understand emergent phenomena due to these interactions. However, ABMs are inherently stochastic and require proper handling of uncertainty. We propose a simulation framework based on quantitative uncertainty and sensitivity analyses to build parsimonious ABMs that serve two purposes: exploration of the outcome space to simulate low-probability but high-consequence events that may have significant policy implications, and explanation of model behavior to describe the system with higher accuracy. The proposed framework is applied to the problem of modeling farmland conservation resulting in land use change. We employ output variance decomposition based on quasi-random sampling of the input space and perform three computational experiments. First, we perform uncertainty analysis to improve model legitimacy, where the distribution of results informs us about the expected value that can be validated against independent data, and provides information on the variance around this mean as well as the extreme results. In our last two computational experiments, we employ sensitivity analysis to produce two simpler versions of the ABM. First, input space is reduced only to inputs that produced the variance of the initial ABM, resulting in a model with output distribution similar to the initial model. Second, we refine the value of the most influential input, producing a model that maintains the mean of the output of initial ABM but with less spread. These simplifications can be used to 1) efficiently explore model outcomes, including outliers that may be important considerations in the design of robust policies, and 2) conduct explanatory analysis that exposes the smallest number of inputs influencing the steady state of the modeled system. PMID:25340764

  14. Economy-wide material input/output and dematerialization analysis of Jilin Province (China).

    PubMed

    Li, MingSheng; Zhang, HuiMin; Li, Zhi; Tong, LianJun

    2010-06-01

    In this paper, both the direct material input (DMI) and the domestic processed output (DPO) of Jilin Province in 1990-2006 were calculated, and a dematerialization model was then established based on these two indexes. The main results are summarized as follows: (1) Both direct material input and domestic processed output increased at a steady rate during 1990-2006, with average annual growth rates of 4.19% and 2.77%, respectively. (2) The average contribution rate of material input to economic growth is 44%, indicating that the economic growth is visibly extensive. (3) During the studied period, the cumulative quantity of material input dematerialization is 11,543 × 10^4 t and the quantity of waste dematerialization is 5,987 × 10^4 t. Moreover, the dematerialization gaps are positive, suggesting that the potential of dematerialization has been well fulfilled. (4) In most years of the analyzed period, especially 2003-2006, the economic system of Jilin Province represents an unsustainable state. The accelerated economic growth relies mostly on excessive resource consumption after the Revitalization Strategy of Northeast China was launched.

  15. Assessment of effectiveness of geologic isolation systems. CIRMIS data system. Volume 3. Generator routines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Friedrichs, D.R.; Argo, R.S.

    The Assessment of Effectiveness of Geologic Isolation Systems (AEGIS) Program is developing and applying the methodology for assessing the far-field, long-term post-closure safety of deep geologic nuclear waste repositories. AEGIS is being performed by Pacific Northwest Laboratory (PNL) under contract with the Office of Nuclear Waste Isolation (ONWI) for the Department of Energy (DOE). One task within AEGIS is the development of methodology for analysis of the consequences (water pathway) from loss of repository containment as defined by various release scenarios. The various input parameters required in the analysis are compiled in data systems. The data are organized and prepared by various input subroutines for utilization by the hydraulic and transport codes. The hydrologic models simulate the groundwater flow systems and provide water flow directions, rates, and velocities as inputs to the transport models. Outputs from the transport models are basically graphs of radionuclide concentration in the groundwater plotted against time. After dilution in the receiving surface-water body (e.g., lake, river, bay), these data are the input source terms for the dose models, if dose assessments are required. The dose models calculate radiation dose to individuals and populations. CIRMIS (Comprehensive Information Retrieval and Model Input Sequence) Data System, a storage and retrieval system for model input and output data, including graphical interpretation and display, is described. This is the third of four volumes of the description of the CIRMIS Data System.

  16. Assessment of effectiveness of geologic isolation systems. CIRMIS data system. Volume 1. Initialization, operation, and documentation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Friedrichs, D.R.

    1980-01-01

    The Assessment of Effectiveness of Geologic Isolation Systems (AEGIS) Program is developing and applying the methodology for assessing the far-field, long-term post-closure safety of deep geologic nuclear waste repositories. AEGIS is being performed by Pacific Northwest Laboratory (PNL) under contract with the Office of Nuclear Waste Isolation (ONWI) for the Department of Energy (DOE). One task within AEGIS is the development of methodology for analysis of the consequences (water pathway) from loss of repository containment as defined by various release scenarios. The various input parameters required in the analysis are compiled in data systems. The data are organized and prepared by various input subroutines for use by the hydrologic and transport codes. The hydrologic models simulate the groundwater flow systems and provide water flow directions, rates, and velocities as inputs to the transport models. Outputs from the transport models are basically graphs of radionuclide concentration in the groundwater plotted against time. After dilution in the receiving surface-water body (e.g., lake, river, bay), these data are the input source terms for the dose models, if dose assessments are required. The dose models calculate radiation dose to individuals and populations. CIRMIS (Comprehensive Information Retrieval and Model Input Sequence) Data System, a storage and retrieval system for model input and output data, including graphical interpretation and display, is described. This is the first of four volumes of the description of the CIRMIS Data System.

  17. Space market model space industry input-output model

    NASA Technical Reports Server (NTRS)

    Hodgin, Robert F.; Marchesini, Roberto

    1987-01-01

    The goal of the Space Market Model (SMM) is to develop an information resource for the space industry. The SMM is intended to contain information appropriate for decision making in the space industry. The objectives of the SMM are to: (1) assemble information related to the development of the space business; (2) construct an adequate description of the emerging space market; (3) disseminate the information on the space market to forecasters and planners in government agencies and private corporations; and (4) provide timely analyses and forecasts of critical elements of the space market. An Input-Output model of market activity is proposed which is capable of transforming raw data into useful information for decision makers and policy makers dealing with the space sector.

  18. Modeling the Afferent Dynamics of the Baroreflex Control System

    PubMed Central

    Mahdi, Adam; Sturdy, Jacob; Ottesen, Johnny T.; Olufsen, Mette S.

    2013-01-01

    In this study we develop a modeling framework for predicting baroreceptor firing rate as a function of blood pressure. We test models within this framework both quantitatively and qualitatively using data from rats. The models describe three components: arterial wall deformation, stimulation of mechanoreceptors located in the BR nerve-endings, and modulation of the action potential frequency. The three sub-systems are modeled individually following well-established biological principles. The first submodel, predicting arterial wall deformation, uses blood pressure as an input and outputs circumferential strain. The mechanoreceptor stimulation model uses circumferential strain as an input, predicting receptor deformation as an output. Finally, the neural model takes receptor deformation as an input, predicting the BR firing rate as an output. Our results show that the nonlinear dependence of firing rate on pressure can be accounted for by taking into account the nonlinear elastic properties of the artery wall. This was observed when testing the models using multiple experiments with a single set of parameters. We find that to model the response to a square pressure stimulus, giving rise to post-excitatory depression, it is necessary to include an integrate-and-fire model, which allows the firing rate to cease when the stimulus falls below a given threshold. We show that our modeling framework in combination with sensitivity analysis and parameter estimation can be used to test and compare models. Finally, we demonstrate that our preferred model can exhibit all known dynamics and that it is advantageous to combine qualitative and quantitative analysis methods. PMID:24348231
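
    As a rough illustration of the neural submodel's integrate-and-fire behaviour, firing that ceases when the stimulus falls below threshold, here is a generic leaky integrate-and-fire sketch driven by a square stimulus. All constants and the stimulus itself are illustrative assumptions, not the fitted baroreceptor model.

      import numpy as np

      dt, tau, threshold, gain = 1e-3, 0.05, 1.0, 400.0
      t = np.arange(0.0, 2.0, dt)
      stimulus = np.where((t > 0.5) & (t < 1.2), 0.08, 0.02)   # "square" deformation input

      v, spikes = 0.0, []
      for ti, s in zip(t, stimulus):
          v += dt * (-v / tau + gain * s)      # leaky integration of the input
          if v >= threshold:
              spikes.append(ti)                # emit a spike and reset
              v = 0.0

      rate_on = sum(0.5 < s < 1.2 for s in spikes) / 0.7   # firing rate during the step
      rate_off = sum(s >= 1.2 for s in spikes) / 0.8       # firing rate after the step ends
      print(f"rate during step: {rate_on:.1f} Hz, after step: {rate_off:.1f} Hz")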

  19. Evaluation of Advanced Stirling Convertor Net Heat Input Correlation Methods Using a Thermal Standard

    NASA Technical Reports Server (NTRS)

    Briggs, Maxwell; Schifer, Nicholas

    2011-01-01

    Test hardware was used to validate net heat input prediction models. The underlying problem is that net heat input cannot be measured directly during operation, yet it is a key parameter in predicting convertor efficiency: efficiency equals the measured electrical power output divided by the calculated net heat input. Efficiency is used to compare convertor designs and to trade technology advantages for mission planning.

  20. Fuzzy logic modeling of the resistivity parameter and topography features for aquifer assessment in hydrogeological investigation of a crystalline basement complex

    NASA Astrophysics Data System (ADS)

    Adabanija, M. A.; Omidiora, E. O.; Olayinka, A. I.

    2008-05-01

    A linguistic fuzzy logic system (LFLS)-based expert system model has been developed for the assessment of aquifers for the location of productive water boreholes in a crystalline basement complex. The model design employed a multiple input/single output (MISO) approach with geoelectrical parameters and topographic features as input variables and control crisp value as the output. The application of the method to the data acquired in Khondalitic terrain, a basement complex in Vizianagaram District, south India, shows that potential groundwater resource zones that have control output values in the range 0.3295-0.3484 have a yield greater than 6,000 liters per hour (LPH). The range 0.3174-0.3226 gives a yield less than 4,000 LPH. The validation of the control crisp value using data acquired from Oban Massif, a basement complex in southeastern Nigeria, indicates a yield less than 3,000 LPH for control output values in the range 0.2938-0.3065. This validation corroborates the ability of control output values to predict a yield, thereby vindicating the applicability of linguistic fuzzy logic system in siting productive water boreholes in a basement complex.
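
    To show the general shape of a MISO fuzzy inference step, the sketch below fuzzifies two illustrative inputs (an apparent resistivity and a slope) with triangular membership functions, fires a small rule base, and defuzzifies to a single "control crisp value" by weighted average. The membership ranges, rules, and output levels are invented and are not the LFLS model calibrated in the paper.

      import numpy as np

      def tri(x, a, b, c):
          """Triangular membership function with support [a, c] and peak at b."""
          return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

      def aquifer_potential(resistivity, slope):
          # Fuzzify the two inputs.
          res_low,  res_high  = tri(resistivity, 10, 40, 80),  tri(resistivity, 60, 150, 300)
          slp_gentle, slp_steep = tri(slope, 0, 2, 6),          tri(slope, 4, 10, 20)
          # Rule base (AND = min); each rule fires toward a crisp output level (invented values).
          rules = [(min(res_low,  slp_gentle), 0.34),    # better prospect
                   (min(res_low,  slp_steep),  0.325),
                   (min(res_high, slp_gentle), 0.32),
                   (min(res_high, slp_steep),  0.30)]    # poorer prospect
          total = sum(r for r, _ in rules)
          return sum(r * v for r, v in rules) / total if total > 0 else float("nan")

      print(round(aquifer_potential(resistivity=35.0, slope=1.5), 4))    # toward the higher end
      print(round(aquifer_potential(resistivity=200.0, slope=12.0), 4))  # toward the lower end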

  1. Extension of the input-output relation for a Michelson interferometer to arbitrary coherent-state light sources: Gravitational-wave detector and weak-value amplification

    NASA Astrophysics Data System (ADS)

    Nakamura, Kouji; Fujimoto, Masa-Katsu

    2018-05-01

    An extension of the input-output relation for a conventional Michelson interferometric gravitational-wave detector is carried out to treat an arbitrary coherent state for the injected optical beam. This extension is a necessary step toward clarifying the relation between conventional gravitational-wave detectors and a simple model of a gravitational-wave detector inspired by weak measurements in Nishizawa (2015). The derived input-output relation describes not only a conventional Michelson-interferometric gravitational-wave detector but also the situation of weak measurements. As a result, we may say that a conventional Michelson gravitational-wave detector already includes the essence of weak-value amplification, namely the reduction of the quantum noise from the light source through the measurement at the dark port.

  2. Input-output analysis and the hospital budgeting process.

    PubMed Central

    Cleverly, W O

    1975-01-01

    Two hospital budget systems, a conventional budget and an input-output budget, are compared to determine how they affect management decisions in pricing, output, planning, and cost control. Analysis of data from a 210-bed not-for-profit hospital indicates that adoption of the input-output budget could cause substantial changes in posted hospital rates in individual departments but probably would have no impact on hospital output determination. The input-output approach promises to be a more accurate system for cost control and planning because, unlike the conventional approach, it generates objective signals for investigating variances of expenses from budgeted levels. PMID:1205865

  3. Self tuning control of wind-diesel power systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mufti, M.D.; Balasubramanian, R.; Tripathy, S.C.

    1995-12-31

    This paper proposes effective self-tuning control strategies for isolated Wind-Diesel power generation systems. Detailed modeling and studies of both single-input single-output (SISO) and multi-input multi-output (MIMO) self-tuning regulators, applied to a typical system, are reported. Further, the effect of introducing a Superconducting Magnetic Energy Storage (SMES) unit on the system performance has been investigated. The MIMO self-tuning regulator, controlling the hybrid system and the SMES in a coordinated manner, exhibits the best performance.

  4. Divvy Economies Based On (An Abstract) Temperature

    NASA Astrophysics Data System (ADS)

    Collins, Dennis G.

    2004-04-01

    The Leontief Input-Output economic system can provide a model for a one-parameter family of economic systems based on an abstract temperature T. In particular, given a normalized input-output matrix R and taking R = R(1), a family of economic systems R(1/T) = R(α) is developed that represents heating (T>1) and cooling (T<1) of the economy relative to T=1. The economy for a given value of T represents the solution of a constrained maximum entropy problem.

  5. History matching of a complex epidemiological model of human immunodeficiency virus transmission by using variance emulation.

    PubMed

    Andrianakis, I; Vernon, I; McCreesh, N; McKinley, T J; Oakley, J E; Nsubuga, R N; Goldstein, M; White, R G

    2017-08-01

    Complex stochastic models are commonplace in epidemiology, but their utility depends on their calibration to empirical data. History matching is a (pre)calibration method that has been applied successfully to complex deterministic models. In this work, we adapt history matching to stochastic models, by emulating the variance in the model outputs, and therefore accounting for its dependence on the model's input values. The method proposed is applied to a real complex epidemiological model of human immunodeficiency virus in Uganda with 22 inputs and 18 outputs, and is found to increase the efficiency of history matching, requiring 70% of the time and 43% fewer simulator evaluations compared with a previous variant of the method. The insight gained into the structure of the human immunodeficiency virus model, and the constraints placed on it, are then discussed.
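
    For reference, history matching typically rules out input settings whose implausibility exceeds a cut-off. A commonly used single-output form of the measure is written below; this is a standard expression from the history-matching literature, not the paper's exact multi-output formulation, which additionally emulates the model-output variance itself.

      % Implausibility of input x for an observed value z, with emulator mean E[f(x)],
      % emulator variance Var[f(x)], observation-error variance sigma^2_obs, and
      % model-discrepancy variance sigma^2_md.
      I(x) \;=\; \frac{\bigl| z - \mathrm{E}[f(x)] \bigr|}
                      {\sqrt{\mathrm{Var}[f(x)] + \sigma^{2}_{\mathrm{obs}} + \sigma^{2}_{\mathrm{md}}}}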

  6. Versatile current-mode universal biquadratic filter using DO-CCIIs

    NASA Astrophysics Data System (ADS)

    Chen, Hua-Pin

    2013-07-01

    In this article, a new three-input, three-output versatile current-mode universal biquadratic filter is proposed. The circuit employs three dual-output current conveyors (DO-CCIIs) as active elements together with three grounded resistors and two grounded capacitors. The proposed configuration exhibits low input impedance and high output impedance, which is important for easy cascading in current-mode operation. It can be used as either a single-input, three-output or a three-input, two-output circuit. In single-input, three-output operation, the lowpass, bandpass and bandreject responses can be realised simultaneously, while the highpass filtering response can be obtained by directly connecting the appropriate output currents without additional stages. In three-input, two-output operation, all five generic filtering functions can be realised by selecting different combinations of the three input current signals. The filter permits orthogonal controllability of the quality factor and the resonance angular frequency, and no component-matching conditions or inverting-type input current signals are imposed. All the passive and active sensitivities are low. HSPICE simulation results, based on the TSMC 0.18 µm 1P6M CMOS process technology with ±0.9 V supply voltages, verify the theoretical analysis.

  7. RNA signal amplifier circuit with integrated fluorescence output.

    PubMed

    Akter, Farhima; Yokobayashi, Yohei

    2015-05-15

    We designed an in vitro signal amplification circuit that takes a short RNA input that catalytically activates the Spinach RNA aptamer to produce a fluorescent output. The circuit consists of three RNA strands: an internally blocked Spinach aptamer, a fuel strand, and an input strand (catalyst), as well as the Spinach aptamer ligand 3,5-difluoro-4-hydroxylbenzylidene imidazolinone (DFHBI). The input strand initially displaces the internal inhibitory strand to activate the fluorescent aptamer while exposing a toehold to which the fuel strand can bind to further displace and recycle the input strand. Under a favorable condition, one input strand was able to activate up to five molecules of the internally blocked Spinach aptamer in 185 min at 30 °C. The simple RNA circuit reported here serves as a model for catalytic activation of arbitrary RNA effectors by chemical triggers.

  8. Compact waveguide power divider with multiple isolated outputs

    DOEpatents

    Moeller, Charles P.

    1987-01-01

    A waveguide power divider (10) for splitting electromagnetic microwave power and directionally coupling the divided power includes an input waveguide (21) and reduced height output waveguides (23) interconnected by axial slots (22) and matched loads (25) and (26) positioned at the unused ends of input and output guides (21) and (23) respectively. The axial slots are of a length such that the wave in the input waveguide (21) is directionally coupled to the output waveguides (23). The widths of input guide (21) and output guides (23) are equal and the width of axial slots (22) is one half of the width of the input guide (21).

  9. Modeling and Optimization of Optical Half Adder in Two Dimensional Photonic Crystals

    NASA Astrophysics Data System (ADS)

    Sonth, Mahesh V.; Soma, Savita; Gowre, Sanjaykumar C.; Biradar, Nagashettappa

    2018-05-01

    The output of photonic integrated devices is enhanced using crystal waveguides and cavities, but the optimization of these devices remains a topic of research. In this paper, optimization of an optical half adder in two-dimensional (2-D) linear photonic crystals using four symmetric T-shaped waveguides with 180° phase-shifted inputs is proposed. The input section of a T-waveguide acts as a beam splitter, and the output section acts as a power combiner; constructive and destructive interference determines the optical power at the outputs. Output port Cout receives in-phase power through the 180° phase-shifter cavity designed near the junction. The optical half adder is modeled in a 2-D photonic crystal using the finite difference time domain (FDTD) method. It consists of a cubic lattice with an array of 39 × 43 silicon rods of radius r = 0.12 μm and lattice constant a = 0.6 μm. Extinction ratios r_e of 11.67 dB and 12.51 dB are achieved at the output ports using the RSoft FullWAVE-6.1 software package.

  10. Highly fault-tolerant parallel computation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spielman, D.A.

    We re-introduce the coded model of fault-tolerant computation in which the input and output of a computational device are treated as words in an error-correcting code. A computational device correctly computes a function in the coded model if its input and output, once decoded, are a valid input and output of the function. In the coded model, it is reasonable to hope to simulate all computational devices by devices whose size is greater by a constant factor but which are exponentially reliable even if each of their components can fail with some constant probability. We consider fine-grained parallel computations in which each processor has a constant probability of producing the wrong output at each time step. We show that any parallel computation that runs for time t on w processors can be performed reliably on a faulty machine in the coded model using w log^O(1) w processors and time t log^O(1) w. The failure probability of the computation will be at most t · exp(-w^(1/4)). The codes used to communicate with our fault-tolerant machines are generalized Reed-Solomon codes and can thus be encoded and decoded in O(n log^O(1) n) sequential time and are independent of the machine they are used to communicate with. We also show how coded computation can be used to self-correct many linear functions in parallel with arbitrarily small overhead.

  11. A mathematical model for Vertical Attitude Takeoff and Landing (VATOL) aircraft simulation. Volume 2: Model equations and base aircraft data

    NASA Technical Reports Server (NTRS)

    Fortenbaugh, R. L.

    1980-01-01

    Equations incorporated in a VATOL six degree of freedom off-line digital simulation program and data for the Vought SF-121 VATOL aircraft concept which served as the baseline for the development of this program are presented. The equations and data are intended to facilitate the development of a piloted VATOL simulation. The equation presentation format is to state the equations which define a particular model segment. Listings of constants required to quantify the model segment, input variables required to exercise the model segment, and output variables required by other model segments are included. In several instances a series of input or output variables are followed by a section number in parentheses which identifies the model segment of origination or termination of those variables.

  12. Further observations on the relationship of EMG and muscle force

    NASA Technical Reports Server (NTRS)

    Agarwal, G. C.; Cecchini, L. R.; Gottlieb, G. L.

    1972-01-01

    Human skeletal muscle may be regarded as an electro-mechanical transducer. Its physiological input is a neural signal originating at the alpha motoneurons in the spinal cord and its output is force and muscle contraction, these both being dependent on the external load. Some experimental data taken during voluntary efforts around the ankle joint and by direct electrical stimulation of the nerve are described. Some of these experiments are simulated by an analog model, the input of which is recorded physiological soleus muscle EMG. The output is simulated foot torque. Limitations of a linear model and effect of some nonlinearities are discussed.

  13. Factors Contributing to Research Team Effectiveness: Testing a Model of Team Effectiveness in an Academic Setting

    ERIC Educational Resources Information Center

    Omar, Zoharah; Ahmad, Aminah

    2014-01-01

    Following the classic systems model of inputs, processes, and outputs, this study examined the influence of three input factors, team climate, work overload, and team leadership, on research project team effectiveness as measured by publication productivity, team member satisfaction, and job frustration. This study also examined the mediating…

  14. A Water-Withdrawal Input-Output Model of the Indian Economy.

    PubMed

    Bogra, Shelly; Bakshi, Bhavik R; Mathur, Ritu

    2016-02-02

    Managing freshwater allocation for a highly populated and growing economy like India can benefit from knowledge about the effect of economic activities. This study transforms the 2003-2004 economic input-output (IO) table of India into a water withdrawal input-output model to quantify direct and indirect flows. This unique model is based on a comprehensive database compiled from diverse public sources, and estimates direct and indirect water withdrawal of all economic sectors. It distinguishes between green (rainfall), blue (surface and ground), and scarce groundwater. Results indicate that the total direct water withdrawal is nearly 3052 billion cubic meters (BCM) and 96% of this is used in the agriculture sectors, with direct green water contributing about 1145 BCM, excluding forestry. Apart from 727 BCM of direct blue water withdrawal for agriculture, other significant users include "Electricity" with 64 BCM, "Water supply" with 44 BCM and other industrial sectors with nearly 14 BCM. "Construction", "Miscellaneous food products", "Hotels and restaurants", and "Paper, paper products, and newsprint" are other significant indirect withdrawers. The net virtual water import is found to be insignificant compared to direct water used in agriculture nationally, while the scarce groundwater associated with crops is largely contributed by northern states.
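
    As a generic illustration of how a withdrawal-extended input-output model separates direct from total (direct plus indirect) water use via the Leontief inverse, here is a sketch for a hypothetical three-sector economy; the coefficients and withdrawals are invented and are not taken from the Indian table.

      import numpy as np

      # Hypothetical 3-sector water-extended input-output model.
      A = np.array([[0.10, 0.05, 0.02],      # technical coefficient matrix
                    [0.20, 0.15, 0.10],
                    [0.05, 0.10, 0.20]])
      w_direct = np.array([5.0, 0.4, 0.1])   # direct withdrawal per sector (e.g., BCM)
      x = np.array([100.0, 80.0, 60.0])      # sector gross outputs
      y = (np.eye(3) - A) @ x                # implied final demand

      # Direct withdrawal intensity per unit of output, and total (direct + indirect)
      # intensity obtained through the Leontief inverse.
      d = w_direct / x
      total_intensity = d @ np.linalg.inv(np.eye(3) - A)
      print("direct intensities :", d)
      print("total intensities  :", total_intensity)
      print("withdrawal embodied in final demand:", total_intensity * y)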

  15. Response to a periodic stimulus in a perfect integrate-and-fire neuron model driven by colored noise.

    PubMed

    Mankin, Romi; Rekker, Astrid

    2016-12-01

    The output interspike interval statistics of a stochastic perfect integrate-and-fire neuron model driven by an additive exogenous periodic stimulus is considered. The effect of temporally correlated random activity of synaptic inputs is modeled by an additive symmetric dichotomous noise. Using a first-passage-time formulation, exact expressions for the output interspike interval density and for the serial correlation coefficient are derived in the nonsteady regime, and their dependence on input parameters (e.g., the noise correlation time and amplitude as well as the frequency of an input current) is analyzed. It is shown that an interplay of a periodic forcing and colored noise can cause a variety of nonequilibrium cooperation effects, such as sign reversals of the interspike interval correlations versus noise-switching rate as well as versus the frequency of periodic forcing, a power-law-like decay of oscillations of the serial correlation coefficients in the long-lag limit, amplification of the output signal modulation in the instantaneous firing rate of the neural response, etc. The features of spike statistics in the limits of slow and fast noises are also discussed.
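
    A minimal Monte Carlo sketch of the model class described here (a perfect integrator driven by a periodic current plus symmetric dichotomous noise) shows how interspike intervals and their lag-1 serial correlation can be collected numerically; parameter values are illustrative, and the cited work proceeds analytically rather than by simulation.

      import numpy as np

      rng = np.random.default_rng(1)

      def simulate_isis(mu=1.0, a=0.8, freq=0.5, nu=2.0, noise_amp=0.5,
                        vth=1.0, dt=1e-3, t_max=500.0):
          """Perfect integrate-and-fire neuron: dv/dt = mu + a*sin(2*pi*freq*t) + eta(t),
          with eta a symmetric dichotomous (telegraph) noise switching between
          +noise_amp and -noise_amp at rate nu.  Returns the interspike intervals."""
          v, t, last_spike = 0.0, 0.0, 0.0
          eta = noise_amp if rng.random() < 0.5 else -noise_amp
          isis = []
          while t < t_max:
              if rng.random() < nu * dt:      # Markovian switching of the noise state
                  eta = -eta
              v += (mu + a * np.sin(2 * np.pi * freq * t) + eta) * dt
              t += dt
              if v >= vth:                    # threshold crossing: spike and reset
                  isis.append(t - last_spike)
                  last_spike = t
                  v = 0.0
          return np.array(isis)

      isis = simulate_isis()
      lag1 = np.corrcoef(isis[:-1], isis[1:])[0, 1]   # serial correlation at lag 1
      print(f"mean ISI = {isis.mean():.3f}, SCC(1) = {lag1:.3f}")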

  16. Response to a periodic stimulus in a perfect integrate-and-fire neuron model driven by colored noise

    NASA Astrophysics Data System (ADS)

    Mankin, Romi; Rekker, Astrid

    2016-12-01

    The output interspike interval statistics of a stochastic perfect integrate-and-fire neuron model driven by an additive exogenous periodic stimulus is considered. The effect of temporally correlated random activity of synaptic inputs is modeled by an additive symmetric dichotomous noise. Using a first-passage-time formulation, exact expressions for the output interspike interval density and for the serial correlation coefficient are derived in the nonsteady regime, and their dependence on input parameters (e.g., the noise correlation time and amplitude as well as the frequency of an input current) is analyzed. It is shown that an interplay of a periodic forcing and colored noise can cause a variety of nonequilibrium cooperation effects, such as sign reversals of the interspike interval correlations versus noise-switching rate as well as versus the frequency of periodic forcing, a power-law-like decay of oscillations of the serial correlation coefficients in the long-lag limit, amplification of the output signal modulation in the instantaneous firing rate of the neural response, etc. The features of spike statistics in the limits of slow and fast noises are also discussed.

  17. Robust input design for nonlinear dynamic modeling of AUV.

    PubMed

    Nouri, Nowrouz Mohammad; Valadi, Mehrdad

    2017-09-01

    Input design has a dominant role in developing the dynamic model of autonomous underwater vehicles (AUVs) through system identification. Optimal input design is the process of generating informative inputs that can be used to build a good-quality dynamic model of an AUV. In optimal input design, the desired input signal depends on the unknown system that is to be identified. In this paper, an input design approach that is robust to uncertainties in the model parameters is used. The Bayesian robust design strategy is applied to design input signals for dynamic modeling of AUVs. The employed approach can design multiple inputs and apply constraints on an AUV system's inputs and outputs. Particle swarm optimization (PSO) is employed to solve the constrained robust optimization problem. The presented algorithm is used to design the input signals for an AUV, and the estimate obtained by robust input design is compared with that of the optimal input design. According to the results, the proposed input design satisfies both robustness of constraints and optimality. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  18. Non-blocking crossbar permutation engine with constant routing latency

    NASA Technical Reports Server (NTRS)

    Monacos, Steve P. (Inventor)

    1994-01-01

    The invention is embodied in an N x N crossbar for routing packets from a set of N input ports to a set of N output ports, each packet having a header identifying one of the output ports as its destination. The crossbar includes a plurality of individual links which carry individual packets, each link having a link input end and a link output end, and a plurality of switches. Each of the switches has at least top and bottom switch inputs connected to a corresponding pair of the link output ends and top and bottom switch outputs connected to a corresponding pair of link input ends, whereby each switch is connected to four different links. Each of the switches has an exchange state which routes packets from the top and bottom switch inputs to the bottom and top switch outputs, respectively, and a bypass state which routes packets from the top and bottom switch inputs to the top and bottom switch outputs, respectively. A plurality of individual controller devices governing the respective switches sense, from the header of a packet at each switch input, the identity of the packet's destination output port and select one of the exchange and bypass states in accordance with that identity and with the location of the corresponding switch relative to the destination output port.

  19. Summary of the key features of seven biomathematical models of human fatigue and performance.

    PubMed

    Mallis, Melissa M; Mejdal, Sig; Nguyen, Tammy T; Dinges, David F

    2004-03-01

    Biomathematical models that quantify the effects of circadian and sleep/wake processes on the regulation of alertness and performance have been developed in an effort to predict the magnitude and timing of fatigue-related responses in a variety of contexts (e.g., transmeridian travel, sustained operations, shift work). This paper summarizes key features of seven biomathematical models reviewed as part of the Fatigue and Performance Modeling Workshop held in Seattle, WA, on June 13-14, 2002. The Workshop was jointly sponsored by the National Aeronautics and Space Administration, U.S. Department of Defense, U.S. Army Medical Research and Materiel Command, Office of Naval Research, Air Force Office of Scientific Research, and U.S. Department of Transportation. An invitation was sent to developers of seven biomathematical models that were commonly cited in scientific literature and/or supported by government funding. On acceptance of the invitation to attend the Workshop, developers were asked to complete a survey of the goals, capabilities, inputs, and outputs of their biomathematical models of alertness and performance. Data from the completed surveys were summarized and juxtaposed to provide a framework for comparing features of the seven models. Survey responses revealed that models varied greatly relative to their reported goals and capabilities. While all modelers reported that circadian factors were key components of their capabilities, they differed markedly with regard to the roles of sleep and work times as input factors for prediction: four of the seven models had work time as their sole input variable(s), while the other three models relied on various aspects of sleep timing for model input. Models also differed relative to outputs: five sought to predict results from laboratory experiments, field, and operational data, while two models were developed without regard to predicting laboratory experimental results. All modelers provided published papers describing their models, with three of the models being proprietary. Although all models appear to have been fundamentally influenced by the two-process model of sleep regulation by Borbély, there is considerable diversity among them in the number and type of input and output variables, and their stated goals and capabilities.

  20. Summary of the key features of seven biomathematical models of human fatigue and performance

    NASA Technical Reports Server (NTRS)

    Mallis, Melissa M.; Mejdal, Sig; Nguyen, Tammy T.; Dinges, David F.

    2004-01-01

    BACKGROUND: Biomathematical models that quantify the effects of circadian and sleep/wake processes on the regulation of alertness and performance have been developed in an effort to predict the magnitude and timing of fatigue-related responses in a variety of contexts (e.g., transmeridian travel, sustained operations, shift work). This paper summarizes key features of seven biomathematical models reviewed as part of the Fatigue and Performance Modeling Workshop held in Seattle, WA, on June 13-14, 2002. The Workshop was jointly sponsored by the National Aeronautics and Space Administration, U.S. Department of Defense, U.S. Army Medical Research and Materiel Command, Office of Naval Research, Air Force Office of Scientific Research, and U.S. Department of Transportation. METHODS: An invitation was sent to developers of seven biomathematical models that were commonly cited in scientific literature and/or supported by government funding. On acceptance of the invitation to attend the Workshop, developers were asked to complete a survey of the goals, capabilities, inputs, and outputs of their biomathematical models of alertness and performance. Data from the completed surveys were summarized and juxtaposed to provide a framework for comparing features of the seven models. RESULTS: Survey responses revealed that models varied greatly relative to their reported goals and capabilities. While all modelers reported that circadian factors were key components of their capabilities, they differed markedly with regard to the roles of sleep and work times as input factors for prediction: four of the seven models had work time as their sole input variable(s), while the other three models relied on various aspects of sleep timing for model input. Models also differed relative to outputs: five sought to predict results from laboratory experiments, field, and operational data, while two models were developed without regard to predicting laboratory experimental results. All modelers provided published papers describing their models, with three of the models being proprietary. CONCLUSIONS: Although all models appear to have been fundamentally influenced by the two-process model of sleep regulation by Borbely, there is considerable diversity among them in the number and type of input and output variables, and their stated goals and capabilities.

  1. Correction of I/Q channel errors without calibration

    DOEpatents

    Doerry, Armin W.; Tise, Bertice L.

    2002-01-01

    A method of providing a balanced demodulator output for a signal, such as that of a Doppler radar having an analog pulsed input, includes adding a variable phase shift as a function of time to the input signal, applying the phase-shifted input signal to a demodulator, and generating a baseband signal from the input signal. The baseband signal is low-pass filtered and converted to a digital output signal. By removing the variable phase shift from the digital output signal, a complex data output is formed that is representative of the output of a balanced demodulator.

  2. Computing the structural influence matrix for biological systems.

    PubMed

    Giordano, Giulia; Cuba Samaniego, Christian; Franco, Elisa; Blanchini, Franco

    2016-06-01

    We consider the problem of identifying structural influences of external inputs on steady-state outputs in a biological network model. We speak of a structural influence if, upon a perturbation due to a constant input, the ensuing variation of the steady-state output value has the same sign as the input (positive influence), the opposite sign (negative influence), or is zero (perfect adaptation), for any feasible choice of the model parameters. All these signs and zeros can constitute a structural influence matrix, whose (i, j) entry indicates the sign of steady-state influence of the jth system variable on the ith variable (the output caused by an external persistent input applied to the jth variable). Each entry is structurally determinate if the sign does not depend on the choice of the parameters, but is indeterminate otherwise. In principle, determining the influence matrix requires exhaustive testing of the system steady-state behaviour in the widest range of parameter values. Here we show that, in a broad class of biological networks, the influence matrix can be evaluated with an algorithm that tests the system steady-state behaviour only at a finite number of points. This algorithm also allows us to assess the structural effect of any perturbation, such as variations of relevant parameters. Our method is applied to nontrivial models of biochemical reaction networks and population dynamics drawn from the literature, providing a parameter-free insight into the system dynamics.
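
    The brute-force reading of the influence matrix can be illustrated with a plain sampling check: draw parameter sets, compute the steady-state response of each variable to a constant input on each other variable, and record whether the sign is invariant. The sketch below does this for a hypothetical linearised three-node network (it is not the finite vertex-testing algorithm of the paper); the entry for the response of node 3 to an input on node 1 comes out indeterminate ('?') because a direct negative path competes with an indirect positive one.

      import numpy as np

      rng = np.random.default_rng(2)

      def influence_matrix(build_jacobian, n_samples=500, eps=1e-9):
          """Empirical influence matrix for models linearised as x' = J(p) x + u.
          Entry (i, j) is '+', '-' or '0' if the sign of the steady-state response of
          x_i to a constant input on x_j is the same for every sampled parameter set,
          and '?' if it flips (indeterminate).  Plain sampling check, for illustration."""
          signs = None
          for _ in range(n_samples):
              J = build_jacobian(rng)
              S = -np.linalg.inv(J)                         # steady state: x* = -J^{-1} u
              s = np.sign(np.where(np.abs(S) < eps, 0.0, S))
              signs = s if signs is None else np.where(signs == s, signs, 2.0)
          label = {-1.0: '-', 0.0: '0', 1.0: '+', 2.0: '?'}
          return np.vectorize(label.get)(signs)

      def example_network(rng):
          # Stable (lower-triangular, negative diagonal) 3-node cascade with one
          # incoherent path: node 1 activates node 2, node 2 activates node 3,
          # while node 1 directly represses node 3.
          k = rng.uniform(0.5, 2.0, size=6)
          return np.array([[-k[0],  0.0,   0.0 ],
                           [ k[1], -k[2],  0.0 ],
                           [-k[3],  k[4], -k[5]]])

      print(influence_matrix(example_network))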

  3. Technical change in forest sector models: the global forest products model approach

    Treesearch

    Joseph Buongiorno; Sushuai Zhu

    2015-01-01

    Technical change is developing rapidly in some parts of the forest sector, especially in the pulp and paper industry where wood fiber is being substituted by waste paper. In forest sector models, the processing of wood and other inputs into products is frequently represented by activity analysis (input-output). In this context, technical change translates into changes...

  4. Design of double fuzzy clustering-driven context neural networks.

    PubMed

    Kim, Eun-Hu; Oh, Sung-Kwun; Pedrycz, Witold

    2018-08-01

    In this study, we introduce a novel category of double fuzzy clustering-driven context neural networks (DFCCNNs). The study is focused on the development of advanced design methodologies for redesigning the structure of conventional fuzzy clustering-based neural networks. Conventional fuzzy clustering-based neural networks typically focus on dividing the input space into several local spaces (implied by clusters). In contrast, the proposed DFCCNNs take into account two distinct kinds of local spaces, called cluster and context spaces. A cluster space is a local space positioned in the input space, whereas a context space is a local space formed in the output space. By partitioning the output space into several local spaces, each context space is used as the desired (target) local output when constructing local models. To accomplish this, the proposed network includes a new context layer for reasoning about context spaces in the output space. Fuzzy C-Means (FCM) clustering is used to form local spaces in both the input and output spaces: the first clustering is used to form clusters and train the weights positioned between the input and hidden layers, whereas the second is applied to the output space to form context spaces. The key features of the proposed DFCCNNs can be enumerated as follows: (i) the parameters between the input layer and hidden layer are built through FCM clustering. The connections (weights) are specified as constant terms, being in fact the centers of the clusters. The membership functions (represented through the partition matrix) produced by the FCM are used as activation functions located at the hidden layer of the "conventional" neural networks. (ii) Following the hidden layer, a context layer is formed to approximate the context space of the output variable, and each node in the context layer represents an individual local model. The outputs of the context layer are specified as a combination of weights formed as linear functions and the outputs of the hidden layer. The weights are updated using a least square estimation (LSE)-based method. (iii) At the output layer, the outputs of the context layer are decoded to produce the corresponding numeric output; a weighted average is used, and these weights are also adjusted with the use of the LSE scheme. From the viewpoint of performance improvement, the proposed design methodologies are discussed and experimented with using benchmark machine learning datasets. The experiments show that the generalization abilities of the proposed DFCCNNs are better than those of the conventional FCNNs reported in the literature. Copyright © 2018 Elsevier Ltd. All rights reserved.
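
    Since the architecture rests on Fuzzy C-Means partitions of both the input space (clusters) and the output space (contexts), a minimal FCM sketch is given below; the data, cluster counts, and the way local models would be attached are illustrative assumptions rather than the authors' implementation.

      import numpy as np

      def fcm(data, n_clusters, m=2.0, n_iter=100, seed=0):
          """Minimal Fuzzy C-Means: returns cluster centres and the partition
          (membership) matrix U, whose columns can serve as the hidden-layer
          activation levels described in the record.  Illustrative sketch only."""
          rng = np.random.default_rng(seed)
          u = rng.random((len(data), n_clusters))
          u /= u.sum(axis=1, keepdims=True)
          for _ in range(n_iter):
              um = u ** m
              centres = (um.T @ data) / um.sum(axis=0)[:, None]
              d = np.linalg.norm(data[:, None, :] - centres[None, :, :], axis=2) + 1e-12
              u = 1.0 / d ** (2.0 / (m - 1.0))
              u /= u.sum(axis=1, keepdims=True)
          return centres, u

      # In the spirit of the record: cluster the input space and, separately, the
      # one-dimensional output space ("contexts"); each (cluster, context) pair would
      # then host a local linear model fitted by least squares.
      rng = np.random.default_rng(1)
      X = rng.random((200, 2))
      y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + 0.1 * rng.standard_normal(200)
      input_centres, U_in = fcm(X, n_clusters=4)
      context_centres, U_out = fcm(y[:, None], n_clusters=3)
      print(input_centres.shape, U_in.shape, context_centres.ravel())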

  5. The Role of Input and Output Tasks in Grammar Instruction: Theoretical, Empirical and Pedagogical Considerations

    ERIC Educational Resources Information Center

    Benati, Alessandro

    2017-01-01

    In this paper, a review of the role of input, output and instruction in second language acquisition is provided. Several pedagogical interventions in grammar instruction (e.g., processing instruction, input enhancement, structured output and collaborative output tasks) are presented and their effectiveness reviewed. A final and overall evaluation…

  6. A Comparative Study of the Proposed Models for the Components of the National Health Information System

    PubMed Central

    Ahmadi, Maryam; Damanabi, Shahla; Sadoughi, Farahnaz

    2014-01-01

    Introduction: National Health Information System plays an important role in ensuring timely and reliable access to health information, which is essential for strategic and operational decisions that improve health, quality and effectiveness of health care. In other words, using the national health information system, the quality of health data, information and knowledge used to support decision making at all levels and areas of the health sector can be improved. Since full identification of the components of this system - for better planning and management of influential factors of performance - seems necessary, different attitudes towards the components of this system are explored comparatively in this study. Methods: This is a descriptive and comparative study. The study population comprises printed and electronic documents containing components of the national health information system in three parts: input, process and output. In this context, searches using library resources and the internet were conducted, and the analysis was expressed using comparative tables and qualitative data. Results: The findings showed that there are three different perspectives presenting the components of the national health information system: the Lippeveld, Sauerborn and Bodart model of 2000, the Health Metrics Network (HMN) model from the World Health Organization in 2008, and Gattini's 2009 model. All three models require, in the input section (resources and structure), components of management and leadership, planning and design of programs, supply of staff, software and hardware facilities and equipment. In the "process" section, all three models point to actions ensuring the quality of the health information system, and in the output section, except for the Lippeveld model, the two other models consider information products and the use and distribution of information as components of the national health information system. Conclusion: The results showed that all three models discuss the components of health information only briefly in the input section, and the Lippeveld model overlooks the components of the national health information system in the process and output sections. Therefore, it seems that the Health Metrics Network model presents the components of the health system comprehensively in all three sections: input, process and output. PMID:24825937

  7. A comparative study of the proposed models for the components of the national health information system.

    PubMed

    Ahmadi, Maryam; Damanabi, Shahla; Sadoughi, Farahnaz

    2014-04-01

    National Health Information System plays an important role in ensuring timely and reliable access to health information, which is essential for strategic and operational decisions that improve health, quality and effectiveness of health care. In other words, using the national health information system, the quality of health data, information and knowledge used to support decision making at all levels and areas of the health sector can be improved. Since full identification of the components of this system - for better planning and management of influential factors of performance - seems necessary, different attitudes towards the components of this system are explored comparatively in this study. This is a descriptive and comparative study. The study population comprises printed and electronic documents containing components of the national health information system in three parts: input, process and output. In this context, searches using library resources and the internet were conducted, and the analysis was expressed using comparative tables and qualitative data. The findings showed that there are three different perspectives presenting the components of the national health information system: the Lippeveld, Sauerborn and Bodart model of 2000, the Health Metrics Network (HMN) model from the World Health Organization in 2008, and Gattini's 2009 model. All three models require, in the input section (resources and structure), components of management and leadership, planning and design of programs, supply of staff, software and hardware facilities and equipment. In the "process" section, all three models point to actions ensuring the quality of the health information system, and in the output section, except for the Lippeveld model, the two other models consider information products and the use and distribution of information as components of the national health information system. The results showed that all three models discuss the components of health information only briefly in the input section, and the Lippeveld model overlooks the components of the national health information system in the process and output sections. Therefore, it seems that the Health Metrics Network model presents the components of the health system comprehensively in all three sections: input, process and output.

  8. The NBS Energy Model Assessment project: Summary and overview

    NASA Astrophysics Data System (ADS)

    Gass, S. I.; Hoffman, K. L.; Jackson, R. H. F.; Joel, L. S.; Saunders, P. B.

    1980-09-01

    The activities and technical reports for the project are summarized. The reports cover: assessment of the documentation of Midterm Oil and Gas Supply Modeling System; analysis of the model methodology characteristics of the input and other supporting data; statistical procedures undergirding construction of the model and sensitivity of the outputs to variations in input, as well as guidelines and recommendations for the role of these in model building and developing procedures for their evaluation.

  9. Dendritic Na+ spikes enable cortical input to drive action potential output from hippocampal CA2 pyramidal neurons

    PubMed Central

    Sun, Qian; Srinivas, Kalyan V; Sotayo, Alaba; Siegelbaum, Steven A

    2014-01-01

    Synaptic inputs from different brain areas are often targeted to distinct regions of neuronal dendritic arbors. Inputs to proximal dendrites usually produce large somatic EPSPs that efficiently trigger action potential (AP) output, whereas inputs to distal dendrites are greatly attenuated and may largely modulate AP output. In contrast to most other cortical and hippocampal neurons, hippocampal CA2 pyramidal neurons show unusually strong excitation by their distal dendritic inputs from entorhinal cortex (EC). In this study, we demonstrate that the ability of these EC inputs to drive CA2 AP output requires the firing of local dendritic Na+ spikes. Furthermore, we find that CA2 dendritic geometry contributes to the efficient coupling of dendritic Na+ spikes to AP output. These results provide a striking example of how dendritic spikes enable direct cortical inputs to overcome unfavorable distal synaptic locale to trigger axonal AP output and thereby enable efficient cortico-hippocampal information flow. DOI: http://dx.doi.org/10.7554/eLife.04551.001 PMID:25390033

  10. Logic elements for reactor period meter

    DOEpatents

    McDowell, William P.; Bobis, James P.

    1976-01-01

    Logic elements are provided for a reactor period meter trip circuit. For one element, first and second inputs are applied to first and second chopper comparators, respectively. The output of each comparator is 0 if the input applied to it is greater than or equal to a trip level associated with each input, and each output is a square wave of frequency f if the input applied to it is less than the associated trip level. The outputs of the comparators are algebraically summed and applied to a bandpass filter tuned to f. For another element, the output of each comparator is applied to a bandpass filter which is tuned to f to give a sine wave of frequency f. The outputs of the filters are multiplied by an analog multiplier whose output is 0 if either input is 0 and a sine wave of frequency 2f if both inputs are of frequency f.

  11. Fusion of spectral models for dynamic modeling of sEMG and skeletal muscle force.

    PubMed

    Potluri, Chandrasekhar; Anugolu, Madhavi; Chiu, Steve; Urfer, Alex; Schoen, Marco P; Naidu, D Subbaram

    2012-01-01

    In this paper, we present a method of combining spectral models using a Kullback Information Criterion (KIC) data fusion algorithm. Surface Electromyographic (sEMG) signals and their corresponding skeletal muscle force signals are acquired from three sensors and pre-processed using a Half-Gaussian filter and a Chebyshev Type-II filter, respectively. Spectral models - Spectral Analysis (SPA), Empirical Transfer Function Estimate (ETFE), Spectral Analysis with Frequency Dependent Resolution (SPFRD) - are extracted from sEMG signals as input and skeletal muscle force as output signal. These signals are then employed in a System Identification (SI) routine to establish the dynamic models relating the input and output. After the individual models are extracted, the models are fused by a probability based KIC fusion algorithm. The results show that the SPFRD spectral models perform better than SPA and ETFE models in modeling the frequency content of the sEMG/skeletal muscle force data.

  12. Fast computation of derivative based sensitivities of PSHA models via algorithmic differentiation

    NASA Astrophysics Data System (ADS)

    Leövey, Hernan; Molkenthin, Christian; Scherbaum, Frank; Griewank, Andreas; Kuehn, Nicolas; Stafford, Peter

    2015-04-01

    Probabilistic seismic hazard analysis (PSHA) is the preferred tool for estimating the potential ground-shaking hazard due to future earthquakes at a site of interest. A modern PSHA represents a complex framework which combines different models with possibly many inputs. Sensitivity analysis is a valuable tool for quantifying changes of a model output as inputs are perturbed, identifying critical input parameters and obtaining insight into the model behavior. Differential sensitivity analysis relies on calculating first-order partial derivatives of the model output with respect to its inputs. Moreover, derivative-based global sensitivity measures (Sobol' & Kucherenko '09) can be used in practice to detect non-essential inputs of the models, thus restricting the focus of attention to a possibly much smaller set of inputs. Nevertheless, obtaining first-order partial derivatives of complex models with traditional approaches can be very challenging, and usually increases the computational complexity linearly with the number of inputs appearing in the models. In this study we show how Algorithmic Differentiation (AD) tools can be used in a complex framework such as PSHA to successfully estimate derivative-based sensitivities, as is the case in various other domains such as meteorology or aerodynamics, without significant increase in the computational complexity required for the original computations. First we demonstrate the feasibility of the AD methodology by comparing AD-derived sensitivities to analytically derived sensitivities for a basic PSHA case using a simple ground-motion prediction equation. In a second step, we derive sensitivities via AD for a more complex PSHA study using a ground-motion attenuation relation based on a stochastic method to simulate strong motion. The presented approach is general enough to accommodate more advanced PSHA studies of higher complexity.
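
    Forward-mode algorithmic differentiation can be illustrated with a tiny dual-number class applied to a ground-motion-style relation; the functional form and coefficients below are invented for illustration (a real PSHA framework would use a full AD tool and a published ground-motion model).

      import math

      class Dual:
          """Minimal forward-mode automatic differentiation via dual numbers:
          carries a value and the derivative with respect to one chosen input."""
          def __init__(self, val, dot=0.0):
              self.val, self.dot = val, dot
          def __add__(self, o):
              o = o if isinstance(o, Dual) else Dual(o)
              return Dual(self.val + o.val, self.dot + o.dot)
          __radd__ = __add__
          def __mul__(self, o):
              o = o if isinstance(o, Dual) else Dual(o)
              return Dual(self.val * o.val, self.dot * o.val + self.val * o.dot)
          __rmul__ = __mul__

      def log(x):  # elementary function together with its derivative rule
          return Dual(math.log(x.val), x.dot / x.val)

      # Toy ground-motion-style model (illustrative only, not a published GMPE):
      # ln(Y) = c0 + c1*M + c2*ln(R + c3)
      def ln_y(M, R, c=(-1.0, 0.9, -1.3, 10.0)):
          return c[0] + c[1] * M + c[2] * log(R + c[3])

      # Sensitivity of ln(Y) to magnitude M at (M=6, R=30 km): seed M with dot=1.
      print(ln_y(Dual(6.0, 1.0), Dual(30.0)).dot)   # d ln(Y) / dM  -> 0.9
      # Sensitivity to distance R: seed R instead.
      print(ln_y(Dual(6.0), Dual(30.0, 1.0)).dot)   # d ln(Y) / dR  -> c2 / (R + c3)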

  13. A Phase 1 Trial of an Immune Checkpoint Inhibitor plus Stereotactic Ablative Radiotherapy in Patients with Inoperable Stage I Non-Small Cell Lung Cancer

    DTIC Science & Technology

    2017-10-01

    PRINCIPAL INVESTIGATOR: Karen Kelly, MD. CONTRACTING ORGANIZATION: University of California... GRANT NUMBER: W81XWH-15-2-0063. ...immune checkpoint inhibitor MPDL3280A (atezolizumab) in early stage inoperable non-small cell lung cancer. The trial is comprised of a traditional 3 + 3...

  14. Impact of wait times on the effectiveness of transcatheter aortic valve replacement in severe aortic valve disease: a discrete event simulation model.

    PubMed

    Wijeysundera, Harindra C; Wong, William W L; Bennell, Maria C; Fremes, Stephen E; Radhakrishnan, Sam; Peterson, Mark; Ko, Dennis T

    2014-10-01

    There is increasing demand for transcatheter aortic valve replacement (TAVR) as the primary treatment option for patients with severe aortic stenosis who are high-risk surgical candidates or inoperable. We used mathematical simulation models to estimate the hypothetical effectiveness of TAVR with increasing wait times. We applied discrete event modelling, using data from the Placement of Aortic Transcatheter Valves (PARTNER) trials. We compared TAVR with medical therapy in the inoperable cohort, and compared TAVR to conventional aortic valve surgery in the high-risk cohort. One-year mortality and wait-time deaths were calculated in different scenarios by varying TAVR wait times from 10 days to 180 days, while maintaining a constant wait time for surgery at a mean of 15.6 days. In the inoperable cohort, the 1-year mortality for medical therapy was 50%. When the TAVR wait time was 10 days, the TAVR wait-time mortality was 1.9% with a 1-year mortality of 31.5%. TAVR wait-time deaths increased to 28.9% with a 180-day wait, with a 1-year mortality of 41.4%. In the high-risk cohort, the wait-time deaths and 1-year mortality for the surgical patients were 2.5% and 27%, respectively. The TAVR wait-time deaths increased from 2.2% with a 10-day wait to 22.4% with a 180-day wait, and a corresponding increase in 1-year mortality from 24.5% to 32.6%. Mortality with TAVR exceeded surgery when TAVR wait times exceeded 60 days. Modest increases in TAVR wait times have a substantial effect on the effectiveness of TAVR in inoperable patients and high-risk surgical candidates. Copyright © 2014 Canadian Cardiovascular Society. Published by Elsevier Inc. All rights reserved.

  15. Effective seat-to-head transmissibility in whole-body vibration: Effects of posture and arm position

    NASA Astrophysics Data System (ADS)

    Rahmatalla, Salam; DeShaw, Jonathan

    2011-12-01

    Seat-to-head transmissibility is a biomechanical measure that has been widely used for many decades to evaluate seat dynamics and human response to vibration. Traditionally, transmissibility has been used to correlate single-input or multiple-input with single-output motion; it has not been effectively used for multiple-input and multiple-output scenarios due to the complexity of dealing with the coupled motions caused by the cross-axis effect. This work presents a novel approach to use transmissibility effectively for single- and multiple-input and multiple-output whole-body vibrations. In this regard, the full transmissibility matrix is transformed into a single graph, such as those for single-input and single-output motions. Singular value decomposition and maximum distortion energy theory were used to achieve the latter goal. Seat-to-head transmissibility matrices for single-input/multiple-output in the fore-aft direction, single-input/multiple-output in the vertical direction, and multiple-input/multiple-output directions are investigated in this work. A total of ten subjects participated in this study. Discrete frequencies of 0.5-16 Hz were used for the fore-aft direction using supported and unsupported back postures. Random ride files from a dozer machine were used for the vertical and multiple-axis scenarios considering two arm postures: using the armrests or grasping the steering wheel. For single-input/multiple-output, the results showed that the proposed method was very effective in showing the frequencies where the transmissibility is mostly sensitive for the two sitting postures and two arm positions. For multiple-input/multiple-output, the results showed that the proposed effective transmissibility indicated higher values for the armrest-supported posture than for the steering-wheel-supported posture.
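
    One way to read an "effective transmissibility" for the multiple-input/multiple-output case is to collapse the complex transmissibility matrix at each frequency into a single scalar. The sketch below uses the largest singular value for that purpose on placeholder data; the cited study combines singular value decomposition with a maximum-distortion-energy criterion, so this is only a simplified stand-in, and the matrices are random rather than measured.

      import numpy as np

      rng = np.random.default_rng(3)
      freqs = np.linspace(0.5, 16.0, 32)      # Hz, matching the reported range

      # Placeholder 3-input (seat x, y, z) by 3-output (head x, y, z) complex
      # transmissibility matrices, one per frequency; real data would come from
      # measured input/output spectra.
      H = rng.standard_normal((len(freqs), 3, 3)) + 1j * rng.standard_normal((len(freqs), 3, 3))

      # One scalar per frequency: the largest singular value, i.e. the worst-case
      # output amplification over all input directions.
      effective = np.array([np.linalg.svd(Hf, compute_uv=False)[0] for Hf in H])
      peak = freqs[np.argmax(effective)]
      print(f"peak effective transmissibility {effective.max():.2f} at {peak:.1f} Hz")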

  16. Application of Interval Predictor Models to Space Radiation Shielding

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.; Norman, Ryan B.; Blattnig, Steve R.

    2016-01-01

    This paper develops techniques for predicting the uncertainty range of an output variable given input-output data. These models are called Interval Predictor Models (IPM) because they yield an interval-valued function of the input. This paper develops IPMs having a radial basis structure. This structure enables the formal description of (i) the uncertainty in the model's parameters, (ii) the predicted output interval, and (iii) the probability that a future observation would fall in such an interval. In contrast to other metamodeling techniques, this probabilistic certificate of correctness does not require making any assumptions on the structure of the mechanism from which data are drawn. Optimization-based strategies for calculating IPMs having minimal spread while containing all the data are developed. Constraints for bounding the minimum interval spread over the continuum of inputs, regulating the IPM's variation/oscillation, and centering its spread about a target point are used to prevent data overfitting. Furthermore, we develop an approach for using expert opinion during extrapolation. This metamodeling technique is illustrated using a radiation shielding application for space exploration. In this application, we use IPMs to describe the error incurred in predicting the flux of particles resulting from the interaction between a high-energy incident beam and a target.

  17. Application of support vector machines for copper potential mapping in Kerman region, Iran

    NASA Astrophysics Data System (ADS)

    Shabankareh, Mahdi; Hezarkhani, Ardeshir

    2017-04-01

    The first step in systematic exploration studies is mineral potential mapping, which involves classifying the study area into favorable and unfavorable parts. Support vector machines (SVM) are designed for supervised classification based on statistical learning theory; this classification method is named support vector classification (SVC). This paper describes an SVC model that combines regional-scale exploration data for copper potential mapping in the Kerman copper-bearing belt in southern Iran. Data layers, or evidential maps, comprised six datasets, namely lithology, tectonics, airborne geophysics, ferric alteration, hydroxide alteration and geochemistry. The SVC modeling selected 2220 pixels as favorable zones, approximately 25 percent of the study area. Moreover, 66 out of 86 copper indices, approximately 78.6%, were located in favorable zones. Another main goal of this study was to determine how each input affects the favorable output. For this purpose, the histogram of each normalized input dataset over the favorable output was drawn. These histograms showed that each information layer had a certain pattern, and these patterns of the SVC results could be considered as regional copper exploration characteristics.
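
    A minimal support vector classification sketch in the spirit of this record, using synthetic stand-ins for the six evidential layers; the feature construction, labels and parameters are assumptions for illustration only, not the study's actual data or settings.

      import numpy as np
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVC

      rng = np.random.default_rng(4)

      # Synthetic stand-in for the six evidential layers (lithology, tectonics,
      # airborne geophysics, ferric alteration, hydroxide alteration, geochemistry),
      # one row per pixel; labels mark pixels near known copper indices.
      X = rng.standard_normal((5000, 6))
      y = (X[:, 0] + 0.8 * X[:, 3] + 0.5 * rng.standard_normal(5000) > 1.0).astype(int)

      clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
      clf.fit(X, y)

      # "Favorable" pixels: those the classifier assigns to the mineralised class.
      favorable = clf.predict(X) == 1
      print(f"{favorable.mean():.1%} of pixels classified as favorable")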

  18. Impact of nonzero boresight pointing error on ergodic capacity of MIMO FSO communication systems.

    PubMed

    Boluda-Ruiz, Rubén; García-Zambrana, Antonio; Castillo-Vázquez, Beatriz; Castillo-Vázquez, Carmen

    2016-02-22

    A thorough investigation of the impact of nonzero boresight pointing errors on the ergodic capacity of multiple-input/multiple-output (MIMO) free-space optical (FSO) systems with equal gain combining (EGC) reception is addressed in this paper, under different turbulence models whose channels are modeled as statistically independent, but not necessarily identically distributed (i.n.i.d.). Novel closed-form asymptotic expressions at high signal-to-noise ratio (SNR) for the ergodic capacity of MIMO FSO systems are derived when different geometric arrangements of the receive apertures are considered in order to reduce the effect of the nonzero inherent boresight displacement, which is inevitably present when more than one receive aperture is used. As a result, the asymptotic ergodic capacity of MIMO FSO systems is evaluated over log-normal (LN), gamma-gamma (GG) and exponentiated Weibull (EW) atmospheric turbulence in order to study different turbulence conditions, different sizes of receive apertures, as well as different aperture averaging conditions. It is concluded that the use of single-input/multiple-output (SIMO) and MIMO techniques can significantly increase the ergodic capacity with respect to the direct path link when the inherent boresight displacement takes small values, i.e., when the spacing among receive apertures is not too large. The effect of nonzero additional boresight errors, which is due to the thermal expansion of the building, is evaluated in multiple-input/single-output (MISO) and single-input/single-output (SISO) FSO systems. Simulation results are further included to confirm the analytical results.

  19. Acute toxicity of definitive chemoradiation in patients with inoperable or irresectable esophageal carcinoma

    PubMed Central

    2014-01-01

    Background Definitive chemoradiation (dCRT) is considered curative-intent treatment for patients with inoperable or irresectable esophageal cancer. Acute toxicity data focussing on dCRT are lacking. Methods A retrospective analysis was performed of patients treated with dCRT consisting of 6 cycles of paclitaxel 50 mg/m2 and carboplatin AUC2 concomitant with radiotherapy (50.4 Gy in 1.8 Gy fractions) from 2006 through 2011 at a single tertiary center. Toxicity, hospital admissions and survival were analysed. Results 127 patients were treated with definitive chemoradiation; 33 patients were medically inoperable and 94 patients had irresectable disease. Despite a significantly smaller tumor length in inoperable patients, grade ≥3 toxicity was recorded significantly more often in inoperable patients (44%) than in irresectable patients (20%) (p < 0.05). Hospital admission occurred more often in inoperable patients (39%) than in irresectable patients (22%) (p < 0.05). The median number of chemotherapy cycles was five for inoperable patients (p = 0.01), while six cycles could be administered to patients with irresectable disease. Recurrence and survival were not significantly different. The odds ratio for developing toxicity ≥ grade 3 was 2.6 (95% CI 1.0-6.4, p < 0.05) for being an inoperable patient and 1.2 (95% CI 1.0-1.4, p = 0.02) per 10 extra micromol/l creatinine. Conclusions Our data show that acute toxicity of definitive chemoradiation is worse in patients with medically inoperable esophageal carcinoma compared to patients with irresectable esophageal cancer, and mainly occurs in the 5th cycle of treatment. Improvement of supportive care should be undertaken in this more fragile group. PMID:24485047

  20. Automatic image equalization and contrast enhancement using Gaussian mixture modeling.

    PubMed

    Celik, Turgay; Tjahjadi, Tardi

    2012-01-01

    In this paper, we propose an adaptive image equalization algorithm that automatically enhances the contrast in an input image. The algorithm uses the Gaussian mixture model to model the image gray-level distribution, and the intersection points of the Gaussian components in the model are used to partition the dynamic range of the image into input gray-level intervals. The contrast equalized image is generated by transforming the pixels' gray levels in each input interval to the appropriate output gray-level interval according to the dominant Gaussian component and the cumulative distribution function of the input interval. To take account of the hypothesis that homogeneous regions in the image represent homogeneous silences (or set of Gaussian components) in the image histogram, the Gaussian components with small variances are weighted with smaller values than the Gaussian components with larger variances, and the gray-level distribution is also used to weight the components in the mapping of the input interval to the output interval. Experimental results show that the proposed algorithm produces better or comparable enhanced images than several state-of-the-art algorithms. Unlike the other algorithms, the proposed algorithm is free of parameter setting for a given dynamic range of the enhanced image and can be applied to a wide range of image types.
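
    The partitioning step can be sketched by fitting a Gaussian mixture to the gray-level distribution and splitting the dynamic range where the dominant component changes; the synthetic image and the omission of the final interval-to-interval mapping make this an illustrative simplification of the algorithm described, not a reimplementation of it.

      import numpy as np
      from sklearn.mixture import GaussianMixture

      rng = np.random.default_rng(5)

      # Synthetic low-contrast image: two clusters of gray levels.
      image = np.concatenate([rng.normal(90, 8, 40000), rng.normal(140, 12, 24000)])
      image = np.clip(image, 0, 255)

      # Fit a Gaussian mixture to the gray-level distribution.
      gmm = GaussianMixture(n_components=3, random_state=0).fit(image.reshape(-1, 1))

      # Partition the dynamic range where the dominant (most responsible) component
      # changes; these boundaries play the role of the intersection points in the
      # record, and each resulting input interval would then be mapped to an output
      # interval sized by its component's weight and cumulative distribution.
      grid = np.arange(256).reshape(-1, 1)
      dominant = gmm.predict(grid)
      boundaries = np.flatnonzero(np.diff(dominant)) + 1
      print("gray-level interval boundaries:", boundaries)
      print("component weights:", np.round(gmm.weights_, 3))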

  1. RCHILD - an R-package for flexible use of the landscape evolution model CHILD

    NASA Astrophysics Data System (ADS)

    Dietze, Michael

    2014-05-01

    Landscape evolution models provide powerful approaches to numerically assess earth surface processes, to quantify rates of landscape change, infer sediment transfer rates, estimate sediment budgets, investigate the consequences of changes in external drivers on a geomorphic system, provide spatio-temporal interpolations between known landscape states, or test conceptual hypotheses. CHILD (Channel-Hillslope Integrated Landscape Development Model) is one of the most-used models of landscape change, at least in the context of tectonic and geomorphologic process interactions. Running CHILD from the command line and working with the model output can be a rather awkward task (static model control via a text input file, only numeric output in text files). The package RCHILD is a collection of functions for the free statistical software R that help to use CHILD in a flexible, dynamic and user-friendly way. The included functions allow creating maps, real-time scenes, animations and further thematic plots from model output. The model input files can be modified dynamically and, hence, (feedback-related) changes in external factors can be implemented iteratively. Output files can be written to common formats that can be readily imported into standard GIS software. This contribution presents the basic functionality of the model CHILD as visualised and modified by the package. A rough overview of the available functions is given. Application examples help to illustrate the great potential of numeric modelling of geomorphologic processes.

  2. The Comprehensible Output Hypothesis and Self-directed Learning: A Learner's Perspective.

    ERIC Educational Resources Information Center

    Liming, Yu

    1990-01-01

    Discusses the significance to language acquisition of pushing for comprehensible output. Three issues are examined: (1) comprehensible output and negative input, (2) comprehensible and incomprehensible output, and (3) comprehensible output and comprehensible input. (28 references) (GLR)

  3. 3D Visualization of Hydrological Model Outputs For a Better Understanding of Multi-Scale Phenomena

    NASA Astrophysics Data System (ADS)

    Richard, J.; Schertzer, D. J. M.; Tchiguirinskaia, I.

    2014-12-01

    During the last decades, many hydrological models have been created to simulate extreme events or scenarios on catchments. The classical outputs of these models are 2D maps, time series or graphs, which are easily understood by scientists, but not so much by many stakeholders, e.g. mayors or local authorities, and the general public. One goal of the Blue Green Dream project is to create outputs that are adequate for them. To reach this goal, we decided to convert most of the model outputs into a unique 3D visualization interface that combines all of them. This conversion has to be performed with hydrological thinking to keep the information consistent with the context and the raw outputs. We focus our work on the conversion of the outputs of the Multi-Hydro (MH) model, which is physically based, fully distributed and has a GIS data interface. MH splits the urban water cycle into 4 components: rainfall, surface runoff, infiltration and drainage. To each of them corresponds a modeling module with specific inputs and outputs. The superimposition of all this information will highlight the model outputs and help to verify the quality of the raw input data. For example, the spatial and temporal variability of the rain generated by the rainfall module will be directly visible in 4D (3D + time) before running a full simulation. It is the same with the runoff module: because the result quality depends on the resolution of the rasterized land use, the visualization will confirm whether the cell size was chosen appropriately. As most of the inputs and outputs are GIS files, two main conversions are applied to display the results in 3D. First, a conversion from vector files to 3D objects. For example, buildings are defined in 2D inside a GIS vector file; each polygon can be extruded with a height to create volumes. The principle is the same for roads, but an intrusion, instead of an extrusion, is done inside the topography file. The second main conversion is the raster conversion. Several files, such as the topography, the land use, the water depth, etc., are defined by geo-referenced grids. The corresponding grids are converted into a list of triangles to be displayed inside the 3D window. For the water depth, display as pixels will no longer be the only option; water contours will be created to more easily delineate the flood extent inside the catchment.

  4. MODFLOW 2000 Head Uncertainty, a First-Order Second Moment Method

    USGS Publications Warehouse

    Glasgow, H.S.; Fortney, M.D.; Lee, J.; Graettinger, A.J.; Reeves, H.W.

    2003-01-01

    A computationally efficient method to estimate the variance and covariance in piezometric head results computed through MODFLOW 2000 using a first-order second moment (FOSM) approach is presented. This methodology employs a first-order Taylor series expansion to combine model sensitivity with uncertainty in geologic data. MODFLOW 2000 is used to calculate both the ground water head and the sensitivity of head to changes in input data. From a limited number of samples, geologic data are extrapolated and their associated uncertainties are computed through a conditional probability calculation. Combining the spatially related sensitivity and input uncertainty produces the variance-covariance matrix, the diagonal of which is used to yield the standard deviation in MODFLOW 2000 head. The variance in piezometric head can be used for calibrating the model, estimating confidence intervals, directing exploration, and evaluating the reliability of a design. A case study illustrates the approach, where aquifer transmissivity is the spatially related uncertain geologic input data. The FOSM methodology is shown to be applicable for calculating output uncertainty for (1) spatially related input and output data, and (2) multiple input parameters (transmissivity and recharge).
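
    The first-order second moment propagation itself reduces to a single matrix product, Cov(h) ≈ J C J^T; the sketch below shows that step with invented sensitivities and input variances (in the actual workflow the sensitivities would come from MODFLOW 2000's sensitivity process and C from the conditioned geologic data).

      import numpy as np

      def fosm_head_variance(J, cov_inputs):
          """First-order second moment propagation: the covariance of the model outputs
          (heads) is J * C * J^T, where J holds the sensitivities of each head to each
          uncertain input (e.g., log-transmissivity, recharge) and C is the input
          covariance."""
          return J @ cov_inputs @ J.T

      # Illustrative numbers only: 4 head locations, 3 uncertain inputs.
      J = np.array([[0.8, 0.1, 0.3],      # d(head_i)/d(input_j)
                    [0.5, 0.4, 0.2],
                    [0.2, 0.7, 0.1],
                    [0.1, 0.2, 0.6]])
      C = np.diag([0.25, 0.25, 0.04])     # input variances (correlations omitted here)
      cov_h = fosm_head_variance(J, C)
      std_h = np.sqrt(np.diag(cov_h))     # standard deviation of simulated heads
      print(np.round(std_h, 3))           # usable for confidence intervals on head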

  5. Electrical Characterization of Hughes HCMP 1852D and RCA CDP1852D 8-bit, CMOS, I/O Ports

    NASA Technical Reports Server (NTRS)

    Stokes, R. L.

    1979-01-01

    Twenty-five Hughes HCMP 1852D and 25 RCA CDP1852D 8-bit, CMOS, I/O port microcircuits underwent electrical characterization tests. All electrical measurements were performed on a Tektronix S-3260 Test System. Before electrical testing, the devices were subjected to a 168-hour burn-in at 125 C with the inputs biased at 10V. Four of the Hughes parts became inoperable during testing. They exhibited functional failures and out-of-range parametric measurements after a few runs of the test program.

  6. Single lens laser beam shaper

    DOEpatents

    Liu, Chuyu [Newport News, VA]; Zhang, Shukui [Yorktown, VA]

    2011-10-04

    A single-lens, bullet-shaped laser beam shaper capable of redistributing an arbitrary beam profile into any desired output profile, comprising a unitary lens comprising: a) a convex front input surface defining a focal point and a flat output portion at the focal point; and b) a cylindrical core portion having a flat input surface coincident with the flat output portion of the first input portion at the focal point and a convex rear output surface remote from the convex front input surface.

  7. Method and Apparatus for Reducing the Vulnerability of Latches to Single Event Upsets

    NASA Technical Reports Server (NTRS)

    Shuler, Robert L., Jr. (Inventor)

    2002-01-01

    A delay circuit includes a first network having an input and an output node, a second network having an input and an output, the input of the second network being coupled to the output node of the first network. The first network and the second network are configured such that: a glitch at the input to the first network having a length of approximately one-half of a standard glitch time or less does not cause the voltage at the output of the second network to cross a threshold, a glitch at the input to the first network having a length of between approximately one-half and two standard glitch times causes the voltage at the output of the second network to cross the threshold for less than the length of the glitch, and a glitch at the input to the first network having a length of greater than approximately two standard glitch times causes the voltage at the output of the second network to cross the threshold for approximately the time of the glitch. The method reduces the vulnerability of a latch to single event upsets. The latch includes a gate having an input and an output and a feedback path from the output to the input of the gate. The method includes inserting a delay into the feedback path and providing a delay in the gate.

  8. Method and Apparatus for Reducing the Vulnerability of Latches to Single Event Upsets

    NASA Technical Reports Server (NTRS)

    Shuler, Robert L., Jr. (Inventor)

    2002-01-01

    A delay circuit includes a first network having an input and an output node, a second network having an input and an output, the input of the second network being coupled to the output node of the first network. The first network and the second network are configured such that: a glitch at the input to the first network having a length of approximately one-half of a standard glitch time or less does not cause the voltage at the output of the second network to cross a threshold, a glitch at the input to the first network having a length of between approximately one-half and two standard glitch times causes the voltage at the output of the second network to cross the threshold for less than the length of the glitch, and a glitch at the input to the first network having a length of greater than approximately two standard glitch times causes the voltage at the output of the second network to cross the threshold for approximately the time of the glitch. A method reduces the vulnerability of a latch to single event upsets. The latch includes a gate having an input and an output and a feedback path from the output to the input of the gate. The method includes inserting a delay into the feedback path and providing a delay in the gate.

  9. Adaptive optimal input design and parametric estimation of nonlinear dynamical systems: application to neuronal modeling.

    PubMed

    Madi, Mahmoud K; Karameh, Fadi N

    2018-05-11

    Many physical models of biological processes, including neural systems, are characterized by parametric nonlinear dynamical relations between driving inputs, internal states, and measured outputs of the process. Fitting such models using experimental data (data assimilation) is a challenging task since the physical process often operates in a noisy, possibly non-stationary environment; moreover, conducting multiple experiments under controlled and repeatable conditions can be impractical, time consuming or costly. The accuracy of model identification, therefore, is dictated principally by the quality and dynamic richness of collected data over single or few experimental sessions. Accordingly, it is highly desirable to design efficient experiments that, by exciting the physical process with smart inputs, yield fast convergence and increased accuracy of the model. We herein introduce an adaptive framework in which optimal input design is integrated with Square root Cubature Kalman Filters (OID-SCKF) to develop an online estimation procedure that, first, converges significantly more quickly, thereby permitting model fitting over shorter time windows, and, second, enhances model accuracy when only a few process outputs are accessible. The methodology is demonstrated on common nonlinear models and on a four-area neural mass model with noisy and limited measurements. Estimation quality (speed and accuracy) is benchmarked against high-performance SCKF-based methods that commonly employ dynamically rich informed inputs for accurate model identification. For all the tested models, simulated single-trial and ensemble averages showed that OID-SCKF exhibited (i) faster convergence of parameter estimates and (ii) lower dependence on inter-trial noise variability with gains up to around 1000 msec in speed and 81% increase in variability for the neural mass models. In terms of accuracy, OID-SCKF estimation was superior, and exhibited considerably less variability across experiments, in identifying model parameters of (a) systems with challenging model inversion dynamics and (b) systems with fewer measurable outputs that directly relate to the underlying processes. Fast and accurate identification therefore carries particular promise for modeling of transient (short-lived) neuronal network dynamics using a spatially under-sampled set of noisy measurements, as is commonly encountered in neural engineering applications. © 2018 IOP Publishing Ltd.

  10. Grey-box state-space identification of nonlinear mechanical vibrations

    NASA Astrophysics Data System (ADS)

    Noël, J. P.; Schoukens, J.

    2018-05-01

    The present paper deals with the identification of nonlinear mechanical vibrations. A grey-box, or semi-physical, nonlinear state-space representation is introduced, expressing the nonlinear basis functions using a limited number of measured output variables. This representation assumes that the observed nonlinearities are localised in physical space, which is a generic case in mechanics. A two-step identification procedure is derived for the grey-box model parameters, integrating nonlinear subspace initialisation and weighted least-squares optimisation. The complete procedure is applied to an electrical circuit mimicking the behaviour of a single-input, single-output (SISO) nonlinear mechanical system and to a single-input, multiple-output (SIMO) geometrically nonlinear beam structure.

  11. A combinatorial model for dentate gyrus sparse coding

    DOE PAGES

    Severa, William; Parekh, Ojas; James, Conrad D.; ...

    2016-12-29

    The dentate gyrus forms a critical link between the entorhinal cortex and CA3 by providing a sparse version of the signal. Concurrent with this increase in sparsity, a widely accepted theory suggests the dentate gyrus performs pattern separation—similar inputs yield decorrelated outputs. Although an active region of study and theory, few logically rigorous arguments detail the dentate gyrus’s (DG) coding. We suggest a theoretically tractable, combinatorial model for this action. The model provides formal methods for a highly redundant, arbitrarily sparse, and decorrelated output signal. To explore the value of this model framework, we assess how suitable it is for two notable aspects of DG coding: how it can handle the highly structured grid cell representation in the input entorhinal cortex region and the presence of adult neurogenesis, which has been proposed to produce a heterogeneous code in the DG. We find tailoring the model to grid cell input yields expansion parameters consistent with the literature. In addition, the heterogeneous coding reflects activity gradation observed experimentally. Lastly, we connect this approach with more conventional binary threshold neural circuit models via a formal embedding.

  12. Pre-Flight Radiometric Model of Linear Imager on LAPAN-IPB Satellite

    NASA Astrophysics Data System (ADS)

    Hadi Syafrudin, A.; Salaswati, Sartika; Hasbi, Wahyudi

    2018-05-01

    The LAPAN-IPB satellite is a microsatellite-class spacecraft with a remote sensing experiment mission. It carries a multispectral line imager to capture radiometric reflectance values of the Earth from space. The radiometric quality of the images is an important factor for object classification in remote sensing. Before launch (pre-flight), the line imager was tested with a monochromator and an integrating sphere to obtain its spectral response and the radiometric response of every pixel. Pre-flight test data acquired under a variety of imager settings were used to examine the correlation between radiance input and the digital numbers of the image output. This input-output correlation is described by a radiance conversion model that incorporates the imager settings and radiometric characteristics. The modelling process, from the hardware level to the normalized radiance formula, is presented and discussed in this paper.

  13. Evaluating digital libraries in the health sector. Part 1: measuring inputs and outputs.

    PubMed

    Cullen, Rowena

    2003-12-01

    This is the first part of a two-part paper which explores methods that can be used to evaluate digital libraries in the health sector. In this first part, some approaches to evaluation that have been proposed for mainstream digital information services are examined for their suitability to provide models for the health sector. The paper summarizes some major national and collaborative initiatives to develop measures for digital libraries, and analyses these approaches in terms of their relationship to traditional measures of library performance, which are focused on inputs and outputs, and their relevance to current debates among health information specialists. The second part* looks more specifically at evaluative models based on outcomes, and models being developed in the health sector.

  14. Optimal input selection for neural machine interfaces predicting multiple non-explicit outputs.

    PubMed

    Krepkovich, Eileen T; Perreault, Eric J

    2008-01-01

    This study implemented a novel algorithm that optimally selects inputs for neural machine interface (NMI) devices intended to control multiple outputs and evaluated its performance on systems lacking explicit output. NMIs often incorporate signals from multiple physiological sources and provide predictions for multidimensional control, leading to multiple-input multiple-output systems. Further, NMIs often are used with subjects who have motor disabilities and thus lack explicit motor outputs. Our algorithm was tested on simulated multiple-input multiple-output systems and on electromyogram and kinematic data collected from healthy subjects performing arm reaches. Effects of output noise in simulated systems indicated that the algorithm could be useful for systems with poor estimates of the output states, as is true for systems lacking explicit motor output. To test efficacy on physiological data, selection was performed using inputs from one subject and outputs from a different subject. Selection was effective for these cases, again indicating that this algorithm will be useful for predictions where there is no motor output, as often is the case for disabled subjects. Further, prediction results generalized for different movement types not used for estimation. These results demonstrate the efficacy of this algorithm for the development of neural machine interfaces.

  15. A Deterministic Interfacial Cyclic Oxidation Spalling Model. Part 1; Model Development and Parametric Response

    NASA Technical Reports Server (NTRS)

    Smialek, James L.

    2002-01-01

    An equation has been developed to model the iterative scale growth and spalling process that occurs during cyclic oxidation of high temperature materials. Parabolic scale growth and spalling of a constant surface area fraction have been assumed. Interfacial spallation of only the thickest segments was also postulated. This simplicity allowed for representation by a simple deterministic summation series. Inputs are the parabolic growth rate constant, the spall area fraction, oxide stoichiometry, and cycle duration. Outputs include the net weight change behavior, as well as the total amount of oxygen and metal consumed, the total amount of oxide spalled, and the mass fraction of oxide spalled. The outputs all follow typical well-behaved trends with the inputs and are in good agreement with previous interfacial models.
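
    A highly simplified grow-and-spall iteration in the spirit of this model is sketched below; the rate constant, spall fraction, stoichiometric factor and cycle time are example values, and the loop is not the published deterministic summation series.

      # Simplified cyclic grow-and-spall sketch: parabolic growth on the retained
      # scale each cycle, followed by removal of a constant fraction of the oxide.
      import math

      kp = 0.01       # parabolic rate constant, (mg/cm^2)^2 per hour (assumed)
      f_spall = 0.05  # fraction of oxide spalled each cycle (assumed)
      f_metal = 0.53  # metal mass fraction in the oxide (e.g. ~0.53 for Al2O3)
      dt = 1.0        # cycle duration, hours (assumed)

      retained, spalled_total = 0.0, 0.0
      for cycle in range(1, 501):
          grown = math.sqrt(retained**2 + kp * dt)   # parabolic growth on retained scale
          spall = f_spall * grown                    # simplified interfacial spall
          retained = grown - spall
          spalled_total += spall
          net = retained - f_metal * (retained + spalled_total)  # specimen weight change
      print(f"net weight change after 500 cycles: {net:.3f} mg/cm^2")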

  16. Petri net modelling of buffers in automated manufacturing systems.

    PubMed

    Zhou, M; Dicesare, F

    1996-01-01

    This paper presents Petri net models of buffers and a methodology by which buffers can be included in a system without introducing deadlocks or overflows. The context is automated manufacturing. The buffers and models are classified as random order or order preserved (first-in-first-out or last-in-first-out), single-input-single-output or multiple-input-multiple-output, part type and/or space distinguishable or indistinguishable, and bounded or safe. Theoretical results for the development of Petri net models which include buffer modules are developed. This theory provides the conditions under which the system properties of boundedness, liveness, and reversibility are preserved. The results are illustrated through two manufacturing system examples: a multiple machine and multiple buffer production line and an automatic storage and retrieval system in the context of flexible manufacturing.

  17. High frequency inductive lamp and power oscillator

    DOEpatents

    Kirkpatrick, Douglas A.; Gitsevich, Aleksandr

    2005-09-27

    An oscillator includes an amplifier having an input and an output, a feedback network connected between the input of the amplifier and the output of the amplifier, the feedback network being configured to provide suitable positive feedback from the output of the amplifier to the input of the amplifier to initiate and sustain an oscillating condition, and a tuning circuit connected to the input of the amplifier, wherein the tuning circuit is continuously variable and consists of solid state electrical components with no mechanically adjustable devices including a pair of diodes connected to each other at their respective cathodes with a control voltage connected at the junction of the diodes. Another oscillator includes an amplifier having an input and an output, a feedback network connected between the input of the amplifier and the output of the amplifier, the feedback network being configured to provide suitable positive feedback from the output of the amplifier to the input of the amplifier to initiate and sustain an oscillating condition, and transmission lines connected to the input of the amplifier with an input pad and a perpendicular transmission line extending from the input pad and forming a leg of a resonant "T", and wherein the feedback network is coupled to the leg of the resonant "T".

  18. Homeostasis in a feed forward loop gene regulatory motif.

    PubMed

    Antoneli, Fernando; Golubitsky, Martin; Stewart, Ian

    2018-05-14

    The internal state of a cell is affected by inputs from the extra-cellular environment such as external temperature. If some output, such as the concentration of a target protein, remains approximately constant as inputs vary, the system exhibits homeostasis. Special sub-networks called motifs are unusually common in gene regulatory networks (GRNs), suggesting that they may have a significant biological function. Potentially, one such function is homeostasis. In support of this hypothesis, we show that the feed-forward loop GRN produces homeostasis. Here the inputs are subsumed into a single parameter that affects only the first node in the motif, and the output is the concentration of a target protein. The analysis uses the notion of infinitesimal homeostasis, which occurs when the input-output map has a critical point (zero derivative). In model equations such points can be located using implicit differentiation. If the second derivative of the input-output map also vanishes, the critical point is a chair: the output rises roughly linearly, then flattens out (the homeostasis region or plateau), and then starts to rise again. Chair points are a common cause of homeostasis. In more complicated equations or networks, numerical exploration would have to augment analysis. Thus, in terms of finding chairs, this paper presents a proof of concept. We apply this method to a standard family of differential equations modeling the feed-forward loop GRN, and deduce that chair points occur. This function determines the production of a particular mRNA and the resulting chair points are found analytically. The same method can potentially be used to find homeostasis regions in other GRNs. In the discussion and conclusion section, we also discuss why homeostasis in the motif may persist even when the rest of the network is taken into account. Copyright © 2018 Elsevier Ltd. All rights reserved.
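
    The notion of a chair point can be made concrete with a toy input-output map whose first and second derivatives vanish at the same input value; the cubic map below is an assumed stand-in, not the feed-forward-loop GRN equations analysed in the paper.

      # Toy illustration of locating an infinitesimal-homeostasis "chair" point:
      # an input value where both the first and second derivatives of the
      # input-output map vanish.
      import sympy as sp

      I = sp.symbols('I', real=True)
      x_o = (I - 1)**3 + 2          # hypothetical steady-state input-output map

      crit = sp.solve(sp.diff(x_o, I), I)            # infinitesimal homeostasis: dx_o/dI = 0
      chairs = [c for c in crit if sp.diff(x_o, I, 2).subs(I, c) == 0]
      print(chairs)                 # -> [1]: a chair point (plateau) at I = 1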

  19. Impact of input data uncertainty on environmental exposure assessment models: A case study for electromagnetic field modelling from mobile phone base stations.

    PubMed

    Beekhuizen, Johan; Heuvelink, Gerard B M; Huss, Anke; Bürgi, Alfred; Kromhout, Hans; Vermeulen, Roel

    2014-11-01

    With the increased availability of spatial data and computing power, spatial prediction approaches have become a standard tool for exposure assessment in environmental epidemiology. However, such models are largely dependent on accurate input data. Uncertainties in the input data can therefore have a large effect on model predictions, but are rarely quantified. With Monte Carlo simulation we assessed the effect of input uncertainty on the prediction of radio-frequency electromagnetic fields (RF-EMF) from mobile phone base stations at 252 receptor sites in Amsterdam, The Netherlands. The impact on ranking and classification was determined by computing the Spearman correlations and weighted Cohen's Kappas (based on tertiles of the RF-EMF exposure distribution) between modelled values and RF-EMF measurements performed at the receptor sites. The uncertainty in modelled RF-EMF levels was large with a median coefficient of variation of 1.5. Uncertainty in receptor site height, building damping and building height contributed most to model output uncertainty. For exposure ranking and classification, the heights of buildings and receptor sites were the most important sources of uncertainty, followed by building damping, antenna- and site location. Uncertainty in antenna power, tilt, height and direction had a smaller impact on model performance. We quantified the effect of input data uncertainty on the prediction accuracy of an RF-EMF environmental exposure model, thereby identifying the most important sources of uncertainty and estimating the total uncertainty stemming from potential errors in the input data. This approach can be used to optimize the model and better interpret model output. Copyright © 2014 Elsevier Inc. All rights reserved.
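
    The perturb-rerun-summarise pattern used in this study can be sketched as follows; the stand-in exposure model, the 2 m height uncertainty and the site count are illustrative assumptions, not the actual RF-EMF propagation model.

      # Schematic Monte Carlo propagation of input uncertainty through a model,
      # summarised by a coefficient of variation and by ranking stability.
      import numpy as np
      from scipy.stats import spearmanr

      rng = np.random.default_rng(0)
      n_sites, n_runs = 252, 500

      height = rng.uniform(3.0, 30.0, n_sites)        # nominal receptor heights (assumed)

      def toy_model(h):                               # stand-in for the exposure model
          return 1.0 / (1.0 + h)

      nominal = toy_model(height)
      runs = np.empty((n_runs, n_sites))
      for k in range(n_runs):
          h_err = rng.normal(0.0, 2.0, n_sites)       # assumed +/- 2 m height uncertainty
          runs[k] = toy_model(np.clip(height + h_err, 0.5, None))

      cv = np.median(runs.std(axis=0) / runs.mean(axis=0))       # coefficient of variation
      rho = np.median([spearmanr(nominal, r)[0] for r in runs])  # ranking stability
      print(f"median CV: {cv:.2f}, median Spearman rho: {rho:.2f}")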

  20. Volterra representation enables modeling of complex synaptic nonlinear dynamics in large-scale simulations.

    PubMed

    Hu, Eric Y; Bouteiller, Jean-Marie C; Song, Dong; Baudry, Michel; Berger, Theodore W

    2015-01-01

    Chemical synapses comprise a wide collection of intricate signaling pathways involving complex dynamics. These mechanisms are often reduced to simple spikes or exponential representations in order to enable computer simulations at higher spatial levels of complexity. However, these representations cannot capture important nonlinear dynamics found in synaptic transmission. Here, we propose an input-output (IO) synapse model capable of generating complex nonlinear dynamics while maintaining low computational complexity. This IO synapse model is an extension of a detailed mechanistic glutamatergic synapse model capable of capturing the input-output relationships of the mechanistic model using the Volterra functional power series. We demonstrate that the IO synapse model is able to successfully track the nonlinear dynamics of the synapse up to the third order with high accuracy. We also evaluate the accuracy of the IO synapse model at different input frequencies and compare its performance with that of kinetic models in compartmental neuron models. Our results demonstrate that the IO synapse model is capable of efficiently replicating complex nonlinear dynamics that were represented in the original mechanistic model and provide a method to replicate complex and diverse synaptic transmission within neuron network simulations.
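
    A minimal discrete-time Volterra expansion (first and second order) of the kind such an input-output model builds on is sketched below; the kernels and spike-train input are arbitrary examples, not fitted synaptic kernels.

      # Discrete-time Volterra series output from first- and second-order kernels:
      # y[n] = sum_i k1[i] u[n-i] + sum_{i,j} k2[i,j] u[n-i] u[n-j]
      import numpy as np

      def volterra_output(u, k1, k2):
          M = len(k1)
          y = np.zeros(len(u))
          for n in range(len(u)):
              past = np.array([u[n - i] if n - i >= 0 else 0.0 for i in range(M)])
              y[n] = k1 @ past + past @ k2 @ past
          return y

      M = 5
      k1 = np.exp(-np.arange(M) / 2.0)              # example first-order kernel
      k2 = 0.05 * np.outer(k1, k1)                  # example second-order kernel
      u = (np.random.default_rng(1).random(50) > 0.8).astype(float)  # sparse spike train
      print(volterra_output(u, k1, k2)[:10])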

  1. Volterra representation enables modeling of complex synaptic nonlinear dynamics in large-scale simulations

    PubMed Central

    Hu, Eric Y.; Bouteiller, Jean-Marie C.; Song, Dong; Baudry, Michel; Berger, Theodore W.

    2015-01-01

    Chemical synapses comprise a wide collection of intricate signaling pathways involving complex dynamics. These mechanisms are often reduced to simple spikes or exponential representations in order to enable computer simulations at higher spatial levels of complexity. However, these representations cannot capture important nonlinear dynamics found in synaptic transmission. Here, we propose an input-output (IO) synapse model capable of generating complex nonlinear dynamics while maintaining low computational complexity. This IO synapse model is an extension of a detailed mechanistic glutamatergic synapse model capable of capturing the input-output relationships of the mechanistic model using the Volterra functional power series. We demonstrate that the IO synapse model is able to successfully track the nonlinear dynamics of the synapse up to the third order with high accuracy. We also evaluate the accuracy of the IO synapse model at different input frequencies and compare its performance with that of kinetic models in compartmental neuron models. Our results demonstrate that the IO synapse model is capable of efficiently replicating complex nonlinear dynamics that were represented in the original mechanistic model and provide a method to replicate complex and diverse synaptic transmission within neuron network simulations. PMID:26441622

  2. Rotorcraft Optimization Tools: Incorporating Rotorcraft Design Codes into Multi-Disciplinary Design, Analysis, and Optimization

    NASA Technical Reports Server (NTRS)

    Meyn, Larry A.

    2018-01-01

    One of the goals of NASA's Revolutionary Vertical Lift Technology Project (RVLT) is to provide validated tools for multidisciplinary design, analysis and optimization (MDAO) of vertical lift vehicles. As part of this effort, the software package, RotorCraft Optimization Tools (RCOTOOLS), is being developed to facilitate incorporating key rotorcraft conceptual design codes into optimizations using the OpenMDAO multi-disciplinary optimization framework written in Python. RCOTOOLS, also written in Python, currently supports the incorporation of the NASA Design and Analysis of RotorCraft (NDARC) vehicle sizing tool and the Comprehensive Analytical Model of Rotorcraft Aerodynamics and Dynamics II (CAMRAD II) analysis tool into OpenMDAO-driven optimizations. Both of these tools use detailed, file-based inputs and outputs, so RCOTOOLS provides software wrappers to update input files with new design variable values, execute these codes and then extract specific response variable values from the file outputs. These wrappers are designed to be flexible and easy to use. RCOTOOLS also provides several utilities to aid in optimization model development, including Graphical User Interface (GUI) tools for browsing input and output files in order to identify text strings that are used to identify specific variables as optimization input and response variables. This paper provides an overview of RCOTOOLS and its use
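
    The file-based wrapper pattern described above can be sketched generically as follows; the template placeholders, output tag, file names and executable are hypothetical and are not the actual RCOTOOLS or NDARC interfaces.

      # Generic file-based wrapper: substitute design-variable values into a text
      # input file, run the analysis code, and extract a tagged response value.
      import re
      import subprocess
      from pathlib import Path

      def run_case(template, input_file, output_file, exe, design_vars):
          """Write design-variable values into the input file, run the code, read a response."""
          text = Path(template).read_text()
          for name, value in design_vars.items():          # e.g. {"GROSS_WEIGHT": 6500.0}
              text = text.replace("{" + name + "}", f"{value:.6g}")
          Path(input_file).write_text(text)
          subprocess.run([exe, input_file], check=True)     # hypothetical command line
          out = Path(output_file).read_text()
          match = re.search(r"EMPTY_WEIGHT\s*=\s*([-+\dEe.]+)", out)  # assumed output tag
          return float(match.group(1))

      # Hypothetical usage inside an optimization loop:
      # resp = run_case("template.txt", "case.txt", "case.out", "./analysis", {"DISK_LOADING": 10.0})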

  3. Versatile tunable current-mode universal biquadratic filter using MO-DVCCs and MOSFET-based electronic resistors.

    PubMed

    Chen, Hua-Pin

    2014-01-01

    This paper presents a versatile tunable current-mode universal biquadratic filter with four inputs and three outputs employing only two multioutput differential voltage current conveyors (MO-DVCCs), two grounded capacitors, and a well-known method for replacement of three grounded resistors by MOSFET-based electronic resistors. The proposed configuration exhibits high-output impedance which is important for easy cascading in the current-mode operations. The proposed circuit can be used as either a two-input three-output circuit or a three-input single-output circuit. In the operation of the two-input three-output circuit, the bandpass, highpass, and bandreject filtering responses can be realized simultaneously while the allpass filtering response can be easily obtained by connecting the appropriate output current directly without using additional stages. In the operation of the three-input single-output circuit, all five generic filtering functions can be easily realized by selecting different three-input current signals. The filter permits orthogonal controllability of the quality factor and resonance angular frequency, and no inverting-type input current signals are imposed. All the passive and active sensitivities are low. Postlayout simulations were carried out to verify the functionality of the design.

  4. Versatile Tunable Current-Mode Universal Biquadratic Filter Using MO-DVCCs and MOSFET-Based Electronic Resistors

    PubMed Central

    2014-01-01

    This paper presents a versatile tunable current-mode universal biquadratic filter with four inputs and three outputs employing only two multioutput differential voltage current conveyors (MO-DVCCs), two grounded capacitors, and a well-known method for replacement of three grounded resistors by MOSFET-based electronic resistors. The proposed configuration exhibits high-output impedance which is important for easy cascading in the current-mode operations. The proposed circuit can be used as either a two-input three-output circuit or a three-input single-output circuit. In the operation of the two-input three-output circuit, the bandpass, highpass, and bandreject filtering responses can be realized simultaneously while the allpass filtering response can be easily obtained by connecting the appropriate output current directly without using additional stages. In the operation of the three-input single-output circuit, all five generic filtering functions can be easily realized by selecting different three-input current signals. The filter permits orthogonal controllability of the quality factor and resonance angular frequency, and no inverting-type input current signals are imposed. All the passive and active sensitivities are low. Postlayout simulations were carried out to verify the functionality of the design. PMID:24982963

  5. A Predictor Analysis Framework for Surface Radiation Budget Reprocessing Using Design of Experiments

    NASA Astrophysics Data System (ADS)

    Quigley, Patricia Allison

    Earth's Radiation Budget (ERB) is an accounting of all incoming energy from the sun and outgoing energy reflected and radiated to space by Earth's surface and atmosphere. The National Aeronautics and Space Administration (NASA)/Global Energy and Water Cycle Experiment (GEWEX) Surface Radiation Budget (SRB) project produces and archives long-term datasets representative of this energy exchange system on a global scale. The data comprise the longwave and shortwave radiative components of the system and are algorithmically derived from satellite and atmospheric assimilation products, and acquired atmospheric data. They are stored as 3-hourly, daily, monthly/3-hourly, and monthly averages of 1° x 1° grid cells. Input parameters used by the algorithms are a key source of variability in the resulting output data sets. Sensitivity studies have been conducted to estimate the effects this variability has on the output data sets using linear techniques. This entails varying one input parameter at a time while keeping all others constant, or increasing all input parameters by equal random percentages, in effect changing input values for every cell for every three-hour period and for every day in each month. This equates to almost 11 million independent changes without ever taking into consideration the interactions or dependencies among the input parameters. A more comprehensive method is proposed here for evaluating the shortwave algorithm to identify both the input parameters and parameter interactions that most significantly affect the output data. This research utilized designed experiments that systematically and simultaneously varied all of the input parameters of the shortwave algorithm. A D-Optimal design of experiments (DOE) was chosen to accommodate the 14 types of atmospheric properties computed by the algorithm and to reduce the number of trials required by a full factorial study from millions to 128. A modified version of the algorithm was made available for testing such that global calculations of the algorithm were tuned to accept information for a single temporal and spatial point and for one month of averaged data. The points were drawn from four atmospherically distinct regions: the Amazon Rainforest, the Sahara Desert, the Indian Ocean, and Mt. Everest. The same design was used for all of the regions. Least squares multiple regression analysis of the results of the modified algorithm identified those parameters and parameter interactions that most significantly affected the output products. It was found that the cosine of the solar zenith angle had the strongest influence on the output data in all four regions. The interaction of cosine solar zenith angle and cloud fraction had the strongest influence on the output data in the Amazon, Sahara Desert, and Mt. Everest regions, while the interaction of cloud fraction and cloudy shortwave radiance most significantly affected output data in the Indian Ocean region. Second-order response models were built using the resulting regression coefficients. A Monte Carlo simulation of each model extended the probability distribution beyond the initial design trials to quantify variability in the modeled output data.
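
    The last two steps of the analysis chain, fitting a second-order response model by least squares and then Monte Carlo sampling the fitted model, can be illustrated schematically as below; the two factors, design points and stand-in response are invented for illustration and are not the SRB shortwave algorithm.

      # Fit a quadratic response surface (with interaction term) to designed-
      # experiment results, then Monte Carlo the fitted model.
      import numpy as np

      rng = np.random.default_rng(2)
      mu = rng.uniform(0.1, 1.0, 32)        # factor 1: cosine solar zenith angle
      cf = rng.uniform(0.0, 1.0, 32)        # factor 2: cloud fraction
      flux = 1000 * mu * (1 - 0.6 * cf * mu) + rng.normal(0, 5, 32)  # stand-in response

      # Design matrix for a second-order response model
      X = np.column_stack([np.ones_like(mu), mu, cf, mu * cf, mu**2, cf**2])
      beta, *_ = np.linalg.lstsq(X, flux, rcond=None)

      # Monte Carlo the fitted model over assumed input distributions
      mu_s = rng.uniform(0.1, 1.0, 100_000)
      cf_s = rng.beta(2, 2, 100_000)
      pred = np.column_stack([np.ones_like(mu_s), mu_s, cf_s, mu_s * cf_s,
                              mu_s**2, cf_s**2]) @ beta
      print(f"predicted flux: mean {pred.mean():.0f}, std {pred.std():.0f}")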

  6. MIMS for TRIM

    EPA Pesticide Factsheets

    MIMS supports complex computational studies that use multiple interrelated models / programs, such as the modules within TRIM. MIMS is used by TRIM to run various models in sequence, while sharing input and output files.

  7. Numerical considerations in the development and implementation of constitutive models

    NASA Technical Reports Server (NTRS)

    Haisler, W. E.; Imbrie, P. K.

    1985-01-01

    Several unified constitutive models were tested in uniaxial form by specifying input strain histories and comparing output stress histories. The purpose of the tests was to evaluate several time integration methods with regard to accuracy, stability, and computational economy. The sensitivity of the models to slight changes in input constants was also investigated. Results are presented for In100 at 1350 F and Hastelloy-X at 1800 F.

  8. SCI model structure determination program (OSR) user's guide. [optimal subset regression

    NASA Technical Reports Server (NTRS)

    1979-01-01

    The computer program, OSR (Optimal Subset Regression) which estimates models for rotorcraft body and rotor force and moment coefficients is described. The technique used is based on the subset regression algorithm. Given time histories of aerodynamic coefficients, aerodynamic variables, and control inputs, the program computes correlation between various time histories. The model structure determination is based on these correlations. Inputs and outputs of the program are given.

  9. Parameterized data-driven fuzzy model based optimal control of a semi-batch reactor.

    PubMed

    Kamesh, Reddi; Rani, K Yamuna

    2016-09-01

    A parameterized data-driven fuzzy (PDDF) model structure is proposed for semi-batch processes, and its application for optimal control is illustrated. The orthonormally parameterized input trajectories, initial states and process parameters are the inputs to the model, which predicts the output trajectories in terms of Fourier coefficients. Fuzzy rules are formulated based on the signs of a linear data-driven model, while the defuzzification step incorporates a linear regression model to shift the domain from input to output domain. The fuzzy model is employed to formulate an optimal control problem for single rate as well as multi-rate systems. Simulation study on a multivariable semi-batch reactor system reveals that the proposed PDDF modeling approach is capable of capturing the nonlinear and time-varying behavior inherent in the semi-batch system fairly accurately, and the results of operating trajectory optimization using the proposed model are found to be comparable to the results obtained using the exact first principles model, and are also found to be comparable to or better than parameterized data-driven artificial neural network model based optimization results. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.

  10. Using JEDI Data | Jobs and Economic Development Impact Models | NREL

    Science.gov Websites

    tool; Purchase the necessary aggregated multiplier and consumer commodity demand data from someone skilled in input-output modeling (IMPLAN or another modeling tool); or Purchase the necessary aggregated

  11. Viral-genetic tracing of the input-output organization of a central noradrenaline circuit.

    PubMed

    Schwarz, Lindsay A; Miyamichi, Kazunari; Gao, Xiaojing J; Beier, Kevin T; Weissbourd, Brandon; DeLoach, Katherine E; Ren, Jing; Ibanes, Sandy; Malenka, Robert C; Kremer, Eric J; Luo, Liqun

    2015-08-06

    Deciphering how neural circuits are anatomically organized with regard to input and output is instrumental in understanding how the brain processes information. For example, locus coeruleus noradrenaline (also known as norepinephrine) (LC-NE) neurons receive input from and send output to broad regions of the brain and spinal cord, and regulate diverse functions including arousal, attention, mood and sensory gating. However, it is unclear how LC-NE neurons divide up their brain-wide projection patterns and whether different LC-NE neurons receive differential input. Here we developed a set of viral-genetic tools to quantitatively analyse the input-output relationship of neural circuits, and applied these tools to dissect the LC-NE circuit in mice. Rabies-virus-based input mapping indicated that LC-NE neurons receive convergent synaptic input from many regions previously identified as sending axons to the locus coeruleus, as well as from newly identified presynaptic partners, including cerebellar Purkinje cells. The 'tracing the relationship between input and output' method (or TRIO method) enables trans-synaptic input tracing from specific subsets of neurons based on their projection and cell type. We found that LC-NE neurons projecting to diverse output regions receive mostly similar input. Projection-based viral labelling revealed that LC-NE neurons projecting to one output region also project to all brain regions we examined. Thus, the LC-NE circuit overall integrates information from, and broadcasts to, many brain regions, consistent with its primary role in regulating brain states. At the same time, we uncovered several levels of specificity in certain LC-NE sub-circuits. These tools for mapping output architecture and input-output relationship are applicable to other neuronal circuits and organisms. More broadly, our viral-genetic approaches provide an efficient intersectional means to target neuronal populations based on cell type and projection pattern.

  12. Life cycle biological efficiency of mice divergently selected for heat loss.

    PubMed

    Bhatnagar, A S; Nielsen, M K

    2014-08-01

    Divergent selection in mice for heat loss was conducted in 3 independent replicates creating a high maintenance, high heat loss (MH) and low maintenance, low heat loss (ML) line and unselected control (MC). Improvement in feed efficiency was observed in ML mice due to a reduced maintenance energy requirement but there was also a slight decline in reproductive performance, survivability, and lean content, particularly when compared to MC animals. The objective of this study was to model a life cycle scenario similar to a livestock production system and calculate total inputs and outputs to estimate overall biological efficiency of these lines and determine if reduced feed intake resulted in improved life cycle efficiency. Feed intake, reproductive performance, growth, and body composition were recorded on 21 mating pairs from each line × replicate combination, cohabitated at 7 wk of age and maintained for up to 1 yr unless culled. Proportion of animals at each parity was calculated from survival rates estimated from previous research when enforcing a maximum of 4, 8, or 12 allowed parities. This parity distribution was then combined with values from previous studies to calculate inputs and outputs of mating pairs and offspring produced in a single cycle at equilibrium. Offspring output was defined as kilograms of lean output of offspring at 49 d. Offspring input was defined as megacalories of energy intake for growing offspring from 21 to 49 d. Parent output was defined as kilograms of lean output of culled parents. Parent input was defined as megacalories of energy intake for mating pairs from weaning of one parity to weaning of the next. Offspring output was greatest in MC mice due to superior BW and numbers weaned, while output was lowest in ML mice due to smaller litter sizes and lean content. Parent output did not differ substantially between lines but was greatest in MH mice due to poorer survival rates resulting in more culled animals. Input was greatest in MH and lowest for ML mice for both offspring and parent pairs, consistent with previous results in these lines. Life cycle efficiency was similar in MC and ML mice, while MH mice were least efficient. Ultimately, superior output in MC mice slightly outweighed the lower inputs in ML animals resulting from decreased maintenance energy requirements. Therefore, selection to reduce maintenance energy requirements may be more useful in terminal crosses or in a selection index to reduce possible negative effects on output, especially reproductive performance.

  13. Development of model reference adaptive control theory for electric power plant control applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mabius, L.E.

    1982-09-15

    The scope of this effort includes the theoretical development of a multi-input, multi-output (MIMO) Model Reference Control (MRC) algorithm (i.e., a model-following control law), a Model Reference Adaptive Control (MRAC) algorithm, and the formulation of a nonlinear model of a typical electric power plant. Previous single-input, single-output MRAC algorithm designs have been generalized to MIMO MRAC designs using the MIMO MRC algorithm. This MRC algorithm, which has been developed using Command Generator Tracker methodologies, represents the steady state behavior (in the adaptive sense) of the MRAC algorithm. The MRC algorithm is a fundamental component in the MRAC design and stability analysis. An enhanced MRC algorithm, which has been developed for systems with more controls than regulated outputs, alleviates the MRC stability constraint of stable plant transmission zeroes. The nonlinear power plant model is based on the Cromby model with the addition of a governor valve management algorithm, turbine dynamics and turbine interactions with extraction flows. An application of the MRC algorithm to a linearization of this model demonstrates its applicability to power plant systems. In particular, the generated power changes at 7% per minute while throttle pressure and temperature, reheat temperature and drum level are held constant with a reasonable level of control. The enhanced algorithm significantly reduces control fluctuations without modifying the output response.

  14. Systems and methods for compensating for electrical converter nonlinearities

    DOEpatents

    Perisic, Milun; Ransom, Ray M.; Kajouke, Lateef A.

    2013-06-18

    Systems and methods are provided for delivering energy from an input interface to an output interface. An electrical system includes an input interface, an output interface, an energy conversion module coupled between the input interface and the output interface, and a control module. The control module determines a duty cycle control value for operating the energy conversion module to produce a desired voltage at the output interface. The control module determines an input power error at the input interface and adjusts the duty cycle control value in a manner that is influenced by the input power error, resulting in a compensated duty cycle control value. The control module operates switching elements of the energy conversion module to deliver energy to the output interface with a duty cycle that is influenced by the compensated duty cycle control value.
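
    The compensation idea, as described, can be caricatured in a few lines: the nominal duty cycle is nudged by a term driven by the input power error and clamped to a valid range. The gain, limits and proportional form below are assumptions for illustration, not the patented control law.

      # Toy duty-cycle compensation driven by input power error (assumed gains).
      def compensated_duty(duty_nominal, p_in_measured, p_in_expected,
                           k_p=0.02, d_min=0.05, d_max=0.95):
          power_error = p_in_expected - p_in_measured        # input power error
          duty = duty_nominal + k_p * power_error            # proportional correction
          return min(max(duty, d_min), d_max)                # clamp to valid duty range

      print(compensated_duty(0.50, p_in_measured=980.0, p_in_expected=1000.0))  # -> approximately 0.90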

  15. Dual Brushless Resolver Rate Sensor

    NASA Technical Reports Server (NTRS)

    Howard, David E. (Inventor)

    1997-01-01

    A resolver rate sensor is disclosed in which dual brushless resolvers are mechanically coupled to the same output shaft. Diverse inputs are provided to each resolver by providing the first resolver with a DC input and the second resolver with an AC sinusoidal input. A trigonometric identity in which the sum of the squares of the sine and cosine components equals one is used to advantage in providing a sensor of increased accuracy. The first resolver may have a fixed or variable DC input to permit dynamic adjustment of resolver sensitivity, thus permitting a wide range of coverage. In one embodiment of the invention the outputs of the first resolver are directly inputted into two separate multipliers and the outputs of the second resolver are inputted into the two separate multipliers, after being demodulated in a pair of demodulator circuits. The multiplied signals are then added in an adder circuit to provide a direction-sensitive output. In another embodiment the outputs from the first resolver are modulated in separate modulator circuits and the outputs from the modulator circuits are used to excite the second resolver. The outputs from the second resolver are demodulated in separate demodulator circuits and added in an adder circuit to provide a direction-sensitive rate output.
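
    As a numerical illustration of the sin^2 + cos^2 = 1 identity exploited here, the sketch below combines idealised channels from a DC-excited resolver (rate-modulated components) and a demodulated AC-excited resolver (angle components) to recover a shaft rate that is independent of angle; the signal scaling is an assumption and the actual patented signal chain may differ.

      # Recover shaft rate from idealised dual-resolver channels via sin^2 + cos^2 = 1.
      import numpy as np

      theta = np.linspace(0, 4 * np.pi, 1000)       # shaft angle over time
      rate = 2.5                                    # true shaft rate (rad per unit time)

      dc_a = rate * np.cos(theta)                   # DC-excited resolver channel A
      dc_b = -rate * np.sin(theta)                  # DC-excited resolver channel B
      ac_sin, ac_cos = np.sin(theta), np.cos(theta) # demodulated AC resolver channels

      recovered = dc_a * ac_cos - dc_b * ac_sin     # = rate * (cos^2 + sin^2) = rate
      print(recovered.min(), recovered.max())       # both ~2.5, independent of angle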

  16. Decision & Management Tools for DNAPL Sites: Optimization of Chlorinated Solvent Source and Plume Remediation Considering Uncertainty

    DTIC Science & Technology

    2010-09-01

    differentiated between source codes and input/output files. The text makes references to a REMChlor-GoldSim model. The text also refers to the REMChlor...To the extent possible, the instructions should be accurate and precise. The documentation should differentiate between describing what is actually...Windows XP operating system Model Input Parameters. The input parameters were identical to those utilized and reported by CDM (See Table I from

  17. China Macroeconomic Model. Specification Report, User’s Guide, Scenario Report, Trade Data Sources Report.

    DTIC Science & Technology

    1984-11-01

    model, all linked to input-output flows from the relevant sector to all of the eleven sectors. They form an important analytical tool for gauging the...with very little in the way of intermediate inputs. However, most of coal is used up in intermediate inputs, as fuel for steel and other industrial...machine tools be developed even more rapidly as a domestic resource, rather than being imported? What about agricultural chemicals? China is unlikely

  18. Long period pseudo random number sequence generator

    NASA Technical Reports Server (NTRS)

    Wang, Charles C. (Inventor)

    1989-01-01

    A circuit for generating a sequence of pseudo random numbers, (A_K). There is an exponentiator in GF(2^m) for the normal basis representation of elements in a finite field GF(2^m), each represented by m binary digits, and having two inputs and an output from which the sequence (A_K) of pseudo random numbers is taken. One of the two inputs is connected to receive the outputs (E_K) of a maximal length shift register of n stages. There is a switch having a pair of inputs and an output. The switch output is connected to the other of the two inputs of the exponentiator. One of the switch inputs is connected for initially receiving a primitive element (A_0) in GF(2^m). Finally, there is a delay circuit having an input and an output. The delay circuit output is connected to the other of the switch inputs and the delay circuit input is connected to the output of the exponentiator. Whereby after the exponentiator initially receives the primitive element (A_0) in GF(2^m) through the switch, the switch can be switched to cause the exponentiator to receive as its input a delayed output A_(K-1) from the exponentiator, thereby generating (A_K) continuously at the output of the exponentiator. The exponentiator in GF(2^m) is novel and comprises a cyclic-shift circuit, a Massey-Omura multiplier, and a control logic circuit, all operably connected together to perform the function U_i = A^(2^i) (for n_i = 1) or 1 (for n_i = 0).

  19. Hepatic function imaging using dynamic Gd-EOB-DTPA enhanced MRI and pharmacokinetic modeling.

    PubMed

    Ning, Jia; Yang, Zhiying; Xie, Sheng; Sun, Yongliang; Yuan, Chun; Chen, Huijun

    2017-10-01

    To determine whether pharmacokinetic modeling parameters with different output assumptions of dynamic contrast-enhanced MRI (DCE-MRI) using Gd-EOB-DTPA correlate with serum-based liver function tests, and compare the goodness of fit of the different output assumptions. A 6-min DCE-MRI protocol was performed in 38 patients. Four dual-input two-compartment models with different output assumptions and a published one-compartment model were used to calculate hepatic function parameters. The Akaike information criterion fitting error was used to evaluate the goodness of fit. Imaging-based hepatic function parameters were compared with blood chemistry using correlation with multiple comparison correction. The dual-input two-compartment model assuming venous flow equals arterial flow plus portal venous flow and no bile duct output better described the liver tissue enhancement with low fitting error and high correlation with blood chemistry. The relative uptake rate Kir derived from this model was found to be significantly correlated with direct bilirubin (r = -0.52, P = 0.015), prealbumin concentration (r = 0.58, P = 0.015), and prothrombin time (r = -0.51, P = 0.026). It is feasible to evaluate hepatic function by proper output assumptions. The relative uptake rate has the potential to serve as a biomarker of function. Magn Reson Med 78:1488-1495, 2017. © 2016 International Society for Magnetic Resonance in Medicine.

  20. Estimating Prediction Uncertainty from Geographical Information System Raster Processing: A User's Manual for the Raster Error Propagation Tool (REPTool)

    USGS Publications Warehouse

    Gurdak, Jason J.; Qi, Sharon L.; Geisler, Michael L.

    2009-01-01

    The U.S. Geological Survey Raster Error Propagation Tool (REPTool) is a custom tool for use with the Environmental System Research Institute (ESRI) ArcGIS Desktop application to estimate error propagation and prediction uncertainty in raster processing operations and geospatial modeling. REPTool is designed to introduce concepts of error and uncertainty in geospatial data and modeling and provide users of ArcGIS Desktop a geoprocessing tool and methodology to consider how error affects geospatial model output. Similar to other geoprocessing tools available in ArcGIS Desktop, REPTool can be run from a dialog window, from the ArcMap command line, or from a Python script. REPTool consists of public-domain, Python-based packages that implement Latin Hypercube Sampling within a probabilistic framework to track error propagation in geospatial models and quantitatively estimate the uncertainty of the model output. Users may specify error for each input raster or model coefficient represented in the geospatial model. The error for the input rasters may be specified as either spatially invariant or spatially variable across the spatial domain. Users may specify model output as a distribution of uncertainty for each raster cell. REPTool uses the Relative Variance Contribution method to quantify the relative error contribution from the two primary components in the geospatial model - errors in the model input data and coefficients of the model variables. REPTool is appropriate for many types of geospatial processing operations, modeling applications, and related research questions, including applications that consider spatially invariant or spatially variable error in geospatial data.
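
    The Latin Hypercube error-propagation idea behind REPTool can be illustrated generically as follows; the toy raster, error magnitudes and model are assumptions, and the snippet does not use REPTool or ArcGIS.

      # Latin Hypercube sampling of a spatially invariant raster error and an
      # uncertain model coefficient, propagated through a simple raster model.
      import numpy as np
      from scipy.stats import qmc, norm

      raster = np.array([[10.0, 12.0], [14.0, 16.0]])   # toy input raster
      n = 500
      lhs = qmc.LatinHypercube(d=2, seed=0).random(n)   # 2 uncertain inputs

      raster_err = norm(0.0, 1.0).ppf(lhs[:, 0])        # spatially invariant raster error
      coef = norm(2.0, 0.1).ppf(lhs[:, 1])              # uncertain model coefficient

      outputs = np.stack([coef[k] * (raster + raster_err[k]) for k in range(n)])
      print(outputs.std(axis=0))                        # per-cell output standard deviation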

  1. Transported Geothermal Energy Technoeconomic Screening Tool - Calculation Engine

    DOE Data Explorer

    Liu, Xiaobing

    2016-09-21

    This calculation engine estimates the technoeconomic feasibility of transported geothermal energy projects. The TGE screening tool (geotool.exe) takes input from an input file (input.txt) and lists results in an output file (output.txt). Both the input and output files reside in the same folder as geotool.exe. To use the tool, the input file containing adequate information for the case should be prepared in the format explained below and placed in the same folder as geotool.exe. Then geotool.exe can be executed, which will generate an output.txt file in the same folder containing all key calculation results. The format and content of the output file are explained below as well.

  2. Time-domain model of gyroklystrons with diffraction power input and output

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ginzburg, N. S., E-mail: ginzburg@appl.sci-nnov.ru; Rozental, R. M.; Sergeev, A. S.

    A time-domain theory of gyroklystrons with diffraction input and output has been developed. The theory is based on the description of the wave excitation and propagation by a parabolic equation. The results of the simulations are in good agreement with the experimental studies of two-cavity gyroklystrons operating at the first and second cyclotron harmonics. Along with the basic characteristics of the amplification regimes, such as the gain and efficiency, the developed method makes it possible to define the conditions of spurious self-excitation and frequency-locking by an external signal.

  3. ACOSS Three (Active Control of Space Structures). Phase I.

    DTIC Science & Technology

    1980-05-01

    their assorted pitfalls, programs such as NASTRAN, SPAR, ASTRO, etc., are nevertheless the primary tools for generating dynamical models of...proofs and additional details, see Ref [*]. Consider the system described in state-space form by: Dynamics: X' = FX + Gu; Sensors: y = HX; closed loop: X' = (F + GCH)X (1)...input u and output y: x' = Fx + Gu (6), y = Hx + Du (7). The input-output transfer function is given by y = (H(sI - F)^(-1)G + D)u (8), or y(s)/u(s) = N(s)/A(s)...
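
    For readers who want to check the quoted input-output relation numerically, the sketch below evaluates H(sI - F)^(-1)G + D for an arbitrary example system; the matrices are made up and unrelated to the ACOSS structural models.

      # Evaluate the state-space transfer function H (sI - F)^(-1) G + D.
      import numpy as np

      F = np.array([[0.0, 1.0], [-4.0, -0.4]])
      G = np.array([[0.0], [1.0]])
      H = np.array([[1.0, 0.0]])
      D = np.array([[0.0]])

      def transfer(s):
          return H @ np.linalg.inv(s * np.eye(2) - F) @ G + D

      print(transfer(1j * 2.0))   # frequency response at s = j*2 rad/s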

  4. Automatic insulation resistance testing apparatus

    DOEpatents

    Wyant, Francis J.; Nowlen, Steven P.; Luker, Spencer M.

    2005-06-14

    An apparatus and method for automatic measurement of insulation resistances of a multi-conductor cable. In one embodiment of the invention, the apparatus comprises a power supply source, an input measuring means, an output measuring means, a plurality of input relay controlled contacts, a plurality of output relay controlled contacts, a relay controller and a computer. In another embodiment of the invention the apparatus comprises a power supply source, an input measuring means, an output measuring means, an input switching unit, an output switching unit and a control unit/data logger. Embodiments of the apparatus of the invention may also incorporate cable fire testing means. The apparatus and methods of the present invention use either voltage or current for input and output measured variables.

  5. Quantum theory of multiple-input-multiple-output Markovian feedback with diffusive measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chia, A.; Wiseman, H. M.

    2011-07-15

    Feedback control engineers have been interested in multiple-input-multiple-output (MIMO) extensions of single-input-single-output (SISO) results of various kinds due to its rich mathematical structure and practical applications. An outstanding problem in quantum feedback control is the extension of the SISO theory of Markovian feedback by Wiseman and Milburn [Phys. Rev. Lett. 70, 548 (1993)] to multiple inputs and multiple outputs. Here we generalize the SISO homodyne-mediated feedback theory to allow for multiple inputs, multiple outputs, and arbitrary diffusive quantum measurements. We thus obtain a MIMO framework which resembles the SISO theory and whose additional mathematical structure is highlighted by the extensive use of vector-operator algebra.

  6. Analytical modeling of operating characteristics of premixing-prevaporizing fuel-air mixing passages. Volume 2: User's manual

    NASA Technical Reports Server (NTRS)

    Anderson, O. L.; Chiappetta, L. M.; Edwards, D. E.; Mcvey, J. B.

    1982-01-01

    A user's manual describing the operation of three computer codes (ADD code, PTRAK code, and VAPDIF code) is presented. The general features of the computer codes, the input/output formats, run streams, and sample input cases are described.

  7. Utilizing Physical Input-Output Model to Inform Nitrogen related Ecosystem Services

    EPA Science Inventory

    Here we describe the development of nitrogen PIOTs for the midwestern US state of Illinois with large inputs of nitrogen from agriculture and industry. The PIOTs are used to analyze the relationship between regional economic activities and ecosystem services in order to identify...

  8. Contingency Contractor Optimization Phase 3 Sustainment Database Design Document - Contingency Contractor Optimization Tool - Prototype

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Frazier, Christopher Rawls; Durfee, Justin David; Bandlow, Alisa

    The Contingency Contractor Optimization Tool – Prototype (CCOT-P) database is used to store input and output data for the linear program model described in [1]. The database supports queries to retrieve these data as well as updating and inserting new input data.

  9. A Statistical Analysis of the Output Signals of an Acousto-Optic Spectrum Analyzer for CW (Continuous-Wave) Signals

    DTIC Science & Technology

    1988-10-01

    A statistical analysis of the output signals of an acousto-optic spectrum analyzer (AOSA) is performed for the case when the input signal is a... processing, Electronic warfare, Radar countermeasures, Acousto-optic, Spectrum analyzer, Statistical analysis, Detection, Estimation, Canada, Modelling.

  10. The Effectiveness of Structured Input and Structured Output on the Acquisition of Japanese Comparative Sentences

    ERIC Educational Resources Information Center

    Yamashita, Taichi; Iizuka, Takehiro

    2017-01-01

    Discussion of the roles of input and output has been attracting a number of researchers in second language acquisition (e.g., DeKeyser, 2007; Doughty, 1991; Krashen, 1982; Long, 1983; Norris & Ortega, 2000; Swain, 2000), and VanPatten (2004) advocated that both structured input and structured output allow learners to process input properly.…

  11. Experimental feedback linearisation of a vibrating system with a non-smooth nonlinearity

    NASA Astrophysics Data System (ADS)

    Lisitano, D.; Jiffri, S.; Bonisoli, E.; Mottershead, J. E.

    2018-03-01

    Input-output partial feedback linearisation is demonstrated experimentally for the first time on a system with non-smooth nonlinearity, a laboratory three degrees of freedom lumped mass system with a piecewise-linear spring. The output degree of freedom is located away from the nonlinearity so that the partial feedback linearisation possesses nonlinear internal dynamics. The dynamic behaviour of the linearised part is specified by eigenvalue assignment and an investigation of the zero dynamics is carried out to confirm stability of the overall system. A tuned numerical model is developed for use in the controller and to produce numerical outputs for comparison with experimental closed-loop results. A new limitation of the feedback linearisation method is discovered in the case of lumped mass systems - that the input and output must share the same degrees of freedom.
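
    A toy single-degree-of-freedom analogue of the idea, with an assumed piecewise-linear spring and assigned closed-loop dynamics, is sketched below; the experimental rig in the paper has three degrees of freedom and only partial linearisation, so this is illustrative only.

      # Toy feedback linearisation of a single mass with a piecewise-linear spring:
      # the control cancels the plant forces and imposes chosen linear dynamics.
      m, c, k = 1.0, 0.4, 10.0
      k_nl, gap = 50.0, 0.01                      # extra stiffness engaged beyond the gap

      def f_nl(x):                                # piecewise-linear spring force
          return k_nl * (x - gap) if x > gap else 0.0

      wn, zeta = 8.0, 0.3                         # assigned closed-loop natural freq. / damping
      x, v, dt = 0.05, 0.0, 1e-3
      for _ in range(5000):
          u = f_nl(x) + k * x + c * v + m * (-2 * zeta * wn * v - wn**2 * x)
          a = (u - c * v - k * x - f_nl(x)) / m   # plant acceleration under the control u
          v += a * dt
          x += v * dt
      print(f"final displacement: {x:.6f}")       # decays as the assigned linear system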

  12. Reproducibility and Transparency in Ocean-Climate Modeling

    NASA Astrophysics Data System (ADS)

    Hannah, N.; Adcroft, A.; Hallberg, R.; Griffies, S. M.

    2015-12-01

    Reproducibility is a cornerstone of the scientific method. Within geophysical modeling and simulation, achieving reproducibility can be difficult, especially given the complexity of numerical codes, enormous and disparate data sets, and the variety of supercomputing technology. We have made progress on this problem in the context of a large project - the development of new ocean and sea ice models, MOM6 and SIS2. Here we present useful techniques and experience. We use version control not only for the code but for the entire experiment working directory, including configuration (run-time parameters, component versions), input data, and checksums on experiment output. This allows us to document when the solutions to experiments change, whether due to code updates or changes in input data. To avoid distributing large input datasets, we provide the tools for generating these from the sources rather than providing raw input data. Bugs can be a source of non-determinism and hence irreproducibility, e.g., reading from or branching on uninitialized memory. To expose these we routinely run system tests using a memory debugger, multiple compilers, and different machines. Additional confidence in the code comes from specialised tests, for example automated dimensional analysis and domain transformations. This has entailed adopting a code style in which we deliberately restrict what a compiler can do when re-arranging mathematical expressions. In the spirit of open science, all development is in the public domain. This leads to a positive feedback, where increased transparency and reproducibility make using the model easier for external collaborators, who in turn provide valuable contributions. To facilitate users installing and running the model, we provide (version-controlled) digital notebooks that illustrate and record analysis of output. This has the dual role of providing a gross, platform-independent testing capability and a means to document model output and analysis.
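
    The checksum-on-output practice described above can be sketched in a few lines of standard-library Python; this is an illustrative sketch, not the MOM6/SIS2 tooling, and the directory and file names are placeholders.

        import hashlib
        import json
        from pathlib import Path

        def checksum(path, algo="sha256", chunk=1 << 20):
            """Return the hex digest of a (possibly large) model-output file."""
            h = hashlib.new(algo)
            with open(path, "rb") as f:
                for block in iter(lambda: f.read(chunk), b""):
                    h.update(block)
            return h.hexdigest()

        # Record a checksum for every output file of an experiment so the manifest
        # can be committed to version control alongside the run-time configuration.
        out_dir = Path("experiment/output")        # placeholder path
        manifest = {p.name: checksum(p) for p in sorted(out_dir.glob("*.nc"))}
        Path("experiment/checksums.json").write_text(json.dumps(manifest, indent=2))

    Re-running such a script after a code or input-data update and diffing the committed manifest is then enough to detect that the solutions have changed.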

  13. Preliminary investigation of the effects of eruption source parameters on volcanic ash transport and dispersion modeling using HYSPLIT

    NASA Astrophysics Data System (ADS)

    Stunder, B.

    2009-12-01

    Atmospheric transport and dispersion (ATD) models are used in real-time at Volcanic Ash Advisory Centers to predict the location of airborne volcanic ash at a future time because of the hazardous nature of volcanic ash. Transport and dispersion models usually do not include eruption column physics, but start with an idealized eruption column. Eruption source parameters (ESP) input to the models typically include column top, eruption start time and duration, volcano latitude and longitude, ash particle size distribution, and total mass emission. An example based on the Okmok, Alaska, eruption of July 12-14, 2008, was used to qualitatively estimate the effect of various model inputs on transport and dispersion simulations using the NOAA HYSPLIT model. Variations included changing the ash column top and bottom, eruption start time and duration, particle size specifications, simulations with and without gravitational settling, and the effect of different meteorological model data. Graphical ATD model output of ash concentration from the various runs was qualitatively compared. Some parameters such as eruption duration and ash column depth had a large effect, while simulations using only small particles or changing the particle shape factor had much less of an effect. Some other variations such as using only large particles had a small effect for the first day or so after the eruption, then a larger effect on subsequent days. Example probabilistic output will be shown for an ensemble of dispersion model runs with various model inputs. Model output such as this may be useful as a means to account for some of the uncertainties in the model input. To improve volcanic ash ATD models, a reference database for volcanic eruptions is needed, covering many volcanoes. The database should include three major components: (1) eruption source, (2) ash observations, and (3) analyses meteorology. In addition, information on aggregation or other ash particle transformation processes would be useful.

  14. CIRMIS Data system. Volume 2. Program listings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Friedrichs, D.R.

    1980-01-01

    The Assessment of Effectiveness of Geologic Isolation Systems (AEGIS) Program is developing and applying the methodology for assessing the far-field, long-term post-closure safety of deep geologic nuclear waste repositories. AEGIS is being performed by Pacific Northwest Laboratory (PNL) under contract with the Office of Nuclear Waste Isolation (ONWI) for the Department of Energy (DOE). One task within AEGIS is the development of methodology for analysis of the consequences (water pathway) from loss of repository containment as defined by various release scenarios. Analysis of the long-term, far-field consequences of release scenarios requires the application of numerical codes which simulate the hydrologic systems, model the transport of released radionuclides through the hydrologic systems to the biosphere, and, where applicable, assess the radiological dose to humans. The various input parameters required in the analysis are compiled in data systems. The data are organized and prepared by various input subroutines for utilization by the hydraulic and transport codes. The hydrologic models simulate the groundwater flow systems and provide water flow directions, rates, and velocities as inputs to the transport models. Outputs from the transport models are basically graphs of radionuclide concentration in the groundwater plotted against time. After dilution in the receiving surface-water body (e.g., lake, river, bay), these data are the input source terms for the dose models, if dose assessments are required. The dose models calculate radiation dose to individuals and populations. CIRMIS (Comprehensive Information Retrieval and Model Input Sequence) Data System is a storage and retrieval system for model input and output data, including graphical interpretation and display. This is the second of four volumes of the description of the CIRMIS Data System.

  15. Computer program for design analysis of radial-inflow turbines

    NASA Technical Reports Server (NTRS)

    Glassman, A. J.

    1976-01-01

    A computer program written in FORTRAN that may be used for the design analysis of radial-inflow turbines was documented. The following information is included: loss model (estimation of losses), the analysis equations, a description of the input and output data, the FORTRAN program listing and list of variables, and sample cases. The input design requirements include the power, mass flow rate, inlet temperature and pressure, and rotational speed. The program output data includes various diameters, efficiencies, temperatures, pressures, velocities, and flow angles for the appropriate calculation stations. The design variables include the stator-exit angle, rotor radius ratios, and rotor-exit tangential velocity distribution. The losses are determined by an internal loss model.

  16. Validation of a new modal performance measure for flexible controllers design

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simo, J.B.; Tahan, S.A.; Kamwa, I.

    1996-05-01

    A new modal performance measure for power system stabilizer (PSS) optimization is proposed in this paper. The new method is based on modifying the square envelopes of oscillating modes, in order to take into account their damping ratios while minimizing the performance index. This criterion is applied to the optimal design of flexible controllers, on a multi-input-multi-output (MIMO) reduced-order model of a prototype power system. The multivariable model includes four generators, each having one input and one output. Linear time-response simulation and transient stability analysis with a nonlinear package confirm the superiority of the proposed criterion and illustrate its effectiveness in decentralized control.

  17. Applying Input-Output Model to Estimate Broader Economic Impact of Transportation Infrastructure Investment

    NASA Astrophysics Data System (ADS)

    Anas, Ridwan; Tamin, Ofyar; Wibowo, Sony S.

    2016-09-01

    The purpose of this study is to identify the relationships between infrastructure improvement and economic growth in the surrounding region. Traditionally, microeconomic and macroeconomic analyses are the most commonly used tools for analyzing the linkage between transportation sectors and economic growth, but they offer few clues to the mechanisms linking transport improvements to broader economic impacts. This study estimates the broader economic benefits of a new transportation infrastructure investment, the Cipularang tollway in West Java province, Indonesia, for the connected region (Bandung district) using an input-output model. The results show a 17% decrease in freight transportation costs and a 1.2% increase in Bandung District's GDP after the opening of the Cipularang tollway.
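
    As an illustration of the mechanics behind such an assessment (not the data or sector detail of this study), a demand-driven Leontief input-output model propagates a change in final demand through inter-sector linkages via x = (I - A)^-1 y. A minimal sketch with invented coefficients:

        import numpy as np

        # Hypothetical 3-sector technical-coefficient matrix A; the values are
        # illustrative only and are not taken from the study.
        A = np.array([[0.10, 0.20, 0.05],
                      [0.15, 0.25, 0.10],
                      [0.05, 0.10, 0.15]])

        L = np.linalg.inv(np.eye(3) - A)      # Leontief inverse: direct + indirect requirements

        y_base = np.array([100.0, 200.0, 50.0])       # baseline final demand by sector
        y_new = y_base + np.array([0.0, 5.0, 2.0])    # assumed demand stimulus after the investment

        x_base = L @ y_base                   # baseline total output
        x_new = L @ y_new                     # total output after the demand change

        print("Change in sector output:", np.round(x_new - x_base, 2))
        print("Change in total output :", round(float((x_new - x_base).sum()), 2))

    The difference x_new - x_base is the broader (direct plus indirect) output effect that a cost comparison alone would miss.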

  18. Factor analysis for delineation of organ structures, creation of in- and output functions, and standardization of multicenter kinetic modeling

    NASA Astrophysics Data System (ADS)

    Schiepers, Christiaan; Hoh, Carl K.; Dahlbom, Magnus; Wu, Hsiao-Ming; Phelps, Michael E.

    1999-05-01

    PET imaging can quantify metabolic processes in vivo; this requires the measurement of an input function, which is invasive and labor intensive. A non-invasive, semi-automated, image-based method of input function generation would be efficient, patient-friendly, and allow quantitative PET to be applied routinely. A fully automated procedure would be ideal for studies across institutions. Factor analysis (FA) was applied as a processing tool for definition of temporally changing structures in the field of view. FA has been proposed earlier, but the perceived mathematical difficulty has prevented widespread use. FA was utilized to delineate structures and extract blood and tissue time-activity curves (TACs). These TACs were used as input and output functions for tracer kinetic modeling, the results of which were compared with those from an input function obtained with serial blood sampling. Dynamic image data of myocardial perfusion studies with N-13 ammonia, O-15 water, or Rb-82, cancer studies with F-18 FDG, and skeletal studies with F-18 fluoride were evaluated. Correlation coefficients of kinetic parameters obtained with factor and plasma input functions were high. Linear regression usually furnished a slope near unity. Processing time was 7 min/patient on an UltraSPARC. Conclusion: FA can non-invasively generate input functions from image data, eliminating the need for blood sampling. Output (tissue) functions can be simultaneously generated. The method is simple, requires no sophisticated operator interaction, and has little inter-operator variability. FA is well suited for studies across institutions and standardized evaluations.

  19. EVALUATING THE USE OF OUTPUTS FROM COMPREHENSIVE METEOROLOGICAL MODELS IN AIR QUALITY MODELING APPLICATIONS

    EPA Science Inventory

    Currently used dispersion models, such as the AMS/EPA Regulatory Model (AERMOD), process routinely available meteorological observations to construct model inputs. Thus, model estimates of concentrations depend on the availability and quality of Meteorological observations, as we...

  20. Noise facilitates transcriptional control under dynamic inputs.

    PubMed

    Kellogg, Ryan A; Tay, Savaş

    2015-01-29

    Cells must respond sensitively to time-varying inputs in complex signaling environments. To understand how signaling networks process dynamic inputs into gene expression outputs and the role of noise in cellular information processing, we studied the immune pathway NF-κB under periodic cytokine inputs using microfluidic single-cell measurements and stochastic modeling. We find that NF-κB dynamics in fibroblasts synchronize with oscillating TNF signal and become entrained, leading to significantly increased NF-κB oscillation amplitude and mRNA output compared to non-entrained response. Simulations show that intrinsic biochemical noise in individual cells improves NF-κB oscillation and entrainment, whereas cell-to-cell variability in NF-κB natural frequency creates population robustness, together enabling entrainment over a wider range of dynamic inputs. This wide range is confirmed by experiments where entrained cells were measured under all input periods. These results indicate that synergy between oscillation and noise allows cells to achieve efficient gene expression in dynamically changing signaling environments. Copyright © 2015 Elsevier Inc. All rights reserved.

  1. Regional simulation of soil nitrogen dynamics and balance in Swiss cropping systems

    NASA Astrophysics Data System (ADS)

    Lee, Juhwan; Necpalova, Magdalena; Six, Johan

    2017-04-01

    We evaluated the regional-scale potential of various crop and soil management practices to reduce the dependency of crop N demand on external N inputs and N losses to the environment. The estimates of soil N balance were simulated and compared under alternative and conventional crop production across all Swiss cropland. Alternative practices were all combinations of organic fertilization, reduced tillage, and winter cover cropping. Using the DayCent model, we simulated changes in crop N yields as well as the contribution of inputs and outputs to soil N balance under alternative practices, complemented with corresponding measurements from available long-term field experiments and site-level simulations. In addition, the effects of reducing (between 0% and 80% of recommended application rates) or increasing chemical fertilizer input rates (between 120% and 300% of recommended application rates) on system-level N dynamics were also simulated. Modeled yields at recommended N rates were only 37-87% of the maximum yield potential across common Swiss crops, and crop productivity was sensitive to the level of external N inputs, except for grass-clover mixture, soybean, and peas. Overall, differences in soil N inputs and outputs decreased or increased proportionally as the amount of N input changed from the recommended rate; as a result, there was no additional difference in soil N balance in response to N application rates. Nitrate leaching accounted for 40-81% of total N output differences, while up to 47% of total N output occurred through harvest and straw removal. Regardless of crop, yield potential became insensitive to high N rates. Differences in N2O and N2 emissions increased slightly with increasing N inputs, with each gas responsible for only about 1% of changes in total N output. Overall, there was a positive soil N balance under alternative practices. In particular, considerable improvement in soil N balance was expected when slowly decomposing organic fertilizer was used in combination with cover cropping and/or reduced tillage. However, the increase in soil N balance was due to decreases in harvested yield and nitrate leaching under these organic-based cropping practices. Instead, the use of rapidly decomposing organic matter with cover cropping could be considered to avoid any yield penalty while decreasing nitrate leaching, hence reducing total N output. To effectively reduce N losses from soils, approaches that combine multiple alternative options should be considered at the regional scale.

  2. Reconfigurable data path processor

    NASA Technical Reports Server (NTRS)

    Donohoe, Gregory (Inventor)

    2005-01-01

    A reconfigurable data path processor comprises a plurality of independent processing elements, each advantageously comprising an identical architecture. Each processing element comprises a plurality of data processing means for generating a potential output. Each processor is also capable of passing an input through as a potential output with little or no processing. Each processing element comprises a conditional multiplexer having a first conditional multiplexer input, a second conditional multiplexer input, and a conditional multiplexer output. A first potential output value is transmitted to the first conditional multiplexer input, and a second potential output value is transmitted to the second conditional multiplexer input. The conditional multiplexer couples either the first conditional multiplexer input or the second conditional multiplexer input to the conditional multiplexer output, according to an output control command. The output control command is generated by processing a set of arithmetic status bits through a logical mask. The conditional multiplexer output is coupled to a first processing element output. A first set of arithmetic status bits is generated according to the processing of the first processable value, and a second set of arithmetic status bits may be generated from a second processing operation. The selection of the arithmetic status bits is performed by an arithmetic-status-bit multiplexer, which selects the desired set of arithmetic status bits from among the first and second sets. The conditional multiplexer evaluates the selected arithmetic status bits according to a logical mask defining an algorithm for their evaluation.

  3. Feedback linearizing control of a MIMO power system

    NASA Astrophysics Data System (ADS)

    Ilyes, Laszlo

    Prior research has demonstrated that either the mechanical or electrical subsystem of a synchronous electric generator may be controlled using single-input single-output (SISO) nonlinear feedback linearization. This research suggests a new approach which applies nonlinear feedback linearization to a multi-input multi-output (MIMO) model of the synchronous electric generator connected to an infinite bus load model. In this way, the electrical and mechanical subsystems may be linearized and simultaneously decoupled through the introduction of a pair of auxiliary inputs. This allows well known, linear, SISO control methods to be effectively applied to the resulting systems. The derivation of the feedback linearizing control law is presented in detail, including a discussion on the use of symbolic math processing as a development tool. The linearizing and decoupling properties of the control law are validated through simulation. And finally, the robustness of the control law is demonstrated.

  4. Design of a data-driven predictive controller for start-up process of AMT vehicles.

    PubMed

    Lu, Xiaohui; Chen, Hong; Wang, Ping; Gao, Bingzhao

    2011-12-01

    In this paper, a data-driven predictive controller is designed for the start-up process of vehicles with automated manual transmissions (AMTs). It is obtained directly from the input-output data of a driveline simulation model constructed by the commercial software AMESim. In order to obtain offset-free control for the reference input, the predictor equation is gained with incremental inputs and outputs. Because of the physical characteristics, the input and output constraints are considered explicitly in the problem formulation. The contradictory requirements of less friction losses and less driveline shock are included in the objective function. The designed controller is tested under nominal conditions and changed conditions. The simulation results show that, during the start-up process, the AMT clutch with the proposed controller works very well, and the process meets the control objectives: fast clutch lockup time, small friction losses, and the preservation of driver comfort, i.e., smooth acceleration of the vehicle. At the same time, the closed-loop system has the ability to reject uncertainties, such as the vehicle mass and road grade.
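
    The incremental (difference) formulation credited above with offset-free tracking can be illustrated with a toy example. This is a sketch only: the input-output records are invented rather than taken from the AMESim driveline model, and a one-step least-squares fit stands in for the paper's predictor equation.

        import numpy as np

        # Invented raw input-output records (u = clutch actuator command,
        # y = clutch slip speed); these are placeholders, not AMESim data.
        u = np.array([0.00, 0.05, 0.12, 0.20, 0.26, 0.30])
        y = np.array([50.0, 49.2, 47.1, 43.8, 40.1, 36.9])

        # Working with increments du(k) = u(k) - u(k-1), dy(k) = y(k) - y(k-1)
        # removes any constant model-plant offset from the prediction equations.
        du = np.diff(u)
        dy = np.diff(y)

        # Simplest possible incremental predictor: dy(k) ~ theta * du(k).
        theta, *_ = np.linalg.lstsq(du.reshape(-1, 1), dy, rcond=None)
        print("identified incremental gain:", round(float(theta[0]), 3))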

  5. Heat suppression of the fiber coating on a cladding light stripper in high-power fiber laser.

    PubMed

    Yan, Ming-Jian; Wang, Zheng; Meng, Ling-Qiang; Yin, Lu; Han, Zhi-Gang; Shen, Hua; Wang, Hai-Lin; Zhu, Ri-Hong

    2018-01-20

    We present a theoretical model for the thermal effect of the fiber coating on a high-power cladding light stripper, which is fabricated by chemical etching. For the input and output of the fiber coating, a novel segmented corrosion method and increasing attenuation method are proposed for heat suppression, respectively. The relationship between the attenuation and temperature rise of the fiber coating at the output is experimentally demonstrated. The temperature distribution of the fiber coating at the input as well as the return light power caused by scattering are measured for the etched fiber with different surface roughness values. The results suggest that the rise in temperature is primarily caused by the scattering light propagating into the coating. Finally, an attenuation of 27 dB is achieved. At a room temperature of 23°C and input pump power of 438 W, the highest temperature of the input fiber coating decreases from 39.5°C to 27.9°C by segmented corrosion, and the temperature rise of the output fiber coating is close to 0.

  6. Comparing fixed and variable-width Gaussian networks.

    PubMed

    Kůrková, Věra; Kainen, Paul C

    2014-09-01

    The role of width of Gaussians in two types of computational models is investigated: Gaussian radial-basis-functions (RBFs) where both widths and centers vary and Gaussian kernel networks which have fixed widths but varying centers. The effect of width on functional equivalence, universal approximation property, and form of norms in reproducing kernel Hilbert spaces (RKHS) is explored. It is proven that if two Gaussian RBF networks have the same input-output functions, then they must have the same numbers of units with the same centers and widths. Further, it is shown that while sets of input-output functions of Gaussian kernel networks with two different widths are disjoint, each such set is large enough to be a universal approximator. Embedding of RKHSs induced by "flatter" Gaussians into RKHSs induced by "sharper" Gaussians is described and growth of the ratios of norms on these spaces with increasing input dimension is estimated. Finally, large sets of argminima of error functionals in sets of input-output functions of Gaussian RBFs are described. Copyright © 2014 Elsevier Ltd. All rights reserved.
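
    A minimal sketch of the input-output functions being compared, with made-up centers and weights: a Gaussian radial-basis-function network in which widths may vary per unit, versus the fixed-width (kernel) case obtained by sharing a single width.

        import numpy as np

        def gaussian_rbf(x, centers, widths, weights):
            """f(x) = sum_i w_i * exp(-||x - c_i||^2 / (2 * s_i^2)).

            Per-unit widths give the variable-width RBF network; passing the
            same width for every unit gives the fixed-width kernel network."""
            d2 = np.sum((x[None, :] - centers) ** 2, axis=1)
            return float(weights @ np.exp(-d2 / (2.0 * widths ** 2)))

        centers = np.array([[0.0, 0.0], [1.0, 1.0], [-1.0, 0.5]])   # illustrative units
        weights = np.array([1.0, -0.5, 0.8])
        x = np.array([0.2, 0.4])

        print(gaussian_rbf(x, centers, np.array([0.5, 1.0, 2.0]), weights))  # varying widths
        print(gaussian_rbf(x, centers, np.array([1.0, 1.0, 1.0]), weights))  # one fixed width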

  7. Freight Transportation Energy Use : Appendix. Transportation Network Model Output.

    DOT National Transportation Integrated Search

    1978-07-01

    The overall design of the TSC Freight Energy Model is presented. A hierarchical modeling strategy is used, in which detailed modal simulators estimate the performance characteristics of transportation network elements, and the estimates are input to ...

  8. Output Control Using Feedforward And Cascade Controllers

    NASA Technical Reports Server (NTRS)

    Seraji, Homayoun

    1990-01-01

    Report presents theoretical study of open-loop control elements in single-input, single-output linear system. Focus on output-control (servomechanism) problem, in which objective is to find control scheme that causes output to track certain command inputs and to reject certain disturbance inputs in steady state. Report closes with brief discussion of characteristics and relative merits of feedforward, cascade, and feedback controllers and combinations thereof.

  9. Model-Free control performance improvement using virtual reference feedback tuning and reinforcement Q-learning

    NASA Astrophysics Data System (ADS)

    Radac, Mircea-Bogdan; Precup, Radu-Emil; Roman, Raul-Cristian

    2017-04-01

    This paper proposes the combination of two model-free controller tuning techniques, namely linear virtual reference feedback tuning (VRFT) and nonlinear state-feedback Q-learning, referred to as a new mixed VRFT-Q learning approach. VRFT is first used to find a stabilising feedback controller using input-output experimental data from the process in a model reference tracking setting. Reinforcement Q-learning is next applied in the same setting using input-state experimental data collected under perturbed VRFT to ensure good exploration. The Q-learning controller, learned with a batch fitted Q iteration algorithm, uses two neural networks, one for the Q-function estimator and one for the controller. The VRFT-Q learning approach is validated on position control of a two-degrees-of-motion, open-loop stable, multi-input multi-output (MIMO) aerodynamic system (AS). Extensive simulations for the two independent control channels of the MIMO AS show that the Q-learning controllers clearly improve performance over the VRFT controllers.

  10. Training feed-forward neural networks with gain constraints

    PubMed

    Hartman

    2000-04-01

    Inaccurate input-output gains (partial derivatives of outputs with respect to inputs) are common in neural network models when input variables are correlated or when data are incomplete or inaccurate. Accurate gains are essential for optimization, control, and other purposes. We develop and explore a method for training feedforward neural networks subject to inequality or equality-bound constraints on the gains of the learned mapping. Gain constraints are implemented as penalty terms added to the objective function, and training is done using gradient descent. Adaptive and robust procedures are devised for balancing the relative strengths of the various terms in the objective function, which is essential when the constraints are inconsistent with the data. The approach has the virtue that the model domain of validity can be extended via extrapolation training, which can dramatically improve generalization. The algorithm is demonstrated here on artificial and real-world problems with very good results and has been advantageously applied to dozens of models currently in commercial use.
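
    A hedged sketch of the kind of penalised objective described above, for a one-hidden-layer tanh network whose input-output gains are available in closed form. The penalty weight, bounds, and data below are illustrative, and the adaptive balancing of terms and the gradient-descent training loop used in the original work are omitted.

        import numpy as np

        def forward(x, W1, b1, w2, b2):
            """One-hidden-layer tanh network: y = w2 . tanh(W1 x + b1) + b2."""
            h = np.tanh(W1 @ x + b1)
            return float(w2 @ h + b2)

        def gains(x, W1, b1, w2):
            """Analytic input-output gains dy/dx of the network above."""
            h = np.tanh(W1 @ x + b1)
            return (w2 * (1.0 - h ** 2)) @ W1          # one gain per input

        def penalised_objective(X, y, params, g_lo, g_hi, lam=10.0):
            """Mean-squared error plus a quadratic penalty on gain-bound violations."""
            W1, b1, w2, b2 = params
            mse = pen = 0.0
            for xi, yi in zip(X, y):
                mse += (forward(xi, W1, b1, w2, b2) - yi) ** 2
                g = gains(xi, W1, b1, w2)
                pen += np.sum(np.maximum(0.0, g_lo - g) ** 2 +
                              np.maximum(0.0, g - g_hi) ** 2)
            return mse / len(X) + lam * pen / len(X)

        rng = np.random.default_rng(0)
        W1, b1 = rng.normal(size=(4, 2)), np.zeros(4)
        w2, b2 = rng.normal(size=4), 0.0
        X = rng.normal(size=(20, 2))
        y = X[:, 0] + 0.5 * X[:, 1]            # toy target whose true gains are (1.0, 0.5)
        print(penalised_objective(X, y, (W1, b1, w2, b2), g_lo=0.0, g_hi=1.0))

    Training would then minimise this objective by gradient descent, adjusting lam when the gain constraints conflict with the data.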

  11. Systems and methods for predicting materials properties

    DOEpatents

    Ceder, Gerbrand; Fischer, Chris; Tibbetts, Kevin; Morgan, Dane; Curtarolo, Stefano

    2007-11-06

    Systems and methods for predicting features of materials of interest. Reference data are analyzed to deduce relationships between the input data sets and output data sets. Reference data includes measured values and/or computed values. The deduced relationships can be specified as equations, correspondences, and/or algorithmic processes that produce appropriate output data when suitable input data is used. In some instances, the output data set is a subset of the input data set, and computational results may be refined by optionally iterating the computational procedure. To deduce features of a new material of interest, a computed or measured input property of the material is provided to an equation, correspondence, or algorithmic procedure previously deduced, and an output is obtained. In some instances, the output is iteratively refined. In some instances, new features deduced for the material of interest are added to a database of input and output data for known materials.

  12. Multiple-Input Multiple-Output (MIMO) Linear Systems Extreme Inputs/Outputs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smallwood, David O.

    2007-01-01

    A linear structure is excited at multiple points with a stationary normal random process. The response of the structure is measured at multiple outputs. If the autospectral densities of the inputs are specified, the phase relationships between the inputs are derived that will minimize or maximize the trace of the autospectral density matrix of the outputs. If the autospectral densities of the outputs are specified, the phase relationships between the outputs that will minimize or maximize the trace of the input autospectral density matrix are derived. It is shown that other phase relationships and ordinary coherence less than one will result in a trace intermediate between these extremes. Least favorable response and some classes of critical response are special cases of the development. It is shown that the derivation for stationary random waveforms can also be applied to nonstationary random, transients, and deterministic waveforms.
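
    The dependence of the output trace on the input phase relationships can be illustrated numerically at a single frequency. The sketch below uses an invented 2x2 frequency-response matrix and simply scans the relative input phase, rather than deriving the extremes analytically as the report does.

        import numpy as np

        # Invented 2-input / 2-output frequency-response matrix at one frequency.
        H = np.array([[1.0 + 0.5j, 0.3 - 0.2j],
                      [0.2 + 0.1j, 0.8 + 0.4j]])

        def output_trace(phase, s11=1.0, s22=1.0, coh=1.0):
            """Trace of S_yy = H S_xx H^H for inputs with specified autospectra
            s11, s22, ordinary coherence coh, and relative phase."""
            s12 = coh * np.sqrt(s11 * s22) * np.exp(1j * phase)
            Sxx = np.array([[s11, s12],
                            [np.conj(s12), s22]])
            Syy = H @ Sxx @ H.conj().T
            return float(np.real(np.trace(Syy)))

        phases = np.linspace(0.0, 2.0 * np.pi, 361)
        traces = [output_trace(p) for p in phases]
        print("min trace:", round(min(traces), 3), "max trace:", round(max(traces), 3))

    Reducing coh below one shrinks the phase-dependent term, giving traces intermediate between the two extremes, consistent with the statement above.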

  13. From water use to water scarcity footprinting in environmentally extended input-output analysis.

    PubMed

    Ridoutt, Bradley George; Hadjikakou, Michalis; Nolan, Martin; Bryan, Brett A

    2018-05-18

    Environmentally extended input-output analysis (EEIOA) supports environmental policy by quantifying how demand for goods and services leads to resource use and emissions across the economy. However, some types of resource use and emissions require spatially-explicit impact assessment for meaningful interpretation, which is not possible in conventional EEIOA. For example, water use in locations of scarcity and abundance is not environmentally equivalent. Opportunities for spatially-explicit impact assessment in conventional EEIOA are limited because official input-output tables tend to be produced at the scale of political units which are not usually well aligned with environmentally relevant spatial units. In this study, spatially-explicit water scarcity factors and a spatially disaggregated Australian water use account were used to develop water scarcity extensions that were coupled with a multi-regional input-output model (MRIO). The results link demand for agricultural commodities to the problem of water scarcity in Australia and globally. Important differences were observed between the water use and water scarcity footprint results, as well as the relative importance of direct and indirect water use, with significant implications for sustainable production and consumption-related policies. The approach presented here is suggested as a feasible general approach for incorporating spatially-explicit impact assessment in EEIOA.
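
    A schematic sketch of the calculation described above, with invented numbers: direct water-use intensities are propagated through the Leontief inverse to obtain a water-use footprint, and the same intensities weighted by spatially explicit scarcity factors yield a water-scarcity footprint. None of the sector detail or MRIO structure of the study is reproduced here.

        import numpy as np

        # Illustrative 3-sector example, not the Australian MRIO used in the paper.
        A = np.array([[0.15, 0.10, 0.05],
                      [0.20, 0.30, 0.10],
                      [0.05, 0.05, 0.10]])        # technical coefficients
        x = np.array([500.0, 800.0, 300.0])       # total output by sector
        water_use = np.array([40.0, 10.0, 2.0])   # direct water use by sector
        scarcity = np.array([0.9, 0.3, 0.1])      # assumed scarcity characterisation factors

        w = water_use / x                          # water-use intensity per unit output
        ws = (water_use * scarcity) / x            # scarcity-weighted intensity

        L = np.linalg.inv(np.eye(3) - A)
        y = np.array([50.0, 120.0, 60.0])          # final demand of interest

        print("water-use footprint     :", round(float(w @ L @ y), 2))
        print("water-scarcity footprint:", round(float(ws @ L @ y), 2))

    The gap between the two results is what separates water use in abundant regions from water use where it is scarce.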

  14. Analysis of urban metabolic processes based on input-output method: model development and a case study for Beijing

    NASA Astrophysics Data System (ADS)

    Zhang, Yan; Liu, Hong; Chen, Bin; Zheng, Hongmei; Li, Yating

    2014-06-01

    Discovering ways in which to increase the sustainability of the metabolic processes involved in urbanization has become an urgent task for urban design and management in China. As cities are analogous to living organisms, the disorders of their metabolic processes can be regarded as the cause of "urban disease". Therefore, identification of these causes through metabolic process analysis and ecological element distribution through the urban ecosystem's compartments will be helpful. By using Beijing as an example, we have compiled monetary input-output tables from 1997, 2000, 2002, 2005, and 2007 and calculated the intensities of the embodied ecological elements to compile the corresponding implied physical input-output tables. We then divided Beijing's economy into 32 compartments and analyzed the direct and indirect ecological intensities embodied in the flows of ecological elements through urban metabolic processes. Based on the combination of input-output tables and ecological network analysis, the description of multiple ecological elements transferred among Beijing's industrial compartments and their distribution has been refined. This hybrid approach can provide a more scientific basis for management of urban resource flows. In addition, the data obtained from distribution characteristics of ecological elements may provide a basic data platform for exploring the metabolic mechanism of Beijing.

  15. Measuring Changes in the Economics of Medical Practice.

    PubMed

    Fleming, Christopher; Rich, Eugene; DesRoches, Catherine; Reschovsky, James; Kogan, Rachel

    2015-08-01

    For the latter third of the twentieth century, researchers estimated production and cost functions for physician practices. Today, those attempting to measure the inputs and outputs of physician practice must account for many recent changes in models of care delivery. In this paper, we review practice inputs and outputs as typically described in research on the economics of medical practice, and consider the implications of the changing organization of medical practice and the nature of physician work. This evolving environment has created conceptual challenges concerning the appropriate measures of output from physician work, as well as which inputs should be measured. Likewise, the increasing complexity of physician practice organizations has introduced challenges in finding appropriate data sources for measuring these constructs. Both the conceptual and the data challenges pose measurement issues that must be overcome to study the economics of modern medical practice. Despite these challenges, there are several promising initiatives involving data sharing at the organizational level that could provide a starting point for developing the needed new data sources and metrics for physician inputs and outputs. However, additional efforts will be required to establish data collection approaches and measurements applicable to smaller and single-specialty practices. Overcoming these measurement and data challenges will be key to supporting policy-relevant research on the changing economics of medical practice.

  16. Modeling and simulation of emergent behavior in transportation infrastructure restoration

    USGS Publications Warehouse

    Ojha, Akhilesh; Corns, Steven; Shoberg, Thomas G.; Qin, Ruwen; Long, Suzanna K.

    2018-01-01

    The objective of this chapter is to create a methodology that models the emergent behavior during a disruption of the transportation system and calculates the economic losses due to such a disruption, and to understand how an extreme event affects the road transportation network. The chapter discusses a system dynamics approach used to model the road transportation infrastructure system, evaluate the different factors that render road segments inoperable, and calculate the economic consequences of such inoperability. The system dynamics models have been integrated with a business process simulation model to evaluate, design, and optimize the business process. The chapter also explains how different factors affect road capacity. After identifying the various factors affecting the available road capacity, a causal loop diagram (CLD) is created to visually represent the causes leading to a change in the available road capacity and the effects on travel costs when the available road capacity changes.

  17. Fuzzy Neuron: Method and Hardware Realization

    NASA Technical Reports Server (NTRS)

    Krasowski, Michael J.; Prokop, Norman F.

    2014-01-01

    This innovation represents a method by which single-to-multi-input, single-to-many-output system transfer functions can be estimated from input/output data sets. This innovation can be run in the background while a system is operating under other means (e.g., through human operator effort), or may be utilized offline using data sets created from observations of the estimated system. It utilizes a set of fuzzy membership functions spanning the input space for each input variable. Linear combiners associated with combinations of input membership functions are used to create the output(s) of the estimator. Coefficients are adjusted online through the use of learning algorithms.
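
    A minimal software sketch of the estimator described above, with assumed triangular membership functions, a single input and output, and a least-mean-squares update for the linear-combiner coefficients. The actual innovation is a hardware realization and may differ in membership shape and learning rule.

        import numpy as np

        def memberships(x, centers, width):
            """Triangular fuzzy membership functions spanning the input range."""
            return np.maximum(0.0, 1.0 - np.abs(x - centers) / width)

        centers = np.linspace(-1.0, 1.0, 5)      # assumed partition of the input space
        width = centers[1] - centers[0]
        coeffs = np.zeros_like(centers)          # linear-combiner coefficients
        eta = 0.2                                # assumed learning rate

        rng = np.random.default_rng(0)
        for _ in range(2000):                    # learn in the background from observed I/O pairs
            x = rng.uniform(-1.0, 1.0)
            target = np.sin(np.pi * x)           # stand-in for the unknown system being estimated
            mu = memberships(x, centers, width)
            y = coeffs @ mu                      # estimator output
            coeffs += eta * (target - y) * mu    # LMS update of the combiner

        print("learned coefficients:", np.round(coeffs, 3))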

  18. Gating-signal propagation by a feed-forward neural motif

    NASA Astrophysics Data System (ADS)

    Liang, Xiaoming; Yanchuk, Serhiy; Zhao, Liang

    2013-07-01

    We study the signal propagation in a feed-forward motif consisting of three bistable neurons: Two input neurons receive input signals and the third output neuron generates the output. We find that a weak input signal can be propagated from the input neurons to the output neuron without amplitude attenuation. We further reveal that the initial states of the input neurons and the coupling strength act as signal gates and determine whether the propagation is enhanced or not. We also investigate the effect of the input signal frequency on enhanced signal propagation.

  19. Balancing the health workforce: breaking down overall technical change into factor technical change for labour-an empirical application to the Dutch hospital industry.

    PubMed

    Blank, Jos L T; van Hulst, Bart L

    2017-02-17

    Well-trained, well-distributed and productive health workers are crucial for access to high-quality, cost-effective healthcare. Because neither a shortage nor a surplus of health workers is wanted, policymakers use workforce planning models to get information on future labour markets and adjust policies accordingly. A neglected topic of workforce planning models is productivity growth, which has an effect on future demand for labour. However, calculating productivity growth for specific types of input is not as straightforward as it seems. This study shows how to calculate factor technical change (FTC) for specific types of input. The paper first theoretically derives FTCs from technical change in a consistent manner. FTC differs from a ratio of output and input, in that it deals with the multi-input, multi-output character of the production process in the health sector. Furthermore, it takes into account substitution effects between different inputs. An application of the calculation of FTCs is given for the Dutch hospital industry for the period 2003-2011. A translog cost function is estimated and used to calculate technical change and FTC for individual inputs, especially specific labour inputs. The results show that technical change increased by 2.8% per year in Dutch hospitals during 2003-2011. FTC differs amongst the various inputs. The FTC of nursing personnel increased by 3.2% per year, implying that fewer nurses were needed to let demand meet supply on the labour market. Sensitivity analyses show consistent results for the FTC of nurses. Productivity growth, especially of individual outputs, is a neglected topic in workforce planning models. FTC is a productivity measure that is consistent with technical change and accounts for substitution effects. An application to the Dutch hospital industry shows that the FTC of nursing personnel outpaced technical change during 2003-2011. The optimal input mix changed, resulting in fewer nurses being needed to let demand meet supply on the labour market. Policymakers should consider using more detailed and specific data on the nature of technical change when forecasting the future demand for health workers.

  20. Assessment of effectiveness of geologic isolation systems. CIRMIS data system. Volume 4. Driller's logs, stratigraphic cross section and utility routines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Friedrichs, D.R.

    1980-01-01

    The Assessment of Effectiveness of Geologic Isolation Systems (AEGIS) Program is developing and applying the methodology for assessing the far-field, long-term post-closure safety of deep geologic nuclear waste repositories. AEGIS is being performed by Pacific Northwest Laboratory (PNL) under contract with the Office of Nuclear Waste Isolation (ONWI) for the Department of Energy (DOE). One task within AEGIS is the development of methodology for analysis of the consequences (water pathway) from loss of repository containment as defined by various release scenarios. Analysis of the long-term, far-field consequences of release scenarios requires the application of numerical codes which simulate the hydrologic systems, model the transport of released radionuclides through the hydrologic systems to the biosphere, and, where applicable, assess the radiological dose to humans. The various input parameters required in the analysis are compiled in data systems. The data are organized and prepared by various input subroutines for use by the hydrologic and transport codes. The hydrologic models simulate the groundwater flow systems and provide water flow directions, rates, and velocities as inputs to the transport models. Outputs from the transport models are basically graphs of radionuclide concentration in the groundwater plotted against time. After dilution in the receiving surface-water body (e.g., lake, river, bay), these data are the input source terms for the dose models, if dose assessments are required. The dose models calculate radiation dose to individuals and populations. CIRMIS (Comprehensive Information Retrieval and Model Input Sequence) Data System is a storage and retrieval system for model input and output data, including graphical interpretation and display. This is the fourth of four volumes of the description of the CIRMIS Data System.

  1. Programmable remapper for image processing

    NASA Technical Reports Server (NTRS)

    Juday, Richard D. (Inventor); Sampsell, Jeffrey B. (Inventor)

    1991-01-01

    A video-rate coordinate remapper includes a memory for storing a plurality of transformations on look-up tables for remapping input images from one coordinate system to another. Such transformations are operator selectable. The remapper includes a collective processor by which certain input pixels of an input image are transformed to a portion of the output image in a many-to-one relationship. The remapper includes an interpolative processor by which the remaining input pixels of the input image are transformed to another portion of the output image in a one-to-many relationship. The invention includes certain specific transforms for creating output images useful for certain defects of visually impaired people. The invention also includes means for shifting input pixels and means for scrolling the output matrix.

  2. Using the Global Forest Products Model (GFPM version 2012)

    Treesearch

    Joseph Buongiorno; Shushuai Zhu

    2012-01-01

    The purpose of this manual is to enable users of the Global Forest Products Model to: • Install and run the GFPM software • Understand the input data • Change the input data to explore different scenarios • Interpret the output The GFPM is an economic model of global production, consumption and trade of forest products (Buongiorno et al. 2003). The GFPM2012 has data...

  3. Evaluating the economic damages of transport disruptions using a transnational and interregional input-output model for Japan, China, and South Korea

    NASA Astrophysics Data System (ADS)

    Irimoto, Hiroshi; Shibusawa, Hiroyuki; Miyata, Yuzuru

    2017-10-01

    Damage to transportation networks as a result of natural disasters can lead to economic losses due to lost trade along the affected links, in addition to the costs of damage to the infrastructure itself. This study evaluates the economic damages of disruptions to transport infrastructure such as highways, tunnels, bridges, and ports using a transnational and interregional input-output model that divides the world into 23 regions: 9 regions in Japan, 7 regions in China, 4 regions in South Korea, and Taiwan, ASEAN5, and the USA, allowing us to focus on Japan's regional and international links. In our simulation, the economic ripple effects of both international and interregional transport disruptions are measured by changes in the trade coefficients of the input-output model. The simulation showed that, among regional links in Japan, a transport disruption in the Kanmon Straits causes the most damage across the modeled regions, resulting in economic damage of approximately 36.3 billion. Among international links between Japan, China, and Korea, damage to the link between Kanto in Japan and Huabei in China causes economic losses of approximately 31.1 billion. Our results highlight the importance of disaster prevention in the Kanmon Straits, Kanto, and Huabei to help ensure economic resilience.
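
    The mechanism of measuring ripple effects through changed trade coefficients can be shown with a toy two-region example; the coefficients and demand below are invented, and the sketch ignores the rerouting of trade to alternative suppliers that a full model would capture.

        import numpy as np

        # Toy 2-region, 1-sector interregional table (not the 23-region
        # Japan-China-Korea system used in the study).
        # A[i, j] = input from region i required per unit of output in region j.
        A_base = np.array([[0.20, 0.15],
                           [0.10, 0.25]])
        y = np.array([100.0, 150.0])               # final demand by region

        def total_output(A, y):
            return np.linalg.solve(np.eye(len(y)) - A, y)

        x_base = total_output(A_base, y)

        # Transport disruption: assume half of the region-1 -> region-2 deliveries
        # are lost because goods can no longer move along the damaged link.
        A_disrupted = A_base.copy()
        A_disrupted[0, 1] *= 0.5

        x_disrupted = total_output(A_disrupted, y)
        print("Output change by region:", np.round(x_disrupted - x_base, 2))
        print("Total output loss      :", round(float((x_base - x_disrupted).sum()), 2))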

  4. Multiregional input-output model for the evaluation of Spanish water flows.

    PubMed

    Cazcarro, Ignacio; Duarte, Rosa; Sánchez Chóliz, Julio

    2013-01-01

    We construct a multiregional input-output model for Spain, in order to evaluate the pressures on the water resources, virtual water flows, and water footprints of the regions, and the water impact of trade relationships within Spain and abroad. The study is framed with those interregional input-output models constructed to study water flows and impacts of regions in China, Australia, Mexico, or the UK. To build our database, we reconcile regional IO tables, national and regional accountancy of Spain, trade and water data. Results show an important imbalance between origin of water resources and final destination, with significant water pressures in the South, Mediterranean, and some central regions. The most populated and dynamic regions of Madrid and Barcelona are important drivers of water consumption in Spain. Main virtual water exporters are the South and Central agrarian regions: Andalusia, Castile-La Mancha, Castile-Leon, Aragon, and Extremadura, while the main virtual water importers are the industrialized regions of Madrid, Basque country, and the Mediterranean coast. The paper shows the different location of direct and indirect consumers of water in Spain and how the economic trade and consumption pattern of certain areas has significant impacts on the availability of water resources in other different and often drier regions.

  5. A framework for evaluating forest restoration alternatives and their outcomes, over time, to inform monitoring: Bioregional inventory originated simulation under management

    Treesearch

    Jeremy S. Fried; Theresa B. Jain; Sara Loreno; Robert F. Keefe; Conor K. Bell

    2017-01-01

    The BioSum modeling framework summarizes current and prospective future forest conditions under alternative management regimes along with their costs, revenues and product yields. BioSum translates Forest Inventory and Analysis (FIA) data for input to the Forest Vegetation Simulator (FVS), summarizes FVS outputs for input to the treatment operations cost model (OpCost...

  6. Application of SIGGS to Project PRIME: A General Systems Approach to Evaluation of Mainstreaming.

    ERIC Educational Resources Information Center

    Frick, Ted

    The use of the systems approach in educational inquiry is not new, and the models of input/output, input/process/product, and cybernetic systems have been widely used. The general systems model is an extension of all these, adding the dimension of environmental influence on the system as well as system influence on the environment. However, if the…

  7. Understanding virtual water flows: A multiregion input-output case study of Victoria

    NASA Astrophysics Data System (ADS)

    Lenzen, Manfred

    2009-09-01

    This article explains and interprets virtual water flows from the well-established perspective of input-output analysis. Using a case study of the Australian state of Victoria, it demonstrates that input-output analysis can enumerate virtual water flows without systematic and unknown truncation errors, an issue which has been largely absent from the virtual water literature. Whereas a simplified flow analysis from a producer perspective would portray Victoria as a net virtual water importer, enumerating the water embodiments across the full supply chain using input-output analysis shows Victoria as a significant net virtual water exporter. This study has succeeded in informing government policy in Australia, which is an encouraging sign that input-output analysis will be able to contribute much value to other national and international applications.

  8. Proceedings of the Technical Forum (3rd) on the F-16 MIL-STD-1750A Microprocessor and the F-16 MIL-STD-1589B Compiler Held at Wright-Patterson AFB, OH on May 5-6, 1982. Volume 2. Specifications,

    DTIC Science & Technology

    1982-05-06

    The available excerpt is a table-of-contents fragment covering the input/output interrupt code register (IOIC), read operations for input/output interrupt code levels 1 and 2, console input/output operations (clear console, console output, console input, read console status), and the memory fault status register (MFSR).

  9. User's manual for the Simulated Life Analysis of Vehicle Elements (SLAVE) model

    NASA Technical Reports Server (NTRS)

    Paul, D. D., Jr.

    1972-01-01

    The simulated life analysis of vehicle elements model was designed to perform statistical simulation studies for any constant loss rate. The outputs of the model consist of the total number of stages required, stages successfully completing their lifetime, and average stage flight life. This report contains a complete description of the model. Users' instructions and interpretation of input and output data are presented such that a user with little or no prior programming knowledge can successfully implement the program.
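
    A hedged sketch of the kind of constant-loss-rate simulation the manual documents: on each flight a stage is lost with a fixed probability, and Monte Carlo replication estimates the total number of stages required and the average stage flight life. The flight schedule, loss rate, and counting conventions below are placeholders, not the SLAVE model's actual logic.

        import random

        def simulate_fleet(flights_needed, loss_rate, rng):
            """Count stages consumed and their flight lives for a mission schedule
            flown at a constant per-flight loss probability."""
            stages, lives, current_life = 1, [], 0
            for _ in range(flights_needed):
                current_life += 1
                if rng.random() < loss_rate:       # stage lost on this flight
                    lives.append(current_life)
                    stages += 1                    # a replacement stage enters service
                    current_life = 0
            if current_life > 0:                   # surviving stage at end of schedule
                lives.append(current_life)
            return stages, sum(lives) / len(lives)

        rng = random.Random(1)
        runs = [simulate_fleet(500, 0.02, rng) for _ in range(1000)]
        print("average stages required   :", round(sum(r[0] for r in runs) / len(runs), 1))
        print("average stage flight life :", round(sum(r[1] for r in runs) / len(runs), 1))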

  10. Decentralized model reference adaptive control of large flexible structures

    NASA Technical Reports Server (NTRS)

    Lee, Fu-Ming; Fong, I-Kong; Lin, Yu-Hwan

    1988-01-01

    A decentralized model reference adaptive control (DMRAC) method is developed for large flexible structures (LFS). The development follows that of a centralized model reference adaptive control for LFS that have been shown to be feasible. The proposed method is illustrated using a simply supported beam with collocated actuators and sensors. Results show that the DMRAC can achieve either output regulation or output tracking with adequate convergence, provided the reference model inputs and their time derivatives are integrable, bounded, and approach zero as t approaches infinity.

  11. Improved methods in neural network-based adaptive output feedback control, with applications to flight control

    NASA Astrophysics Data System (ADS)

    Kim, Nakwan

    Utilizing the universal approximation property of neural networks, we develop several novel approaches to neural network-based adaptive output feedback control of nonlinear systems, and illustrate these approaches for several flight control applications. In particular, we address the problem of non-affine systems and eliminate the fixed point assumption present in earlier work. All of the stability proofs are carried out in a form that eliminates an algebraic loop in the neural network implementation. An approximate input/output feedback linearizing controller is augmented with a neural network using input/output sequences of the uncertain system. These approaches permit adaptation to both parametric uncertainty and unmodeled dynamics. All physical systems also have control position and rate limits, which may either deteriorate performance or cause instability for a sufficiently high control bandwidth. Here we apply a method for protecting an adaptive process from the effects of input saturation and time delays, known as "pseudo control hedging". This method was originally developed for the state feedback case, and we provide a stability analysis that extends its domain of applicability to the case of output feedback. The approach is illustrated by the design of a pitch-attitude flight control system for a linearized model of an R-50 experimental helicopter, and by the design of a pitch-rate control system for a 58-state model of a flexible aircraft consisting of rigid body dynamics coupled with actuator and flexible modes. A new approach to augmentation of an existing linear controller is introduced. It is especially useful when there is limited information concerning the plant model, and the existing controller. The approach is applied to the design of an adaptive autopilot for a guided munition. Design of a neural network adaptive control that ensures asymptotically stable tracking performance is also addressed.

  12. Efficiency measurement of health care organizations: What models are used?

    PubMed Central

    Jaafaripooyan, Ebrahim; Emamgholipour, Sara; Raei, Behzad

    2017-01-01

    Background: The literature abounds with various techniques for efficiency measurement of health care organizations (HCOs), which should be used cautiously and appropriately. The present study aimed at discovering the rules regulating the interplay among the number of inputs, outputs, and decision-making units (DMUs), identifying all methods used for efficiency measurement of Iranian HCOs, and critically appraising all DEA studies on Iranian HCOs in their application of such rules. Methods: The present study employed a systematic search of all studies related to efficiency measurement of Iranian HCOs. A search was conducted in databases such as PubMed and Scopus between 2001 and 2015 to identify studies related to efficiency measurement in health care. The retrieved studies passed through a multi-stage (title, abstract, body) filtering process. A data extraction table was completed for each study, covering the method, the number of inputs and outputs, the DMUs, and their efficiency scores. Results: Various methods were found for efficiency measurement. Overall, 122 studies were retrieved, of which 73 had exclusively employed the DEA technique for measuring the efficiency of HCOs in Iran, and 23 had used hybrid models (including DEA). Only 6 studies had explicitly used the rules of thumb. Conclusion: The number of inputs, outputs, and DMUs should be cautiously selected in DEA-like techniques, as their proportionality can directly affect the discriminatory power of the technique. The literature seemed, to a large extent, unsuccessful in attending to such proportionality. This study collected a list of key rules (of thumb) on the interplay of inputs, outputs, and DMUs, which could be considered by researchers keen to apply the DEA technique.
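
    One commonly cited rule of thumb in the DEA literature (stated here from general knowledge, not taken from this review) requires the number of DMUs to be at least max{m x s, 3(m + s)} for m inputs and s outputs; a check of this kind is the proportionality screening the authors found most studies omitted.

        def dea_discrimination_check(n_dmus: int, n_inputs: int, n_outputs: int) -> bool:
            """Check the commonly cited DEA sample-size rule of thumb:
            n_dmus >= max(n_inputs * n_outputs, 3 * (n_inputs + n_outputs)).
            Returns True when the sample is likely large enough for the model
            to discriminate between efficient and inefficient units."""
            required = max(n_inputs * n_outputs, 3 * (n_inputs + n_outputs))
            return n_dmus >= required

        # Example: a hospital study with 4 inputs, 3 outputs, and 20 hospitals.
        print(dea_discrimination_check(20, 4, 3))   # False: at least 21 DMUs would be needed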

  13. Modeling and analysis on ring-type piezoelectric transformers.

    PubMed

    Ho, Shine-Tzong

    2007-11-01

    This paper presents an electromechanical model for a ring-type piezoelectric transformer (PT). To establish this model, the vibration characteristics of the piezoelectric ring with free boundary conditions are analyzed in advance. Based on the vibration analysis of the piezoelectric ring, the operating frequency and vibration mode of the PT are chosen. Then, electromechanical equations of motion for the PT are derived based on Hamilton's principle, which can be used to simulate the coupled electromechanical system of the transformer. Quantities such as the voltage step-up ratio, input impedance, output impedance, input power, output power, and efficiency are calculated from these equations. The optimal load resistance and the maximum efficiency of the PT are presented in this paper. Experiments were also conducted to verify the theoretical analysis, and a good agreement was obtained.

  14. Structural identifiability analyses of candidate models for in vitro Pitavastatin hepatic uptake.

    PubMed

    Grandjean, Thomas R B; Chappell, Michael J; Yates, James W T; Evans, Neil D

    2014-05-01

    In this paper a review of the application of four different techniques (a version of the similarity transformation approach for autonomous uncontrolled systems, a non-differential input/output observable normal form approach, the characteristic set differential algebra and a recent algebraic input/output relationship approach) to determine the structural identifiability of certain in vitro nonlinear pharmacokinetic models is provided. The Organic Anion Transporting Polypeptide (OATP) substrate, Pitavastatin, is used as a probe on freshly isolated animal and human hepatocytes. Candidate pharmacokinetic non-linear compartmental models have been derived to characterise the uptake process of Pitavastatin. As a prerequisite to parameter estimation, structural identifiability analyses are performed to establish that all unknown parameters can be identified from the experimental observations available. Copyright © 2013. Published by Elsevier Ireland Ltd.

  15. Motion video compression system with neural network having winner-take-all function

    NASA Technical Reports Server (NTRS)

    Fang, Wai-Chi (Inventor); Sheu, Bing J. (Inventor)

    1997-01-01

    A motion video data system includes a compression system comprising an image compressor; an image decompressor, correlative to the image compressor, whose input is connected to the output of the image compressor; a feedback summing node with one input connected to the output of the image decompressor; a picture memory whose input is connected to the output of the feedback summing node; and apparatus for comparing an image stored in the picture memory with a received input image, deducing the pixels that differ between the stored and received images, retrieving from the picture memory a partial image containing only those pixels, and applying the partial image to another input of the feedback summing node, thereby producing an updated decompressed image at the output of the feedback summing node. A subtraction node has one input connected to receive the received image and another input connected to receive the partial image so as to generate a difference image; the image compressor has an input connected to receive this difference image, thereby producing a compressed difference image at the output of the image compressor.

  16. DRI Model of the U.S. Economy -- Model Documentation

    EIA Publications

    1993-01-01

    Provides documentation on the Data Resources, Inc. (DRI) Model of the U.S. Economy and the DRI Personal Computer Input/Output Model. It describes the theoretical basis, structure, and functions of both DRI models and contains brief descriptions of the models and their equations.

  17. Fundamental Mechanisms of NeuroInformation Processing: Inverse Problems and Spike Processing

    DTIC Science & Technology

    2016-08-04

    platform called Neurokernel for collaborative development of comprehensive models of the brain of the fruit fly Drosophila melanogaster and their execution...example. We investigated the following nonlinear identification problem: given both the input signal u and the time sequence (t_k), k ∈ Z, at the output of...from a time sequence is to be contrasted with existing methods for rate-based models in neuroscience. In such models the output of the system is taken

  18. User interface for ground-water modeling: Arcview extension

    USGS Publications Warehouse

    Tsou, Ming‐shu; Whittemore, Donald O.

    2001-01-01

    Numerical simulation for ground-water modeling often involves handling large input and output data sets. A geographic information system (GIS) provides an integrated platform to manage, analyze, and display disparate data and can greatly facilitate modeling efforts in data compilation, model calibration, and display of model parameters and results. Furthermore, GIS can be used to generate information for decision making through spatial overlay and processing of model results. ArcView is the most widely used Windows-based GIS software that provides a robust user-friendly interface to facilitate data handling and display. An extension is an add-on program to ArcView that provides additional specialized functions. An ArcView interface for the ground-water flow and transport models MODFLOW and MT3D was built as an extension for facilitating modeling. The extension includes preprocessing of spatially distributed (point, line, and polygon) data for model input and postprocessing of model output. An object database is used for linking user dialogs and model input files. The ArcView interface utilizes the capabilities of the 3D Analyst extension. Models can be automatically calibrated through the ArcView interface by external linking to such programs as PEST. The efficient pre- and postprocessing capabilities and calibration link were demonstrated for ground-water modeling in southwest Kansas.

  19. Using Optimization to Improve Test Planning

    DTIC Science & Technology

    2017-09-01

    With modifications to make the input more user-friendly and to display the output differently, the test and evaluation test schedule optimization model would be a good tool for test and evaluation schedulers.

  20. Microcomputer Simulation of a Fourier Approach to Optical Wave Propagation

    DTIC Science & Technology

    1992-06-01

    Figure 21. SHFT OUTPUT1 (inverse transform of product of Bessel filter and transformed input). Figure 22. SHFT OUTPUT2 (inverse transform of product of derivative filter and transformed input). Figure 23. SHFT OUTPUT (sum of SHFT OUTPUT1 ...). Figure 33. SHFT OUTPUT1 at time slice 1 (inverse transform of product of Bessel filter and transformed input).

  1. Spectral properties of the Preisach hysteresis model with random input. II. Universality classes for symmetric elementary loops

    NASA Astrophysics Data System (ADS)

    Radons, Günter

    2008-06-01

    The Preisach model with symmetric elementary hysteresis loops and uncorrelated input is treated analytically in detail. It is shown that the appearance of long-time tails in the output correlations is a quite general feature of this model. The exponent η of the algebraic decay t^(-η), which may take any positive value, is determined by the tails of the input and the Preisach density. We identify the system classes leading to identical algebraic tails. These results imply the occurrence of 1/f noise for a large class of hysteretic systems.
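
    A minimal numerical counterpart of such a model is easy to set up: a population of symmetric hysterons driven by an i.i.d. input, with the output autocorrelation estimated from the simulated trace. The uniform threshold density, weights, and sample sizes below are illustrative only and are not chosen to reproduce the exponents derived in the paper.

        # Minimal Preisach model with symmetric elementary loops (hysterons switching
        # at +alpha / -alpha) driven by i.i.d. random input, plus a numerical estimate
        # of the output autocorrelation.  Densities and sample sizes are illustrative.
        import numpy as np

        rng = np.random.default_rng(1)
        n_hyst = 2000
        alpha  = rng.uniform(0.0, 1.0, n_hyst)   # symmetric switching thresholds
        weight = np.full(n_hyst, 1.0 / n_hyst)   # Preisach weights (uniform density)
        state  = -np.ones(n_hyst)                # all hysterons start "down"

        T = 20000
        u = rng.uniform(-1.0, 1.0, T)            # uncorrelated input sequence
        y = np.empty(T)
        for t in range(T):
            state[u[t] >=  alpha] =  1.0         # switch up above +alpha
            state[u[t] <= -alpha] = -1.0         # switch down below -alpha
            y[t] = np.dot(weight, state)

        # biased autocorrelation estimate of the output
        yc = y - y.mean()
        lags = np.arange(1, 200)
        acf = np.array([np.dot(yc[:-k], yc[k:]) for k in lags]) / np.dot(yc, yc)
        print("output autocorrelation at lags 1, 10, 100:", acf[0], acf[9], acf[99])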

  2. Research on the output bit error rate of 2DPSK signal based on stochastic resonance theory

    NASA Astrophysics Data System (ADS)

    Yan, Daqin; Wang, Fuzhong; Wang, Shuo

    2017-12-01

    Binary differential phase-shift keying (2DPSK) signals are mainly used for high-speed data transmission. However, the bit error rate of a digital signal receiver is high in poor channel environments. To address this, a novel method based on stochastic resonance (SR) is proposed that aims to reduce the bit error rate of 2DPSK signals received by coherent demodulation. Following SR theory, a nonlinear receiver model is established to receive 2DPSK signals at low signal-to-noise ratios (SNR, between -15 dB and 5 dB) and is compared with the conventional demodulation method. The experimental results demonstrate that when the input SNR is in the range of -15 dB to 5 dB, the output bit error rate of the SR-based nonlinear system model declines significantly compared with the conventional model; it is reduced by 86.15% when the input SNR equals -7 dB. Meanwhile, the peak value of the output signal spectrum is 4.25 times that of the conventional model. Consequently, the output signal of the system is more likely to be detected and the accuracy can be greatly improved.
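
    The toy sketch below only shows the general structure of such a receiver: a noisy antipodal symbol stream is passed through an overdamped bistable (double-well) system before symbol decisions are made, and the error counts are compared with a plain averaging receiver. It is not the paper's 2DPSK coherent-demodulation scheme; the well parameters and noise level are arbitrary and untuned, so no performance conclusion should be drawn from it.

        # Toy bistable pre-processor for a noisy antipodal symbol stream.  This is
        # *not* the 2DPSK receiver of the paper; parameters are arbitrary and untuned.
        import numpy as np

        rng = np.random.default_rng(2)
        a, b   = 1.0, 1.0          # double-well potential U(x) = -a x^2/2 + b x^4/4
        dt     = 1e-3
        n_sym  = 2000
        spb    = 200               # samples per symbol
        A      = 0.1               # signal amplitude (small relative to the noise)
        sigma  = 1.0               # noise standard deviation per sample

        bits = rng.integers(0, 2, n_sym)
        s    = np.repeat(2.0*bits - 1.0, spb) * A          # antipodal waveform
        r    = s + sigma * rng.normal(size=s.size)         # noisy received waveform

        # pass the received waveform through dx/dt = a x - b x^3 + r(t)  (Euler)
        x = np.zeros_like(r)
        for k in range(1, r.size):
            x[k] = x[k-1] + dt * (a*x[k-1] - b*x[k-1]**3 + r[k-1])

        def decide(wave):
            # decide each symbol by the sign of the per-symbol average
            return (wave.reshape(n_sym, spb).mean(axis=1) > 0).astype(int)

        ber_linear   = np.mean(decide(r) != bits)   # plain averaging receiver
        ber_bistable = np.mean(decide(x) != bits)   # bistable pre-processing receiver
        print(f"BER, linear averaging: {ber_linear:.3f}; "
              f"after bistable system: {ber_bistable:.3f}")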

  3. Global robust output regulation control for cascaded nonlinear systems using the internal model principle

    NASA Astrophysics Data System (ADS)

    Yu, Jiang-Bo; Zhao, Yan; Wu, Yu-Qiang

    2014-04-01

    This article considers the global robust output regulation problem via output feedback for a class of cascaded nonlinear systems with input-to-state stable inverse dynamics. The system uncertainties depend not only on the measured output but also all the unmeasurable states. By introducing an internal model, the output regulation problem is converted into a stabilisation problem for an appropriately augmented system. The designed dynamic controller could achieve the global asymptotic tracking control for a class of time-varying reference signals for the system output while keeping all other closed-loop signals bounded. It is of interest to note that the developed control approach can be applied to the speed tracking control of the fan speed control system. The simulation results demonstrate its effectiveness.

  4. Detection of no-model input-output pairs in closed-loop systems.

    PubMed

    Potts, Alain Segundo; Alvarado, Christiam Segundo Morales; Garcia, Claudio

    2017-11-01

    The detection of no-model input-output (IO) pairs is important because it can speed up the multivariable system identification process, since all pairs with null transfer functions are discarded beforehand, and it can also improve the quality of the identified model, thus improving the performance of model-based controllers. In the available literature, the methods focus only on the open-loop case, where the controller does not force the main diagonal of the transfer matrix toward one and the off-diagonal terms toward zero. This paper presents a modification of a previous method for detecting no-model IO pairs in open-loop systems, adapted to perform this task in closed-loop systems. Tests are performed by using the traditional methods and the proposed one to show its effectiveness. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  5. Java-based Graphical User Interface for MAVERIC-II

    NASA Technical Reports Server (NTRS)

    Seo, Suk Jai

    2005-01-01

    A computer program entitled "Marshall Aerospace Vehicle Representation in C II, (MAVERIC-II)" is a vehicle flight simulation program written primarily in the C programming language. It was written by James W. McCarter at NASA/Marshall Space Flight Center. The goal of the MAVERIC-II development effort is to provide a simulation tool that facilitates the rapid development of high-fidelity flight simulations for launch, orbital, and reentry vehicles of any user-defined configuration for all phases of flight. MAVERIC-II has been found invaluable in performing flight simulations for various Space Transportation Systems. The flexibility provided by MAVERIC-II has allowed several different launch vehicles, including the Saturn V, a Space Launch Initiative Two-Stage-to-Orbit concept and a Shuttle-derived launch vehicle, to be simulated during ascent and portions of on-orbit flight in an extremely efficient manner. It was found that MAVERIC-II provided the high fidelity vehicle and flight environment models as well as the program modularity to allow efficient integration, modification and testing of advanced guidance and control algorithms. In addition to serving as an analysis tool for technology development, many researchers have found MAVERIC-II to be an efficient, powerful analysis tool that evaluates guidance, navigation, and control designs, vehicle robustness, and requirements. MAVERIC-II is currently designed to execute in a UNIX environment. The input to the program is composed of three segments: 1) the vehicle models such as propulsion, aerodynamics, and guidance, navigation, and control; 2) the environment models such as atmosphere and gravity; and 3) a simulation framework which is responsible for executing the vehicle and environment models, propagating the vehicle's states forward in time, and handling user input/output. MAVERIC users prepare data files for the above models and run the simulation program. They can see the output on screen and/or store it in files and examine the output data later. Users can also view the output stored in output files by calling a plotting program such as gnuplot. A typical scenario of the use of MAVERIC consists of three steps: editing existing input data files, running MAVERIC, and plotting output results.

  6. Sparse Polynomial Chaos Surrogate for ACME Land Model via Iterative Bayesian Compressive Sensing

    NASA Astrophysics Data System (ADS)

    Sargsyan, K.; Ricciuto, D. M.; Safta, C.; Debusschere, B.; Najm, H. N.; Thornton, P. E.

    2015-12-01

    For computationally expensive climate models, Monte-Carlo approaches of exploring the input parameter space are often prohibitive due to slow convergence with respect to ensemble size. To alleviate this, we build inexpensive surrogates using uncertainty quantification (UQ) methods employing Polynomial Chaos (PC) expansions that approximate the input-output relationships using as few model evaluations as possible. However, when many uncertain input parameters are present, such UQ studies suffer from the curse of dimensionality. In particular, for 50-100 input parameters non-adaptive PC representations have infeasible numbers of basis terms. To this end, we develop and employ Weighted Iterative Bayesian Compressive Sensing to learn the most important input parameter relationships for efficient, sparse PC surrogate construction with posterior uncertainty quantified due to insufficient data. Besides drastic dimensionality reduction, the uncertain surrogate can efficiently replace the model in computationally intensive studies such as forward uncertainty propagation and variance-based sensitivity analysis, as well as design optimization and parameter estimation using observational data. We applied the surrogate construction and variance-based uncertainty decomposition to Accelerated Climate Model for Energy (ACME) Land Model for several output QoIs at nearly 100 FLUXNET sites covering multiple plant functional types and climates, varying 65 input parameters over broad ranges of possible values. This work is supported by the U.S. Department of Energy, Office of Science, Biological and Environmental Research, Accelerated Climate Modeling for Energy (ACME) project. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
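
    A much simpler stand-in for this construction is sparse regression on a polynomial basis, which already illustrates the surrogate idea of the abstract: few model runs, many candidate basis terms, and a sparsity-inducing fit. The sketch below uses Lasso rather than weighted iterative Bayesian compressive sensing, and the "model" is a cheap synthetic function rather than the ACME Land Model.

        # Sparse polynomial surrogate of an "expensive" model via Lasso-regularised
        # regression, a simple stand-in for the Bayesian compressive-sensing
        # construction described in the abstract.  The function f is synthetic.
        import numpy as np
        from sklearn.preprocessing import PolynomialFeatures
        from sklearn.linear_model import Lasso
        from sklearn.pipeline import make_pipeline

        rng = np.random.default_rng(3)
        d, n_train = 20, 150                     # 20 uncertain parameters, few runs

        def f(X):                                # synthetic stand-in for the model
            return (3*X[:, 0] - 2*X[:, 3]**2 + 0.5*X[:, 0]*X[:, 7]
                    + 0.1*rng.normal(size=len(X)))

        X = rng.uniform(-1, 1, (n_train, d))     # parameters scaled to [-1, 1]
        y = f(X)

        surrogate = make_pipeline(PolynomialFeatures(degree=2, include_bias=True),
                                  Lasso(alpha=1e-3, max_iter=50000))
        surrogate.fit(X, y)

        n_active = np.sum(surrogate.named_steps['lasso'].coef_ != 0)
        n_basis  = surrogate.named_steps['polynomialfeatures'].n_output_features_
        X_test = rng.uniform(-1, 1, (1000, d))
        err = np.sqrt(np.mean((surrogate.predict(X_test) - f(X_test))**2))
        print(f"{n_active} active basis terms out of {n_basis}, test RMSE {err:.3f}")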

  7. Aircraft Fault Detection Using Real-Time Frequency Response Estimation

    NASA Technical Reports Server (NTRS)

    Grauer, Jared A.

    2016-01-01

    A real-time method for estimating time-varying aircraft frequency responses from input and output measurements was demonstrated. The Bat-4 subscale airplane was used with NASA Langley Research Center's AirSTAR unmanned aerial flight test facility to conduct flight tests and collect data for dynamic modeling. Orthogonal phase-optimized multisine inputs, summed with pilot stick and pedal inputs, were used to excite the responses. The aircraft was tested in its normal configuration and with emulated failures, which included a stuck left ruddervator and an increased command path latency. No prior knowledge of a dynamic model was used or available for the estimation. The longitudinal short period dynamics were investigated in this work. Time-varying frequency responses and stability margins were tracked well using a 20 second sliding window of data, as compared to a post-flight analysis using output error parameter estimation and a low-order equivalent system model. This method could be used in a real-time fault detection system, or for other applications of dynamic modeling such as real-time verification of stability margins during envelope expansion tests.
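
    The core estimate behind such a method is the ratio of output to input Fourier coefficients at the multisine excitation frequencies, recomputed over a sliding window of data. The sketch below demonstrates that estimate on a made-up second-order system; the excitation frequencies, window length, and noise level are assumptions, and the paper's recursive real-time implementation is not reproduced.

        # Frequency-response estimation at multisine excitation frequencies from a
        # sliding window of input/output data, using the ratio of Fourier
        # coefficients Y(f)/U(f).  The system and frequencies are arbitrary examples.
        import numpy as np
        from scipy import signal

        fs, T = 100.0, 60.0                      # sample rate (Hz), record length (s)
        t = np.arange(0, T, 1/fs)
        freqs = np.array([0.5, 1.0, 2.0, 4.0])   # multisine frequencies (Hz)
        u = sum(np.cos(2*np.pi*f*t + 2*np.pi*k/len(freqs))
                for k, f in enumerate(freqs))

        # "true" system: lightly damped second-order response plus measurement noise
        sys = signal.TransferFunction([4.0], [1.0, 0.8, 4.0])
        _, y, _ = signal.lsim(sys, u, t)
        y += 0.05 * np.random.default_rng(4).normal(size=y.size)

        win = int(20 * fs)                       # 20 s sliding window
        bins = np.round(freqs * win / fs).astype(int)    # FFT bins of the lines
        for start in range(0, len(t) - win + 1, win // 2):
            U = np.fft.rfft(u[start:start+win])
            Y = np.fft.rfft(y[start:start+win])
            H = Y[bins] / U[bins]
            mag_db = 20*np.log10(np.abs(H))
            print(f"t = {start/fs:5.1f}-{(start+win)/fs:5.1f} s, "
                  f"|H| at {freqs} Hz = {np.round(mag_db, 1)} dB")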

  8. Method and apparatus for measuring response time

    DOEpatents

    Johanson, Edward W.; August, Charles

    1985-01-01

    A method of measuring the response time of an electrical instrument which generates an output signal in response to the application of a specified input, wherein the output signal varies as a function of time and when subjected to a step input approaches a steady-state value, comprises the steps of: (a) applying a step input of predetermined value to the electrical instrument to generate an output signal; (b) simultaneously starting a timer; (c) comparing the output signal to a reference signal to generate a stop signal when the output signal is substantially equal to the reference signal, the reference signal being a specified percentage of the steady-state value of the output signal corresponding to the predetermined value of the step input; and (d) applying the stop signal when generated to stop the timer.
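
    A software analogue of steps (a) through (d) is straightforward: start a clock at the step, then report the elapsed time when the sampled output first reaches a specified fraction of its steady-state value. The threshold fraction and the synthetic first-order response below are arbitrary choices for illustration.

        # Software analogue of the patented procedure: apply a step, then report the
        # elapsed time at which the output first reaches a specified fraction of its
        # steady-state value (e.g. 63.2% for a first-order instrument).
        import numpy as np

        def response_time(t, y, fraction=0.632):
            """Time at which y first crosses fraction * steady-state value."""
            y_ss = y[-1]                              # steady-state estimate
            idx = np.argmax(y >= fraction * y_ss)     # first crossing index
            return t[idx]

        t = np.linspace(0.0, 5.0, 5001)               # s
        tau = 0.8                                     # true time constant (s)
        y = 1.0 - np.exp(-t / tau)                    # synthetic step response
        print(f"measured response time: {response_time(t, y):.3f} s (tau = {tau} s)")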

  9. Method and apparatus for measuring response time

    DOEpatents

    Johanson, E.W.; August, C.

    1983-08-11

    A method of measuring the response time of an electrical instrument which generates an output signal in response to the application of a specified input, wherein the output signal varies as a function of time and when subjected to a step input approaches a steady-state value, comprises the steps of: (a) applying a step input of predetermined value to the electrical instrument to generate an output signal; (b) simultaneously starting a timer; (c) comparing the output signal to a reference signal to generate a stop signal when the output signal is substantially equal to the reference signal, the reference signal being a specified percentage of the steady-state value of the output signal corresponding to the predetermined value of the step input; and (d) applying the stop signal when generated to stop the timer.

  10. Current Source Logic Gate

    NASA Technical Reports Server (NTRS)

    Krasowski, Michael J. (Inventor); Prokop, Norman F. (Inventor)

    2017-01-01

    A current source logic gate with depletion mode field effect transistor ("FET") transistors and resistors may include a current source, a current steering switch input stage, and a resistor divider level shifting output stage. The current source may include a transistor and a current source resistor. The current steering switch input stage may include a transistor to steer current to set an output stage bias point depending on an input logic signal state. The resistor divider level shifting output stage may include a first resistor and a second resistor to set the output stage point and produce valid output logic signal states. The transistor of the current steering switch input stage may function as a switch to provide at least two operating points.

  11. 2 micron femtosecond fiber laser

    DOEpatents

    Liu, Jian; Wan, Peng; Yang, Lihmei

    2014-07-29

    Methods and systems for generating femtosecond fiber laser pulses are disclosed, including generating a signal laser pulse from a seed laser oscillator; using a first amplifier stage comprising an input and an output, wherein the signal laser pulse is coupled into the input of the first stage amplifier and the output of the first amplifier stage emits an amplified and stretched signal laser pulse; using an amplifier chain comprising an input and an output, wherein the amplified and stretched signal laser pulse from the output of the first amplifier stage is coupled into the input of the amplifier chain and the output of the amplifier chain emits a further amplified, stretched signal laser pulse. Other embodiments are described and claimed.

  12. On-chip remote charger model using plasmonic island circuit

    NASA Astrophysics Data System (ADS)

    Ali, J.; Youplao, P.; Pornsuwancharoen, N.; Aziz, M. S.; Chiangga, S.; Amiri, I. S.; Punthawanunt, S.; Singh, G.; Yupapin, P.

    2018-06-01

    We propose a remote charger model using light-fidelity (LiFi) transmission and an integrated microring resonator circuit. It consists of stacked layers of silicon-graphene-gold materials, known as a plasmonic island, placed at the center of a modified add-drop filter. The input light power from the remote LiFi source can enter the island via a silicon waveguide. The optimized input power is obtained by a micro-lens coupled to the silicon surface. Electron mobility is induced in the gold layer through the silicon-graphene interface; this is the reverse interaction of the whispering-gallery-mode light power of the microring system, in which the generated power is fed back into the microring circuit. The electron mobility is the required output, obtained at the device ports and characterized for remote current source applications. The calculated results show that an output current of ∼2.5 × 10^-11 A W^-1 is obtained at the output port for a gold height of 1.0 μm and an input power of 5.0 W, which demonstrates the potential application as a short-range free-space remote charger.

  13. Multiscale Methods for Accurate, Efficient, and Scale-Aware Models of the Earth System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goldhaber, Steve; Holland, Marika

    The major goal of this project was to contribute improvements to the infrastructure of an Earth System Model in order to support research in the Multiscale Methods for Accurate, Efficient, and Scale-Aware models of the Earth System project. In support of this, the NCAR team accomplished two main tasks: improving input/output performance of the model and improving atmospheric model simulation quality. Improvement of the performance and scalability of data input and diagnostic output within the model required a new infrastructure which can efficiently handle the unstructured grids common in multiscale simulations. This allows for a more computationally efficient model, enabling more years of Earth System simulation. The quality of the model simulations was improved by reducing grid-point noise in the spectral element version of the Community Atmosphere Model (CAM-SE). This was achieved by running the physics of the model using grid-cell data on a finite-volume grid.

  14. Reduced-order modeling for hyperthermia control.

    PubMed

    Potocki, J K; Tharp, H S

    1992-12-01

    This paper analyzes the feasibility of using reduced-order modeling techniques in the design of multiple-input, multiple-output (MIMO) hyperthermia temperature controllers. State space thermal models are created based upon a finite difference expansion of the bioheat transfer equation model of a scanned focused ultrasound system (SFUS). These thermal state space models are reduced using the balanced realization technique, and an order reduction criterion is tabulated. Results show that a drastic reduction in model dimension can be achieved using the balanced realization. The reduced-order model is then used to design a reduced-order optimal servomechanism controller for a two-scan input, two thermocouple output tissue model. In addition, a full-order optimal servomechanism controller is designed for comparison and validation purposes. These two controllers are applied to a variety of perturbed tissue thermal models to test the robust nature of the reduced-order controller. A comparison of the two controllers validates the use of open-loop balanced reduced-order models in the design of MIMO hyperthermia controllers.
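
    The balanced-realization reduction referred to here can be sketched in a few lines with the square-root algorithm: solve the two Lyapunov equations for the Gramians, balance, and truncate at the small Hankel singular values. The toy state-space model below (diagonal dynamics with two inputs and two outputs) is only a placeholder, not a bioheat or hyperthermia model.

        # Square-root balanced truncation of a toy stable state-space model.
        import numpy as np
        from scipy import linalg

        n, r = 8, 3                                    # full and reduced orders
        A = -np.diag(np.arange(1.0, n + 1))            # stable diagonal dynamics (placeholder)
        B = np.ones((n, 2))                            # two inputs (e.g. scan trajectories)
        C = np.ones((2, n))                            # two outputs (e.g. thermocouples)

        # controllability / observability Gramians:  A Wc + Wc A' + B B' = 0, etc.
        Wc = linalg.solve_continuous_lyapunov(A, -B @ B.T)
        Wo = linalg.solve_continuous_lyapunov(A.T, -C.T @ C)

        Lc = np.linalg.cholesky(Wc)
        Lo = np.linalg.cholesky(Wo)
        U, s, Vt = np.linalg.svd(Lo.T @ Lc)            # s = Hankel singular values

        T    = Lc @ Vt.T @ np.diag(s**-0.5)            # balancing transformation
        Tinv = np.diag(s**-0.5) @ U.T @ Lo.T
        Ab, Bb, Cb = Tinv @ A @ T, Tinv @ B, C @ T

        Ar, Br, Cr = Ab[:r, :r], Bb[:r, :], Cb[:, :r]  # keep the r dominant states
        print("Hankel singular values:", np.round(s, 6))
        print("reduced model dimensions:", Ar.shape, Br.shape, Cr.shape)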

  15. 40 CFR 1065.210 - Work input and output sensors.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 32 2010-07-01 2010-07-01 false Work input and output sensors. 1065... Ambient Conditions § 1065.210 Work input and output sensors. (a) Application. Use instruments as specified... sensors, transducers, and meters that meet the specifications in Table 1 of § 1065.205. Note that your...

  16. 40 CFR 1065.210 - Work input and output sensors.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 33 2014-07-01 2014-07-01 false Work input and output sensors. 1065... Ambient Conditions § 1065.210 Work input and output sensors. (a) Application. Use instruments as specified... sensors, transducers, and meters that meet the specifications in Table 1 of § 1065.205. Note that your...

  17. 40 CFR 1065.210 - Work input and output sensors.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 34 2013-07-01 2013-07-01 false Work input and output sensors. 1065... Ambient Conditions § 1065.210 Work input and output sensors. (a) Application. Use instruments as specified... sensors, transducers, and meters that meet the specifications in Table 1 of § 1065.205. Note that your...

  18. Transforming the Way We Teach Function Transformations

    ERIC Educational Resources Information Center

    Faulkenberry, Eileen Durand; Faulkenberry, Thomas J.

    2010-01-01

    In this article, the authors discuss "function," a well-defined rule that relates inputs to outputs. They have found that by using the input-output definition of "function," they can examine transformations of functions simply by looking at changes to input or output and the respective changes to the graph. Applying transformations to the input…

  19. Methods to Register Models and Input/Output Parameters for Integrated Modeling

    EPA Science Inventory

    Significant resources can be required when constructing integrated modeling systems. In a typical application, components (e.g., models and databases) created by different developers are assimilated, requiring the framework’s functionality to bridge the gap between the user’s kno...

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chinthavali, Madhu Sudhan; Campbell, Steven L

    This paper presents an analytical model for a wireless power transfer system used in electric vehicle applications. The equivalent circuit model for each major component of the system is described, including the input voltage source, resonant network, transformer, nonlinear diode rectifier load, etc. Based on the circuit model, the primary-side compensation capacitance, equivalent input impedance, and active/reactive power are calculated, which provides a guideline for parameter selection. Moreover, the voltage gain curve from dc output to dc input is derived as well. A hardware prototype with a series-parallel resonant stage is built to verify the developed model. The experimental results from the hardware are compared with the model-predicted results to show the validity of the model.

  1. 670 GHz Schottky Diode Based Subharmonic Mixer with CPW Circuits and 70 GHz IF

    NASA Technical Reports Server (NTRS)

    Chattopadhyay, Goutam (Inventor); Schlecht, Erich T. (Inventor); Lee, Choonsup (Inventor); Lin, Robert H. (Inventor); Gill, John J. (Inventor); Sin, Seth (Inventor); Mehdi, Imran (Inventor)

    2014-01-01

    A coplanar waveguide (CPW) based subharmonic mixer working at 670 GHz using GaAs Schottky diodes. One example of the mixer has an LO input, an RF input, and an IF output. Another possible mixer has an LO input, an IF input, and an RF output. Each input or output is connected to a coplanar waveguide with a matching network. A pair of antiparallel diodes provides a signal at twice the LO frequency, which is then mixed with a second signal to provide signals having sum and difference frequencies. The output signal of interest is received after passing through a bandpass filter tuned to the frequency range of interest.

  2. The SYSGEN user package

    NASA Technical Reports Server (NTRS)

    Carlson, C. R.

    1981-01-01

    The user documentation of the SYSGEN model and its links with other simulations is described. The SYSGEN is a production costing and reliability model of electric utility systems. Hydroelectric, storage, and time dependent generating units are modeled in addition to conventional generating plants. Input variables, modeling options, output variables, and reports formats are explained. SYSGEN also can be run interactively by using a program called FEPS (Front End Program for SYSGEN). A format for SYSGEN input variables which is designed for use with FEPS is presented.

  3. Regenerative braking device with rotationally mounted energy storage means

    DOEpatents

    Hoppie, Lyle O.

    1982-03-16

    A regenerative braking device for an automotive vehicle includes an energy storage assembly (12) having a plurality of rubber rollers (26, 28) mounted for rotation between an input shaft (30) and an output shaft (32), clutches (50, 56) and brakes (52, 58) associated with each shaft, and a continuously variable transmission (22) connectable to a vehicle drivetrain and to the input and output shafts by the respective clutches. In a second embodiment the clutches and brakes are dispensed with and the variable ratio transmission is connected directly across the input and output shafts. In both embodiments the rubber rollers are torsionally stressed to accumulate energy from the vehicle when the input shaft rotates faster or relative to the output shaft and are torsionally relaxed to deliver energy to the vehicle when the output shaft rotates faster or relative to the input shaft.

  4. Dual side control for inductive power transfer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Hunter; Sealy, Kylee; Gilchrist, Aaron

    An apparatus for dual side control includes a measurement module that measures a voltage and a current of an IPT system. The voltage includes an output voltage and/or an input voltage and the current includes an output current and/or an input current. The output voltage and the output current are measured at an output of the IPT system and the input voltage and the input current measured at an input of the IPT system. The apparatus includes a max efficiency module that determines a maximum efficiency for the IPT system. The max efficiency module uses parameters of the IPT system to iterate to a maximum efficiency. The apparatus includes an adjustment module that adjusts one or more parameters in the IPT system consistent with the maximum efficiency calculated by the max efficiency module.

  5. Optimization Under Uncertainty for Electronics Cooling Design

    NASA Astrophysics Data System (ADS)

    Bodla, Karthik K.; Murthy, Jayathi Y.; Garimella, Suresh V.

    Optimization under uncertainty is a powerful methodology used in design and optimization to produce robust, reliable designs. Such an optimization methodology, employed when the input quantities of interest are uncertain, produces output uncertainties, helping the designer choose input parameters that would result in satisfactory thermal solutions. Apart from providing basic statistical information such as mean and standard deviation in the output quantities, auxiliary data from an uncertainty based optimization, such as local and global sensitivities, help the designer decide the input parameter(s) to which the output quantity of interest is most sensitive. This helps the design of experiments based on the most sensitive input parameter(s). A further crucial output of such a methodology is the solution to the inverse problem - finding the allowable uncertainty range in the input parameter(s), given an acceptable uncertainty range in the output quantity of interest...

  6. Laminated piezoelectric transformer

    NASA Technical Reports Server (NTRS)

    Vazquez Carazo, Alfredo (Inventor)

    2006-01-01

    A laminated piezoelectric transformer is provided using the longitudinal vibration modes for step-up voltage conversion applications. The input portions are polarized to deform in a longitudinal plane and are bonded to an output portion. The deformation of the input portions is mechanically coupled to the output portion, which deforms in the same longitudinal direction relative to the input portion. The output portion is polarized in the thickness direction relative its electrodes, and piezoelectrically generates a stepped-up output voltage.

  7. Analysis of fMRI data using noise-diffusion network models: a new covariance-coding perspective.

    PubMed

    Gilson, Matthieu

    2018-04-01

    Since the middle of the 1990s, studies of resting-state fMRI/BOLD data have explored the correlation patterns of activity across the whole brain, which is referred to as functional connectivity (FC). Among the many methods that have been developed to interpret FC, a recently proposed model-based approach describes the propagation of fluctuating BOLD activity within the recurrently connected brain network by inferring the effective connectivity (EC). In this model, EC quantifies the strengths of directional interactions between brain regions, viewed from the proxy of BOLD activity. In addition, the tuning procedure for the model provides estimates for the local variability (input variances) to explain how the observed FC is generated. Generalizing, the network dynamics can be studied in the context of an input-output mapping, determined by EC, for the second-order statistics of fluctuating nodal activities. The present paper focuses on the following detection paradigm: observing output covariances, how discriminative is the (estimated) network model with respect to various input covariance patterns? An application with the model fitted to experimental fMRI data (movie viewing versus resting state) illustrates that changes in local variability and changes in brain coordination go hand in hand.
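
    For a linear noise-diffusion (multivariate Ornstein-Uhlenbeck) network, the input-output mapping mentioned above is explicit: given EC and the input variances, the model covariance (FC) follows from a Lyapunov equation for the network Jacobian. The sketch below assumes an arbitrary EC matrix, time constant, and input covariance; none of these values come from fMRI data.

        # Zero-lag covariance (model FC) of a linear noise-diffusion network
        # dx = (-x / tau + EC @ x) dt + noise, from the Lyapunov equation
        # J S + S J' + Q = 0.  EC, tau and Q below are arbitrary illustrations.
        import numpy as np
        from scipy import linalg

        rng = np.random.default_rng(5)
        n, tau = 6, 1.0
        EC = rng.uniform(0.0, 0.15, (n, n))            # directed effective connectivity
        np.fill_diagonal(EC, 0.0)                      # no self-connections
        Q = np.diag(rng.uniform(0.5, 1.5, n))          # input (local variability) covariance

        J = -np.eye(n) / tau + EC                      # Jacobian of the network dynamics
        assert np.all(np.real(np.linalg.eigvals(J)) < 0), "network must be stable"

        S = linalg.solve_continuous_lyapunov(J, -Q)    # model covariance (FC)
        FC = S / np.sqrt(np.outer(np.diag(S), np.diag(S)))   # correlation version
        print("model FC (correlations):")
        print(np.round(FC, 2))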

  8. User Guide for HUFPrint, A Tabulation and Visualization Utility for the Hydrogeologic-Unit Flow (HUF) Package of MODFLOW

    USGS Publications Warehouse

    Banta, Edward R.; Provost, Alden M.

    2008-01-01

    This report documents HUFPrint, a computer program that extracts and displays information about model structure and hydraulic properties from the input data for a model built using the Hydrogeologic-Unit Flow (HUF) Package of the U.S. Geological Survey's MODFLOW program for modeling ground-water flow. HUFPrint reads the HUF Package and other MODFLOW input files, processes the data by hydrogeologic unit and by model layer, and generates text and graphics files useful for visualizing the data or for further processing. For hydrogeologic units, HUFPrint outputs such hydraulic properties as horizontal hydraulic conductivity along rows, horizontal hydraulic conductivity along columns, horizontal anisotropy, vertical hydraulic conductivity or anisotropy, specific storage, specific yield, and hydraulic-conductivity depth-dependence coefficient. For model layers, HUFPrint outputs such effective hydraulic properties as horizontal hydraulic conductivity along rows, horizontal hydraulic conductivity along columns, horizontal anisotropy, specific storage, primary direction of anisotropy, and vertical conductance. Text files tabulating hydraulic properties by hydrogeologic unit, by model layer, or in a specified vertical section may be generated. Graphics showing two-dimensional cross sections and one-dimensional vertical sections at specified locations also may be generated. HUFPrint reads input files designed for MODFLOW-2000 or MODFLOW-2005.

  9. Wind Farm Flow Modeling using an Input-Output Reduced-Order Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Annoni, Jennifer; Gebraad, Pieter; Seiler, Peter

    Wind turbines in a wind farm operate individually to maximize their own power regardless of the impact of aerodynamic interactions on neighboring turbines. There is the potential to increase power and reduce overall structural loads by properly coordinating turbines. To perform control design and analysis, a model needs to be of low computational cost but retain the necessary dynamics seen in high-fidelity models. The objective of this work is to obtain a reduced-order model that represents the full-order flow computed using a high-fidelity model. A variety of methods, including proper orthogonal decomposition and dynamic mode decomposition, can be used to extract the dominant flow structures and obtain a reduced-order model. In this paper, we combine proper orthogonal decomposition with a system identification technique to produce an input-output reduced-order model. This technique is used to construct a reduced-order model of the flow within a two-turbine array computed using a large-eddy simulation.
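
    The proper-orthogonal-decomposition step of such a construction amounts to an SVD of a mean-subtracted snapshot matrix followed by projection onto the leading modes. The snapshots below are synthetic, and the subsequent system-identification step that yields the input-output model is only indicated in the comments.

        # POD step of a reduced-order flow model: collect snapshots, subtract the
        # mean, take an SVD, and project onto the leading modes.  The "flow" below is
        # synthetic; in the paper the snapshots come from large-eddy simulation of a
        # two-turbine array, and a system-identification step (not shown) then fits
        # an input-output model to the modal coefficients.
        import numpy as np

        rng = np.random.default_rng(6)
        n_grid, n_snap = 500, 120
        x = np.linspace(0, 1, n_grid)[:, None]
        t = np.linspace(0, 10, n_snap)[None, :]

        # synthetic snapshot matrix: two travelling structures plus noise
        snapshots = (np.sin(2*np.pi*(x - 0.1*t)) + 0.3*np.cos(6*np.pi*(x + 0.05*t))
                     + 0.02*rng.normal(size=(n_grid, n_snap)))

        mean_flow = snapshots.mean(axis=1, keepdims=True)
        U, s, Vt = np.linalg.svd(snapshots - mean_flow, full_matrices=False)

        energy = np.cumsum(s**2) / np.sum(s**2)
        r = int(np.searchsorted(energy, 0.99) + 1)        # modes for 99% energy
        modes  = U[:, :r]                                  # POD basis
        coeffs = modes.T @ (snapshots - mean_flow)         # modal time coefficients
        print(f"{r} POD modes capture {energy[r-1]*100:.1f}% of the fluctuation energy")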

  10. The Budget and Economic Outlook: Fiscal Years 2008 to 2017

    DTIC Science & Technology

    2007-01-01

    of output to hours worked in the nonfarm business sector. Growth of potential output and its components (nonfarm business sector and overall economy, including TFP adjustments, the potential labor force, and potential hours worked), by subperiod, 1950 to 2013. ... The primary labor input in CBO’s model for potential output, potential hours worked in the nonfarm business sector, is projected to grow at an average

  11. An update of input instructions to TEMOD

    NASA Technical Reports Server (NTRS)

    1973-01-01

    The theory and operation of a FORTRAN 4 computer code, designated as TEMOD, used to calcuate tubular thermoelectric generator performance is described in WANL-TME-1906. The original version of TEMOD was developed in 1969. A description is given of additions to the mathematical model and an update of the input instructions to the code. Although the basic mathematical model described in WANL-TME-1906 has remained unchanged, a substantial number of input/output options were added to allow completion of module performance parametrics as required in support of the compact thermoelectric converter system technology program.

  12. Forecasting hotspots using predictive visual analytics approach

    DOEpatents

    Maciejewski, Ross; Hafen, Ryan; Rudolph, Stephen; Cleveland, William; Ebert, David

    2014-12-30

    A method for forecasting hotspots is provided. The method may include the steps of receiving input data at an input of the computational device, generating a temporal prediction based on the input data, generating a geospatial prediction based on the input data, and generating output data based on the time series and geospatial predictions. The output data may be configured to display at least one user interface at an output of the computational device.

  13. Simple, accurate formula for the average bit error probability of multiple-input multiple-output free-space optical links over negative exponential turbulence channels.

    PubMed

    Peppas, Kostas P; Lazarakis, Fotis; Alexandridis, Antonis; Dangakis, Kostas

    2012-08-01

    In this Letter we investigate the error performance of multiple-input multiple-output free-space optical communication systems employing intensity modulation/direct detection and operating over strong atmospheric turbulence channels. Atmospheric-induced strong turbulence fading is modeled using the negative exponential distribution. For the considered system, an approximate yet accurate analytical expression for the average bit error probability is derived and an efficient method for its numerical evaluation is proposed. Numerically evaluated and computer simulation results are further provided to demonstrate the validity of the proposed mathematical analysis.
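
    A quick numerical cross-check of this kind of average is shown below for a single link, assuming the common convention that the conditional bit-error probability given irradiance h is Q(h*sqrt(gamma)); the normalization of gamma is a modelling assumption, and the MIMO closed form of the Letter is not reproduced.

        # Monte Carlo versus quadrature for the average bit-error probability of a
        # single IM/DD link over negative exponential turbulence, assuming the
        # conditional error probability BER(h) = Q(h * sqrt(gamma)).  This is only a
        # sanity check of the averaging step, not the Letter's MIMO result.
        import numpy as np
        from scipy.stats import norm
        from scipy.integrate import quad

        def avg_ber_mc(gamma_db, n=2_000_000, seed=7):
            gamma = 10**(gamma_db / 10)
            h = np.random.default_rng(seed).exponential(1.0, n)   # E[h] = 1 fading
            return np.mean(norm.sf(h * np.sqrt(gamma)))           # Q(x) = norm.sf(x)

        def avg_ber_quad(gamma_db):
            gamma = 10**(gamma_db / 10)
            integrand = lambda h: norm.sf(h * np.sqrt(gamma)) * np.exp(-h)
            return quad(integrand, 0, np.inf)[0]

        for snr_db in (5, 10, 15, 20):
            print(f"SNR {snr_db:2d} dB: MC {avg_ber_mc(snr_db):.3e}, "
                  f"quadrature {avg_ber_quad(snr_db):.3e}")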

  14. A framework to analyze emissions implications of ...

    EPA Pesticide Factsheets

    Future year emissions depend strongly on the evolution of the economy, technology, and current and future regulatory drivers. A scenario framework was adopted to analyze various technology development pathways and societal change while considering existing regulations and future uncertainty in regulations, and to evaluate the resulting emissions growth patterns. The framework integrates EPA’s energy systems model with an economic Input-Output (I/O) Life Cycle Assessment model. The EPAUS9r MARKAL database is assembled from a set of technologies to represent the U.S. energy system within the MARKAL bottom-up, technology-rich energy modeling framework. The general state of the economy and consequent demands for goods and services from these sectors are taken exogenously in MARKAL. It is important to characterize exogenous inputs about the economy to appropriately represent the industrial sector outlook for each of the scenarios and case studies evaluated. An economic input-output (I/O) model of the US economy is constructed to link up with MARKAL. The I/O model enables the user to change input requirements (e.g. energy intensity) for different sectors or the share of consumer income expended on a given good. This gives end-users a mechanism for modeling change in the two dimensions of technological progress and consumer preferences that define the future scenarios. The framework will then be extended to include an environmental I/O framework to track life-cycle emissions associated

  15. Manual for LS-DYNA Wood Material Model 143

    DOT National Transportation Integrated Search

    2007-08-01

    An elastoplastic damage model with rate effects was developed for wood and was implemented into LS-DYNA, a commercially available finite element code. This manual documents the theory of the wood material model, describes the LS-DYNA input and output...

  16. Prediction of Layer Thickness in Molten Borax Bath with Genetic Evolutionary Programming

    NASA Astrophysics Data System (ADS)

    Taylan, Fatih

    2011-04-01

    In this study, the vanadium carbide coating process in a molten borax bath is modeled by evolutionary genetic programming (GEP) using bath composition (borax, ferrovanadium (Fe-V), and boric acid percentages), bath temperature, immersion time, and layer thickness data. The model has five inputs and one output: the borax, Fe-V, and boric acid percentages, the temperature, and the immersion time are used as input data, and the layer thickness is used as output data. For selected bath components, immersion times, and temperatures, the layer thicknesses are derived from the resulting mathematical expression. The results of the mathematical expression are compared with experimental data; the derived expression is determined to have an accuracy of 89%.

  17. Development and testing of controller performance evaluation methodology for multi-input/multi-output digital control systems

    NASA Technical Reports Server (NTRS)

    Pototzky, Anthony; Wieseman, Carol; Hoadley, Sherwood Tiffany; Mukhopadhyay, Vivek

    1991-01-01

    Described here is the development and implementation of on-line, near real time controller performance evaluation (CPE) methods capability. Briefly discussed are the structure of data flow, the signal processing methods used to process the data, and the software developed to generate the transfer functions. This methodology is generic in nature and can be used in any type of multi-input/multi-output (MIMO) digital controller application, including digital flight control systems, digitally controlled spacecraft structures, and actively controlled wind tunnel models. Results of applying the CPE methodology to evaluate (in near real time) MIMO digital flutter suppression systems being tested on the Rockwell Active Flexible Wing (AFW) wind tunnel model are presented to demonstrate the CPE capability.

  18. Adaptive nonlinear control for autonomous ground vehicles

    NASA Astrophysics Data System (ADS)

    Black, William S.

    We present the background and motivation for ground vehicle autonomy, and focus on uses for space-exploration. Using a simple design example of an autonomous ground vehicle we derive the equations of motion. After providing the mathematical background for nonlinear systems and control we present two common methods for exactly linearizing nonlinear systems, feedback linearization and backstepping. We use these in combination with three adaptive control methods: model reference adaptive control, adaptive sliding mode control, and extremum-seeking model reference adaptive control. We show the performances of each combination through several simulation results. We then consider disturbances in the system, and design nonlinear disturbance observers for both single-input-single-output and multi-input-multi-output systems. Finally, we show the performance of these observers with simulation results.

  19. Sociable Interfaces

    DTIC Science & Technology

    2005-01-01

    Interface Compatibility); the tool is written in OCaml [10], and the symbolic algorithms for interface compatibility and refinement are built on top...automata for a fire detection and reporting system ... be encoded in the input language of the tool TIC. The refinement of sociable interfaces is discussed...are closely related to the I/O Automata Language (IOA) of [11]. Interface models are games between Input and Output, and in the models, it is es

  20. Monitoring the biomechanics of a wheelchair sprinter racing the 100 m final at the 2016 Paralympic Games

    NASA Astrophysics Data System (ADS)

    Barbosa, Tiago M.; Coelho, Eduarda

    2017-07-01

    The aim was to run a case study of the biomechanics of a wheelchair sprinter racing the 100 m final at the 2016 Paralympic Games. Stroke kinematics was measured by video analysis in each 20 m split. Race kinetics was estimated by employing an analytical model that encompasses the computation of the rolling friction, drag, energy output and energy input. A maximal average speed of 6.97 m s-1 was reached in the last split. It was estimated that the contributions of the rolling friction and drag force would account for 54% and 46% of the total resistance at maximal speed, respectively. Energy input and output increased over the event. However, we failed to note a steady state or any impairment of the energy input and output in the last few metres of the race. Data suggest that the 100 m is too short an event for the sprinter to be able to achieve his maximal power in such a distance.
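
    The resistance split quoted above follows from the usual decomposition P = (mu*m*g + 0.5*rho*CdA*v^2)*v. The sketch below evaluates it at the reported maximal speed with assumed placeholder values for the mass, rolling-friction coefficient, and drag area, so the resulting percentages only roughly resemble the 54%/46% split estimated in the study.

        # Back-of-envelope split of the resistive power between rolling friction and
        # aerodynamic drag at a given speed, P = (mu*m*g + 0.5*rho*CdA*v**2) * v.
        # The mass, rolling-friction coefficient and drag area are assumed values,
        # not parameters of the athlete in the case study.
        rho, g = 1.20, 9.81            # air density (kg m^-3), gravity (m s^-2)
        m      = 75.0                  # athlete + racing chair mass (kg), assumed
        mu     = 0.012                 # rolling-friction coefficient, assumed
        CdA    = 0.28                  # drag area Cd*A (m^2), assumed
        v      = 6.97                  # maximal average speed from the abstract (m s^-1)

        F_roll = mu * m * g
        F_drag = 0.5 * rho * CdA * v**2
        total  = F_roll + F_drag
        print(f"rolling friction: {F_roll:.1f} N ({100*F_roll/total:.0f}%), "
              f"drag: {F_drag:.1f} N ({100*F_drag/total:.0f}%), "
              f"resistive power: {total*v:.0f} W")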

  1. Impact of Infralimbic Inputs on Intercalated Amygdale Neurons: A Biophysical Modeling Study

    ERIC Educational Resources Information Center

    Li, Guoshi; Amano, Taiju; Pare, Denis; Nair, Satish S.

    2011-01-01

    Intercalated (ITC) amygdala neurons regulate fear expression by controlling impulse traffic between the input (basolateral amygdala; BLA) and output (central nucleus; Ce) stations of the amygdala for conditioned fear responses. Previously, stimulation of the infralimbic (IL) cortex was found to reduce fear expression and the responsiveness of Ce…

  2. Characteristics and Innovations in American Education of Relevance for Indian Education.

    ERIC Educational Resources Information Center

    Stambler, Moses

    American responses to educational problems faced around the globe can serve as models for developing nations. The following characteristics of American education with particular relevance for education in developing nations have been organized as inputs, structures and strategies, and outputs. Inputs to the system of American education, defined in…

  3. A computational model for the prediction of jet entrainment in the vicinity of nozzle boattails (The BOAT code)

    NASA Technical Reports Server (NTRS)

    Dash, S. M.; Pergament, H. S.

    1978-01-01

    The basic code structure is discussed, including the overall program flow and a brief description of all subroutines. Instructions on the preparation of input data, definitions of key FORTRAN variables, sample input and output, and a complete listing of the code are presented.

  4. Modeling and Analysis of Information Product Maps

    ERIC Educational Resources Information Center

    Heien, Christopher Harris

    2012-01-01

    Information Product Maps are visual diagrams used to represent the inputs, processing, and outputs of data within an Information Manufacturing System. A data unit, drawn as an edge, symbolizes a grouping of raw data as it travels through this system. Processes, drawn as vertices, transform each data unit input into various forms prior to delivery…

  5. Data-driven robust approximate optimal tracking control for unknown general nonlinear systems using adaptive dynamic programming method.

    PubMed

    Zhang, Huaguang; Cui, Lili; Zhang, Xin; Luo, Yanhong

    2011-12-01

    In this paper, a novel data-driven robust approximate optimal tracking control scheme is proposed for unknown general nonlinear systems by using the adaptive dynamic programming (ADP) method. In the design of the controller, only available input-output data is required instead of known system dynamics. A data-driven model is established by a recurrent neural network (NN) to reconstruct the unknown system dynamics using available input-output data. By adding a novel adjustable term related to the modeling error, the resultant modeling error is first guaranteed to converge to zero. Then, based on the obtained data-driven model, the ADP method is utilized to design the approximate optimal tracking controller, which consists of the steady-state controller and the optimal feedback controller. Further, a robustifying term is developed to compensate for the NN approximation errors introduced by implementing the ADP method. Based on Lyapunov approach, stability analysis of the closed-loop system is performed to show that the proposed controller guarantees the system state asymptotically tracking the desired trajectory. Additionally, the obtained control input is proven to be close to the optimal control input within a small bound. Finally, two numerical examples are used to demonstrate the effectiveness of the proposed control scheme.

  6. Thermomechanical conditions and stresses on the friction stir welding tool

    NASA Astrophysics Data System (ADS)

    Atthipalli, Gowtam

    Friction stir welding has been commercially used as a joining process for aluminum and other soft materials. However, the use of this process in joining of hard alloys is still developing primarily because of the lack of cost effective, long lasting tools. Here I have developed numerical models to understand the thermo mechanical conditions experienced by the FSW tool and to improve its reusability. A heat transfer and visco-plastic flow model is used to calculate the torque, and traverse force on the tool during FSW. The computed values of torque and traverse force are validated using the experimental results for FSW of AA7075, AA2524, AA6061 and Ti-6Al-4V alloys. The computed torque components are used to determine the optimum tool shoulder diameter based on the maximum use of torque and maximum grip of the tool on the plasticized workpiece material. The estimation of the optimum tool shoulder diameter for FSW of AA6061 and AA7075 was verified with experimental results. The computed values of traverse force and torque are used to calculate the maximum shear stress on the tool pin to determine the load bearing ability of the tool pin. The load bearing ability calculations are used to explain the failure of H13 steel tool during welding of AA7075 and commercially pure tungsten during welding of L80 steel. Artificial neural network (ANN) models are developed to predict the important FSW output parameters as function of selected input parameters. These ANN consider tool shoulder radius, pin radius, pin length, welding velocity, tool rotational speed and axial pressure as input parameters. The total torque, sliding torque, sticking torque, peak temperature, traverse force, maximum shear stress and bending stress are considered as the output for ANN models. These output parameters are selected since they define the thermomechanical conditions around the tool during FSW. The developed ANN models are used to understand the effect of various input parameters on the total torque and traverse force during FSW of AA7075 and 1018 mild steel. The ANN models are also used to determine tool safety factor for wide range of input parameters. A numerical model is developed to calculate the strain and strain rates along the streamlines during FSW. The strain and strain rate values are calculated for FSW of AA2524. Three simplified models are also developed for quick estimation of output parameters such as material velocity field, torque and peak temperature. The material velocity fields are computed by adopting an analytical method of calculating velocities for flow of non-compressible fluid between two discs where one is rotating and other is stationary. The peak temperature is estimated based on a non-dimensional correlation with dimensionless heat input. The dimensionless heat input is computed using known welding parameters and material properties. The torque is computed using an analytical function based on shear strength of the workpiece material. These simplified models are shown to be able to predict these output parameters successfully.

  7. Interactive Visual Analytics Approach for Exploration of Geochemical Model Simulations with Different Parameter Sets

    NASA Astrophysics Data System (ADS)

    Jatnieks, Janis; De Lucia, Marco; Sips, Mike; Dransch, Doris

    2015-04-01

    Many geoscience applications can benefit from testing many combinations of input parameters for geochemical simulation models. It is, however, a challenge to screen the input and output data from the model to identify the significant relationships between input parameters and output variables. To address this problem, we propose a Visual Analytics approach that has been developed in an ongoing collaboration between computer science and geoscience researchers. Our Visual Analytics approach uses visualization methods of hierarchical horizontal axis, multi-factor stacked bar charts and interactive semi-automated filtering for input and output data together with automatic sensitivity analysis. This guides the users towards significant relationships. We implement our approach as an interactive data exploration tool. It is designed with flexibility in mind, so that a diverse set of tasks such as inverse modeling, sensitivity analysis and model parameter refinement can be supported. Here we demonstrate the capabilities of our approach with two examples for gas storage applications. For the first example our Visual Analytics approach enabled the analyst to observe how the element concentrations change around previously established baselines in response to thousands of different combinations of mineral phases. This supported combinatorial inverse modeling for interpreting observations about the chemical composition of the formation fluids at the Ketzin pilot site for CO2 storage. The results indicate that, within the experimental error range, the formation fluid cannot be considered in local thermodynamic equilibrium with the mineral assemblage of the reservoir rock. This is a valuable insight from the predictive geochemical modeling for the Ketzin site. For the second example our approach supports sensitivity analysis for a reaction involving the reductive dissolution of pyrite with formation of pyrrhotite in the presence of gaseous hydrogen. We determine that this reaction is thermodynamically favorable under a broad range of conditions. This includes low temperatures and the absence of microbial catalysts. Our approach has potential for use in other applications that involve exploration of relationships in geochemical simulation model data.

  8. Preserving information in neural transmission.

    PubMed

    Sincich, Lawrence C; Horton, Jonathan C; Sharpee, Tatyana O

    2009-05-13

    Along most neural pathways, the spike trains transmitted from one neuron to the next are altered. In the process, neurons can either achieve a more efficient stimulus representation, or extract some biologically important stimulus parameter, or succeed at both. We recorded the inputs from single retinal ganglion cells and the outputs from connected lateral geniculate neurons in the macaque to examine how visual signals are relayed from retina to cortex. We found that geniculate neurons re-encoded multiple temporal stimulus features to yield output spikes that carried more information about stimuli than was available in each input spike. The coding transformation of some relay neurons occurred with no decrement in information rate, despite output spike rates that averaged half the input spike rates. This preservation of transmitted information was achieved by the short-term summation of inputs that geniculate neurons require to spike. A reduced model of the retinal and geniculate visual responses, based on two stimulus features and their associated nonlinearities, could account for >85% of the total information available in the spike trains and the preserved information transmission. These results apply to neurons operating on a single time-varying input, suggesting that synaptic temporal integration can alter the temporal receptive field properties to create a more efficient representation of visual signals in the thalamus than the retina.

  9. High power RF solid state power amplifier system

    NASA Technical Reports Server (NTRS)

    Sims, III, William Herbert (Inventor); Chavers, Donald Gregory (Inventor); Richeson, James J. (Inventor)

    2011-01-01

    A high power, high frequency, solid state power amplifier system includes a plurality of input multiple port splitters for receiving a high-frequency input and for dividing the input into a plurality of outputs and a plurality of solid state amplifier units. Each amplifier unit includes a plurality of amplifiers, and each amplifier is individually connected to one of the outputs of multiport splitters and produces a corresponding amplified output. A plurality of multiport combiners combine the amplified outputs of the amplifiers of each of the amplifier units to a combined output. Automatic level control protection circuitry protects the amplifiers and maintains a substantially constant amplifier power output.

  10. Signal Processing Equipment and Techniques for Use in Measuring Ocean Acoustic Multipath Structures

    DTIC Science & Technology

    1983-12-01

    3.4 Digital Demodulator; 3.4.1 Number of Bits in the Input A/D Converter; Quantization Effects; The Demodulator Output Filter; Effects of ... power caused by ignoring cross-spectral term: (a) first-order Butterworth filter, (b) second-order Butterworth filter; 3.4 Ordering of ... spectrum; 3.7 Multiplying D/A Converter input and output spectra: (a) input, (b) output; 3.8 Demodulator output spectrum prior to filtering.

  11. Rice growing farmers efficiency measurement using a slack based interval DEA model with undesirable outputs

    NASA Astrophysics Data System (ADS)

    Khan, Sahubar Ali Mohd. Nadhar; Ramli, Razamin; Baten, M. D. Azizul

    2017-11-01

    In recent years, eco-efficiency, which considers the effect of the production process on the environment when determining the efficiency of firms, has gained traction and considerable attention. Rice farming is one such production process; it typically produces both economically desirable and environmentally undesirable outputs. In efficiency analysis, these undesirable outputs cannot be ignored and need to be included in the model to obtain an accurate estimate of a firm's efficiency. Numerous approaches have been used in the data envelopment analysis (DEA) literature to account for undesirable outputs, of which the directional distance function (DDF) approach is the most widely used because it allows a simultaneous increase in desirable outputs and reduction of undesirable outputs. Additionally, slack-based DDF DEA approaches consider output shortfalls and input excesses in determining efficiency. When data uncertainty is present, a deterministic DEA model is not suitable because the effects of uncertain data are not considered. In this case, the interval data approach has been found suitable for accounting for data uncertainty, because it is simpler to model and needs less information regarding the underlying data distribution and membership function. The proposed model is an enhanced DEA model based on the DDF approach that incorporates a slack-based measure to determine efficiency in the presence of undesirable factors and data uncertainty. The interval data approach was used to estimate the values of inputs, undesirable outputs and desirable outputs. Two separate slack-based interval DEA models were constructed for the optimistic and pessimistic scenarios. The developed model was used to determine rice farmers' efficiency in Kepala Batas, Kedah. The obtained results were then compared to those obtained using a deterministic DDF DEA model. The study found that 15 out of 30 farmers are efficient in all cases. It is also found that the average efficiency value of all farmers in the deterministic case is always lower than in the optimistic scenario and higher than in the pessimistic scenario. The results conform to the hypothesis, since farmers in the optimistic scenario operate in the best production situation and those in the pessimistic scenario in the worst. The results show that the proposed model can be applied when data uncertainty is present in the production environment.
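    The directional distance function building block underlying the model above can be sketched as a linear program. The following is a crisp, non-interval, non-slack-based version only (the paper's full slack-based interval formulation adds further terms), with illustrative data rather than the Kedah farm records.

```python
# Minimal sketch of the directional distance function (DDF) DEA building block
# with undesirable outputs, solved as a linear program with scipy.  This is the
# crisp, non-slack, non-interval version only; data below are illustrative.
import numpy as np
from scipy.optimize import linprog

def ddf_inefficiency(X, Y, B, o):
    """beta for DMU o: max beta such that inputs and bad outputs can contract
    and good outputs expand along the direction g = (x_o, y_o, b_o)."""
    n = X.shape[1]                      # number of DMUs (columns)
    gx, gy, gb = X[:, o], Y[:, o], B[:, o]
    c = np.r_[-1.0, np.zeros(n)]        # maximise beta (minimise -beta)
    A_ub = np.vstack([np.c_[gx, X],     # sum_j lam_j x_ij + beta*gx_i <= x_io
                      np.c_[gy, -Y]])   # -sum_j lam_j y_rj + beta*gy_r <= -y_ro
    b_ub = np.r_[X[:, o], -Y[:, o]]
    A_eq = np.c_[gb, B]                 # sum_j lam_j b_kj + beta*gb_k = b_ko
    b_eq = B[:, o]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * (n + 1), method="highs")
    return res.x[0]

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.uniform(1, 5, (2, 6))       # 2 inputs, 6 farms (synthetic)
    Y = rng.uniform(2, 8, (1, 6))       # 1 desirable output (e.g. rice yield)
    B = rng.uniform(1, 3, (1, 6))       # 1 undesirable output (e.g. emissions)
    for o in range(6):
        print(f"farm {o}: beta = {ddf_inefficiency(X, Y, B, o):.3f}")
```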

  12. System Identification Methods for Aircraft Flight Control Development and Validation

    DOT National Transportation Integrated Search

    1995-10-01

    System-identification methods compose a mathematical model, or series of models, from measurements of inputs and outputs of dynamic systems. This paper discusses the use of frequency-domain system-identification methods for the development and ...

  13. Bayesian model calibration of computational models in velocimetry diagnosed dynamic compression experiments.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, Justin; Hund, Lauren

    2017-02-01

    Dynamic compression experiments are being performed on complicated materials using increasingly complex drivers. The data produced in these experiments are beginning to reach a regime where traditional analysis techniques break down, requiring the solution of an inverse problem. A common measurement in dynamic experiments is an interface velocity as a function of time, and often this functional output can be simulated using a hydrodynamics code. Bayesian model calibration is a statistical framework to estimate inputs into a computational model in the presence of multiple uncertainties, making it well suited to measurements of this type. In this article, we apply Bayesian model calibration to high pressure (250 GPa) ramp compression measurements in tantalum. We address several issues specific to this calibration, including the functional nature of the output as well as parameter and model discrepancy identifiability. Specifically, we propose scaling the likelihood function by an effective sample size rather than modeling the autocorrelation function to accommodate the functional output, and we propose sensitivity analyses using the notion of 'modularization' to assess the impact of experiment-specific nuisance input parameters on estimates of material properties. We conclude that the proposed Bayesian model calibration procedure results in simple, fast, and valid inferences on the equation of state parameters for tantalum.
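    The effective-sample-size idea can be illustrated with a toy random-walk Metropolis sampler: the Gaussian log-likelihood of the functional residual is multiplied by ESS/N so that autocorrelated time points are not counted as independent. The quadratic forward model, the synthetic data, the noise level and the ESS value below are stand-ins, not the tantalum hydrocode calibration.

```python
# Toy sketch of Bayesian calibration with a likelihood scaled by an effective
# sample size (ESS), as a cheap alternative to modelling the autocorrelation
# of a functional (velocity-vs-time) residual.  All numbers are illustrative.
import numpy as np

rng = np.random.default_rng(42)
t = np.linspace(0.0, 1.0, 500)
true_theta = 2.5
data = true_theta * t**2 + 0.05 * rng.normal(size=t.size)   # synthetic "velocimetry"

def forward(theta):
    return theta * t**2

def log_post(theta, ess=50, sigma=0.05):
    """Gaussian log-likelihood down-weighted by ESS/N, flat prior on (0, 10)."""
    if not 0.0 < theta < 10.0:
        return -np.inf
    resid = data - forward(theta)
    loglike = -0.5 * np.sum((resid / sigma) ** 2)
    return (ess / t.size) * loglike

# Random-walk Metropolis sampler.
theta, lp = 1.0, log_post(1.0)
chain = []
for _ in range(20000):
    prop = theta + 0.05 * rng.normal()
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    chain.append(theta)

post = np.array(chain[5000:])
print(f"posterior mean = {post.mean():.3f}, 95% CI = "
      f"({np.quantile(post, 0.025):.3f}, {np.quantile(post, 0.975):.3f})")
```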

  14. Fuzzy portfolio model with fuzzy-input return rates and fuzzy-output proportions

    NASA Astrophysics Data System (ADS)

    Tsaur, Ruey-Chyn

    2015-02-01

    In the finance market, a short-term investment strategy is usually applied in portfolio selection in order to reduce investment risk; however, the economy is uncertain and the investment period is short. Further, an investor has incomplete information for selecting a portfolio with crisp proportions for each chosen security. In this paper we present a new method of constructing a fuzzy portfolio model for the parameters of fuzzy-input return rates and fuzzy-output proportions, based on possibilistic mean-standard deviation models. Furthermore, we consider both excess and shortage of investment in different economic periods by using a fuzzy constraint for the sum of the fuzzy proportions, and we also account for the risks of securities investment and the vagueness of incomplete information during periods of economic depression in the portfolio selection. Finally, we present a numerical example of a portfolio selection problem to illustrate the proposed model, and a sensitivity analysis is performed based on the results.

  15. Input-output identification of controlled discrete manufacturing systems

    NASA Astrophysics Data System (ADS)

    Estrada-Vargas, Ana Paula; López-Mellado, Ernesto; Lesage, Jean-Jacques

    2014-03-01

    The automated construction of discrete event models from observations of a system's external behaviour is addressed. This problem, often referred to as system identification, makes it possible to obtain models of ill-known (or even unknown) systems. In this article, an identification method for discrete event systems (DESs) controlled by a programmable logic controller is presented. The method can process large quantities of long observed sequences of input/output signals generated by the controller and yields an interpreted Petri net model describing the closed-loop behaviour of the automated DES. The proposed technique allows the identification of actual complex systems because it is sufficiently efficient and well adapted to cope with both the technological characteristics of industrial controllers and data collection requirements. Based on polynomial-time algorithms, the method is implemented as an efficient software tool which constructs and draws the model automatically; an overview of this tool is given through a case study dealing with an automated manufacturing system.

  16. Model reduction of nonsquare linear MIMO systems using multipoint matrix continued-fraction expansions

    NASA Technical Reports Server (NTRS)

    Guo, Tong-Yi; Hwang, Chyi; Shieh, Leang-San

    1994-01-01

    This paper deals with the multipoint Cauer matrix continued-fraction expansion (MCFE) for model reduction of linear multi-input multi-output (MIMO) systems with various numbers of inputs and outputs. A salient feature of the proposed MCFE approach to model reduction of MIMO systems with square transfer matrices is its equivalence to the matrix Pade approximation approach. The Cauer second form of the ordinary MCFE for a square transfer function matrix is generalized in this paper to a multipoint and nonsquare-matrix version. An interesting connection of the multipoint Cauer MCFE method to the multipoint matrix Pade approximation method is established. Also, algorithms for obtaining the reduced-degree matrix-fraction descriptions and reduced-dimensional state-space models from a transfer function matrix via the multipoint Cauer MCFE algorithm are presented. Practical advantages of using the multipoint Cauer MCFE are discussed and a numerical example is provided to illustrate the algorithms.

  17. Self-organization of globally continuous and locally distributed information representation.

    PubMed

    Wada, Koji; Kurata, Koji; Okada, Masato

    2004-01-01

    A number of findings suggest that the preferences of neighboring neurons in the inferior temporal (IT) cortex of macaque monkeys tend to be similar. However, a recent study reports convincingly that the preferences of neighboring neurons actually differ. These findings seem contradictory. To explain this conflict, we propose a new view of information representation in the IT cortex. This view takes into account sparse and local neuronal excitation. Since the excitation is sparse, information regarding visual objects seems to be encoded in a distributed manner. The local excitation of neurons coincides with the classical notion of a column structure. Our model consists of an input layer and an output layer. The main difference from conventional models is that the output layer has local and random intra-layer connections. In this paper, we adopt two rings embedded in three-dimensional space as an input signal space, and examine how the resulting information representation depends on the distance between the two rings, denoted D. We show that there exists a critical value Dc for this distance. When D > Dc, the output layer is able to form the column structure, and the model obtains a distributed representation within each column. When D < Dc, the output layer instead acquires the conventional information representation observed in the V1 cortex. Moreover, we consider the origin of the difference between the information representation of the V1 cortex and that of the IT cortex. Our finding suggests that the difference in information representations between the V1 and IT cortices could be caused by differences in the input space structures.

  18. Low voltage to high voltage level shifter and related methods

    NASA Technical Reports Server (NTRS)

    Mentze, Erik J. (Inventor); Buck, Kevin M. (Inventor); Hess, Herbert L. (Inventor); Cox, David F. (Inventor)

    2006-01-01

    A shifter circuit comprises high and low voltage buffer stages and an output buffer stage. The high voltage buffer stage comprises multiple transistors arranged in a transistor stack having a plurality of intermediate nodes connecting individual transistors along the stack. The transistor stack is connected between a voltage level being shifted to and an input voltage. An inverter of this stage comprises multiple inputs and an output. Each inverter input is connected to a respective intermediate node of the transistor stack. The low voltage buffer stage has an input connected to the input voltage and an output, and is operably connected to the high voltage buffer stage. The low voltage buffer stage is connected between a voltage level being shifted away from and a lower voltage. The output buffer stage is driven by the outputs of the high voltage buffer stage inverter and the low voltage buffer stage.

  19. Scaling of global input-output networks

    NASA Astrophysics Data System (ADS)

    Liang, Sai; Qi, Zhengling; Qu, Shen; Zhu, Ji; Chiu, Anthony S. F.; Jia, Xiaoping; Xu, Ming

    2016-06-01

    Examining scaling patterns of networks can help understand how structural features relate to the behavior of the networks. Input-output networks consist of industries as nodes and inter-industrial exchanges of products as links. Previous studies consider limited measures for node strengths and link weights, and also ignore the impact of dataset choice. We consider a comprehensive set of indicators in this study that are important in economic analysis, and also examine the impact of dataset choice, by studying input-output networks in individual countries and the entire world. Results show that Burr, Log-Logistic, Log-normal, and Weibull distributions can better describe scaling patterns of global input-output networks. We also find that dataset choice has limited impacts on the observed scaling patterns. Our findings can help examine the quality of economic statistics, estimate missing data in economic statistics, and identify key nodes and links in input-output networks to support economic policymaking.
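    The distribution comparison reported above can be reproduced in outline by maximum-likelihood fitting of the candidate heavy-tailed distributions to node strengths and comparing goodness of fit. The sketch below uses synthetic strengths and scipy's parameterizations (burr12 and fisk standing in for the Burr and log-logistic families), not the actual input-output tables.

```python
# Sketch: fit candidate distributions to node strengths of an input-output
# network and compare them by log-likelihood and a Kolmogorov-Smirnov statistic.
# The "strengths" are synthetic log-normal draws standing in for real data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
strengths = rng.lognormal(mean=2.0, sigma=1.0, size=400)   # stand-in node strengths

candidates = {
    "lognorm": stats.lognorm,
    "weibull_min": stats.weibull_min,
    "fisk (log-logistic)": stats.fisk,
    "burr12": stats.burr12,
}

for name, dist in candidates.items():
    params = dist.fit(strengths, floc=0)          # ML fit with location fixed at 0
    loglik = np.sum(dist.logpdf(strengths, *params))
    ks = stats.kstest(strengths, dist.cdf, args=params).statistic
    print(f"{name:20s} log-likelihood = {loglik:9.1f}   KS = {ks:.3f}")
```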

  20. Reconstruction of nonlinear wave propagation

    DOEpatents

    Fleischer, Jason W; Barsi, Christopher; Wan, Wenjie

    2013-04-23

    Disclosed are systems and methods for characterizing a nonlinear propagation environment by numerically propagating a measured output waveform resulting from a known input waveform. The numerical propagation reconstructs the input waveform, and in the process, the nonlinear environment is characterized. In certain embodiments, knowledge of the characterized nonlinear environment facilitates determination of an unknown input based on a measured output. Similarly, knowledge of the characterized nonlinear environment also facilitates formation of a desired output based on a configurable input. In both situations, the input thus characterized and the output thus obtained include features that would normally be lost in linear propagation. Such features can include evanescent waves and peripheral waves, so that an image thus obtained is an inherently wide-angle, far-field form of microscopy.

  1. Reduced order modeling and active flow control of an inlet duct

    NASA Astrophysics Data System (ADS)

    Ge, Xiaoqing

    Many aerodynamic applications require the modeling of compressible flows in or around a body, e.g., the design of aircraft, inlet or exhaust ducts, wind turbines, or tall buildings. Traditional methods use wind tunnel experiments and computational fluid dynamics (CFD) to investigate the spatial and temporal distribution of the flows. Although they provide a great deal of insight into the essential characteristics of the flow field, they are not suitable for control analysis and design due to the high physical/computational cost. Many model reduction methods have been studied to reduce the complexity of the flow model. There are two main approaches: linearization-based input/output modeling and proper orthogonal decomposition (POD) based model reduction. The former captures mostly the local behavior near a steady state, which is suitable for modeling laminar flow dynamics. The latter obtains a reduced order model by projecting the governing equations onto an "optimal" subspace and is able to model complex nonlinear flow phenomena. In this research we investigate various model reduction approaches and compare them in flow modeling and control design. We propose an integrated model-based control methodology and apply it to the reduced order modeling and active flow control of compressible flows within a very aggressive (length to exit diameter ratio, L/D, of 1.5) inlet duct and its upstream contraction section. The approach systematically applies reduced order modeling, estimator design, sensor placement and control design to improve the aerodynamic performance. The main contribution of this work is the development of a hybrid model reduction approach that attempts to combine the best features of input/output model identification and the POD method. We first identify a linear input/output model by using a subspace algorithm. We next project the difference between the CFD response and the identified model response onto a set of POD basis functions. This trajectory is fit to a nonlinear dynamical model to augment the linear input/output model. Thus, the full system is decomposed into a dominant linear subsystem and a low order nonlinear subsystem. The hybrid model is then used for control design and compared with other modeling methods in CFD simulations. Numerical results indicate that the hybrid model accurately predicts the nonlinear behavior of the flow for a 2D diffuser contraction section model. It also performs best in terms of feedback control design and learning control. Since some outputs of interest (e.g., the AIP pressure recovery) are not observable during normal operations, static and dynamic estimators are designed to recreate the information from available sensor measurements. The latter also provides a state estimate for the feedback controller. Based on the reduced order models and estimators, different controllers are designed to improve the aerodynamic performance of the contraction section and inlet duct. The integrated control methodology is evaluated with CFD simulations. Numerical results demonstrate the feasibility and efficacy of active flow control based on reduced order models. Our reduced order models not only generate a good approximation of the nonlinear flow dynamics over a wide input range, but also help to design controllers that significantly improve the flow response. The tools developed for model reduction, estimator and control design can also be applied to wind tunnel experiments.
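    The POD step of the hybrid approach can be sketched independently of the CFD solver: assemble flow snapshots into a matrix, remove the mean, and keep the leading left singular vectors as the basis onto which the residual dynamics are projected. The snapshot data below are synthetic, not CFD output.

```python
# Sketch of the proper orthogonal decomposition (POD) step used in the hybrid
# reduced-order model: snapshots -> mean removal -> SVD -> truncated basis ->
# modal coefficients.  Snapshots here are synthetic.
import numpy as np

rng = np.random.default_rng(3)
n_points, n_snapshots = 2000, 200
x = np.linspace(0, 1, n_points)
# Synthetic snapshot matrix: two coherent spatial structures plus noise.
modes_true = np.c_[np.sin(2 * np.pi * x), np.sin(4 * np.pi * x)]
coeffs_true = rng.normal(size=(2, n_snapshots))
snapshots = modes_true @ coeffs_true + 0.01 * rng.normal(size=(n_points, n_snapshots))

mean_flow = snapshots.mean(axis=1, keepdims=True)
fluct = snapshots - mean_flow

# Economy SVD of the fluctuation matrix; columns of U are the POD modes.
U, s, Vt = np.linalg.svd(fluct, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.99) + 1)        # keep 99% of the energy
basis = U[:, :r]
a = basis.T @ fluct                               # time histories of modal coefficients

print(f"retained {r} POD modes capturing {energy[r-1]*100:.1f}% of the energy")
print("modal coefficient matrix shape:", a.shape)
```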

  2. Mixed H2/Hinfinity output-feedback control of second-order neutral systems with time-varying state and input delays.

    PubMed

    Karimi, Hamid Reza; Gao, Huijun

    2008-07-01

    A mixed H2/Hinfinity output-feedback control design methodology is presented in this paper for second-order neutral linear systems with time-varying state and input delays. Delay-dependent sufficient conditions for the design of a desired control are given in terms of linear matrix inequalities (LMIs). A controller, which guarantees asymptotic stability and a mixed H2/Hinfinity performance for the closed-loop system of the second-order neutral linear system, is then developed directly instead of coupling the model to a first-order neutral system. A Lyapunov-Krasovskii method underlies the LMI-based mixed H2/Hinfinity output-feedback control design using some free weighting matrices. The simulation results illustrate the effectiveness of the proposed methodology.

  3. Surrogates for numerical simulations; optimization of eddy-promoter heat exchangers

    NASA Technical Reports Server (NTRS)

    Patera, Anthony T.

    1993-01-01

    Although the advent of fast and inexpensive parallel computers has rendered numerous previously intractable calculations feasible, many numerical simulations remain too resource-intensive to be directly inserted in engineering optimization efforts. An attractive alternative to direct insertion considers models for computational systems: the expensive simulation is invoked only to construct and validate a simplified input-output model; this simplified input-output model then serves as a simulation surrogate in subsequent engineering optimization studies. A simple 'Bayesian-validated' statistical framework for the construction, validation, and purposive application of static computer simulation surrogates is presented. As an example, dissipation-transport optimization of laminar-flow eddy-promoter heat exchangers is considered: parallel spectral element Navier-Stokes calculations serve to construct and validate surrogates for the flowrate and Nusselt number; these surrogates then represent the originating Navier-Stokes equations in the ensuing design process.
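    A generic sketch of the surrogate idea, under the assumption that a Gaussian-process regressor is an acceptable stand-in for the 'Bayesian-validated' surrogate of the paper: run the expensive simulation a limited number of times, fit a cheap input-output model, and validate it on held-out runs before using it in optimization. The expensive_simulation function is an analytic placeholder, not the spectral element Navier-Stokes solver.

```python
# Sketch of surrogate construction and validation: a limited number of
# expensive runs train a cheap input-output model that replaces the solver
# in subsequent optimization.  The "simulation" is an analytic stand-in.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel
from sklearn.model_selection import train_test_split

def expensive_simulation(params):
    """Stand-in for a flow solve: params = (Reynolds-like, geometry-like)."""
    re, geom = params
    return np.sin(3 * geom) * np.log1p(re) + 0.1 * geom**2   # e.g. a Nusselt-like output

rng = np.random.default_rng(11)
X = np.c_[rng.uniform(10, 500, 120), rng.uniform(0.1, 1.0, 120)]
y = np.array([expensive_simulation(p) for p in X])

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
kernel = ConstantKernel(1.0) * RBF(length_scale=[100.0, 0.3])
surrogate = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X_train, y_train)

y_pred, y_std = surrogate.predict(X_test, return_std=True)
rmse = np.sqrt(np.mean((y_pred - y_test) ** 2))
print(f"validation RMSE = {rmse:.4f}; mean predictive std = {y_std.mean():.4f}")
# Once validated, surrogate.predict replaces the solver inside the optimizer.
```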

  4. The Correlation of Human Capital on Costs of Air Force Acquisition Programs

    DTIC Science & Technology

    2009-03-01

    Search-result fragment of the report text and front matter: a reported value of 6.78 leads the authors to conclude that the model does not exhibit multicollinearity; heteroskedasticity was tested with the Breusch-Pagan-Godfrey test; the output in this study is the average cost overrun of Aeronautical Systems Center research, development, test, and evaluation programs; the table of contents lists pre-estimation and post-estimation specification tests.

  5. Uncertainty quantification of Antarctic contribution to sea-level rise using the fast Elementary Thermomechanical Ice Sheet (f.ETISh) model

    NASA Astrophysics Data System (ADS)

    Bulthuis, Kevin; Arnst, Maarten; Pattyn, Frank; Favier, Lionel

    2017-04-01

    Uncertainties in sea-level rise projections are mostly due to uncertainties in Antarctic ice-sheet predictions (IPCC AR5 report, 2013), because key parameters related to the current state of the Antarctic ice sheet (e.g. sub-ice-shelf melting) and future climate forcing are poorly constrained. Here, we propose to improve the predictions of Antarctic ice-sheet behaviour using new uncertainty quantification methods. As opposed to ensemble modelling (Bindschadler et al., 2013) which provides a rather limited view on input and output dispersion, new stochastic methods (Le Maître and Knio, 2010) can provide deeper insight into the impact of uncertainties on complex system behaviour. Such stochastic methods usually begin with deducing a probabilistic description of input parameter uncertainties from the available data. Then, the impact of these input parameter uncertainties on output quantities is assessed by estimating the probability distribution of the outputs by means of uncertainty propagation methods such as Monte Carlo methods or stochastic expansion methods. The use of such uncertainty propagation methods in glaciology may be computationally costly because of the high computational complexity of ice-sheet models. This challenge emphasises the importance of developing reliable and computationally efficient ice-sheet models such as the f.ETISh ice-sheet model (Pattyn, 2015), a new fast thermomechanical coupled ice sheet/ice shelf model capable of handling complex and critical processes such as the marine ice-sheet instability mechanism. Here, we apply these methods to investigate the role of uncertainties in sub-ice-shelf melting, calving rates and climate projections in assessing Antarctic contribution to sea-level rise for the next centuries using the f.ETISh model. We detail the methods and show results that provide nominal values and uncertainty bounds for future sea-level rise as a reflection of the impact of the input parameter uncertainties under consideration, as well as a ranking of the input parameter uncertainties in the order of the significance of their contribution to uncertainty in future sea-level rise. In addition, we discuss how limitations posed by the available information (poorly constrained data) pose challenges that motivate our current research.
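    The uncertainty propagation step can be sketched generically: assign distributions to the uncertain inputs, sample them, evaluate the model on the ensemble, and summarize the output distribution along with a crude importance ranking. The response function, parameter names, and ranges below are toy stand-ins, not f.ETISh.

```python
# Generic Monte Carlo uncertainty propagation sketch: probabilistic inputs ->
# ensemble of model runs -> distribution of the output quantity of interest.
# The response function and all numbers are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(2017)
N = 10_000

# Hypothetical uncertain inputs (names, units and ranges are illustrative only).
melt_rate = rng.lognormal(mean=np.log(8.0), sigma=0.4, size=N)   # sub-shelf melt, m/yr
calving = rng.uniform(0.5, 1.5, size=N)                          # calving-rate multiplier
warming = rng.normal(3.0, 1.0, size=N)                           # warming by 2300, deg C

def toy_response(melt, calv, dT):
    """Illustrative nonlinear response in metres of sea-level equivalent."""
    return 0.02 * melt * calv + 0.05 * np.maximum(dT, 0.0) ** 1.5

slr = toy_response(melt_rate, calving, warming)
q05, q50, q95 = np.quantile(slr, [0.05, 0.5, 0.95])
print(f"median contribution = {q50:.2f} m, 90% interval = [{q05:.2f}, {q95:.2f}] m")

# Crude importance ranking: linear correlation of each input with the output.
for name, col in [("melt_rate", melt_rate), ("calving", calving), ("warming", warming)]:
    r = np.corrcoef(col, slr)[0, 1]
    print(f"{name:10s} correlation with output = {r:+.2f}")
```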

  6. A data mining framework for time series estimation.

    PubMed

    Hu, Xiao; Xu, Peng; Wu, Shaozhi; Asgari, Shadnaz; Bergsneider, Marvin

    2010-04-01

    Time series estimation techniques are usually employed in biomedical research to derive variables less accessible from a set of related and more accessible variables. These techniques are traditionally built from systems modeling approaches including simulation, blind deconvolution, and state estimation. In this work, we define target time series (TTS) and its related time series (RTS) as the output and input of a time series estimation process, respectively. We then propose a novel data mining framework for time series estimation when TTS and RTS represent different sets of observed variables from the same dynamic system. This is made possible by mining a database of instances of TTS, its simultaneously recorded RTS, and the input/output dynamic models between them. The key mining strategy is to formulate a mapping function for each TTS-RTS pair in the database that translates a feature vector extracted from RTS to the dissimilarity between true TTS and its estimate from the dynamic model associated with the same TTS-RTS pair. At run time, a feature vector is extracted from an inquiry RTS and supplied to the mapping function associated with each TTS-RTS pair to calculate a dissimilarity measure. An optimal TTS-RTS pair is then selected by analyzing these dissimilarity measures. The associated input/output model of the selected TTS-RTS pair is then used to simulate the TTS given the inquiry RTS as an input. An exemplary implementation was built to address a biomedical problem of noninvasive intracranial pressure assessment. The performance of the proposed method was superior to that of a simple training-free approach of finding the optimal TTS-RTS pair by a conventional similarity-based search on RTS features. 2009 Elsevier Inc. All rights reserved.

  7. Economic analysis of electronic waste recycling: modeling the cost and revenue of a materials recovery facility in California.

    PubMed

    Kang, Hai-Yong; Schoenung, Julie M

    2006-03-01

    The objectives of this study are to identify the various techniques used for treating electronic waste (e-waste) at material recovery facilities (MRFs) in the state of California and to investigate the costs and revenue drivers for these techniques. The economics of a representative e-waste MRF are evaluated by using technical cost modeling (TCM). MRFs are a critical element in the infrastructure being developed within the e-waste recycling industry. At an MRF, collected e-waste can become marketable output products including resalable systems/components and recyclable materials such as plastics, metals, and glass. TCM has two main constituents, inputs and outputs. Inputs are process-related and economic variables, which are directly specified in each model. Inputs can be divided into two parts: inputs for cost estimation and for revenue estimation. Outputs are the results of modeling and consist of costs and revenues, distributed by unit operation, cost element, and revenue source. The results of the present analysis indicate that the largest cost driver for the operation of the defined California e-waste MRF is the materials cost (37% of total cost), which includes the cost to outsource the recycling of the cathode ray tubes (CRTs) ($0.33/kg); the second largest cost driver is labor cost (28% of total cost without accounting for overhead). The other cost drivers are transportation, building, and equipment costs. The most costly unit operation is cathode ray tube glass recycling, and the next are sorting, collecting, and dismantling. The largest revenue source is the fee charged to the customer; metal recovery is the second largest revenue source.

  8. Highly efficient model updating for structural condition assessment of large-scale bridges.

    DOT National Transportation Integrated Search

    2015-02-01

    For efficiently updating models of large-scale structures, the response surface (RS) method based on radial basis functions (RBFs) is proposed to model the input-output relationship of structures. The key issues for applying the proposed method a...
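    A sketch of the RBF response-surface idea: sample the updating parameters, evaluate the structural model at those samples, fit radial basis functions, and let the cheap surface replace the model inside the updating loop. The fe_model function and parameter ranges are hypothetical placeholders, not the bridge model of the report.

```python
# Sketch of a radial-basis-function (RBF) response surface mapping model
# updating parameters (e.g. stiffness scaling factors) to a response feature
# (e.g. a natural frequency).  The "finite element model" is an analytic
# stand-in for an expensive large-scale bridge analysis.
import numpy as np
from scipy.interpolate import RBFInterpolator

def fe_model(theta):
    """Stand-in for an expensive FE run: returns a frequency-like response."""
    k1, k2 = theta
    return 2.0 * np.sqrt(k1) + 0.5 * np.sqrt(k2) + 0.1 * k1 * k2

rng = np.random.default_rng(5)
samples = rng.uniform(0.5, 1.5, size=(60, 2))            # design of experiments
responses = np.array([fe_model(t) for t in samples])

surface = RBFInterpolator(samples, responses, kernel="thin_plate_spline")

# The response surface now replaces the FE model in the updating loop.
test = rng.uniform(0.6, 1.4, size=(5, 2))
for t in test:
    print(f"theta={t.round(3)}  RS={surface(t[None, :])[0]:.4f}  FE={fe_model(t):.4f}")
```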

  9. Input-output characterization of an ultrasonic testing system by digital signal analysis

    NASA Technical Reports Server (NTRS)

    Williams, J. H., Jr.; Lee, S. S.; Karagulle, H.

    1986-01-01

    Ultrasonic test system input-output characteristics were investigated by directly coupling the transmitting and receiving transducers face to face without a test specimen. Some of the fundamentals of digital signal processing were summarized. Input and output signals were digitized by using a digital oscilloscope, and the digitized data were processed in a microcomputer by using digital signal-processing techniques. The continuous-time test system was modeled as a discrete-time, linear, shift-invariant system. In estimating the unit-sample response and frequency response of the discrete-time system, it was necessary to use digital filtering to remove low-amplitude noise, which interfered with deconvolution calculations. A digital bandpass filter constructed with the assistance of a Blackman window and a rectangular time window were used. Approximations of the impulse response and the frequency response of the continuous-time test system were obtained by linearly interpolating the defining points of the unit-sample response and the frequency response of the discrete-time system. The test system behaved as a linear-phase bandpass filter in the frequency range 0.6 to 2.3 MHz. These frequencies were selected in accordance with the criterion that they were 6 dB below the maximum peak of the amplitude of the frequency response. The output of the system to various inputs was predicted and the results were compared with the corresponding measurements on the system.
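    The discrete-time input-output characterization can be sketched with a standard cross-spectral (H1) estimate of the frequency response and the same -6 dB bandwidth criterion. The excitation, the stand-in band-pass system, and the sampling rate below are synthetic, not the oscilloscope records of the study.

```python
# Sketch of input-output characterization of a test system modelled as a
# discrete-time LTI system: estimate the frequency response with the H1
# estimator H(f) = Pxy(f)/Pxx(f) from digitized input and output records.
import numpy as np
from scipy import signal

fs = 20e6                                   # 20 MHz sampling rate (illustrative)
t = np.arange(0, 2e-3, 1 / fs)
rng = np.random.default_rng(9)

x = rng.normal(size=t.size)                 # broadband excitation (stand-in input)
# Unknown system: a band-pass filter roughly in the 0.6-2.3 MHz range.
b, a = signal.butter(4, [0.6e6, 2.3e6], btype="bandpass", fs=fs)
y = signal.lfilter(b, a, x) + 0.01 * rng.normal(size=t.size)

f, Pxy = signal.csd(x, y, fs=fs, nperseg=4096)
_, Pxx = signal.welch(x, fs=fs, nperseg=4096)
H = Pxy / Pxx                               # H1 frequency-response estimate

mag_db = 20 * np.log10(np.abs(H) + 1e-12)
band = f[mag_db >= mag_db.max() - 6]        # -6 dB criterion, as in the abstract
print(f"estimated -6 dB band: {band.min()/1e6:.2f} to {band.max()/1e6:.2f} MHz")
```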

  10. Development of a green remediation tool in Japan.

    PubMed

    Yasutaka, Tetsuo; Zhang, Hong; Murayama, Koki; Hama, Yoshihito; Tsukada, Yasuhisa; Furukawa, Yasuhide

    2016-09-01

    The green remediation assessment tool for Japan (GRATJ) presented in this study is a spreadsheet-based software package developed to facilitate comparisons of the environmental impacts associated with various countermeasures against contaminated soil in Japan. This tool uses a life-cycle assessment-based model to calculate inventory inputs/outputs throughout the activity life cycle during remediation. Processes of 14 remediation methods for heavy metal contamination and 12 for volatile organic compound contamination are built into the tool. This tool can evaluate 130 inventory inputs/outputs and easily integrate those inputs/outputs into 9 impact categories, 4 integrated endpoints, and 1 index. Comparative studies can be performed by entering basic data associated with a target site. The integrated results can be presented in a simpler and clearer manner than the results of an inventory analysis. As a case study, an arsenic-contaminated soil remediation site was examined using this tool. Results showed that the integrated environmental impacts were greater with onsite remediation methods than with offsite ones. Furthermore, the contributions of CO2 to global warming, SO2 to urban air pollution, and crude oil to resource consumption were greater than other inventory inputs/outputs. The GRATJ has the potential to improve green remediation and can serve as a valuable tool for decision makers and practitioners in selecting countermeasures in Japan. Copyright © 2016 Elsevier B.V. All rights reserved.
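    At its core, the integration from inventory inputs/outputs into impact categories is a weighted aggregation; the sketch below shows that bookkeeping with made-up characterization factors and inventory values, not the factors built into the GRATJ.

```python
# Sketch of the life-cycle aggregation step: inventory inputs/outputs are
# multiplied by characterization factors and summed into impact categories.
# All factors and inventory values are illustrative placeholders.
inventory = {            # quantity per remediation scenario (illustrative units)
    "CO2_kg": 1.2e5,
    "SO2_kg": 3.0e2,
    "crude_oil_kg": 8.0e3,
}

characterization = {     # impact category -> {inventory item: factor}
    "global_warming_kgCO2eq": {"CO2_kg": 1.0},
    "urban_air_pollution_kgSO2eq": {"SO2_kg": 1.0},
    "resource_consumption_MJ": {"crude_oil_kg": 45.8},
}

def integrate(inv, cf):
    """Aggregate an inventory into impact-category scores."""
    return {cat: sum(factor * inv.get(item, 0.0) for item, factor in factors.items())
            for cat, factors in cf.items()}

scores = integrate(inventory, characterization)
for category, value in scores.items():
    print(f"{category:32s} {value:12.1f}")
```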

  11. Combustion Control System Design of Diesel Engine via ASPR based Output Feedback Control Strategy with a PFC

    NASA Astrophysics Data System (ADS)

    Mizumoto, Ikuro; Tsunematsu, Junpei; Fujii, Seiya

    2016-09-01

    In this paper, a design method for an output feedback control system with a simple feedforward input is proposed for a combustion model of a diesel engine, based on the almost strictly positive real-ness (ASPR-ness) of the controlled system. A parallel feedforward compensator (PFC) design scheme which renders the resulting augmented controlled system ASPR is also proposed, in order to design a stable output feedback control system for the considered combustion model. The effectiveness of the proposed method is confirmed through numerical simulations.

  12. Sobol' sensitivity analysis for stressor impacts on honeybee ...

    EPA Pesticide Factsheets

    We employ Monte Carlo simulation and nonlinear sensitivity analysis techniques to describe the dynamics of a bee exposure model, VarroaPop. Daily simulations are performed of hive population trajectories, taking into account queen strength, foraging success, mite impacts, weather, colony resources, population structure, and other important variables. This allows us to test the effects of defined pesticide exposure scenarios versus controlled simulations that lack pesticide exposure. The daily resolution of the model also allows us to conditionally identify sensitivity metrics. We use the variance-based global decomposition sensitivity analysis method, Sobol', to assess first- and second-order parameter sensitivities within VarroaPop, allowing us to determine how variance in the output is attributed to each of the input variables across different exposure scenarios. Simulations with VarroaPop indicate queen strength, forager life span and pesticide toxicity parameters are consistent, critical inputs for colony dynamics. Further analysis also reveals that the relative importance of these parameters fluctuates throughout the simulation period according to the status of other inputs. Our preliminary results show that model variability is conditional and can be attributed to different parameters depending on different timescales. By using sensitivity analysis to assess model output and variability, calibrations of simulation models can be better informed to yield more
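    First- and total-order variance-based indices of the kind reported above can be estimated with the standard Saltelli sampling scheme. The sketch below applies the Saltelli/Jansen estimators to a toy analytic stand-in for the colony model; VarroaPop itself is not reproduced, and the parameter names and ranges are placeholders.

```python
# Variance-based (Sobol') sensitivity sketch using the Saltelli/Jansen
# estimators with plain Monte Carlo sampling and a toy analytic model.
import numpy as np

rng = np.random.default_rng(123)
names = ["queen_strength", "forager_lifespan", "pesticide_toxicity"]
bounds = np.array([[1.0, 5.0], [5.0, 20.0], [0.0, 1.0]])
d, N = len(names), 4096

def colony_model(P):
    """Toy stand-in: end-of-season bee population (arbitrary units)."""
    q, f, tox = P[:, 0], P[:, 1], P[:, 2]
    return 1000.0 * q + 50.0 * f - 800.0 * tox * q

def sample(n):
    u = rng.random((n, d))
    return bounds[:, 0] + u * (bounds[:, 1] - bounds[:, 0])

A, B = sample(N), sample(N)
fA, fB = colony_model(A), colony_model(B)
var_y = np.var(np.r_[fA, fB])

for i, name in enumerate(names):
    ABi = A.copy()
    ABi[:, i] = B[:, i]          # A with column i taken from B
    fABi = colony_model(ABi)
    S1 = np.mean(fB * (fABi - fA)) / var_y        # first-order Sobol' index
    ST = 0.5 * np.mean((fA - fABi) ** 2) / var_y  # total-order index (Jansen)
    print(f"{name:20s} S1 = {S1:5.2f}   ST = {ST:5.2f}")
```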

  13. Application of Monte Carlo Methods to Perform Uncertainty and Sensitivity Analysis on Inverse Water-Rock Reactions with NETPATH

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McGraw, David; Hershey, Ronald L.

    Methods were developed to quantify uncertainty and sensitivity for NETPATH inverse water-rock reaction models and to calculate dissolved inorganic carbon, carbon-14 groundwater travel times. The NETPATH models calculate upgradient groundwater mixing fractions that produce the downgradient target water chemistry along with amounts of mineral phases that are either precipitated or dissolved. Carbon-14 groundwater travel times are calculated based on the upgradient source-water fractions, carbonate mineral phase changes, and isotopic fractionation. Custom scripts and statistical code were developed for this study to facilitate modifying input parameters, running the NETPATH simulations, extracting relevant output, postprocessing the results, and producing graphs and summaries. The scripts read user-specified values for each constituent's coefficient of variation, distribution, sensitivity parameter, maximum dissolution or precipitation amounts, and number of Monte Carlo simulations. Monte Carlo methods for analysis of parametric uncertainty assign a distribution to each uncertain variable, sample from those distributions, and evaluate the ensemble output. The uncertainty in input affected the variability of outputs, namely source-water mixing, phase dissolution and precipitation amounts, and carbon-14 travel time. Although NETPATH may provide models that satisfy the constraints, it is up to the geochemist to determine whether the results are geochemically reasonable. Two example water-rock reaction models from previous geochemical reports were considered in this study. Sensitivity analysis was also conducted to evaluate the change in output caused by a small change in input, one constituent at a time. Results were standardized to allow for sensitivity comparisons across all inputs, which results in a representative value for each scenario. The approach yielded insight into the uncertainty in water-rock reactions and travel times. For example, there was little variation in source-water fraction between the deterministic and Monte Carlo approaches, and therefore, little variation in travel times between approaches. Sensitivity analysis proved very useful for identifying the most important input constraints (dissolved-ion concentrations), which can reveal the variables that have the most influence on source-water fractions and carbon-14 travel times. Once these variables are determined, more focused effort can be applied to determining the proper distribution for each constraint. Second, Monte Carlo results for water-rock reaction modeling showed discrete and nonunique results. The NETPATH models provide the solutions that satisfy the constraints of upgradient and downgradient water chemistry. There can exist multiple, discrete solutions for any scenario, and these discrete solutions cause grouping of results. As a result, the variability in output may not easily be represented by a single distribution or a mean and variance, and care should be taken in the interpretation and reporting of results.

  14. Formulation of a mathematical model using parameter estimation techniques from flight test data for the Bell 427 helicopter and the F/A-18 aircraft in support of aeroservoelasticity research

    NASA Astrophysics Data System (ADS)

    Nadeau-Beaulieu, Michel

    In this thesis, three mathematical models are built from flight test data for different aircraft design applications: a ground dynamics model for the Bell 427 helicopter, a prediction model for the rotor and engine parameters for the same helicopter type, and a simulation model for the aeroelastic deflections of the F/A-18. In the ground dynamics application, the model structure is derived from physics, where the normal force between the helicopter and the ground is modelled as a vertical spring and the frictional force is modelled with static and dynamic friction coefficients. The ground dynamics model coefficients are optimized to ensure that the model matches the landing data within the FAA (Federal Aviation Administration) tolerance bands for a level D flight simulator. In the rotor and engine application, the rotor torques (main and tail), the engine torque and the main rotor speed are estimated using a state-space model. The model inputs are nonlinear terms derived from the pilot control inputs and the helicopter states. The model parameters are identified using the subspace method and are further optimised with the Levenberg-Marquardt minimisation algorithm. The model built with the subspace method provides an excellent estimate of the outputs within the FAA tolerance bands. The F/A-18 aeroelastic state-space model is built from flight test data. The research concerning this model is divided into two parts. Firstly, the deflection of a given structural surface on the aircraft following a differential aileron control input is represented by a multiple-input single-output linear model whose inputs are the aileron positions and the structural surface deflections. Secondly, a single state-space model is used to represent the deflection of the aircraft wings and trailing edge flaps following any control input. In this case the model is made nonlinear by multiplying model inputs together to form higher-order terms and using these terms as the inputs of the state-space equations. In both cases, the identification method is the subspace method. Most fit coefficients between the estimated and the measured signals are above 73%, and most correlation coefficients are higher than 90%.

  15. Real-time quality monitoring in debutanizer column with regression tree and ANFIS

    NASA Astrophysics Data System (ADS)

    Siddharth, Kumar; Pathak, Amey; Pani, Ajaya Kumar

    2018-05-01

    A debutanizer column is an integral part of any petroleum refinery. Online composition monitoring of debutanizer column outlet streams is highly desirable in order to maximize the production of liquefied petroleum gas. In this article, data-driven models for the debutanizer column are developed for real-time composition monitoring. The dataset used has seven process variables as inputs, and the output is the butane concentration in the debutanizer column bottom product. The input-output dataset is divided equally into a training (calibration) set and a validation (testing) set. The training set data were used to develop fuzzy inference, adaptive neuro-fuzzy inference system (ANFIS) and regression tree models for the debutanizer column. The accuracy of the developed models was evaluated by simulating the models with the validation dataset. It is observed that the ANFIS model has better estimation accuracy than the other models developed in this work and many data-driven models proposed so far in the literature for the debutanizer column.
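    The regression-tree branch of the comparison can be sketched with scikit-learn: split the input-output records equally into calibration and validation sets, fit a tree, and report the validation error. The seven process variables and the butane concentration are represented here by synthetic data, and the ANFIS branch is not shown.

```python
# Sketch of the regression-tree soft sensor: seven process variables in,
# bottom-product butane concentration out.  Data are synthetic stand-ins.
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, r2_score

rng = np.random.default_rng(21)
n = 2000
X = rng.normal(size=(n, 7))                    # seven process variables (synthetic)
# Synthetic butane concentration with a mild nonlinearity plus noise.
y = 0.3 * X[:, 0] - 0.2 * X[:, 3] + 0.1 * X[:, 1] * X[:, 5] + 0.05 * rng.normal(size=n)

# Equal split into calibration (training) and validation sets, as in the article.
X_cal, X_val, y_cal, y_val = train_test_split(X, y, test_size=0.5, random_state=0)

tree = DecisionTreeRegressor(max_depth=6, min_samples_leaf=20).fit(X_cal, y_cal)
y_hat = tree.predict(X_val)
rmse = float(np.sqrt(mean_squared_error(y_val, y_hat)))
print(f"validation RMSE = {rmse:.4f}, R^2 = {r2_score(y_val, y_hat):.3f}")
```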

  16. Modeling and control for vibration suppression of a flexible smart structure

    NASA Technical Reports Server (NTRS)

    Dosch, J.; Leo, D.; Inman, D.

    1993-01-01

    Theoretical and experimental results of the modeling and control of a flexible ribbed antenna are presented. The antenna consists of eight flexible ribs and constitutes a smart antenna in the sense that the actuators and sensors are an integral part of the structure. The antenna exhibits closely spaced and repeated modes, thus multi-input multi-output (MIMO) control is necessary for controllability and observability of the structure. The structure also exhibits the mode localization phenomenon and contains post-buckled members, making an accurate finite element model of the structure difficult to obtain. An identified MIMO minimum order model of the antenna is synthesized from identified single-input single-output (SISO) transfer functions curve-fit in the frequency domain. The identified model is used to design a positive position feedback (PPF) controller that increases damping in all of the modes in the targeted frequency range. Due to the accuracy of the open loop model of the antenna, the closed loop response predicted by the identified model correlates well with experimental results.

  17. Experimental comparison of conventional and nonlinear model-based control of a mixing tank

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haeggblom, K.E.

    1993-11-01

    In this case study concerning control of a laboratory-scale mixing tank, conventional multiloop single-input single-output (SISO) control is compared with "model-based" control, where the nonlinearity and multivariable characteristics of the process are explicitly taken into account. It is shown, especially if the operating range of the process is large, that the two outputs (level and temperature) cannot be adequately controlled by multiloop SISO control even if gain scheduling is used. By nonlinear multiple-input multiple-output (MIMO) control, on the other hand, very good control performance is obtained. The basic approach to nonlinear control used in this study is first to transform the process into a globally linear and decoupled system, and then to design controllers for this system. Because of the properties of the resulting MIMO system, the controller design is very easy. Two nonlinear control system designs, based on a steady-state and a dynamic model, respectively, are considered. In the dynamic case, both setpoint tracking and disturbance rejection can be addressed separately.

  18. Apparatus and method for detecting and measuring changes in linear relationships between a number of high frequency signals

    DOEpatents

    Bittner, J.W.; Biscardi, R.W.

    1991-03-19

    An electronic measurement circuit is disclosed for high speed comparison of the relative amplitudes of a predetermined number of electrical input signals, independent of variations in the magnitude of the sum of the signals. The circuit includes a high speed electronic switch that is operably connected to receive on its respective input terminals one of said electrical input signals and to have its common terminal serve as an input for a variable-gain amplifier-detector circuit that is operably connected to feed its output to a common terminal of a second high speed electronic switch. The respective terminals of the second high speed electronic switch are operably connected to a plurality of integrating sample and hold circuits, which in turn have their outputs connected to a summing logic circuit that is operable to develop first, second and third output voltages, the first output voltage being proportional to a predetermined ratio of sums and differences between the compared input signals, the second output voltage being proportional to a second summed ratio of predetermined sums and differences between said input signals, and the third output voltage being proportional to the sum of signals to the summing logic circuit. A servo system is operably connected to receive said third output signal and compare it with a reference voltage to develop a slowly varying feedback voltage that controls the variable-gain amplifier in said common amplifier-detector circuit in order to make said first and second output signals independent of variations in the magnitude of the sum of said input signals. 2 figures.

  19. Apparatus and method for detecting and measuring changes in linear relationships between a number of high frequency signals

    DOEpatents

    Bittner, John W.; Biscardi, Richard W.

    1991-01-01

    An electronic measurement circuit is provided for high speed comparison of the relative amplitudes of a predetermined number of electrical input signals, independent of variations in the magnitude of the sum of the signals. The circuit includes a high speed electronic switch that is operably connected to receive on its respective input terminals one of said electrical input signals and to have its common terminal serve as an input for a variable-gain amplifier-detector circuit that is operably connected to feed its output to a common terminal of a second high speed electronic switch. The respective terminals of the second high speed electronic switch are operably connected to a plurality of integrating sample and hold circuits, which in turn have their outputs connected to a summing logic circuit that is operable to develop first, second and third output voltages, the first output voltage being proportional to a predetermined ratio of sums and differences between the compared input signals, the second output voltage being proportional to a second summed ratio of predetermined sums and differences between said input signals, and the third output voltage being proportional to the sum of signals to the summing logic circuit. A servo system is operably connected to receive said third output signal and compare it with a reference voltage to develop a slowly varying feedback voltage that controls the variable-gain amplifier in said common amplifier-detector circuit in order to make said first and second output signals independent of variations in the magnitude of the sum of said input signals.

  20. Transfer functions for protein signal transduction: application to a model of striatal neural plasticity.

    PubMed

    Scheler, Gabriele

    2013-01-01

    We present a novel formulation for biochemical reaction networks in the context of protein signal transduction. The model consists of input-output transfer functions, which are derived from differential equations, using stable equilibria. We select a set of "source" species, which are interpreted as input signals. Signals are transmitted to all other species in the system (the "target" species) with a specific delay and with a specific transmission strength. The delay is computed as the maximal reaction time until a stable equilibrium for the target species is reached, in the context of all other reactions in the system. The transmission strength is the concentration change of the target species. The computed input-output transfer functions can be stored in a matrix, fitted with parameters, and even recalled to build dynamical models on the basis of state changes. By separating the temporal and the magnitudinal domain we can greatly simplify the computational model, circumventing typical problems of complex dynamical systems. The transfer function transformation of biochemical reaction systems can be applied to mass-action kinetic models of signal transduction. The paper shows that this approach yields significant novel insights while remaining a fully testable and executable dynamical model for signal transduction. In particular we can deconstruct the complex system into local transfer functions between individual species. As an example, we examine modularity and signal integration using a published model of striatal neural plasticity. The modularizations that emerge correspond to a known biological distinction between calcium-dependent and cAMP-dependent pathways. Remarkably, we found that overall interconnectedness depends on the magnitude of inputs, with higher connectivity at low input concentrations and significant modularization at moderate to high input concentrations. This general result, which directly follows from the properties of individual transfer functions, contradicts notions of ubiquitous complexity by showing input-dependent signal transmission inactivation.
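    The delay/strength extraction can be illustrated on a two-species toy cascade: step the source species, integrate to a stable equilibrium, and record the time for the target species to settle (delay) and its concentration change (transmission strength). The kinetics below are illustrative, not the published striatal model.

```python
# Sketch of turning a small reaction ODE into an input-output transfer entry:
# step the "source" species, integrate to a stable equilibrium, and record the
# settling time (delay) and concentration change (transmission strength) of
# the "target" species.  The two-step cascade is a toy example.
import numpy as np
from scipy.integrate import solve_ivp

def cascade(t, y, s):
    """Source s activates A; A activates B.  Linear activation, first-order decay."""
    a, b = y
    dadt = 0.8 * s - 0.5 * a
    dbdt = 0.6 * a - 0.4 * b
    return [dadt, dbdt]

def transfer_entry(s_low=0.0, s_high=1.0, tol=1e-3):
    # Equilibrium at the low input level serves as the baseline state.
    base = solve_ivp(cascade, (0, 200), [0.0, 0.0], args=(s_low,), rtol=1e-8).y[:, -1]
    # Step the source and follow the target species B.
    sol = solve_ivp(cascade, (0, 200), base, args=(s_high,), dense_output=True, rtol=1e-8)
    t = np.linspace(0, 200, 4000)
    b = sol.sol(t)[1]
    strength = b[-1] - base[1]                        # concentration change of target
    # Simple settling-time proxy: first time within tolerance of the final value.
    settled = np.abs(b - b[-1]) < tol * max(abs(strength), tol)
    delay = t[np.argmax(settled)] if settled.any() else np.inf
    return delay, strength

delay, strength = transfer_entry()
print(f"delay ~ {delay:.1f} time units, transmission strength ~ {strength:.3f}")
```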
