Sample records for input function general

  1. Generalization of some hidden subgroup algorithms for input sets of arbitrary size

    NASA Astrophysics Data System (ADS)

    Poslu, Damla; Say, A. C. Cem

    2006-05-01

    We consider the problem of generalizing some quantum algorithms so that they will work on input domains whose cardinalities are not necessarily powers of two. When analyzing the algorithms we assume that it is possible to perfectly generate superpositions of arbitrary subsets of basis states whose cardinalities are not necessarily powers of two. We have taken Ballhysa's model as a template and have extended it to Chi, Kim and Lee's generalizations of the Deutsch-Jozsa algorithm and to Simon's algorithm. With perfectly equal superpositions over input sets of arbitrary size, Chi, Kim and Lee's generalized Deutsch-Jozsa algorithms, both for evenly-distributed and for evenly-balanced functions, retain their one-sided error property. For Simon's algorithm, the success probability of the generalized algorithm with equiprobable superpositions over input sets of arbitrary cardinality is the same as that of the original, because the key property, namely that when the function is 2-to-1 every measured string has dot product zero with the string being sought, is preserved.

  2. Generalized compliant motion primitive

    NASA Technical Reports Server (NTRS)

    Backes, Paul G. (Inventor)

    1994-01-01

    This invention relates to a general primitive for controlling a telerobot with a set of input parameters. The primitive includes a trajectory generator, a teleoperation sensor, a joint limit generator, a force setpoint generator, and a dither function generator, each of which produces telerobot motion inputs in a common coordinate frame for simultaneous combination in sensor summers. Virtual return-spring motion input is provided by a restoration spring subsystem. The novel features of this invention include the use of a single general motion primitive at a remote site to permit shared and supervisory control of the robot manipulator to perform tasks via a remotely transferred input parameter set.

  3. Generalization and capacity of extensively large two-layered perceptrons.

    PubMed

    Rosen-Zvi, Michal; Engel, Andreas; Kanter, Ido

    2002-09-01

    The generalization ability and storage capacity of a treelike two-layered neural network with a number of hidden units scaling as the input dimension is examined. The mapping from the input to the hidden layer is via Boolean functions; the mapping from the hidden layer to the output is done by a perceptron. The analysis is within the replica framework where an order parameter characterizing the overlap between two networks in the combined space of Boolean functions and hidden-to-output couplings is introduced. The maximal capacity of such networks is found to scale linearly with the logarithm of the number of Boolean functions per hidden unit. The generalization process exhibits a first-order phase transition from poor to perfect learning for the case of discrete hidden-to-output couplings. The critical number of examples per input dimension, alpha(c), at which the transition occurs, again scales linearly with the logarithm of the number of Boolean functions. In the case of continuous hidden-to-output couplings, the generalization error decreases according to the same power law as for the perceptron, with the prefactor being different.

  4. Multilayer neural networks with extensively many hidden units.

    PubMed

    Rosen-Zvi, M; Engel, A; Kanter, I

    2001-08-13

    The information processing abilities of a multilayer neural network with a number of hidden units scaling as the input dimension are studied using statistical mechanics methods. The mapping from the input layer to the hidden units is performed by general symmetric Boolean functions, whereas the hidden layer is connected to the output by either discrete or continuous couplings. Introducing an overlap in the space of Boolean functions as order parameter, the storage capacity is found to scale with the logarithm of the number of implementable Boolean functions. The generalization behavior is smooth for continuous couplings and shows a discontinuous transition to perfect generalization for discrete ones.

  5. Cryptographic Boolean Functions with Biased Inputs

    DTIC Science & Technology

    2015-07-31

    theory of random graphs developed by Erdős and Rényi [2]. The graph properties in a random graph expressed as such Boolean functions are used by...distributed Bernoulli variates with the parameter p. Since our scope is within the area of cryptography, we initiate an analysis of cryptographic...Boolean functions with biased inputs, which we refer to as µp-Boolean functions, is a common generalization of Boolean functions which stems from the
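
    A µp-Boolean function is an ordinary Boolean function whose inputs are independent Bernoulli(p) bits rather than uniformly distributed ones. The following minimal sketch is not taken from the report; the majority function and the probability values are arbitrary examples used to compute such a biased expectation exactly by enumeration:

        # Minimal sketch: evaluating a Boolean function under biased (Bernoulli-p) inputs,
        # i.e. the "mu_p" setting the abstract refers to. Function and p values are examples.
        from itertools import product

        def mu_p_expectation(f, n, p):
            """Exact expectation of f over {0,1}^n when each input bit is 1 with probability p."""
            total = 0.0
            for x in product((0, 1), repeat=n):
                ones = sum(x)
                weight = (p ** ones) * ((1 - p) ** (n - ones))
                total += weight * f(x)
            return total

        # Example: 3-bit majority under uniform and biased input distributions.
        maj3 = lambda x: int(sum(x) >= 2)
        print(mu_p_expectation(maj3, 3, 0.5))   # 0.5 (unbiased inputs)
        print(mu_p_expectation(maj3, 3, 0.8))   # larger: biased inputs shift the output bias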

  6. Production Function Geometry with "Knightian" Total Product

    ERIC Educational Resources Information Center

    Truett, Dale B.; Truett, Lila J.

    2007-01-01

    Authors of principles and price theory textbooks generally illustrate short-run production using a total product curve that displays first increasing and then diminishing marginal returns to employment of the variable input(s). Although it seems reasonable that a temporary range of increasing returns to variable inputs will likely occur as…

  7. Matrix Methods for Estimating the Coherence Functions from Estimates of the Cross-Spectral Density Matrix

    DOE PAGES

    Smallwood, D. O.

    1996-01-01

    It is shown that the usual method for estimating the coherence functions (ordinary, partial, and multiple) for a general multiple-input/multiple-output problem can be expressed as a modified form of Cholesky decomposition of the cross-spectral density matrix of the input and output records. The results can be equivalently obtained using singular value decomposition (SVD) of the cross-spectral density matrix. Using SVD suggests a new form of fractional coherence. The formulation as an SVD problem also suggests a way to order the inputs when a natural physical order of the inputs is absent.
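
    As a point of reference for the quantities involved, the sketch below evaluates the standard ordinary and multiple coherence formulas directly from a cross-spectral density matrix at a single frequency. It is not Smallwood's Cholesky/SVD formulation, and the matrix S is a made-up Hermitian positive-definite example with channels ordered [input 1, input 2, output]:

        # Illustrative sketch: ordinary and multiple coherence from a cross-spectral
        # density matrix at one frequency. S is a made-up, Hermitian positive-definite example.
        import numpy as np

        S = np.array([[2.0,       0.3+0.1j, 0.8-0.2j],
                      [0.3-0.1j,  1.5,      0.6+0.4j],
                      [0.8+0.2j,  0.6-0.4j, 1.8     ]])

        def ordinary_coherence(S, i, j):
            return np.abs(S[i, j])**2 / (S[i, i].real * S[j, j].real)

        def multiple_coherence(S, out, ins):
            """Coherence between channel `out` and the set of input channels `ins`."""
            Sxx = S[np.ix_(ins, ins)]
            Sxy = S[np.ix_(ins, [out])]
            Syy = S[out, out].real
            return (Sxy.conj().T @ np.linalg.solve(Sxx, Sxy)).real.item() / Syy

        print(ordinary_coherence(S, 0, 2))        # input 1 vs output
        print(multiple_coherence(S, 2, [0, 1]))   # both inputs vs output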

  8. Generalized neurofuzzy network modeling algorithms using Bézier-Bernstein polynomial functions and additive decomposition.

    PubMed

    Hong, X; Harris, C J

    2000-01-01

    This paper introduces a new neurofuzzy model construction algorithm for nonlinear dynamic systems based upon basis functions that are Bézier-Bernstein polynomial functions. The approach is general in that it copes with n-dimensional inputs by utilising an additive decomposition construction to overcome the curse of dimensionality associated with high n. This new construction algorithm also introduces univariate Bézier-Bernstein polynomial functions for the completeness of the generalized procedure. Like the B-spline expansion based neurofuzzy systems, Bézier-Bernstein polynomial function based neurofuzzy networks hold desirable properties such as nonnegativity of the basis functions, unity of support, and interpretability of the basis functions as fuzzy membership functions, moreover with the additional advantages of structural parsimony and Delaunay input space partition, essentially overcoming the curse of dimensionality associated with conventional fuzzy and RBF networks. This new modeling network is based on an additive decomposition approach together with two separate basis function formation approaches for both univariate and bivariate Bézier-Bernstein polynomial functions used in model construction. The overall network weights are then learnt using conventional least squares methods. Numerical examples are included to demonstrate the effectiveness of this new data based modeling approach.
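
    The basis-function properties cited above (nonnegativity and unity of support) are easy to verify for the univariate Bernstein polynomials themselves. A minimal illustrative sketch, not the paper's neurofuzzy construction algorithm:

        # Minimal sketch: univariate Bernstein polynomial basis of degree n on [0, 1],
        # checking nonnegativity and the partition-of-unity ("unity of support") property.
        import numpy as np
        from math import comb

        def bernstein_basis(n, x):
            """Array of shape (n+1, len(x)) with B_{i,n}(x) = C(n,i) x^i (1-x)^(n-i)."""
            x = np.asarray(x, dtype=float)
            return np.array([comb(n, i) * x**i * (1 - x)**(n - i) for i in range(n + 1)])

        x = np.linspace(0.0, 1.0, 5)
        B = bernstein_basis(3, x)
        print(B.min() >= 0.0)                    # nonnegativity
        print(np.allclose(B.sum(axis=0), 1.0))   # the basis sums to one everywhere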

  9. Constructing general partial differential equations using polynomial and neural networks.

    PubMed

    Zjavka, Ladislav; Pedrycz, Witold

    2016-01-01

    Sum fraction terms can approximate multi-variable functions on the basis of discrete observations, replacing a partial differential equation definition with polynomial elementary data relation descriptions. Artificial neural networks commonly transform the weighted sum of inputs to describe overall similarity relationships of trained and new testing input patterns. Differential polynomial neural networks form a new class of neural networks, which construct and solve an unknown general partial differential equation of a function of interest with selected substitution relative terms using non-linear multi-variable composite polynomials. The layers of the network generate simple and composite relative substitution terms whose convergent series combinations can describe partial dependent derivative changes of the input variables. This regression is based on trained generalized partial derivative data relations, decomposed into a multi-layer polynomial network structure. The sigmoidal function, commonly used as a nonlinear activation of artificial neurons, may transform some polynomial items together with the parameters with the aim of improving the ability of the polynomial derivative term series to approximate complicated periodic functions, since simple low-order polynomials cannot fully capture complete cycles. The similarity analysis facilitates substitutions for differential equations or can form dimensional units from data samples to describe real-world problems. Copyright © 2015 Elsevier Ltd. All rights reserved.

  10. Business Process Improvement Applied to Written Temporary Duty Travel Orders within the United States Air Force

    DTIC Science & Technology

    1993-12-01

    Generally Accepted Process. While neither DoD Directives nor USAF Regulations specify exact mandatory TDY order processing methods, most USAF units...functional input. Finally, TDY order processing functional experts at Hanscom, Los Angeles and McClellan AFBs provided inputs based on their experiences...current electronic auditing capabilities. DTPS Initiative. This DFAS-initiated action to standardize TDY order processing throughout DoD is currently

  11. Flexible Kernel Memory

    PubMed Central

    Nowicki, Dimitri; Siegelmann, Hava

    2010-01-01

    This paper introduces a new model of associative memory, capable of handling both binary and continuous-valued inputs. Based on kernel theory, the memory model is on one hand a generalization of Radial Basis Function networks and, on the other, is in feature space, analogous to a Hopfield network. Attractors can be added, deleted, and updated on-line simply, without harming existing memories, and the number of attractors is independent of input dimension. Input vectors do not have to adhere to a fixed or bounded dimensionality; they can increase and decrease it without relearning previous memories. A memory consolidation process enables the network to generalize concepts and form clusters of input data, which outperforms many unsupervised clustering techniques; this process is demonstrated on handwritten digits from MNIST. Another process, reminiscent of memory reconsolidation, is introduced, in which existing memories are refreshed and tuned with new inputs; this process is demonstrated on a series of morphed faces. PMID:20552013

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smallwood, D.O.

    In a previous paper Smallwood and Paez (1991) showed how to generate realizations of partially coherent stationary normal time histories with a specified cross-spectral density matrix. This procedure is generalized for the case of multiple inputs with a specified cross-spectral density function and a specified marginal probability density function (pdf) for each of the inputs. The specified pdfs are not required to be Gaussian. A zero memory nonlinear (ZMNL) function is developed for each input to transform a Gaussian or normal time history into a time history with a specified non-Gaussian distribution. The transformation functions have the property that a transformed time history will have nearly the same auto spectral density as the original time history. A vector of Gaussian time histories is then generated with the specified cross-spectral density matrix. These waveforms are then transformed into the required time history realizations using the ZMNL function.
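
    A hedged sketch of the basic ZMNL idea described above: a Gaussian time history is passed through y = F_target^{-1}(Phi(x)), where Phi is the standard normal CDF and F_target is the desired marginal distribution, so the output has the specified non-Gaussian pdf. The gamma target distribution below is an illustrative choice, not one from the paper:

        # Hedged sketch of a zero-memory nonlinear (ZMNL) marginal transformation.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        x = rng.standard_normal(100_000)          # Gaussian input time history

        def zmnl(x, target_dist):
            """Map standard-normal samples to samples with the target marginal distribution."""
            u = stats.norm.cdf(x)                 # uniform(0,1) via the Gaussian CDF
            return target_dist.ppf(u)             # inverse CDF of the desired marginal

        y = zmnl(x, stats.gamma(a=2.0, scale=1.0))
        print(stats.skew(y), stats.kurtosis(y))   # clearly non-Gaussian marginal statistics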

  13. Using Linear and Quadratic Functions to Teach Number Patterns in Secondary School

    ERIC Educational Resources Information Center

    Kenan, Kok Xiao-Feng

    2017-01-01

    This paper outlines an approach to definitively find the general term in a number pattern, of either a linear or quadratic form, by using the general equation of a linear or quadratic function. This approach is governed by four principles: (1) identifying the position of the term (input) and the term itself (output); (2) recognising that each…
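
    A small companion calculation of the idea described above, treating the position n as input and the term as output and reading the coefficients off the finite differences (an illustrative sketch, not the paper's worked examples):

        # Recover the general term T(n) = a*n^2 + b*n + c of a linear or quadratic pattern
        # from its first and second finite differences (terms given for n = 1, 2, 3, ...).
        def general_term(terms):
            d1 = [t2 - t1 for t1, t2 in zip(terms, terms[1:])]    # first differences
            d2 = [e2 - e1 for e1, e2 in zip(d1, d1[1:])]          # second differences
            if all(d == 0 for d in d2):                           # constant first differences -> linear
                a, b = 0, d1[0]
            else:                                                 # constant second differences -> quadratic
                a = d2[0] / 2
                b = d1[0] - 3 * a
            c = terms[0] - a - b
            return a, b, c

        print(general_term([3, 5, 7, 9]))      # (0, 2, 1)      -> T(n) = 2n + 1
        print(general_term([2, 6, 12, 20]))    # a=1, b=1, c=0  -> T(n) = n^2 + n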

  14. Generation of Stationary Non-Gaussian Time Histories with a Specified Cross-spectral Density

    DOE PAGES

    Smallwood, David O.

    1997-01-01

    The paper reviews several methods for the generation of stationary realizations of sampled time histories with non-Gaussian distributions and introduces a new method which can be used to control the cross-spectral density matrix and the probability density functions (pdfs) of the multiple input problem. Discussed first are two methods for the specialized case of matching the auto (power) spectrum, the skewness, and kurtosis using generalized shot noise and using polynomial functions. It is then shown that the skewness and kurtosis can also be controlled by the phase of a complex frequency domain description of the random process. The general case of matching a target probability density function using a zero memory nonlinear (ZMNL) function is then covered. Next, methods for generating vectors of random variables with a specified covariance matrix for a class of spherically invariant random vectors (SIRV) are discussed. Finally, the general case of matching the cross-spectral density matrix of a vector of inputs with non-Gaussian marginal distributions is presented.
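
    The linear-algebra step common to these cross-spectral-density-matching methods can be sketched at a single frequency: factor the Hermitian, positive-definite CSD matrix as S = L L^H (Cholesky) and color independent complex white noise with L. The matrix below is a made-up example; normalization and the assembly of full time histories across all frequencies are omitted:

        # Hedged sketch of the per-frequency coloring step for a target CSD matrix.
        import numpy as np

        rng = np.random.default_rng(1)
        S = np.array([[1.0,       0.6+0.2j],
                      [0.6-0.2j,  1.0     ]])    # target CSD matrix at one frequency

        L = np.linalg.cholesky(S)                # S = L @ L.conj().T
        w = (rng.standard_normal((2, 50_000)) + 1j * rng.standard_normal((2, 50_000))) / np.sqrt(2)
        X = L @ w                                # colored frequency-domain samples

        S_hat = (X @ X.conj().T) / X.shape[1]    # sample estimate of E[X X^H]
        print(np.allclose(S_hat, S, atol=0.03))  # True: sample CSD matches the target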

  15. The production function

    NASA Astrophysics Data System (ADS)

    Fioretti, Guido

    2007-02-01

    The production function maps the inputs of a firm or a productive system onto its outputs. This article expounds generalizations of the production function that include state variables, organizational structures and increasing returns to scale. These extensions are needed in order to explain the regularities of the empirical distributions of certain economic variables.

  16. Neural networks: further insights into error function, generalized weights and others

    PubMed Central

    2016-01-01

    The article is a continuation of a previous one providing further insights into the structure of a neural network (NN). Key concepts of NN including activation function, error function, learning rate and generalized weights are introduced. NN topology can be visualized with the generic plot() function by passing a “nn” class object. Generalized weights assist interpretation of an NN model with respect to the independent effect of individual input variables. A large variance of the generalized weights for a covariate indicates non-linearity of its independent effect. If the generalized weights of a covariate are approximately zero, the covariate is considered to have no effect on the outcome. Finally, prediction of new observations can be performed using the compute() function. Make sure that the feature variables passed to the compute() function are in the same order as in the training NN. PMID:27668220

  17. Information processing in dendrites I. Input pattern generalisation.

    PubMed

    Gurney, K N

    2001-10-01

    In this paper and its companion, we address the question as to whether there are any general principles underlying information processing in the dendritic trees of biological neurons. In order to address this question, we make two assumptions. First, the key architectural feature of dendrites responsible for many of their information processing abilities is the existence of independent sub-units performing local non-linear processing. Second, any general functional principles operate at a level of abstraction in which neurons are modelled by Boolean functions. To accommodate these assumptions, we therefore define a Boolean model neuron-the multi-cube unit (MCU)-which instantiates the notion of the discrete functional sub-unit. We then use this model unit to explore two aspects of neural functionality: generalisation (in this paper) and processing complexity (in its companion). Generalisation is dealt with from a geometric viewpoint and is quantified using a new metric-the set of order parameters. These parameters are computed for threshold logic units (TLUs), a class of random Boolean functions, and MCUs. Our interpretation of the order parameters is consistent with our knowledge of generalisation in TLUs and with the lack of generalisation in randomly chosen functions. Crucially, the order parameters for MCUs imply that these functions possess a range of generalisation behaviour. We argue that this supports the general thesis that dendrites facilitate input pattern generalisation despite any local non-linear processing within functionally isolated sub-units.

  18. An Explicit Linear Filtering Solution for the Optimization of Guidance Systems with Statistical Inputs

    NASA Technical Reports Server (NTRS)

    Stewart, Elwood C.

    1961-01-01

    The determination of optimum filtering characteristics for guidance system design is generally a tedious process which cannot usually be carried out in general terms. In this report a simple explicit solution is given which is applicable to many different types of problems. It is shown to be applicable to problems which involve optimization of constant-coefficient guidance systems and time-varying homing type systems for several stationary and nonstationary inputs. The solution is also applicable to off-design performance, that is, the evaluation of system performance for inputs for which the system was not specifically optimized. The solution is given in generalized form in terms of the minimum theoretical error, the optimum transfer functions, and the optimum transient response. The effects of input signal, contaminating noise, and limitations on the response are included. From the results given, it is possible in an interception problem, for example, to rapidly assess the effects on minimum theoretical error of such factors as target noise and missile acceleration. It is also possible to answer important questions regarding the effect of type of target maneuver on optimum performance.

  19. Reconfigurable Fault Tolerance for FPGAs

    NASA Technical Reports Server (NTRS)

    Shuler, Robert, Jr.

    2010-01-01

    The invention allows a field-programmable gate array (FPGA) or similar device to be efficiently reconfigured in whole or in part to provide higher capacity, non-redundant operation. The redundant device consists of functional units such as adders or multipliers, configuration memory for the functional units, a programmable routing method, configuration memory for the routing method, and various other features such as block RAM (random access memory), I/O (input/output) capability, dedicated carry logic, etc. The redundant device has three identical sets of functional units and routing resources and majority voters that correct errors. The configuration memory may or may not be redundant, depending on need. For example, SRAM-based FPGAs will need some type of radiation-tolerant configuration memory, or they will need triple-redundant configuration memory. Flash or anti-fuse devices will generally not need redundant configuration memory. Some means of loading and verifying the configuration memory is also required. These are all components of the pre-existing redundant FPGA. This innovation modifies the voter to accept a MODE input, which specifies whether ordinary voting is to occur, or if redundancy is to be split. Generally, additional routing resources will also be required to pass data between sections of the device created by splitting the redundancy. In redundancy mode, the voters produce an output corresponding to the two inputs that agree, in the usual fashion. In the split mode, the voters select just one input and convey this to the output, ignoring the other inputs. In a dual-redundant system (as opposed to triple-redundant), instead of a voter, there is some means to latch or gate a state update only when both inputs agree. In this case, the invention would require modification of the latch or gate so that it would operate normally in redundant mode, and would separately latch or gate the inputs in non-redundant mode.

  20. Characteristic operator functions for quantum input-plant-output models and coherent control

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gough, John E.

    We introduce the characteristic operator as the generalization of the usual concept of a transfer function of linear input-plant-output systems to arbitrary quantum nonlinear Markovian input-output models. This is intended as a tool in the characterization of quantum feedback control systems that fits in with the general theory of networks. The definition exploits the linearity of noise differentials in both the plant Heisenberg equations of motion and the differential form of the input-output relations. Mathematically, the characteristic operator is a matrix of dimension equal to the number of outputs times the number of inputs (which must coincide), but with entries that are operators of the plant system. In this sense, the characteristic operator retains details of the effective plant dynamical structure and is an essentially quantum object. We illustrate the relevance of the definition to model reduction and simplification by showing that the convergence of the characteristic operator in adiabatic elimination limit models requires the same conditions and assumptions appearing in the work on limit quantum stochastic differential theorems of Bouten and Silberfarb [Commun. Math. Phys. 283, 491-505 (2008)]. This approach also shows in a natural way that the limit coefficients of the quantum stochastic differential equations in adiabatic elimination problems arise algebraically as Schur complements and amounts to a model reduction where the fast degrees of freedom are decoupled from the slow ones and eliminated.

  1. AESOP: An interactive computer program for the design of linear quadratic regulators and Kalman filters

    NASA Technical Reports Server (NTRS)

    Lehtinen, B.; Geyser, L. C.

    1984-01-01

    AESOP is a computer program for use in designing feedback controls and state estimators for linear multivariable systems. AESOP is meant to be used in an interactive manner. Each design task that the program performs is assigned a "function" number. The user accesses these functions either (1) by inputting a list of desired function numbers or (2) by inputting a single function number. In the latter case the choice of the function will in general depend on the results obtained by the previously executed function. The most important of the AESOP functions are those that design linear quadratic regulators and Kalman filters. The user interacts with the program when using these design functions by inputting design weighting parameters and by viewing graphic displays of designed system responses. Supporting functions are provided that obtain system transient and frequency responses, transfer functions, and covariance matrices. The program can also compute open-loop system information such as stability (eigenvalues), eigenvectors, controllability, and observability. The program is written in ANSI-66 FORTRAN for use on an IBM 3033 using TSS 370. Descriptions of all subroutines and results of two test cases are included in the appendixes.
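
    For readers unfamiliar with the central design step that AESOP automates, the sketch below computes a continuous-time linear quadratic regulator gain from the algebraic Riccati equation. It is a minimal modern analogue, not the AESOP program; the double-integrator plant and the weighting matrices are illustrative choices:

        # Minimal LQR design sketch: solve the continuous algebraic Riccati equation.
        import numpy as np
        from scipy.linalg import solve_continuous_are

        A = np.array([[0.0, 1.0],
                      [0.0, 0.0]])        # double integrator (illustrative plant)
        B = np.array([[0.0],
                      [1.0]])
        Q = np.diag([10.0, 1.0])          # state weighting (user-chosen design parameter)
        R = np.array([[1.0]])             # control weighting

        P = solve_continuous_are(A, B, Q, R)
        K = np.linalg.solve(R, B.T @ P)   # optimal feedback gain, u = -K x

        print(K)
        print(np.linalg.eigvals(A - B @ K))   # closed-loop eigenvalues in the left half-plane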

  2. Unsupervised segmentation with dynamical units.

    PubMed

    Rao, A Ravishankar; Cecchi, Guillermo A; Peck, Charles C; Kozloski, James R

    2008-01-01

    In this paper, we present a novel network to separate mixtures of inputs that have been previously learned. A significant capability of the network is that it segments the components of each input object that most contribute to its classification. The network consists of amplitude-phase units that can synchronize their dynamics, so that separation is determined by the amplitude of units in an output layer, and segmentation by phase similarity between input and output layer units. Learning is unsupervised and based on a Hebbian update, and the architecture is very simple. Moreover, efficient segmentation can be achieved even when there is considerable superposition of the inputs. The network dynamics are derived from an objective function that rewards sparse coding in the generalized amplitude-phase variables. We argue that this objective function can provide a possible formal interpretation of the binding problem and that the implementation of the network architecture and dynamics is biologically plausible.

  3. Noniterative computation of infimum in H(infinity) optimisation for plants with invariant zeros on the j(omega)-axis

    NASA Technical Reports Server (NTRS)

    Chen, B. M.; Saber, A.

    1993-01-01

    A simple and noniterative procedure for the computation of the exact value of the infimum in the singular H(infinity)-optimization problem is presented, as a continuation of our earlier work. Our problem formulation is general: we do not place any restrictions on the finite and infinite zero structures of the system, or on the direct feedthrough terms between the control input and the controlled output variables and between the disturbance input and the measurement output variables. Our method is applicable to a class of singular H(infinity)-optimization problems for which the transfer functions from the control input to the controlled output and from the disturbance input to the measurement output satisfy certain geometric conditions. In particular, the paper extends the result of earlier work by allowing these two transfer functions to have invariant zeros on the j(omega) axis.

  4. A new polytopic approach for the unknown input functional observer design

    NASA Astrophysics Data System (ADS)

    Bezzaoucha, Souad; Voos, Holger; Darouach, Mohamed

    2018-03-01

    In this paper, a constructive procedure to design Functional Unknown Input Observers for nonlinear continuous time systems is proposed under the Polytopic Takagi-Sugeno framework. An equivalent representation for the nonlinear model is achieved using the sector nonlinearity transformation. Applying Lyapunov theory and the ? attenuation, linear matrix inequality conditions are deduced and solved for feasibility to obtain the observer design matrices. To cope with the effect of unknown inputs, the classical approach of decoupling the unknown input, as in the linear case, is used. Both algebraic and solver-based solutions are proposed (relaxed conditions). Necessary and sufficient conditions for the existence of the functional polytopic observer are given. For both approaches, the general and particular cases (measurable premise variables, full state estimation with full and reduced order cases) are considered and it is shown that the proposed conditions correspond to the ones presented for the standard linear case. To illustrate the proposed theoretical results, detailed numerical simulations are presented for a Quadrotor Aerial Robot Landing and a Waste Water Treatment Plant. Both systems are highly nonlinear and represented in a T-S polytopic form with unmeasurable premise variables and unknown inputs.

  5. An open tool for input function estimation and quantification of dynamic PET FDG brain scans.

    PubMed

    Bertrán, Martín; Martínez, Natalia; Carbajal, Guillermo; Fernández, Alicia; Gómez, Álvaro

    2016-08-01

    Positron emission tomography (PET) analysis of clinical studies is mostly restricted to qualitative evaluation. Quantitative analysis of PET studies is highly desirable to be able to compute an objective measurement of the process of interest in order to evaluate treatment response and/or compare patient data. But implementation of quantitative analysis generally requires the determination of the input function: the arterial blood or plasma activity which indicates how much tracer is available for uptake in the brain. The purpose of our work was to share with the community an open software tool that can assist in the estimation of this input function, and the derivation of a quantitative map from the dynamic PET study. Arterial blood sampling during the PET study is the gold standard method to get the input function, but is uncomfortable and risky for the patient so it is rarely used in routine studies. To overcome the lack of a direct input function, different alternatives have been devised and are available in the literature. These alternatives derive the input function from the PET image itself (image-derived input function) or from data gathered from previous similar studies (population-based input function). In this article, we present ongoing work that includes the development of a software tool that integrates several methods with novel strategies for the segmentation of blood pools and parameter estimation. The tool is available as an extension to the 3D Slicer software. Tests on phantoms were conducted in order to validate the implemented methods. We evaluated the segmentation algorithms over a range of acquisition conditions and vasculature size. Input function estimation algorithms were evaluated against ground truth of the phantoms, as well as on their impact over the final quantification map. End-to-end use of the tool yields quantification maps with [Formula: see text] relative error in the estimated influx versus ground truth on phantoms. The main contribution of this article is the development of an open-source, free to use tool that encapsulates several well-known methods for the estimation of the input function and the quantification of dynamic PET FDG studies. Some alternative strategies are also proposed and implemented in the tool for the segmentation of blood pools and parameter estimation. The tool was tested on phantoms with encouraging results that suggest that even bloodless estimators could provide a viable alternative to blood sampling for quantification using graphical analysis. The open tool is a promising opportunity for collaboration among investigators and further validation on real studies.

  6. General methodology for nonlinear modeling of neural systems with Poisson point-process inputs.

    PubMed

    Marmarelis, V Z; Berger, T W

    2005-07-01

    This paper presents a general methodological framework for the practical modeling of neural systems with point-process inputs (sequences of action potentials or, more broadly, identical events) based on the Volterra and Wiener theories of functional expansions and system identification. The paper clarifies the distinctions between Volterra and Wiener kernels obtained from Poisson point-process inputs. It shows that only the Wiener kernels can be estimated via cross-correlation, but must be defined as zero along the diagonals. The Volterra kernels can be estimated far more accurately (and from shorter data-records) by use of the Laguerre expansion technique adapted to point-process inputs, and they are independent of the mean rate of stimulation (unlike their P-W counterparts that depend on it). The Volterra kernels can also be estimated for broadband point-process inputs that are not Poisson. Useful applications of this modeling approach include cases where we seek to determine (model) the transfer characteristics between one neuronal axon (a point-process 'input') and another axon (a point-process 'output') or some other measure of neuronal activity (a continuous 'output', such as population activity) with which a causal link exists.
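
    A hedged numerical illustration of one point made above, namely that a first-order kernel can be recovered by cross-correlation when the input is a Poisson-like point process. The system below is a known linear filter so the true kernel is available for comparison; higher-order kernels and the Laguerre expansion estimator are not shown:

        # First-order kernel recovery by cross-correlation with a Bernoulli-binned spike input.
        import numpy as np

        rng = np.random.default_rng(2)
        n, p = 200_000, 0.05                      # bins and per-bin spike probability
        x = (rng.random(n) < p).astype(float)     # Poisson-like point-process input

        tau = np.arange(30)
        h_true = np.exp(-tau / 5.0)               # "unknown" first-order kernel
        y = np.convolve(x, h_true)[:n]            # system output (purely first order here)

        # cov(y(t), x(t - tau)) / var(x) recovers h(tau); var(x) = p(1-p) for Bernoulli bins
        h_est = np.array([
            np.mean((y[t0:] - y.mean()) * (x[:n - t0] - p)) for t0 in tau
        ]) / (p * (1 - p))

        print(np.max(np.abs(h_est - h_true)))     # small estimation error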

  7. Optimizations of Human Restraint Systems for Short-Period Acceleration

    NASA Technical Reports Server (NTRS)

    Payne, P. R.

    1963-01-01

    A restraint system's main function is to restrain its occupant when his vehicle is subjected to acceleration. If the restraint system is rigid and well-fitting (to eliminate slack) then it will transmit the vehicle acceleration to its occupant without modifying it in any way. Few present-day restraint systems are stiff enough to give this one-to-one transmission characteristic, and depending upon their dynamic characteristics and the nature of the vehicle's acceleration-time history, they will either magnify or attenuate the acceleration. Obviously an optimum restraint system will give maximum attenuation of an input acceleration. In the general case of an arbitrary acceleration input, a computer must be used to determine the optimum dynamic characteristics for the restraint system. Analytical solutions can be obtained for certain simple cases, however, and these cases are considered in this paper, after the concept of dynamic models of the human body is introduced. The paper concludes with a description of an analog computer specially developed for the Air Force to handle completely general mechanical restraint optimization programs of this type, where the acceleration input may be any arbitrary function of time.
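
    The amplification-versus-attenuation question discussed above can be illustrated with the simplest possible dynamic model of a restrained occupant: a single-degree-of-freedom spring-damper driven by a half-sine base acceleration pulse. A hedged sketch with made-up parameter values, not the report's analog-computer optimization:

        # Base-excited single-degree-of-freedom restraint model under a half-sine pulse.
        import numpy as np
        from scipy.integrate import solve_ivp

        wn, zeta = 2 * np.pi * 4.0, 0.3             # restraint natural frequency (rad/s), damping ratio
        T_pulse, A_peak = 0.10, 100.0               # 100 ms half-sine input, 100 m/s^2 peak

        def a_base(t):
            return A_peak * np.sin(np.pi * t / T_pulse) if t < T_pulse else 0.0

        def rhs(t, y):
            z, zdot = y                             # occupant displacement/velocity relative to vehicle
            zddot = -2 * zeta * wn * zdot - wn**2 * z - a_base(t)
            return [zdot, zddot]

        sol = solve_ivp(rhs, (0.0, 1.0), [0.0, 0.0], max_step=1e-3)
        z, zdot = sol.y
        a_occupant = -(2 * zeta * wn * zdot + wn**2 * z)   # absolute occupant acceleration
        print(np.max(np.abs(a_occupant)) / A_peak)         # >1 means amplification, <1 attenuation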

  8. Oculomotor learning revisited: a model of reinforcement learning in the basal ganglia incorporating an efference copy of motor actions

    PubMed Central

    Fee, Michale S.

    2012-01-01

    In its simplest formulation, reinforcement learning is based on the idea that if an action taken in a particular context is followed by a favorable outcome, then, in the same context, the tendency to produce that action should be strengthened, or reinforced. While reinforcement learning forms the basis of many current theories of basal ganglia (BG) function, these models do not incorporate distinct computational roles for signals that convey context, and those that convey what action an animal takes. Recent experiments in the songbird suggest that vocal-related BG circuitry receives two functionally distinct excitatory inputs. One input is from a cortical region that carries context information about the current “time” in the motor sequence. The other is an efference copy of motor commands from a separate cortical brain region that generates vocal variability during learning. Based on these findings, I propose here a general model of vertebrate BG function that combines context information with a distinct motor efference copy signal. The signals are integrated by a learning rule in which efference copy inputs gate the potentiation of context inputs (but not efference copy inputs) onto medium spiny neurons in response to a rewarded action. The hypothesis is described in terms of a circuit that implements the learning of visually guided saccades. The model makes testable predictions about the anatomical and functional properties of hypothesized context and efference copy inputs to the striatum from both thalamic and cortical sources. PMID:22754501

  9. Oculomotor learning revisited: a model of reinforcement learning in the basal ganglia incorporating an efference copy of motor actions.

    PubMed

    Fee, Michale S

    2012-01-01

    In its simplest formulation, reinforcement learning is based on the idea that if an action taken in a particular context is followed by a favorable outcome, then, in the same context, the tendency to produce that action should be strengthened, or reinforced. While reinforcement learning forms the basis of many current theories of basal ganglia (BG) function, these models do not incorporate distinct computational roles for signals that convey context, and those that convey what action an animal takes. Recent experiments in the songbird suggest that vocal-related BG circuitry receives two functionally distinct excitatory inputs. One input is from a cortical region that carries context information about the current "time" in the motor sequence. The other is an efference copy of motor commands from a separate cortical brain region that generates vocal variability during learning. Based on these findings, I propose here a general model of vertebrate BG function that combines context information with a distinct motor efference copy signal. The signals are integrated by a learning rule in which efference copy inputs gate the potentiation of context inputs (but not efference copy inputs) onto medium spiny neurons in response to a rewarded action. The hypothesis is described in terms of a circuit that implements the learning of visually guided saccades. The model makes testable predictions about the anatomical and functional properties of hypothesized context and efference copy inputs to the striatum from both thalamic and cortical sources.

  10. Experimental industrial signal acquisition board in a large scientific device

    NASA Astrophysics Data System (ADS)

    Zeng, Xiangzhen; Ren, Bin

    2018-02-01

    In order to measure the industrial signals of a neutrino experiment, a general-purpose industrial data acquisition board has been designed. It provides switch (digital) signal input and output functions and an analog signal input function. The main components are a signal isolation amplifier and filter circuit, an ADC circuit, a microcomputer system and an isolated communication interface circuit. Practical experiments show that the system is flexible, reliable, convenient and economical, with high resolution and strong anti-interference ability. Thus, the system fully meets the design requirements.

  11. Towards a general theory of neural computation based on prediction by single neurons.

    PubMed

    Fiorillo, Christopher D

    2008-10-01

    Although there has been tremendous progress in understanding the mechanics of the nervous system, there has not been a general theory of its computational function. Here I present a theory that relates the established biophysical properties of single generic neurons to principles of Bayesian probability theory, reinforcement learning and efficient coding. I suggest that this theory addresses the general computational problem facing the nervous system. Each neuron is proposed to mirror the function of the whole system in learning to predict aspects of the world related to future reward. According to the model, a typical neuron receives current information about the state of the world from a subset of its excitatory synaptic inputs, and prior information from its other inputs. Prior information would be contributed by synaptic inputs representing distinct regions of space, and by different types of non-synaptic, voltage-regulated channels representing distinct periods of the past. The neuron's membrane voltage is proposed to signal the difference between current and prior information ("prediction error" or "surprise"). A neuron would apply a Hebbian plasticity rule to select those excitatory inputs that are the most closely correlated with reward but are the least predictable, since unpredictable inputs provide the neuron with the most "new" information about future reward. To minimize the error in its predictions and to respond only when excitation is "new and surprising," the neuron selects amongst its prior information sources through an anti-Hebbian rule. The unique inputs of a mature neuron would therefore result from learning about spatial and temporal patterns in its local environment, and by extension, the external world. Thus the theory describes how the structure of the mature nervous system could reflect the structure of the external world, and how the complexity and intelligence of the system might develop from a population of undifferentiated neurons, each implementing similar learning algorithms.

  12. Synaptic control of the shape of the motoneuron pool input-output function

    PubMed Central

    Heckman, Charles J.

    2017-01-01

    Although motoneurons have often been considered to be fairly linear transducers of synaptic input, recent evidence suggests that strong persistent inward currents (PICs) in motoneurons allow neuromodulatory and inhibitory synaptic inputs to induce large nonlinearities in the relation between the level of excitatory input and motor output. To try to estimate the possible extent of this nonlinearity, we developed a pool of model motoneurons designed to replicate the characteristics of motoneuron input-output properties measured in medial gastrocnemius motoneurons in the decerebrate cat with voltage-clamp and current-clamp techniques. We drove the model pool with a range of synaptic inputs consisting of various mixtures of excitation, inhibition, and neuromodulation. We then looked at the relation between excitatory drive and total pool output. Our results revealed that the PICs not only enhance gain but also induce a strong nonlinearity in the relation between the average firing rate of the motoneuron pool and the level of excitatory input. The relation between the total simulated force output and input was somewhat more linear because of higher force outputs in later-recruited units. We also found that the nonlinearity can be increased by increasing neuromodulatory input and/or balanced inhibitory input and minimized by a reciprocal, push-pull pattern of inhibition. We consider the possibility that a flexible input-output function may allow motor output to be tuned to match the widely varying demands of the normal motor repertoire. NEW & NOTEWORTHY Motoneuron activity is generally considered to reflect the level of excitatory drive. However, the activation of voltage-dependent intrinsic conductances can distort the relation between excitatory drive and the total output of a pool of motoneurons. Using a pool of realistic motoneuron models, we show that pool output can be a highly nonlinear function of synaptic input but linearity can be achieved through adjusting the time course of excitatory and inhibitory synaptic inputs. PMID:28053245

  13. GABAergic neurons in ferret visual cortex participate in functionally specific networks

    PubMed Central

    Wilson, Daniel E.; Smith, Gordon B.; Jacob, Amanda; Walker, Theo; Dimidschstein, Jordane; Fishell, Gord J.; Fitzpatrick, David

    2017-01-01

    Functional circuits in the visual cortex require the coordinated activity of excitatory and inhibitory neurons. Molecular genetic approaches in the mouse have led to the ‘local nonspecific pooling principle’ of inhibitory connectivity, in which inhibitory neurons are untuned for stimulus features due to the random pooling of local inputs. However, it remains unclear whether this principle generalizes to species with a columnar organization of feature selectivity such as carnivores, primates, and humans. Here we use virally-mediated GABAergic-specific GCaMP6f expression to demonstrate that inhibitory neurons in ferret visual cortex respond robustly and selectively to oriented stimuli. We find that the tuning of inhibitory neurons is inconsistent with the local non-specific pooling of excitatory inputs, and that inhibitory neurons exhibit orientation-specific noise correlations with local and distant excitatory neurons. These findings challenge the generality of the non-specific pooling principle for inhibitory neurons, suggesting different rules for functional excitatory-inhibitory interactions in non-murine species. PMID:28279352

  14. The biological function of consciousness

    PubMed Central

    Earl, Brian

    2014-01-01

    This research is an investigation of whether consciousness—one's ongoing experience—influences one's behavior and, if so, how. Analysis of the components, structure, properties, and temporal sequences of consciousness has established that, (1) contrary to one's intuitive understanding, consciousness does not have an active, executive role in determining behavior; (2) consciousness does have a biological function; and (3) consciousness is solely information in various forms. Consciousness is associated with a flexible response mechanism (FRM) for decision-making, planning, and generally responding in nonautomatic ways. The FRM generates responses by manipulating information and, to function effectively, its data input must be restricted to task-relevant information. The properties of consciousness correspond to the various input requirements of the FRM; and when important information is missing from consciousness, functions of the FRM are adversely affected; both of which indicate that consciousness is the input data to the FRM. Qualitative and quantitative information (shape, size, location, etc.) are incorporated into the input data by a qualia array of colors, sounds, and so on, which makes the input conscious. This view of the biological function of consciousness provides an explanation why we have experiences; why we have emotional and other feelings, and why their loss is associated with poor decision-making; why blindsight patients do not spontaneously initiate responses to events in their blind field; why counter-habitual actions are only possible when the intended action is in mind; and the reason for inattentional blindness. PMID:25140159

  15. Transform methods for precision continuum and control models of flexible space structures

    NASA Technical Reports Server (NTRS)

    Lupi, Victor D.; Turner, James D.; Chun, Hon M.

    1991-01-01

    An open loop optimal control algorithm is developed for general flexible structures, based on Laplace transform methods. A distributed parameter model of the structure is first presented, followed by a derivation of the optimal control algorithm. The control inputs are expressed in terms of their Fourier series expansions, so that a numerical solution can be easily obtained. The algorithm deals directly with the transcendental transfer functions from control inputs to outputs of interest, and structural deformation penalties, as well as penalties on control effort, are included in the formulation. The algorithm is applied to several structures of increasing complexity to show its generality.

  16. Transfer functions for protein signal transduction: application to a model of striatal neural plasticity.

    PubMed

    Scheler, Gabriele

    2013-01-01

    We present a novel formulation for biochemical reaction networks in the context of protein signal transduction. The model consists of input-output transfer functions, which are derived from differential equations, using stable equilibria. We select a set of "source" species, which are interpreted as input signals. Signals are transmitted to all other species in the system (the "target" species) with a specific delay and with a specific transmission strength. The delay is computed as the maximal reaction time until a stable equilibrium for the target species is reached, in the context of all other reactions in the system. The transmission strength is the concentration change of the target species. The computed input-output transfer functions can be stored in a matrix, fitted with parameters, and even recalled to build dynamical models on the basis of state changes. By separating the temporal and the magnitudinal domain we can greatly simplify the computational model, circumventing typical problems of complex dynamical systems. The transfer function transformation of biochemical reaction systems can be applied to mass-action kinetic models of signal transduction. The paper shows that this approach yields significant novel insights while remaining a fully testable and executable dynamical model for signal transduction. In particular we can deconstruct the complex system into local transfer functions between individual species. As an example, we examine modularity and signal integration using a published model of striatal neural plasticity. The modularizations that emerge correspond to a known biological distinction between calcium-dependent and cAMP-dependent pathways. Remarkably, we found that overall interconnectedness depends on the magnitude of inputs, with higher connectivity at low input concentrations and significant modularization at moderate to high input concentrations. This general result, which directly follows from the properties of individual transfer functions, contradicts notions of ubiquitous complexity by showing input-dependent signal transmission inactivation.
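
    A hedged toy example of the two quantities defined above (transmission strength and delay) for a single reaction step driven by a constant source input; the rate constants are made up, and real signal-transduction models involve coupled networks of such reactions rather than one equation:

        # Single reaction step d[X]/dt = k_on*u - k_off*[X] with a constant source input u.
        import numpy as np

        k_on, k_off, u = 2.0, 0.5, 1.0

        x_eq = k_on * u / k_off                 # stable equilibrium of the target species
        transmission_strength = x_eq - 0.0      # concentration change from the initial state

        # time to come within 95% of equilibrium for first-order kinetics:
        # x(t) = x_eq * (1 - exp(-k_off * t))
        delay = -np.log(1 - 0.95) / k_off

        print(transmission_strength, delay)     # 4.0 concentration units, ~6.0 time units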

  17. Supervised spike-timing-dependent plasticity: a spatiotemporal neuronal learning rule for function approximation and decisions.

    PubMed

    Franosch, Jan-Moritz P; Urban, Sebastian; van Hemmen, J Leo

    2013-12-01

    How can an animal learn from experience? How can it train sensors, such as the auditory or tactile system, based on other sensory input such as the visual system? Supervised spike-timing-dependent plasticity (supervised STDP) is a possible answer. Supervised STDP trains one modality using input from another one as "supervisor." Quite complex time-dependent relationships between the senses can be learned. Here we prove that under very general conditions, supervised STDP converges to a stable configuration of synaptic weights leading to a reconstruction of primary sensory input.

  18. Deep neural mapping support vector machines.

    PubMed

    Li, Yujian; Zhang, Ting

    2017-09-01

    The choice of kernel has an important effect on the performance of a support vector machine (SVM). The effect could be reduced by NEUROSVM, an architecture using a multilayer perceptron for feature extraction and an SVM for classification. In binary classification, a general linear kernel NEUROSVM can be theoretically simplified as an input layer, many hidden layers, and an SVM output layer. As a feature extractor, the sub-network composed of the input and hidden layers is first trained together with a virtual ordinary output layer by backpropagation, then with the output of its last hidden layer taken as input of the SVM classifier for further training separately. By taking the sub-network as a kernel mapping from the original input space into a feature space, we present a novel model, called deep neural mapping support vector machine (DNMSVM), from the viewpoint of deep learning. This model is also a new and general kernel learning method, where the kernel mapping is indeed an explicit function expressed as a sub-network, different from an implicit function induced by a kernel function traditionally. Moreover, we exploit a two-stage procedure of contrastive divergence learning and gradient descent for DNMSVM to jointly train an adaptive kernel mapping instead of a kernel function, without requiring kernel tricks. Taking the sub-network and the SVM classifier as a whole, the joint training of DNMSVM is done by using gradient descent to optimize the objective function, with the sub-network layer-wise pre-trained via contrastive divergence learning of restricted Boltzmann machines. Compared to the separate training of NEUROSVM, the joint training is a new algorithm for DNMSVM that has advantages over NEUROSVM. Experimental results show that DNMSVM can outperform NEUROSVM and RBFSVM (i.e., SVM with the kernel of radial basis function), demonstrating its effectiveness. Copyright © 2017 Elsevier Ltd. All rights reserved.
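
    As a concrete illustration of the NEUROSVM-style two-stage idea described above (train an MLP as a feature extractor, then train an SVM on its last hidden layer), here is a hedged sketch using scikit-learn; the joint DNMSVM training with contrastive-divergence pre-training is not reproduced, and the dataset and layer sizes are arbitrary:

        # Two-stage baseline: MLP feature extractor, then an SVM on the learned mapping.
        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.model_selection import train_test_split
        from sklearn.neural_network import MLPClassifier
        from sklearn.svm import SVC

        X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

        mlp = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
        mlp.fit(X_tr, y_tr)                                  # stage 1: train the feature extractor

        def hidden_features(mlp, X):
            """Forward pass through all hidden layers (ReLU), skipping the output layer."""
            h = X
            for W, b in zip(mlp.coefs_[:-1], mlp.intercepts_[:-1]):
                h = np.maximum(0.0, h @ W + b)
            return h

        svm = SVC(kernel="linear")
        svm.fit(hidden_features(mlp, X_tr), y_tr)            # stage 2: SVM on the explicit mapping
        print(svm.score(hidden_features(mlp, X_te), y_te))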

  19. Analytical model for advective-dispersive transport involving flexible boundary inputs, initial distributions and zero-order productions

    NASA Astrophysics Data System (ADS)

    Chen, Jui-Sheng; Li, Loretta Y.; Lai, Keng-Hsin; Liang, Ching-Ping

    2017-11-01

    A novel solution method is presented which leads to an analytical model for the advective-dispersive transport in a semi-infinite domain involving a wide spectrum of boundary inputs, initial distributions, and zero-order productions. The novel solution method applies the Laplace transform in combination with the generalized integral transform technique (GITT) to obtain the generalized analytical solution. Based on this generalized analytical expression, we derive a comprehensive set of special-case solutions for some time-dependent boundary distributions and zero-order productions, described by the Dirac delta, constant, Heaviside, exponentially-decaying, or periodically sinusoidal functions as well as some position-dependent initial conditions and zero-order productions specified by the Dirac delta, constant, Heaviside, or exponentially-decaying functions. The developed solutions are tested against an analytical solution from the literature. The excellent agreement between the analytical solutions confirms that the new model can serve as an effective tool for investigating transport behaviors under different scenarios. Several examples of applications are given to explore transport behaviors which are rarely noted in the literature. The results show that the concentration waves resulting from the periodically sinusoidal input are sensitive to the dispersion coefficient. The implication of this new finding is that a tracer test with a periodic input may provide additional information for identifying the dispersion coefficients. Moreover, the solution strategy presented in this study can be extended to derive analytical models for handling more complicated problems of solute transport in multi-dimensional media subjected to sequential decay chain reactions, for which analytical solutions are not currently available.
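
    For orientation, the best-known special case of this family, a constant-concentration boundary input with zero initial concentration, has the classic Ogata-Banks closed form, sketched below with illustrative parameter values (this is not the paper's generalized GITT solution):

        # Ogata-Banks solution of the 1-D advection-dispersion equation, constant input at x = 0.
        import numpy as np
        from scipy.special import erfc

        def ogata_banks(x, t, v, D, c0=1.0):
            """C(x, t) for a constant input c0 at x = 0 and zero initial concentration."""
            a = (x - v * t) / (2.0 * np.sqrt(D * t))
            b = (x + v * t) / (2.0 * np.sqrt(D * t))
            return 0.5 * c0 * (erfc(a) + np.exp(v * x / D) * erfc(b))

        x = np.linspace(0.1, 10.0, 5)                 # distances along the column
        print(ogata_banks(x, t=5.0, v=1.0, D=0.5))    # concentration profile at t = 5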

  20. Factorizing the motion sensitivity function into equivalent input noise and calculation efficiency.

    PubMed

    Allard, Rémy; Arleo, Angelo

    2017-01-01

    The photopic motion sensitivity function of the energy-based motion system is band-pass peaking around 8 Hz. Using an external noise paradigm to factorize the sensitivity into equivalent input noise and calculation efficiency, the present study investigated if the variation in photopic motion sensitivity as a function of the temporal frequency is due to a variation of equivalent input noise (e.g., early temporal filtering) or calculation efficiency (ability to select and integrate motion). For various temporal frequencies, contrast thresholds for a direction discrimination task were measured in presence and absence of noise. Up to 15 Hz, the sensitivity variation was mainly due to a variation of equivalent input noise and little variation in calculation efficiency was observed. The sensitivity fall-off at very high temporal frequencies (from 15 to 30 Hz) was due to a combination of a drop of calculation efficiency and a rise of equivalent input noise. A control experiment in which an artificial temporal integration was applied to the stimulus showed that an early temporal filter (generally assumed to affect equivalent input noise, not calculation efficiency) could impair both the calculation efficiency and equivalent input noise at very high temporal frequencies. We conclude that at the photopic luminance intensity tested, the variation of motion sensitivity as a function of the temporal frequency was mainly due to early temporal filtering, not to the ability to select and integrate motion. More specifically, we conclude that photopic motion sensitivity at high temporal frequencies is limited by internal noise occurring after the transduction process (i.e., neural noise), not by quantal noise resulting from the probabilistic absorption of photons by the photoreceptors as previously suggested.
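
    The factorization used above can be written down in a few lines under one common parametrization of the linear-amplifier model, in which squared contrast thresholds grow linearly with external noise power. Two threshold measurements, without and with external noise, then determine the equivalent input noise and a quantity proportional to calculation efficiency (an absolute efficiency would also need an ideal-observer reference, omitted here). The numbers are made up:

        # Factorize thresholds into equivalent input noise and (relative) calculation efficiency,
        # assuming E = (N_eq + N_ext) / k for squared contrast thresholds E.
        def factorize(E0, EN, N_ext):
            """E0: squared threshold in zero noise; EN: in external noise of power N_ext."""
            k = N_ext / (EN - E0)          # proportional to calculation efficiency
            N_eq = k * E0                  # equivalent input noise
            return N_eq, k

        E0, EN, N_ext = 0.004, 0.020, 0.01         # illustrative contrast-energy thresholds
        N_eq, k = factorize(E0, EN, N_ext)
        print(N_eq, k)                              # N_eq = 0.0025, k = 0.625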

  1. Advanced information processing system: Local system services

    NASA Technical Reports Server (NTRS)

    Burkhardt, Laura; Alger, Linda; Whittredge, Roy; Stasiowski, Peter

    1989-01-01

    The Advanced Information Processing System (AIPS) is a multi-computer architecture composed of hardware and software building blocks that can be configured to meet a broad range of application requirements. The hardware building blocks are fault-tolerant, general-purpose computers, fault-and damage-tolerant networks (both computer and input/output), and interfaces between the networks and the computers. The software building blocks are the major software functions: local system services, input/output, system services, inter-computer system services, and the system manager. The foundation of the local system services is an operating system with the functions required for a traditional real-time multi-tasking computer, such as task scheduling, inter-task communication, memory management, interrupt handling, and time maintenance. Resting on this foundation are the redundancy management functions necessary in a redundant computer and the status reporting functions required for an operator interface. The functional requirements, functional design and detailed specifications for all the local system services are documented.

  2. Canonical multi-valued input Reed-Muller trees and forms

    NASA Technical Reports Server (NTRS)

    Perkowski, M. A.; Johnson, P. D.

    1991-01-01

    There has recently been increased interest in logic synthesis using EXOR gates. The paper introduces the fundamental concept of Orthogonal Expansion, which generalizes the ring form of the Shannon expansion to logic with multiple-valued (mv) inputs. Based on this concept we are able to define a family of canonical tree circuits. Such circuits can be considered for binary and multiple-valued input cases. They can be multi-level (trees and DAGs) or flattened to two-level AND-EXOR circuits. Input decoders similar to those used in Sum of Products (SOP) PLAs are used in realizations of multiple-valued input functions. In the case of binary logic, the family of flattened AND-EXOR circuits includes several forms discussed by Davio and Green. For the case of logic with multiple-valued inputs, the family of the flattened mv AND-EXOR circuits includes three expansions known from the literature and two new expansions.
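
    For the binary case mentioned above, the ring form of the Shannon expansion replaces the OR in f = x'*f0 + x*f1 with an EXOR (the two product terms are disjoint), and the positive Davio expansion rewrites it as f = f0 EXOR x*(f0 EXOR f1). The sketch below verifies both identities exhaustively on a small, hypothetical Boolean function; it does not implement the multiple-valued Orthogonal Expansion itself.

      # Ring (EXOR) form of the Shannon expansion and the positive Davio
      # expansion with respect to the first variable, checked exhaustively on
      # a small, hypothetical 3-input Boolean function.
      import itertools

      def f(x0, x1, x2):
          return (x0 & x1) ^ x2

      for x0, x1, x2 in itertools.product((0, 1), repeat=3):
          f0 = f(0, x1, x2)                            # negative cofactor
          f1 = f(1, x1, x2)                            # positive cofactor
          shannon_ring = ((1 - x0) & f0) ^ (x0 & f1)   # f = x0'*f0 XOR x0*f1
          pos_davio = f0 ^ (x0 & (f0 ^ f1))            # f = f0 XOR x0*(f0 XOR f1)
          assert shannon_ring == pos_davio == f(x0, x1, x2)
      print("ring-form Shannon and positive Davio expansions agree on all inputs")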

  3. Random variable transformation for generalized stochastic radiative transfer in finite participating slab media

    NASA Astrophysics Data System (ADS)

    El-Wakil, S. A.; Sallah, M.; El-Hanbaly, A. M.

    2015-10-01

    The stochastic radiative transfer problem is studied in a participating planar finite continuously fluctuating medium. The problem is considered for specularly and diffusely reflecting boundaries with linear anisotropic scattering. The random variable transformation (RVT) technique is used to get the complete average of the solution functions, which are represented by the probability-density function (PDF) of the solution process. In the RVT algorithm, a simple integral transformation is applied to the input stochastic process (the extinction function of the medium). This linear transformation enables us to rewrite the stochastic transport equations in terms of the optical random variable (x) and the optical random thickness (L). The transport equation is then solved deterministically to get a closed form for the solution as a function of x and L. The solution is used to obtain the PDF of the solution functions by applying the RVT technique between the input random variable (L) and the output process (the solution functions). The obtained averages of the solution functions are used to get the complete analytical averages for some interesting physical quantities, namely, reflectivity and transmissivity at the medium boundaries. In terms of the average reflectivity and transmissivity, the averages of the partial heat fluxes for the generalized problem with an internal source of radiation are obtained and represented graphically.

  4. A FORTRAN program for the analysis of linear continuous and sample-data systems

    NASA Technical Reports Server (NTRS)

    Edwards, J. W.

    1976-01-01

    A FORTRAN digital computer program which performs the general analysis of linearized control systems is described. State variable techniques are used to analyze continuous, discrete, and sampled data systems. Analysis options include the calculation of system eigenvalues, transfer functions, root loci, root contours, frequency responses, power spectra, and transient responses for open- and closed-loop systems. A flexible data input format allows the user to define systems in a variety of representations. Data may be entered by inputting explicit data matrices or matrices constructed in user-written subroutines, by specifying transfer function block diagrams, or by using a combination of these methods.

  5. Analysis and synthesis of abstract data types through generalization from examples

    NASA Technical Reports Server (NTRS)

    Wild, Christian

    1987-01-01

    The discovery of general patterns of behavior from a set of input/output examples can be a useful technique in the automated analysis and synthesis of software systems. These generalized descriptions of the behavior form a set of assertions which can be used for validation, program synthesis, program testing, and run-time monitoring. Describing the behavior is characterized as a learning process in which the set of inputs is mapped into an appropriate transform space such that general patterns can be easily characterized. The learning algorithm must choose a transform function and define a subset of the transform space which is related to equivalence classes of behavior in the original domain. An algorithm for analyzing the behavior of abstract data types is presented and several examples are given. The use of the analysis for purposes of program synthesis is also discussed.

  6. ANL/RBC: A computer code for the analysis of Rankine bottoming cycles, including system cost evaluation and off-design performance

    NASA Technical Reports Server (NTRS)

    Mclennan, G. A.

    1986-01-01

    This report describes, and is a User's Manual for, a computer code (ANL/RBC) which calculates cycle performance for Rankine bottoming cycles extracting heat from a specified source gas stream. The code calculates cycle power and efficiency and the sizes for the heat exchangers, using tabular input of the properties of the cycle working fluid. An option is provided to calculate the costs of system components from user defined input cost functions. These cost functions may be defined in equation form or by numerical tabular data. A variety of functional forms have been included for these functions and they may be combined to create very general cost functions. An optional calculation mode can be used to determine the off-design performance of a system when operated away from the design-point, using the heat exchanger areas calculated for the design-point.

  7. Optimal discrete-time LQR problems for parabolic systems with unbounded input: Approximation and convergence

    NASA Technical Reports Server (NTRS)

    Rosen, I. G.

    1988-01-01

    An abstract approximation and convergence theory for the closed-loop solution of discrete-time linear-quadratic regulator problems for parabolic systems with unbounded input is developed. Under relatively mild stabilizability and detectability assumptions, functional analytic, operator techniques are used to demonstrate the norm convergence of Galerkin-based approximations to the optimal feedback control gains. The application of the general theory to a class of abstract boundary control systems is considered. Two examples, one involving the Neumann boundary control of a one-dimensional heat equation, and the other, the vibration control of a cantilevered viscoelastic beam via shear input at the free end, are discussed.

  8. An exact algebraic solution of the infimum in H-infinity optimization with output feedback

    NASA Technical Reports Server (NTRS)

    Chen, Ben M.; Saberi, Ali; Ly, Uy-Loi

    1991-01-01

    This paper presents a simple and noniterative procedure for the computation of the exact value of the infimum in the standard H-infinity-optimal control with output feedback. The problem formulation is general and does not place any restrictions on the direct feedthrough terms between the control input and the controlled output variables, and between the disturbance input and the measurement output variables. The method is applicable to systems that satisfy (1) the transfer function from the control input to the controlled output is right-invertible and has no invariant zeros on the jω axis, and (2) the transfer function from the disturbance to the measurement output is left-invertible and has no invariant zeros on the jω axis. A set of necessary and sufficient conditions for the solvability of the H-infinity almost disturbance decoupling problem via measurement feedback with internal stability is also given.

  9. Adaptive Neural Control of Uncertain MIMO Nonlinear Systems With State and Input Constraints.

    PubMed

    Chen, Ziting; Li, Zhijun; Chen, C L Philip

    2017-06-01

    An adaptive neural control strategy for multiple input multiple output nonlinear systems with various constraints is presented in this paper. To deal with the nonsymmetric input nonlinearity and the constrained states, the proposed adaptive neural control is combined with the backstepping method, radial basis function neural network, barrier Lyapunov function (BLF), and disturbance observer. By ensuring the boundedness of the BLF of the closed-loop system, it is demonstrated that output tracking is achieved with all states remaining in the constraint sets, and the general assumption on nonsingularity of unknown control coefficient matrices has been eliminated. It is rigorously proved that the constructed adaptive neural control guarantees the semiglobal uniform ultimate boundedness of all signals in the closed-loop system. Finally, the simulation studies on a 2-DOF robotic manipulator system indicate that the designed adaptive control is effective.

  10. Refining Linear Fuzzy Rules by Reinforcement Learning

    NASA Technical Reports Server (NTRS)

    Berenji, Hamid R.; Khedkar, Pratap S.; Malkani, Anil

    1996-01-01

    Linear fuzzy rules are increasingly being used in the development of fuzzy logic systems. Radial basis functions have also been used in the antecedents of the rules for clustering in product space which can automatically generate a set of linear fuzzy rules from an input/output data set. Manual methods are usually used in refining these rules. This paper presents a method for refining the parameters of these rules using reinforcement learning which can be applied in domains where supervised input-output data is not available and reinforcements are received only after a long sequence of actions. This is shown for a generalization of radial basis functions. The formation of fuzzy rules from data and their automatic refinement is an important step in closing the gap between the application of reinforcement learning methods in the domains where only some limited input-output data is available.

  11. The Construct of Attention in Schizophrenia

    PubMed Central

    Luck, Steven J.; Gold, James M.

    2008-01-01

    Schizophrenia is widely thought to involve deficits of attention. However, the term attention can be defined so broadly that impaired performance on virtually any task could be construed as evidence for a deficit in attention, and this has slowed cumulative progress in understanding attention deficits in schizophrenia. To address this problem, we divide the general concept of attention into two distinct constructs: input selection, the selection of task-relevant inputs for further processing; and rule selection, the selective activation of task-appropriate rules. These constructs are closely tied to working memory, because input selection mechanisms are used to control the transfer of information into working memory and because working memory stores the rules used by rule selection mechanisms. These constructs are also closely tied to executive function, because executive systems are used to guide input selection and because rule selection is itself a key aspect of executive function. Within the domain of input selection, it is important to distinguish between the control of selection—the processes that guide attention to task-relevant inputs—and the implementation of selection—the processes that enhance the processing of the relevant inputs and suppress the irrelevant inputs. Current evidence suggests that schizophrenia involves a significant impairment in the control of selection but little or no impairment in the implementation of selection. Consequently, the CNTRICS participants agreed by consensus that attentional control should be a priority target for measurement and treatment research in schizophrenia. PMID:18374901

  12. Higher order visual input to the mushroom bodies in the bee, Bombus impatiens.

    PubMed

    Paulk, Angelique C; Gronenberg, Wulfila

    2008-11-01

    To produce appropriate behaviors based on biologically relevant associations, sensory pathways conveying different modalities are integrated by higher-order central brain structures, such as insect mushroom bodies. To address this function of sensory integration, we characterized the structure and response of optic lobe (OL) neurons projecting to the calyces of the mushroom bodies in bees. Bees are well known for their visual learning and memory capabilities and their brains possess major direct visual input from the optic lobes to the mushroom bodies. To functionally characterize these visual inputs to the mushroom bodies, we recorded intracellularly from neurons in bumblebees (Apidae: Bombus impatiens) and a single neuron in a honeybee (Apidae: Apis mellifera) while presenting color and motion stimuli. All of the mushroom body input neurons were color sensitive while a subset was motion sensitive. Additionally, most of the mushroom body input neurons would respond to the first, but not to subsequent, presentations of repeated stimuli. In general, the medulla or lobula neurons projecting to the calyx signaled specific chromatic, temporal, and motion features of the visual world to the mushroom bodies, which included sensory information required for the biologically relevant associations bees form during foraging tasks.

  13. Bessel Weighted Asymmetries

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Avakian, Harut; Gamberg, Leonard; Rossi, Patrizia

    We review the concept of Bessel weighted asymmetries for semi-inclusive deep inelastic scattering and focus on the cross section in Fourier space, conjugate to the outgoing hadron’s transverse momentum, where convolutions of transverse momentum dependent parton distribution functions and fragmentation functions become simple products. Individual asymmetric terms in the cross section can be projected out by means of a generalized set of weights involving Bessel functions. The procedure is applied to studies of the double longitudinal spin asymmetry in semi-inclusive deep inelastic scattering using a new dedicated Monte Carlo generator which includes quark intrinsic transverse momentum within the generalized parton model. We observe a few percent systematic offset of the Bessel-weighted asymmetry obtained from Monte Carlo extraction compared to input model calculations, which is due to the limitations imposed by the energy and momentum conservation at the given energy and hard scale Q2. We find that the Bessel weighting technique provides a powerful and reliable tool to study the Fourier transform of TMDs with controlled systematics due to experimental acceptances and resolutions with different TMD model inputs.

  14. Smooth function approximation using neural networks.

    PubMed

    Ferrari, Silvia; Stengel, Robert F

    2005-01-01

    An algebraic approach for representing multidimensional nonlinear functions by feedforward neural networks is presented. In this paper, the approach is implemented for the approximation of smooth batch data containing the function's input, output, and possibly, gradient information. The training set is associated to the network adjustable parameters by nonlinear weight equations. The cascade structure of these equations reveals that they can be treated as sets of linear systems. Hence, the training process and the network approximation properties can be investigated via linear algebra. Four algorithms are developed to achieve exact or approximate matching of input-output and/or gradient-based training sets. Their application to the design of forward and feedback neurocontrollers shows that algebraic training is characterized by faster execution speeds and better generalization properties than contemporary optimization techniques.
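
    A much reduced illustration of the algebraic idea above: if the input-side parameters of a single-hidden-layer network are held fixed, exact matching of a batch of input-output data reduces to a linear system for the output weights. The sketch below solves that system directly; it is not the paper's full cascade treatment, and the data, network size, and random input weights are hypothetical.

      # With the input-side weights fixed, exact matching of a batch of
      # input-output data reduces to the linear weight equations H @ w2 = y,
      # a much simplified instance of algebraic training.  Data and network
      # size are hypothetical.
      import numpy as np

      rng = np.random.default_rng(0)
      x = np.linspace(-1.0, 1.0, 20).reshape(-1, 1)   # batch inputs
      y = np.sin(np.pi * x)                           # smooth target outputs

      n_hidden = 20                                   # as many nodes as samples
      W1 = rng.normal(size=(1, n_hidden))             # fixed input weights
      b1 = rng.normal(size=n_hidden)                  # fixed input biases
      H = np.tanh(x @ W1 + b1)                        # hidden-layer responses
                                                      # (generically nonsingular)
      w2 = np.linalg.solve(H, y)                      # exact output weights
      print("max training error:", np.abs(H @ w2 - y).max())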

  15. A general framework for numerical simulation of improvised explosive device (IED)-detection scenarios using density functional theory (DFT) and terahertz (THz) spectra.

    PubMed

    Shabaev, Andrew; Lambrakos, Samuel G; Bernstein, Noam; Jacobs, Verne L; Finkenstadt, Daniel

    2011-04-01

    We have developed a general framework for numerical simulation of various types of scenarios that can occur for the detection of improvised explosive devices (IEDs) through excitation by incident electromagnetic waves. A central component model of this framework is an S-matrix representation of a multilayered composite material system. Each layer of the system is characterized by an average thickness and an effective electric permittivity function. The outputs of this component are the reflectivity and the transmissivity as functions of frequency and angle of the incident electromagnetic wave. The input of the component is a parameterized analytic-function representation of the electric permittivity as a function of frequency, which is provided by another component model of the framework. The permittivity function is constructed by fitting response spectra calculated using density functional theory (DFT) and parameter adjustment according to any additional information that may be available, e.g., experimentally measured spectra or theory-based assumptions concerning spectral features. A prototype simulation is described that considers response characteristics for THz excitation of the high explosive β-HMX. This prototype simulation includes a description of a procedure for calculating response spectra using DFT as input to the S-matrix model. For this purpose, the DFT software NRLMOL was adopted. © 2011 Society for Applied Spectroscopy
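
    The layered-medium component described above can be illustrated with a generic characteristic (transfer) matrix calculation: given each layer's thickness and complex permittivity, it returns the reflectivity at a chosen frequency and normal incidence. This stands in for the S-matrix component only schematically; the permittivity values below are hypothetical placeholders rather than DFT-derived spectra.

      # Characteristic (transfer) matrix reflectivity of a layered slab at
      # normal incidence, given each layer's thickness and complex relative
      # permittivity.  The permittivities below are hypothetical placeholders,
      # not DFT-derived spectra.  Time convention exp(-i*omega*t), so lossy
      # layers have permittivities with positive imaginary parts.
      import numpy as np

      def reflectivity(freq_thz, layers, n_in=1.0, n_out=1.0):
          """layers: list of (thickness in m, complex relative permittivity),
          ordered from the input side."""
          lam = 3.0e8 / (freq_thz * 1e12)              # vacuum wavelength [m]
          M = np.eye(2, dtype=complex)
          for d, eps in layers:
              n = np.sqrt(eps + 0j)                    # complex refractive index
              delta = 2.0 * np.pi * n * d / lam        # complex phase thickness
              L = np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                            [1j * n * np.sin(delta), np.cos(delta)]])
              M = L @ M                                # entrance-to-exit ordering
          r = ((n_in * M[1, 1] - n_in * n_out * M[0, 1] - n_out * M[0, 0] + M[1, 0])
               / (n_in * M[1, 1] - n_in * n_out * M[0, 1] + n_out * M[0, 0] - M[1, 0]))
          return abs(r) ** 2

      layers = [(200e-6, 2.9 + 0.05j), (100e-6, 3.4 + 0.20j)]   # hypothetical
      for f_thz in (0.5, 1.0, 2.0):
          print(f"{f_thz} THz: R = {reflectivity(f_thz, layers):.3f}")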

  16. Novel view synthesis by interpolation over sparse examples

    NASA Astrophysics Data System (ADS)

    Liang, Bodong; Chung, Ronald C.

    2006-01-01

    Novel view synthesis (NVS) is an important problem in image rendering. It involves synthesizing an image of a scene at any specified (novel) viewpoint, given some images of the scene at a few sample viewpoints. The general understanding is that the solution should bypass explicit 3-D reconstruction of the scene. As it is, the problem has a natural tie to interpolation, even though mainstream efforts on the problem have adopted other formulations. Interpolation is about finding the output of a function f(x) for any specified input x, given a few input-output pairs {(xi,fi):i=1,2,3,...,n} of the function. If the input x is the viewpoint, and f(x) is the image, the interpolation problem becomes exactly NVS. We treat the NVS problem using the interpolation formulation. In particular, we adopt the example-based interpolation (EBI) mechanism, an established mechanism for interpolating or learning functions from examples. EBI has all the desirable properties of a good interpolation: all given input-output examples are satisfied exactly, and the interpolation is smooth with minimum oscillations between the examples. We point out that EBI, however, has difficulty in interpolating certain classes of functions, including the image function in the NVS problem. We propose an extension of the mechanism for overcoming the limitation. We also present how the extended interpolation mechanism could be used to synthesize images at novel viewpoints. Real image results show that the mechanism has promising performance, even with very few example images.
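
    As a generic illustration of example-based interpolation, the sketch below fits a Gaussian radial basis function interpolant that reproduces every given input-output example exactly and varies smoothly in between. It stands in for the EBI idea only in spirit; it does not implement the paper's extension or any image synthesis, and the example data and kernel width are hypothetical.

      # Example-based interpolation sketch: a Gaussian radial basis function
      # fit that satisfies every given (input, output) example exactly and
      # interpolates smoothly in between.  Data and kernel width are
      # hypothetical.
      import numpy as np

      xi = np.array([0.0, 1.0, 2.5, 4.0])            # example inputs (viewpoints)
      fi = np.array([0.0, 0.8, 0.1, 0.9])            # example outputs
      sigma = 1.0

      def kernel(a, b):
          return np.exp(-((a[:, None] - b[None, :]) ** 2) / (2.0 * sigma**2))

      w = np.linalg.solve(kernel(xi, xi), fi)        # weights giving an exact fit

      def interpolate(x):
          return kernel(np.atleast_1d(x), xi) @ w

      assert np.allclose(interpolate(xi), fi)        # all examples satisfied exactly
      print("interpolated value at 1.7:", float(interpolate(1.7)))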

  17. Generalized local emission tomography

    DOEpatents

    Katsevich, Alexander J.

    1998-01-01

    Emission tomography enables locations and values of internal isotope density distributions to be determined from radiation emitted from the whole object. In the method for locating the values of discontinuities, the intensities of radiation emitted from either the whole object or a region of the object containing the discontinuities are inputted to a local tomography function f_Λ^(Φ) to define the location S of the isotope density discontinuity. The asymptotic behavior of f_Λ^(Φ) is determined in a neighborhood of S, and the value for the discontinuity is estimated from the asymptotic behavior of f_Λ^(Φ), knowing pointwise values of the attenuation coefficient within the object. In the method for determining the location of the discontinuity, the intensities of radiation emitted from an object are inputted to a local tomography function f_Λ^(Φ) to define the location S of the density discontinuity and the location Γ of the attenuation coefficient discontinuity. Pointwise values of the attenuation coefficient within the object need not be known in this case.

  18. Additivity of nonsimultaneous masking for short Gaussian-shaped sinusoids.

    PubMed

    Laback, Bernhard; Balazs, Peter; Necciari, Thibaud; Savel, Sophie; Ystad, Solvi; Meunier, Sabine; Kronland-Martinet, Richard

    2011-02-01

    The additivity of nonsimultaneous masking was studied using Gaussian-shaped tone pulses (referred to as Gaussians) as masker and target stimuli. Combinations of up to four temporally separated Gaussian maskers with an equivalent rectangular bandwidth of 600 Hz and an equivalent rectangular duration of 1.7 ms were tested. Each masker was level-adjusted to produce approximately 8 dB of masking. Excess masking (exceeding linear additivity) was generally stronger than reported in the literature for longer maskers and comparable target levels. A model incorporating a compressive input/output function, followed by a linear summation stage, underestimated excess masking when using an input/output function derived from literature data for longer maskers and comparable target levels. The data could be predicted with a more compressive input/output function. Stronger compression may be explained by assuming that the Gaussian stimuli were too short to evoke the medial olivocochlear reflex (MOCR), whereas for longer maskers tested previously the MOCR caused reduced compression. Overall, the interpretation of the data suggests strong basilar membrane compression for very short stimuli.

  19. Motor Control of Human Spinal Cord Disconnected from the Brain and Under External Movement.

    PubMed

    Mayr, Winfried; Krenn, Matthias; Dimitrijevic, Milan R

    2016-01-01

    Motor control after spinal cord injury depends strongly on residual ascending and descending pathways across the lesion. The individually altered neurophysiology is in general based on still intact sublesional control loops with afferent sensory inputs linked via interneuron networks to efferent motor outputs. Partial or total loss of translesional control inputs reduces and alters the ability to perform voluntary movements and results in a motor incomplete (residual voluntary control of movement functions) or motor complete (no residual voluntary control) spinal cord injury classification. Of particular importance are intact functionally silent neural structures with residual brain influence but a reduced state of excitability that inhibits execution of voluntary movements. This condition is described by the term discomplete spinal cord injury. There is strong evidence that artificial afferent input, e.g., by epidural or noninvasive electrical stimulation of the lumbar posterior roots, can elevate the state of excitability and thus re-enable or augment voluntary movement functions. This modality can serve as a powerful assessment technique for monitoring details of the residual function profile after spinal cord injury, as a therapeutic tool supporting the restoration of movement programs, and as a neuroprosthesis component augmenting and restoring movement functions, per se or in synergy with classical neuromuscular or muscular electrical stimulation.

  20. Description of the IV + V System Software Package.

    ERIC Educational Resources Information Center

    Microcomputers for Information Management: An International Journal for Library and Information Services, 1984

    1984-01-01

    Describes the IV + V System, a software package designed by the Institut fur Maschinelle Dokumentation for the United Nations General Information Programme and UNISIST to support automation of local information and documentation services. Principal program features and functions outlined include input/output, databank, text image, output, and…

  1. Interactive Spectral Analysis and Computation (ISAAC)

    NASA Technical Reports Server (NTRS)

    Lytle, D. M.

    1992-01-01

    Isaac is a task in the NSO external package for IRAF. A descendant of a FORTRAN program written to analyze data from a Fourier transform spectrometer, the current implementation has been generalized sufficiently to make it useful for general spectral analysis and other one dimensional data analysis tasks. The user interface for Isaac is implemented as an interpreted mini-language containing a powerful, programmable vector calculator. Built-in commands provide much of the functionality needed to produce accurate line lists from input spectra. These built-in functions include automated spectral line finding, least squares fitting of Voigt profiles to spectral lines including equality constraints, various filters including an optimal filter construction tool, continuum fitting, and various I/O functions.

  2. Reconstruction of an input function from a dynamic PET water image using multiple tissue curves

    NASA Astrophysics Data System (ADS)

    Kudomi, Nobuyuki; Maeda, Yukito; Yamamoto, Yuka; Nishiyama, Yoshihiro

    2016-08-01

    Quantification of cerebral blood flow (CBF) is important for the understanding of normal and pathologic brain physiology. When CBF is assessed using PET with H₂¹⁵O or C¹⁵O₂, its calculation requires an arterial input function, which is generally obtained by invasive arterial blood sampling. The aim of the present study was to develop a new technique to reconstruct an image-derived input function (IDIF) from a dynamic H₂¹⁵O PET image as a completely non-invasive approach. Our technique consists of a formula that expresses the input in terms of a tissue curve and its rate-constant parameter. For multiple tissue curves extracted from the dynamic image, the rate constants were estimated so as to minimize the sum of the differences between the inputs reproduced from the extracted tissue curves. The estimated rates were used to express the inputs, and the mean of the estimated inputs was used as the IDIF. The method was tested in human subjects (n = 29) and was compared to the blood sampling method. Simulation studies were performed to examine the magnitude of potential biases in CBF and to optimize the number of tissue curves used for the input reconstruction. In the PET study, the estimated IDIFs reproduced the measured ones well. The difference between the CBF values calculated using the two methods was small (<8%), and the calculated CBF values showed a tight correlation (r = 0.97). The simulation showed that errors associated with the assumed parameters were <10%, and that the optimal number of tissue curves to be used was around 500. Our results demonstrate that an IDIF can be reconstructed directly from tissue curves obtained through H₂¹⁵O PET imaging. This suggests the possibility of using a completely non-invasive technique to assess CBF in patho-physiological studies.
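
    The relation used above can be made concrete with a one-tissue compartment model: if dC_T/dt = K1*Ca(t) - k2*C_T(t), then a tissue curve with known (or estimated) rate constants reproduces the input as Ca(t) = (dC_T/dt + k2*C_T)/K1. The sketch below demonstrates this inversion on a synthetic curve; the paper additionally estimates the rate constants by forcing the inputs reproduced from many tissue curves to agree, which is not implemented here, and all constants and curves are hypothetical.

      # One-tissue compartment model: dC_T/dt = K1*Ca(t) - k2*C_T(t), so the
      # input can be reproduced from a tissue curve as
      #   Ca(t) = (dC_T/dt + k2*C_T(t)) / K1.
      # Rate constants and curves below are synthetic.
      import numpy as np

      t = np.linspace(0.0, 180.0, 1801)                 # time [s]
      ca_true = (t / 20.0) * np.exp(-t / 20.0)          # synthetic arterial input

      def tissue_curve(ca, t, K1, k2):
          """Integrate dC_T/dt = K1*ca - k2*C_T with simple Euler steps."""
          ct = np.zeros_like(ca)
          dt = t[1] - t[0]
          for i in range(1, len(t)):
              ct[i] = ct[i - 1] + dt * (K1 * ca[i - 1] - k2 * ct[i - 1])
          return ct

      K1, k2 = 0.009, 0.010                             # hypothetical [1/s]
      ct = tissue_curve(ca_true, t, K1, k2)
      ca_rec = (np.gradient(ct, t) + k2 * ct) / K1      # reproduced input
      print("max |reproduced - true| input:", np.abs(ca_rec - ca_true).max())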

  3. Computer Aided Synthesis or Measurement Schemes for Telemetry applications

    DTIC Science & Technology

    1997-09-02

    5.2.5. Frame structure generation The algorithm generating the frame structure should take as inputs the sampling frequency requirements of the channels...these channels into the frame structure. Generally there can be a lot of ways to divide channels among groups. The algorithm implemented in...groups) first. The algorithm uses the function "try_permutation" recursively to distribute channels among the groups, and the function "try_subtable

  4. Development of the Complex General Linear Model in the Fourier Domain: Application to fMRI Multiple Input-Output Evoked Responses for Single Subjects

    PubMed Central

    Rio, Daniel E.; Rawlings, Robert R.; Woltz, Lawrence A.; Gilman, Jodi; Hommer, Daniel W.

    2013-01-01

    A linear time-invariant model based on statistical time series analysis in the Fourier domain for single subjects is further developed and applied to functional MRI (fMRI) blood-oxygen level-dependent (BOLD) multivariate data. This methodology was originally developed to analyze multiple stimulus input evoked response BOLD data. However, to analyze clinical data generated using a repeated measures experimental design, the model has been extended to handle multivariate time series data and demonstrated on control and alcoholic subjects taken from data previously analyzed in the temporal domain. Analysis of BOLD data is typically carried out in the time domain where the data has a high temporal correlation. These analyses generally employ parametric models of the hemodynamic response function (HRF) where prewhitening of the data is attempted using autoregressive (AR) models for the noise. However, this data can be analyzed in the Fourier domain. Here, assumptions made on the noise structure are less restrictive, and hypothesis tests can be constructed based on voxel-specific nonparametric estimates of the hemodynamic transfer function (HRF in the Fourier domain). This is especially important for experimental designs involving multiple states (either stimulus or drug induced) that may alter the form of the response function. PMID:23840281

  5. Development of the complex general linear model in the Fourier domain: application to fMRI multiple input-output evoked responses for single subjects.

    PubMed

    Rio, Daniel E; Rawlings, Robert R; Woltz, Lawrence A; Gilman, Jodi; Hommer, Daniel W

    2013-01-01

    A linear time-invariant model based on statistical time series analysis in the Fourier domain for single subjects is further developed and applied to functional MRI (fMRI) blood-oxygen level-dependent (BOLD) multivariate data. This methodology was originally developed to analyze multiple stimulus input evoked response BOLD data. However, to analyze clinical data generated using a repeated measures experimental design, the model has been extended to handle multivariate time series data and demonstrated on control and alcoholic subjects taken from data previously analyzed in the temporal domain. Analysis of BOLD data is typically carried out in the time domain where the data has a high temporal correlation. These analyses generally employ parametric models of the hemodynamic response function (HRF) where prewhitening of the data is attempted using autoregressive (AR) models for the noise. However, this data can be analyzed in the Fourier domain. Here, assumptions made on the noise structure are less restrictive, and hypothesis tests can be constructed based on voxel-specific nonparametric estimates of the hemodynamic transfer function (HRF in the Fourier domain). This is especially important for experimental designs involving multiple states (either stimulus or drug induced) that may alter the form of the response function.

  6. Surprise! Infants consider possible bases of generalization for a single input example.

    PubMed

    Gerken, LouAnn; Dawson, Colin; Chatila, Razanne; Tenenbaum, Josh

    2015-01-01

    Infants have been shown to generalize from a small number of input examples. However, existing studies allow two possible means of generalization. One is via a process of noting similarities shared by several examples. Alternatively, generalization may reflect an implicit desire to explain the input. The latter view suggests that generalization might occur when even a single input example is surprising, given the learner's current model of the domain. To test the possibility that infants are able to generalize based on a single example, we familiarized 9-month-olds with a single three-syllable input example that contained either one surprising feature (syllable repetition, Experiment 1) or two features (repetition and a rare syllable, Experiment 2). In both experiments, infants generalized only to new strings that maintained all of the surprising features from familiarization. This research suggests that surprise can promote very rapid generalization. © 2014 John Wiley & Sons Ltd.

  7. 32 CFR 635.16 - General.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... the COPS MPRS and a systems administrator to ensure that the system is properly functioning. Reporting... System (DIBRS). The Army inputs its data into DIBRS utilizing COPS. Any data reported to DIBRS is only as good as the data reported into COPS, so the need for accuracy in reporting incidents and utilizing...

  8. 32 CFR 635.16 - General.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... the COPS MPRS and a systems administrator to ensure that the system is properly functioning. Reporting... System (DIBRS). The Army inputs its data into DIBRS utilizing COPS. Any data reported to DIBRS is only as good as the data reported into COPS, so the need for accuracy in reporting incidents and utilizing...

  9. 32 CFR 635.16 - General.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... the COPS MPRS and a systems administrator to ensure that the system is properly functioning. Reporting... System (DIBRS). The Army inputs its data into DIBRS utilizing COPS. Any data reported to DIBRS is only as good as the data reported into COPS, so the need for accuracy in reporting incidents and utilizing...

  10. 32 CFR 635.16 - General.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... the COPS MPRS and a systems administrator to ensure that the system is properly functioning. Reporting... System (DIBRS). The Army inputs its data into DIBRS utilizing COPS. Any data reported to DIBRS is only as good as the data reported into COPS, so the need for accuracy in reporting incidents and utilizing...

  11. Studies of transverse momentum dependent parton distributions and Bessel weighting

    DOE PAGES

    Aghasyan, M.; Avakian, H.; De Sanctis, E.; ...

    2015-03-01

    In this paper we present a new technique for analysis of transverse momentum dependent parton distribution functions, based on the Bessel weighting formalism. The procedure is applied to studies of the double longitudinal spin asymmetry in semi-inclusive deep inelastic scattering using a new dedicated Monte Carlo generator which includes quark intrinsic transverse momentum within the generalized parton model. Using a fully differential cross section for the process, the effect of four momentum conservation is analyzed using various input models for transverse momentum distributions and fragmentation functions. We observe a few percent systematic offset of the Bessel-weighted asymmetry obtained from Monte Carlo extraction compared to input model calculations, which is due to the limitations imposed by the energy and momentum conservation at the given energy/Q2. We find that the Bessel weighting technique provides a powerful and reliable tool to study the Fourier transform of TMDs with controlled systematics due to experimental acceptances and resolutions with different TMD model inputs.

  12. Studies of transverse momentum dependent parton distributions and Bessel weighting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aghasyan, M.; Avakian, H.; De Sanctis, E.

    In this paper we present a new technique for analysis of transverse momentum dependent parton distribution functions, based on the Bessel weighting formalism. The procedure is applied to studies of the double longitudinal spin asymmetry in semi-inclusive deep inelastic scattering using a new dedicated Monte Carlo generator which includes quark intrinsic transverse momentum within the generalized parton model. Using a fully differential cross section for the process, the effect of four momentum conservation is analyzed using various input models for transverse momentum distributions and fragmentation functions. We observe a few percent systematic offset of the Bessel-weighted asymmetry obtained from Monte Carlo extraction compared to input model calculations, which is due to the limitations imposed by the energy and momentum conservation at the given energy/Q2. We find that the Bessel weighting technique provides a powerful and reliable tool to study the Fourier transform of TMDs with controlled systematics due to experimental acceptances and resolutions with different TMD model inputs.

  13. Training feed-forward neural networks with gain constraints

    PubMed

    Hartman

    2000-04-01

    Inaccurate input-output gains (partial derivatives of outputs with respect to inputs) are common in neural network models when input variables are correlated or when data are incomplete or inaccurate. Accurate gains are essential for optimization, control, and other purposes. We develop and explore a method for training feedforward neural networks subject to inequality or equality-bound constraints on the gains of the learned mapping. Gain constraints are implemented as penalty terms added to the objective function, and training is done using gradient descent. Adaptive and robust procedures are devised for balancing the relative strengths of the various terms in the objective function, which is essential when the constraints are inconsistent with the data. The approach has the virtue that the model domain of validity can be extended via extrapolation training, which can dramatically improve generalization. The algorithm is demonstrated here on artificial and real-world problems with very good results and has been advantageously applied to dozens of models currently in commercial use.
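
    A toy version of gain-constrained training is sketched below: a one-hidden-layer network is fit by minimizing squared error plus a penalty on input-output gains whose magnitude exceeds a bound, with the gains computed analytically from the network weights. It uses a generic optimizer with a fixed penalty weight rather than the adaptive balancing procedure described above, and the data, bound, and network size are hypothetical.

      # Gain-constrained training sketch: squared error plus a penalty on
      # input-output gains |dy/dx| that exceed a bound, minimized with a
      # generic optimizer.  Data, bound, and network size are hypothetical.
      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(1)
      x = np.linspace(-2.0, 2.0, 40)
      y = np.tanh(3.0 * x) + 0.05 * rng.normal(size=x.size)   # steep target data
      g_max = 1.5                                             # bound on |dy/dx|
      n_h = 8                                                 # hidden units

      def unpack(p):
          w1, b1, w2, b2 = np.split(p, [n_h, 2 * n_h, 3 * n_h])
          return w1, b1, w2, b2[0]

      def objective(p):
          w1, b1, w2, b2 = unpack(p)
          h = np.tanh(np.outer(x, w1) + b1)            # hidden outputs, (40, n_h)
          pred = h @ w2 + b2
          gain = ((1 - h**2) * (w1 * w2)).sum(axis=1)  # dy/dx at each sample
          err = ((pred - y) ** 2).mean()
          penalty = (np.maximum(np.abs(gain) - g_max, 0.0) ** 2).mean()
          return err + 10.0 * penalty                  # fixed penalty weight

      p0 = 0.1 * rng.normal(size=3 * n_h + 1)
      res = minimize(objective, p0, method="L-BFGS-B")
      print("final objective:", res.fun)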

  14. Surprise! Infants Consider Possible Bases of Generalization for a Single Input Example

    ERIC Educational Resources Information Center

    Gerken, LouAnn; Dawson, Colin; Chatila, Razanne; Tenenbaum, Josh

    2015-01-01

    Infants have been shown to generalize from a small number of input examples. However, existing studies allow two possible means of generalization. One is via a process of noting similarities shared by several examples. Alternatively, generalization may reflect an implicit desire to explain the input. The latter view suggests that generalization…

  15. Mechanism of Resilin Elasticity

    PubMed Central

    Qin, Guokui; Hu, Xiao; Cebe, Peggy; Kaplan, David L.

    2012-01-01

    Resilin is critical in the flight and jumping systems of insects as a polymeric rubber-like protein with outstanding elasticity. However, insight into the underlying molecular mechanisms responsible for resilin elasticity remains undefined. Here we report the structure and function of resilin from Drosophila CG15920. A reversible beta-turn transition was identified in the peptide encoded by exon III and for full length resilin during energy input and release, features that correlate to the rapid deformation of resilin during functions in vivo. Micellar structures and nano-porous patterns formed after beta-turn structures were present via changes in either the thermal or mechanical inputs. A model is proposed to explain the super elasticity and energy conversion mechanisms of resilin, providing important insight into structure-function relationships for this protein. Further, this model offers a view of elastomeric proteins in general where beta-turn related structures serve as fundamental units of the structure and elasticity. PMID:22893127

  16. Permutational symmetries for coincidence rates in multimode multiphotonic interferometry

    NASA Astrophysics Data System (ADS)

    Khalid, Abdullah; Spivak, Dylan; Sanders, Barry C.; de Guise, Hubert

    2018-06-01

    We obtain coincidence rates for passive optical interferometry by exploiting the permutational symmetries of partially distinguishable input photons, and our approach elucidates qualitative features of multiphoton coincidence landscapes. We treat the interferometer input as a product state of any number of photons in each input mode with photons distinguished by their arrival time. Detectors at the output of the interferometer count photons from each output mode over a long integration time. We generalize and prove the claim of Tillmann et al. [Phys. Rev. X 5, 041015 (2015), 10.1103/PhysRevX.5.041015] that coincidence rates can be elegantly expressed in terms of immanants. Immanants are functions of matrices that exhibit permutational symmetries and the immanants appearing in our coincidence-rate expressions share permutational symmetries with the input state. Our results are obtained by employing representation theory of the symmetric group to analyze systems of an arbitrary number of photons in arbitrarily sized interferometers.

  17. A neural circuit mechanism for regulating vocal variability during song learning in zebra finches.

    PubMed

    Garst-Orozco, Jonathan; Babadi, Baktash; Ölveczky, Bence P

    2014-12-15

    Motor skill learning is characterized by improved performance and reduced motor variability. The neural mechanisms that couple skill level and variability, however, are not known. The zebra finch, a songbird, presents a unique opportunity to address this question because production of learned song and induction of vocal variability are instantiated in distinct circuits that converge on a motor cortex analogue controlling vocal output. To probe the interplay between learning and variability, we made intracellular recordings from neurons in this area, characterizing how their inputs from the functionally distinct pathways change throughout song development. We found that inputs that drive stereotyped song-patterns are strengthened and pruned, while inputs that induce variability remain unchanged. A simple network model showed that strengthening and pruning of action-specific connections reduces the sensitivity of motor control circuits to variable input and neural 'noise'. This identifies a simple and general mechanism for learning-related regulation of motor variability.

  18. Functional expansion representations of artificial neural networks

    NASA Technical Reports Server (NTRS)

    Gray, W. Steven

    1992-01-01

    In the past few years, significant interest has developed in using artificial neural networks to model and control nonlinear dynamical systems. While there exists many proposed schemes for accomplishing this and a wealth of supporting empirical results, most approaches to date tend to be ad hoc in nature and rely mainly on heuristic justifications. The purpose of this project was to further develop some analytical tools for representing nonlinear discrete-time input-output systems, which when applied to neural networks would give insight on architecture selection, pruning strategies, and learning algorithms. A long term goal is to determine in what sense, if any, a neural network can be used as a universal approximator for nonliner input-output maps with memory (i.e., realized by a dynamical system). This property is well known for the case of static or memoryless input-output maps. The general architecture under consideration in this project was a single-input, single-output recurrent feedforward network.

  19. The Post-Processing Approach in the Finite Element Method. Part 1. Calculation of Displacements, Stresses, and other Higher Derivatives of the Displacements.

    DTIC Science & Technology

    1982-12-01

    Were the influence function (Green’s function) known for this point, then we could take i=O and 0 would be expressible in terms of the input data...alone. So (1.1) would take the form 4=R . Of course, the influence function is not in general available. At the other extreme, if we take to be the Dirac...where n is some integer, which, for the moment, will remain arbitrary. If we select for the influence function (Green’s function), then (2.5a) and

  20. Chinchilla middle ear transmission matrix model and middle-ear flexibilitya)

    PubMed Central

    Ravicz, Michael E.; Rosowski, John J.

    2017-01-01

    The function of the middle ear (ME) in transforming ME acoustic inputs and outputs (sound pressures and volume velocities) can be described with an acoustic two-port transmission matrix. This description is independent of the load on the ME (cochlea or ear canal) and holds in either direction: forward (from ear canal to cochlea) or reverse (from cochlea to ear canal). A transmission matrix describing ME function in chinchilla, an animal commonly used in auditory research, is presented, computed from measurements of forward ME function: input admittance YTM, ME pressure gain GMEP, ME velocity transfer function HV, and cochlear input admittance YC, in the same set of ears [Ravicz and Rosowski (2012b). J. Acoust. Soc. Am. 132, 2437–2454; (2013a). J. Acoust. Soc. Am. 133, 2208–2223; (2013b). J. Acoust. Soc. Am. 134, 2852–2865]. Unlike previous estimates, these computations require no assumptions about the state of the inner ear, effectiveness of ME manipulations, or measurements of sound transmission in the reverse direction. These element values are generally consistent with physical constraints and the anatomical ME “transformer ratio.” Differences from a previous estimate in chinchilla [Songer and Rosowski (2007). J. Acoust. Soc. Am. 122, 932–942] may be due to a difference in ME flexibility between the two subject groups. PMID:28599566

  1. Chinchilla middle ear transmission matrix model and middle-ear flexibility.

    PubMed

    Ravicz, Michael E; Rosowski, John J

    2017-05-01

    The function of the middle ear (ME) in transforming ME acoustic inputs and outputs (sound pressures and volume velocities) can be described with an acoustic two-port transmission matrix. This description is independent of the load on the ME (cochlea or ear canal) and holds in either direction: forward (from ear canal to cochlea) or reverse (from cochlea to ear canal). A transmission matrix describing ME function in chinchilla, an animal commonly used in auditory research, is presented, computed from measurements of forward ME function: input admittance Y TM , ME pressure gain G MEP , ME velocity transfer function H V , and cochlear input admittance Y C , in the same set of ears [Ravicz and Rosowski (2012b). J. Acoust. Soc. Am. 132, 2437-2454; (2013a). J. Acoust. Soc. Am. 133, 2208-2223; (2013b). J. Acoust. Soc. Am. 134, 2852-2865]. Unlike previous estimates, these computations require no assumptions about the state of the inner ear, effectiveness of ME manipulations, or measurements of sound transmission in the reverse direction. These element values are generally consistent with physical constraints and the anatomical ME "transformer ratio." Differences from a previous estimate in chinchilla [Songer and Rosowski (2007). J. Acoust. Soc. Am. 122, 932-942] may be due to a difference in ME flexibility between the two subject groups.

  2. Studies of Transverse Momentum Dependent Parton Distributions and Bessel Weighting

    NASA Astrophysics Data System (ADS)

    Gamberg, Leonard

    2015-04-01

    We present a new technique for analysis of transverse momentum dependent parton distribution functions, based on the Bessel weighting formalism. Advantages of employing Bessel weighting are that transverse momentum weighted asymmetries provide a means to disentangle the convolutions in the cross section in a model independent way. The resulting compact expressions immediately connect to work on evolution equations for transverse momentum dependent parton distribution and fragmentation functions. As a test case, we apply the procedure to studies of the double longitudinal spin asymmetry in SIDIS using a dedicated Monte Carlo generator which includes quark intrinsic transverse momentum within the generalized parton model. Using a fully differential cross section for the process, the effect of four momentum conservation is analyzed using various input models for transverse momentum distributions and fragmentation functions. We observe a few percent systematic offset of the Bessel-weighted asymmetry obtained from Monte Carlo extraction compared to input model calculations. Bessel weighting provides a powerful and reliable tool to study the Fourier transform of TMDs with controlled systematics due to experimental acceptances and resolutions with different TMD model inputs. Work is supported by the U.S. Department of Energy under Contract No. DE-FG02-07ER41460.

  3. Studies of Transverse Momentum Dependent Parton Distributions and Bessel Weighting

    NASA Astrophysics Data System (ADS)

    Gamberg, Leonard

    2015-10-01

    We present a new technique for analysis of transverse momentum dependent parton distribution functions, based on the Bessel weighting formalism. Advantages of employing Bessel weighting are that transverse momentum weighted asymmetries provide a means to disentangle the convolutions in the cross section in a model independent way. The resulting compact expressions immediately connect to work on evolution equations for transverse momentum dependent parton distribution and fragmentation functions. As a test case, we apply the procedure to studies of the double longitudinal spin asymmetry in SIDIS using a dedicated Monte Carlo generator which includes quark intrinsic transverse momentum within the generalized parton model. Using a fully differential cross section for the process, the effect of four momentum conservation is analyzed using various input models for transverse momentum distributions and fragmentation functions. We observe a few percent systematic offset of the Bessel-weighted asymmetry obtained from Monte Carlo extraction compared to input model calculations. Bessel weighting provides a powerful and reliable tool to study the Fourier transform of TMDs with controlled systematics due to experimental acceptances and resolutions with different TMD model inputs. Work is supported by the U.S. Department of Energy under Contract No. DE-FG02-07ER41460.

  4. Chromatographic peak resolution using Microsoft Excel Solver. The merit of time shifting input arrays.

    PubMed

    Dasgupta, Purnendu K

    2008-12-05

    Resolution of overlapped chromatographic peaks is generally accomplished by modeling the peaks as Gaussian or modified Gaussian functions. It is possible, even preferable, to use actual single analyte input responses for this purpose and a nonlinear least squares minimization routine such as that provided by Microsoft Excel Solver can then provide the resolution. In practice, the quality of the results obtained varies greatly due to small shifts in retention time. I show here that such deconvolution can be considerably improved if one or more of the response arrays are iteratively shifted in time.
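
    The sketch below illustrates the same idea with a generic nonlinear least-squares routine (scipy rather than Excel Solver): an overlapped trace is resolved into scaled, time-shifted copies of single-analyte responses, where the time shifts are applied by interpolation of the response arrays. All signals are synthetic.

      # Resolve an overlapped trace into scaled, time-shifted copies of
      # single-analyte responses by nonlinear least squares (scipy here rather
      # than Excel Solver).  All signals are synthetic.
      import numpy as np
      from scipy.optimize import least_squares

      t = np.linspace(0.0, 20.0, 400)
      peak = lambda t0, w: np.exp(-0.5 * ((t - t0) / w) ** 2)

      resp_a = peak(8.0, 0.8)                 # measured single-analyte responses
      resp_b = peak(10.0, 1.0)
      mixture = 1.3 * peak(8.3, 0.8) + 0.7 * peak(9.8, 1.0)   # overlapped trace

      def shift(y, dt):
          """Shift a response array in time by dt via interpolation."""
          return np.interp(t - dt, t, y, left=0.0, right=0.0)

      def residual(p):
          amp_a, amp_b, dt_a, dt_b = p
          model = amp_a * shift(resp_a, dt_a) + amp_b * shift(resp_b, dt_b)
          return model - mixture

      fit = least_squares(residual, x0=[1.0, 1.0, 0.0, 0.0])
      print("amplitudes and time shifts:", np.round(fit.x, 3))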

  5. A space transportation system operations model

    NASA Technical Reports Server (NTRS)

    Morris, W. Douglas; White, Nancy H.

    1987-01-01

    Presented is a description of a computer program which permits assessment of the operational support requirements of space transportation systems functioning in both a ground- and space-based environment. The scenario depicted provides for the delivery of payloads from Earth to a space station and beyond using upper stages based at the station. Model results are scenario dependent and rely on the input definitions of delivery requirements, task times, and available resources. Output is in terms of flight rate capabilities, resource requirements, and facility utilization. A general program description, program listing, input requirements, and sample output are included.

  6. Fuzzy parametric uncertainty analysis of linear dynamical systems: A surrogate modeling approach

    NASA Astrophysics Data System (ADS)

    Chowdhury, R.; Adhikari, S.

    2012-10-01

    Uncertainty propagation in engineering systems poses significant computational challenges. This paper explores the possibility of using a correlated function expansion based metamodelling approach when uncertain system parameters are modeled using fuzzy variables. In particular, the application of High-Dimensional Model Representation (HDMR) is proposed for fuzzy finite element analysis of dynamical systems. The HDMR expansion is a set of quantitative model assessment and analysis tools for capturing high-dimensional input-output system behavior based on a hierarchy of functions of increasing dimensions. The input variables may be either finite-dimensional (i.e., a vector of parameters chosen from the Euclidean space R^M) or may be infinite-dimensional as in the function space C^M[0,1]. The computational effort to determine the expansion functions using the alpha cut method scales polynomially with the number of variables rather than exponentially. This logic is based on the fundamental assumption underlying the HDMR representation that only low-order correlations among the input variables are likely to have significant impacts upon the outputs for most high-dimensional complex systems. The proposed method is integrated with commercial finite element software. Modal analysis of a simplified aircraft wing with fuzzy parameters has been used to illustrate the generality of the proposed approach. In the numerical examples, triangular membership functions have been used and the results have been validated against direct Monte Carlo simulations.
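
    The hierarchy mentioned above can be illustrated with a first-order cut-HDMR surrogate, which approximates a response by a reference value plus one-variable component functions evaluated along lines through a reference (cut) point. The sketch below uses a hypothetical test function and cut point; it does not include the alpha-cut fuzzy propagation or the finite element coupling.

      # First-order cut-HDMR surrogate: reference value plus one-variable
      # component functions built from evaluations along lines through a
      # reference (cut) point.  Test function and cut point are hypothetical.
      import numpy as np

      def model(x):                               # stand-in for an expensive model
          return np.sin(x[0]) + 0.5 * x[1] ** 2 + 0.1 * x[0] * x[2]

      c = np.array([0.3, 0.5, 0.2])               # reference (cut) point
      f0 = model(c)

      def surrogate(x):
          total = f0
          for i in range(len(c)):
              xi = c.copy()
              xi[i] = x[i]                        # vary one input at a time
              total += model(xi) - f0             # first-order component f_i(x_i)
          return total

      x_test = np.array([0.8, 0.1, 0.9])
      print("model:", model(x_test), " first-order HDMR:", surrogate(x_test))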

  7. ASIC For Complex Fixed-Point Arithmetic

    NASA Technical Reports Server (NTRS)

    Petilli, Stephen G.; Grimm, Michael J.; Olson, Erlend M.

    1995-01-01

    Application-specific integrated circuit (ASIC) performs 24-bit, fixed-point arithmetic operations on arrays of complex-valued input data. High-performance, wide-band arithmetic logic unit (ALU) designed for use in computing fast Fourier transforms (FFTs) and for performing digital filtering functions. Other applications include general computations involved in analysis of spectra and digital signal processing.

  8. Bilinearity in Spatiotemporal Integration of Synaptic Inputs

    PubMed Central

    Li, Songting; Liu, Nan; Zhang, Xiao-hui; Zhou, Douglas; Cai, David

    2014-01-01

    Neurons process information via integration of synaptic inputs from dendrites. Many experimental results demonstrate that dendritic integration can be highly nonlinear, yet few theoretical analyses have been performed to obtain a precise quantitative characterization analytically. Based on asymptotic analysis of a two-compartment passive cable model, given a pair of time-dependent synaptic conductance inputs, we derive a bilinear spatiotemporal dendritic integration rule. The summed somatic potential can be well approximated by the linear summation of the two postsynaptic potentials elicited separately, plus a third, bilinear term proportional to their product. The rule is valid for a pair of synaptic inputs of all types, including excitation-inhibition, excitation-excitation, and inhibition-inhibition. In addition, the rule is valid during the whole dendritic integration process for a pair of synaptic inputs with arbitrary input time differences and input locations. The proportionality coefficient is demonstrated to be nearly independent of the input strengths but is dependent on input times and input locations. This rule is then verified through simulation of a realistic pyramidal neuron model and in electrophysiological experiments of rat hippocampal CA1 neurons. The rule is further generalized to describe the spatiotemporal dendritic integration of multiple excitatory and inhibitory synaptic inputs. The integration of multiple inputs can be decomposed into the sum of all possible pairwise integrations, where each paired integration obeys the bilinear rule. This decomposition leads to a graph representation of dendritic integration, which can be viewed as functionally sparse. PMID:25521832
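
    The rule itself can be written as V_sum(t) ≈ V_E(t) + V_I(t) + k·V_E(t)·V_I(t) for an excitatory and an inhibitory input measured separately, where k stands for the proportionality coefficient (the symbol is a label used here for illustration). The sketch below applies this form to synthetic postsynaptic potential waveforms and recovers the coefficient by least squares; the waveforms and coefficient value are hypothetical and are not derived from the cable model or the experiments.

      # Bilinear integration rule applied to synthetic PSP waveforms:
      #   V_sum(t) ~ V_E(t) + V_I(t) + k * V_E(t) * V_I(t).
      # The waveforms and the value of k are hypothetical; k is recovered by
      # least squares from a synthetic "measured" summed response.
      import numpy as np

      t = np.linspace(0.0, 100.0, 1001)                 # time [ms]

      def alpha_psp(t0, tau):
          s = np.clip(t - t0, 0.0, None) / tau
          return s * np.exp(1.0 - s)                    # unit-amplitude alpha shape

      v_e = 4.0 * alpha_psp(10.0, 8.0)                  # EPSP measured alone [mV]
      v_i = -2.5 * alpha_psp(15.0, 12.0)                # IPSP measured alone [mV]
      k_true = -0.08                                    # hypothetical coefficient
      v_sum = v_e + v_i + k_true * v_e * v_i            # "measured" joint response

      residual = v_sum - (v_e + v_i)                    # deviation from linearity
      k_hat = (residual @ (v_e * v_i)) / ((v_e * v_i) @ (v_e * v_i))
      print("recovered coefficient:", round(k_hat, 3))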

  9. Connectivity in the human brain dissociates entropy and complexity of auditory inputs.

    PubMed

    Nastase, Samuel A; Iacovella, Vittorio; Davis, Ben; Hasson, Uri

    2015-03-01

    Complex systems are described according to two central dimensions: (a) the randomness of their output, quantified via entropy; and (b) their complexity, which reflects the organization of a system's generators. Whereas some approaches hold that complexity can be reduced to uncertainty or entropy, an axiom of complexity science is that signals with very high or very low entropy are generated by relatively non-complex systems, while complex systems typically generate outputs with entropy peaking between these two extremes. In understanding their environment, individuals would benefit from coding for both input entropy and complexity; entropy indexes uncertainty and can inform probabilistic coding strategies, whereas complexity reflects a concise and abstract representation of the underlying environmental configuration, which can serve independent purposes, e.g., as a template for generalization and rapid comparisons between environments. Using functional neuroimaging, we demonstrate that, in response to passively processed auditory inputs, functional integration patterns in the human brain track both the entropy and complexity of the auditory signal. Connectivity between several brain regions scaled monotonically with input entropy, suggesting sensitivity to uncertainty, whereas connectivity between other regions tracked entropy in a convex manner consistent with sensitivity to input complexity. These findings suggest that the human brain simultaneously tracks the uncertainty of sensory data and effectively models their environmental generators. Copyright © 2014. Published by Elsevier Inc.

  10. Quality factor concept in piezoceramic transformer performance description.

    PubMed

    Mezheritsky, Alex V

    2006-02-01

    A new general approach to piezoceramic transformer (PT) performance description, based on the quality factor concept, is proposed. The system's quality factor, the material elastic anisotropy, and the coupling factors of the input and output sections of an electrically excited and electrically loaded PT fully characterize its resonance and near-resonance behavior. The PT efficiency, transformation ratio, and input and output power were analyzed analytically and simulated as functions of the load and frequency for the simplest classical Langevin-type and Rosen-type PT designs. A new formulation of the electrical input impedance allows one to separate the power consumed by the PT from the power transferred into the load. The system's PT quality factor takes into account losses in each of the PT's "input-output-load" functional components. Loading changes the PT input electrical impedance in such a way that, under loading, the minimum series impedance increases while the maximum parallel impedance decreases. The ratio of quality factors between the fully loaded and unloaded states is one of the best measures of a PT's dynamic performance--in practice, the lower the ratio, the better the PT efficiency. A simple and effective method for determining the loaded PT quality factor is proposed. As was found, a piezoceramic with low piezoelectric anisotropy is required to provide maximum PT efficiency and a higher corresponding voltage gain. Limitations on the PT output voltage and power, caused by nonlinear effects in piezoceramics, were established.

  11. Systems and methods for reconfiguring input devices

    NASA Technical Reports Server (NTRS)

    Lancaster, Jeff (Inventor); De Mers, Robert E. (Inventor)

    2012-01-01

    A system includes an input device having first and second input members configured to be activated by a user. The input device is configured to generate activation signals associated with activation of the first and second input members, and each of the first and second input members is associated with an input function. A processor is coupled to the input device and configured to receive the activation signals. A memory is coupled to the processor and includes a reconfiguration module configured to store the input functions assigned to the first and second input members and, upon execution by the processor, to reconfigure the input functions assigned to the input members when the first input member is inoperable.

  12. Two generalizations of Kohonen clustering

    NASA Technical Reports Server (NTRS)

    Bezdek, James C.; Pal, Nikhil R.; Tsao, Eric C. K.

    1993-01-01

    The relationship between the sequential hard c-means (SHCM), learning vector quantization (LVQ), and fuzzy c-means (FCM) clustering algorithms is discussed. LVQ and SHCM suffer from several major problems; for example, they depend heavily on initialization. If the initial values of the cluster centers are outside the convex hull of the input data, such algorithms, even if they terminate, may not produce meaningful results in terms of prototypes for cluster representation. This is due in part to the fact that they update only the winning prototype for every input vector. The impact and interaction of these two families with Kohonen's self-organizing feature mapping (SOFM), which is not a clustering method but which often lends ideas to clustering algorithms, is discussed. Two generalizations of LVQ that are explicitly designed as clustering algorithms are then presented; these algorithms are referred to as generalized LVQ (GLVQ) and fuzzy LVQ (FLVQ). Learning rules are derived to optimize an objective function whose goal is to produce 'good clusters'. GLVQ/FLVQ (may) update every node in the clustering net for each input vector. Neither GLVQ nor FLVQ depends upon a choice for the update neighborhood or learning-rate distribution; these are taken care of automatically. Segmentation of a gray-tone image is used as a typical application of these algorithms to illustrate the performance of GLVQ/FLVQ.
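
    A hedged sketch of the idea that distinguishes these generalizations from winner-only LVQ/SHCM: for each input vector, every prototype is moved with a data-driven weight. The FCM-style inverse-distance membership used below is an illustrative choice, not the exact GLVQ/FLVQ learning rule of the paper.

```python
import numpy as np

# Sketch: update *all* prototypes per input, weighted by fuzzy memberships.
# Membership rule and data are illustrative assumptions, not the paper's exact rules.

rng = np.random.default_rng(0)
centers = np.array([[-3.0, 0.0], [3.0, 0.0], [0.0, 3.0]])
X = rng.normal(size=(500, 2)) + centers[rng.integers(0, 3, size=500)]

c, m, lr = 3, 2.0, 0.05                              # clusters, fuzzifier, learning rate
V = X[rng.choice(len(X), c, replace=False)].copy()   # initial prototypes

for x in X:                                          # one sequential pass over the data
    d = np.linalg.norm(x - V, axis=1) + 1e-12
    u = d ** (-2.0 / (m - 1.0))
    u /= u.sum()                                     # fuzzy memberships of x in all clusters
    V += lr * (u ** m)[:, None] * (x - V)            # every prototype is updated

print(np.round(V, 2))                                # learned prototypes
```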

  13. Polarity-specific high-level information propagation in neural networks.

    PubMed

    Lin, Yen-Nan; Chang, Po-Yen; Hsiao, Pao-Yueh; Lo, Chung-Chuan

    2014-01-01

    Analyzing the connectome of a nervous system provides valuable information about the functions of its subsystems. Although much has been learned about the architectures of neural networks in various organisms by applying analytical tools developed for general networks, two distinct and functionally important properties of neural networks are often overlooked. First, neural networks are endowed with polarity at the circuit level: Information enters a neural network at input neurons, propagates through interneurons, and leaves via output neurons. Second, many functions of nervous systems are implemented by signal propagation through high-level pathways involving multiple and often recurrent connections rather than by the shortest paths between nodes. In the present study, we analyzed two neural networks: the somatic nervous system of Caenorhabditis elegans (C. elegans) and the partial central complex network of Drosophila, in light of these properties. Specifically, we quantified high-level propagation in the vertical and horizontal directions: the former characterizes how signals propagate from specific input nodes to specific output nodes and the latter characterizes how a signal from a specific input node is shared by all output nodes. We found that the two neural networks are characterized by very efficient vertical and horizontal propagation. In comparison, classic small-world networks show a trade-off between vertical and horizontal propagation; increasing the rewiring probability improves the efficiency of horizontal propagation but worsens the efficiency of vertical propagation. Our result provides insights into how the complex functions of natural neural networks may arise from a design that allows them to efficiently transform and combine input signals.

  14. Polarity-specific high-level information propagation in neural networks

    PubMed Central

    Lin, Yen-Nan; Chang, Po-Yen; Hsiao, Pao-Yueh; Lo, Chung-Chuan

    2014-01-01

    Analyzing the connectome of a nervous system provides valuable information about the functions of its subsystems. Although much has been learned about the architectures of neural networks in various organisms by applying analytical tools developed for general networks, two distinct and functionally important properties of neural networks are often overlooked. First, neural networks are endowed with polarity at the circuit level: Information enters a neural network at input neurons, propagates through interneurons, and leaves via output neurons. Second, many functions of nervous systems are implemented by signal propagation through high-level pathways involving multiple and often recurrent connections rather than by the shortest paths between nodes. In the present study, we analyzed two neural networks: the somatic nervous system of Caenorhabditis elegans (C. elegans) and the partial central complex network of Drosophila, in light of these properties. Specifically, we quantified high-level propagation in the vertical and horizontal directions: the former characterizes how signals propagate from specific input nodes to specific output nodes and the latter characterizes how a signal from a specific input node is shared by all output nodes. We found that the two neural networks are characterized by very efficient vertical and horizontal propagation. In comparison, classic small-world networks show a trade-off between vertical and horizontal propagation; increasing the rewiring probability improves the efficiency of horizontal propagation but worsens the efficiency of vertical propagation. Our result provides insights into how the complex functions of natural neural networks may arise from a design that allows them to efficiently transform and combine input signals. PMID:24672472

  15. Module theoretic zero structures for system matrices

    NASA Technical Reports Server (NTRS)

    Wyman, Bostwick F.; Sain, Michael K.

    1987-01-01

    The coordinate-free module-theoretic treatment of transmission zeros for MIMO transfer functions developed by Wyman and Sain (1981) is generalized to include noncontrollable and nonobservable linear dynamical systems. Rational, finitely-generated-modular, and torsion-divisible interpretations of the Rosenbrock system matrix are presented; Gamma-zero and Omega-zero modules are defined and shown to contain the output-decoupling and input-decoupling zero modules, respectively, as submodules; and the cases of left and right invertible transfer functions are considered.

  16. Fast online generalized multiscale finite element method using constraint energy minimization

    NASA Astrophysics Data System (ADS)

    Chung, Eric T.; Efendiev, Yalchin; Leung, Wing Tat

    2018-02-01

    Local multiscale methods often construct multiscale basis functions in the offline stage without taking into account input parameters, such as source terms, boundary conditions, and so on. These basis functions are then used in the online stage with a specific input parameter to solve the global problem at a reduced computational cost. Recently, online approaches have been introduced in which multiscale basis functions are adaptively constructed in some regions to reduce the error significantly. In multiscale methods, it is desirable to need only 1-2 iterations to reduce the error to a desired threshold. Using the Generalized Multiscale Finite Element Framework [10], it was shown that, by choosing a sufficient number of offline basis functions, the error reduction can be made independent of physical parameters, such as scales and contrast. In this paper, our goal is to improve on this. Using our recently proposed approach [4] and a special online basis construction in oversampled regions, we show that the error reduction can be made sufficiently large by appropriately selecting the oversampling regions. Our numerical results show that one can achieve a three-order-of-magnitude error reduction, which is better than our previous methods. We also develop an adaptive algorithm that enriches selected regions with large residuals. For our adaptive method, we show that the convergence rate can be determined by a user-defined parameter, and we confirm this by numerical simulations. The analysis of the method is presented.

  17. State-space estimation of the input stimulus function using the Kalman filter: a communication system model for fMRI experiments.

    PubMed

    Ward, B Douglas; Mazaheri, Yousef

    2006-12-15

    The blood oxygenation level-dependent (BOLD) signal measured in functional magnetic resonance imaging (fMRI) experiments in response to input stimuli is temporally delayed and distorted due to the blurring effect of the voxel hemodynamic impulse response function (IRF). Knowledge of the IRF, obtained during the same experiment, or as the result of a separate experiment, can be used to dynamically obtain an estimate of the input stimulus function. Reconstruction of the input stimulus function allows the fMRI experiment to be evaluated as a communication system. The input stimulus function may be considered as a "message" which is being transmitted over a noisy "channel", where the "channel" is characterized by the voxel IRF. Following reconstruction of the input stimulus function, the received message is compared with the transmitted message on a voxel-by-voxel basis to determine the transmission error rate. Reconstruction of the input stimulus function provides insight into actual brain activity during task activation with less temporal blurring, and may be considered as a first step toward estimation of the true neuronal input function.
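
    A hedged sketch of the state-space idea described above: the unknown input stimulus function is carried in a Kalman-filter state vector, the known hemodynamic IRF acts as the observation (channel) model, and filtering the measured BOLD series yields a running estimate of the stimulus. The IRF shape, noise levels, and random-walk prior below are illustrative assumptions, not the parameters of the cited study.

```python
import numpy as np

# State = shift register of recent stimulus samples; observation = IRF dot state.
# IRF, noise levels and the random-walk stimulus prior are illustrative assumptions.

rng = np.random.default_rng(1)
L = 12                                             # IRF length in samples
tt = np.arange(L)
h = (tt ** 2) * np.exp(-tt / 1.5); h /= h.sum()    # toy gamma-like impulse response

n = 200
s_true = (np.sin(np.arange(n) / 15.0) > 0.5).astype(float)   # block-design stimulus
y = np.convolve(s_true, h)[:n] + 0.05 * rng.normal(size=n)   # noisy BOLD-like signal

F = np.eye(L, k=-1); F[0, 0] = 1.0                 # shift register + random-walk input
Q = np.zeros((L, L)); Q[0, 0] = 0.1                # process noise drives the new sample
H = h[None, :]                                     # observation = IRF dot recent inputs
R = np.array([[0.05 ** 2]])

x, P = np.zeros(L), np.eye(L)
s_est = np.zeros(n)
for t in range(n):
    x, P = F @ x, F @ P @ F.T + Q                  # predict
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain
    x = x + (K @ (y[t] - H @ x)).ravel()           # update with the new BOLD sample
    P = (np.eye(L) - K @ H) @ P
    s_est[t] = x[0]                                # current estimate of the stimulus

print("correlation with true stimulus: %.2f" % np.corrcoef(s_est, s_true)[0, 1])
```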

  18. SFG synthesis of general high-order all-pass and all-pole current transfer functions using CFTAs.

    PubMed

    Tangsrirat, Worapong

    2014-01-01

    An approach using the signal flow graph (SFG) technique to synthesize general high-order all-pass and all-pole current transfer functions with current follower transconductance amplifiers (CFTAs) and grounded capacitors is presented. For general nth-order systems, the realized all-pass structure contains at most n + 1 CFTAs and n grounded capacitors, while the all-pole lowpass circuit requires only n CFTAs and n grounded capacitors. The circuits obtained from the synthesis procedure are resistor-less structures and thus especially suitable for integration. They also exhibit low input and high output impedances, as well as convenient electronic controllability through the gm value of the CFTA. Simulation results using real transistor model parameters of ALA400 are included to confirm the theory.

  19. SFG Synthesis of General High-Order All-Pass and All-Pole Current Transfer Functions Using CFTAs

    PubMed Central

    Tangsrirat, Worapong

    2014-01-01

    An approach using the signal flow graph (SFG) technique to synthesize general high-order all-pass and all-pole current transfer functions with current follower transconductance amplifiers (CFTAs) and grounded capacitors is presented. For general nth-order systems, the realized all-pass structure contains at most n + 1 CFTAs and n grounded capacitors, while the all-pole lowpass circuit requires only n CFTAs and n grounded capacitors. The circuits obtained from the synthesis procedure are resistor-less structures and thus especially suitable for integration. They also exhibit low input and high output impedances, as well as convenient electronic controllability through the gm value of the CFTA. Simulation results using real transistor model parameters of ALA400 are included to confirm the theory. PMID:24688375

  20. A third-order class-D amplifier with and without ripple compensation

    NASA Astrophysics Data System (ADS)

    Cox, Stephen M.; du Toit Mouton, H.

    2018-06-01

    We analyse the nonlinear behaviour of a third-order class-D amplifier, and demonstrate the remarkable effectiveness of the recently introduced ripple compensation (RC) technique in reducing the audio distortion of the device. The amplifier converts an input audio signal to a high-frequency train of rectangular pulses, whose widths are modulated according to the input signal (pulse-width modulation) and employs negative feedback. After determining the steady-state operating point for constant input and calculating its stability, we derive a small-signal model (SSM), which yields in closed form the transfer function relating (infinitesimal) input and output disturbances. This SSM shows how the RC technique is able to linearise the small-signal response of the device. We extend this SSM through a fully nonlinear perturbation calculation of the dynamics of the amplifier, based on the disparity in time scales between the pulse train and the audio signal. We obtain the nonlinear response of the amplifier to a general audio signal, avoiding the linearisation inherent in the SSM; we thereby more precisely quantify the reduction in distortion achieved through RC. Finally, simulations corroborate our theoretical predictions and illustrate the dramatic deterioration in performance that occurs when the amplifier is operated in an unstable regime. The perturbation calculation is rather general, and may be adapted to quantify the way in which other nonlinear negative-feedback pulse-modulated devices track a time-varying input signal that slowly modulates the system parameters.

  1. Nonlinear Transfer of Signal and Noise Correlations in Cortical Networks

    PubMed Central

    Lyamzin, Dmitry R.; Barnes, Samuel J.; Donato, Roberta; Garcia-Lazaro, Jose A.; Keck, Tara

    2015-01-01

    Signal and noise correlations, a prominent feature of cortical activity, reflect the structure and function of networks during sensory processing. However, in addition to reflecting network properties, correlations are also shaped by intrinsic neuronal mechanisms. Here we show that spike threshold transforms correlations by creating nonlinear interactions between signal and noise inputs; even when input noise correlation is constant, spiking noise correlation varies with both the strength and correlation of signal inputs. We characterize these effects systematically in vitro in mice and demonstrate their impact on sensory processing in vivo in gerbils. We also find that the effects of nonlinear correlation transfer on cortical responses are stronger in the synchronized state than in the desynchronized state, and show that they can be reproduced and understood in a model with a simple threshold nonlinearity. Since these effects arise from an intrinsic neuronal property, they are likely to be present across sensory systems and, thus, our results are a critical step toward a general understanding of how correlated spiking relates to the structure and function of cortical networks. PMID:26019325
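
    A small numerical sketch of the effect described above: with a fixed input noise correlation, passing a pair of inputs through a simple spike threshold makes the output noise correlation depend on the mean signal drive. The Gaussian inputs and hard threshold are illustrative modelling assumptions, not the paper's in vitro or in vivo protocol.

```python
import numpy as np

# Constant input noise correlation, yet the thresholded (spiking) noise correlation
# varies with the mean signal drive.  All parameters are illustrative assumptions.

rng = np.random.default_rng(2)
n_trials, rho_noise, theta = 200_000, 0.3, 1.0

def spiking_noise_corr(mean_drive):
    common = rng.normal(size=n_trials)                     # shared noise source
    v1 = mean_drive + np.sqrt(rho_noise) * common \
         + np.sqrt(1 - rho_noise) * rng.normal(size=n_trials)
    v2 = mean_drive + np.sqrt(rho_noise) * common \
         + np.sqrt(1 - rho_noise) * rng.normal(size=n_trials)
    s1, s2 = (v1 > theta).astype(float), (v2 > theta).astype(float)  # threshold
    return np.corrcoef(s1, s2)[0, 1]

for drive in (0.0, 0.5, 1.0, 2.0):
    print(f"signal drive {drive:.1f} -> spiking noise correlation "
          f"{spiking_noise_corr(drive):.3f}")
```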

  2. New York State Educational Information System (NYSEIS) Systems Design. Volume I, Phase II. Final Report.

    ERIC Educational Resources Information Center

    Price Waterhouse and Co., New York, NY.

    This volume on Phase II of the New York State Educational Information System (NYSEIS) describes the Gross Systems Analysis and Design, which includes the general flow diagram and processing chart for each of the student, personnel, and financial subsystems. Volume II, Functional Specifications, includes input/output requirements and file…

  3. Permutation invariant polynomial neural network approach to fitting potential energy surfaces. II. Four-atom systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Jun; Jiang, Bin; Guo, Hua, E-mail: hguo@unm.edu

    2013-11-28

    A rigorous, general, and simple method to fit global and permutation invariant potential energy surfaces (PESs) using neural networks (NNs) is discussed. This so-called permutation invariant polynomial neural network (PIP-NN) method imposes permutation symmetry by using in its input a set of symmetry functions based on PIPs. For systems with more than three atoms, it is shown that the number of symmetry functions in the input vector needs to be larger than the number of internal coordinates in order to include both the primary and secondary invariant polynomials. This PIP-NN method is successfully demonstrated in three atom-triatomic reactive systems, resulting in full-dimensional global PESs with average errors on the order of meV. These PESs are used in full-dimensional quantum dynamical calculations.
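
    A hedged sketch of the PIP-style input layer described above: pairwise distances are turned into Morse-like variables, and simple monomials of those variables are averaged over permutations of identical atoms before being fed to a network. The monomial set, range parameter, and example geometry are illustrative; the actual PIP-NN method uses a systematically generated polynomial basis.

```python
import numpy as np
from itertools import permutations, product

# Permutation-invariant input features for a four-atom system (illustrative sketch).

def morse_variables(coords, a=1.0):
    """Upper-triangular pairwise Morse variables exp(-r_ij / a)."""
    n = len(coords)
    return {(i, j): np.exp(-np.linalg.norm(coords[i] - coords[j]) / a)
            for i in range(n) for j in range(i + 1, n)}

def invariant_features(coords, like_atom_groups, monomials, a=1.0):
    """Average each monomial (a tuple of atom pairs) over like-atom permutations."""
    n = len(coords)
    perms = []
    for images in product(*(permutations(g) for g in like_atom_groups)):
        sigma = list(range(n))
        for group, image in zip(like_atom_groups, images):
            for src, dst in zip(group, image):
                sigma[src] = dst
        perms.append(sigma)
    p = morse_variables(coords, a)
    feats = []
    for mono in monomials:                       # e.g. ((0, 1),) or ((0, 2), (1, 3))
        vals = []
        for sigma in perms:
            term = 1.0
            for (i, j) in mono:
                term *= p[tuple(sorted((sigma[i], sigma[j])))]
            vals.append(term)
        feats.append(np.mean(vals))              # symmetrized, permutation-invariant value
    return np.array(feats)

# Example: an A2B2-like geometry; atoms 0,1 are identical, atoms 2,3 are identical.
coords = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.2, 0.0], [1.0, 1.2, 0.0]])
monos = [((0, 1),), ((2, 3),), ((0, 2),), ((0, 2), (1, 3))]
print(invariant_features(coords, like_atom_groups=[(0, 1), (2, 3)], monomials=monos))
```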

  4. Slow feature analysis: unsupervised learning of invariances.

    PubMed

    Wiskott, Laurenz; Sejnowski, Terrence J

    2002-04-01

    Invariant features of temporally varying signals are useful for analysis and classification. Slow feature analysis (SFA) is a new method for learning invariant or slowly varying features from a vectorial input signal. It is based on a nonlinear expansion of the input signal and application of principal component analysis to this expanded signal and its time derivative. It is guaranteed to find the optimal solution within a family of functions directly and can learn to extract a large number of decorrelated features, which are ordered by their degree of invariance. SFA can be applied hierarchically to process high-dimensional input signals and extract complex features. SFA is applied first to complex cell tuning properties based on simple cell output, including disparity and motion. Then more complicated input-output functions are learned by repeated application of SFA. Finally, a hierarchical network of SFA modules is presented as a simple model of the visual system. The same unstructured network can learn translation, size, rotation, contrast, or, to a lesser degree, illumination invariance for one-dimensional objects, depending on only the training stimulus. Surprisingly, only a few training objects suffice to achieve good generalization to new objects. The generated representation is suitable for object recognition. Performance degrades if the network is trained to learn multiple invariances simultaneously.
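
    A minimal numpy sketch of the SFA recipe summarized above: nonlinearly expand the input, whiten the expanded signal, and take the direction whose time derivative has the smallest variance. The quadratic expansion and the toy two-channel mixture are illustrative choices.

```python
import numpy as np

# Slow feature analysis sketch: expand, whiten, then minimize derivative variance.
# The toy input mixes one slow and one fast source; all settings are illustrative.

rng = np.random.default_rng(3)
t = np.linspace(0, 2 * np.pi, 4000)
slow = np.sin(t)                                   # hidden slow source
fast = np.sin(37 * t)                              # hidden fast source
x = np.stack([slow + 0.5 * fast, 0.5 * slow - fast], axis=1)   # observed mixture

def expand(x):                                     # quadratic (degree-2) expansion
    x1, x2 = x[:, 0], x[:, 1]
    return np.stack([x1, x2, x1 * x1, x1 * x2, x2 * x2], axis=1)

z = expand(x)
z -= z.mean(axis=0)                                # center
U, S, Vt = np.linalg.svd(z, full_matrices=False)
zw = z @ Vt.T / S * np.sqrt(len(z))                # whiten the expanded signal
dz = np.diff(zw, axis=0)                           # discrete time derivative
_, _, Wt = np.linalg.svd(dz, full_matrices=False)
slow_feature = zw @ Wt[-1]                         # direction with slowest variation

print("|corr(slow feature, true slow source)| = %.3f"
      % abs(np.corrcoef(slow_feature, slow)[0, 1]))
```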

  5. Connectivity in the human brain dissociates entropy and complexity of auditory inputs☆

    PubMed Central

    Nastase, Samuel A.; Iacovella, Vittorio; Davis, Ben; Hasson, Uri

    2015-01-01

    Complex systems are described according to two central dimensions: (a) the randomness of their output, quantified via entropy; and (b) their complexity, which reflects the organization of a system's generators. Whereas some approaches hold that complexity can be reduced to uncertainty or entropy, an axiom of complexity science is that signals with very high or very low entropy are generated by relatively non-complex systems, while complex systems typically generate outputs with entropy peaking between these two extremes. In understanding their environment, individuals would benefit from coding for both input entropy and complexity; entropy indexes uncertainty and can inform probabilistic coding strategies, whereas complexity reflects a concise and abstract representation of the underlying environmental configuration, which can serve independent purposes, e.g., as a template for generalization and rapid comparisons between environments. Using functional neuroimaging, we demonstrate that, in response to passively processed auditory inputs, functional integration patterns in the human brain track both the entropy and complexity of the auditory signal. Connectivity between several brain regions scaled monotonically with input entropy, suggesting sensitivity to uncertainty, whereas connectivity between other regions tracked entropy in a convex manner consistent with sensitivity to input complexity. These findings suggest that the human brain simultaneously tracks the uncertainty of sensory data and effectively models their environmental generators. PMID:25536493

  6. Carbon and nitrogen inputs affect soil microbial community structure and function

    NASA Astrophysics Data System (ADS)

    Liu, X. J. A.; Mau, R. L.; Hayer, M.; Finley, B. K.; Schwartz, E.; Dijkstra, P.; Hungate, B. A.

    2016-12-01

    Climate change has been projected to increase energy and nutrient inputs to soils, affecting soil organic matter (SOM) decomposition (the priming effect) and microbial communities. However, many important questions remain: how do labile C and/or N inputs affect priming and microbial communities, and what is the relationship between the two? To address these questions, we applied N (NH4NO3; 100 µg N g-1 wk-1), C (13C glucose; 1000 µg C g-1 wk-1), or C+N to four different soils for five weeks. We found that: 1) N showed no effect, whereas C induced the greatest priming, and C+N produced significantly lower priming than C. 2) C and C+N additions increased the relative abundance of actinobacteria, proteobacteria, and firmicutes, but reduced the relative abundance of acidobacteria, chloroflexi, verrucomicrobia, planctomycetes, and gemmatimonadetes. 3) Actinobacteria and proteobacteria increased in relative abundance over time, while most other groups decreased. 4) Substrate additions (N, C, C+N) significantly reduced microbial alpha diversity, which also decreased over time. 5) For beta diversity, C and C+N formed communities significantly different from those of the control and N treatments. Over time, microbial community structure was significantly altered, and the four soils had drastically different community structures. These results indicate that the amount of substrate C was a determinant factor in modulating the rate of SOM decomposition and the microbial communities. The variable responses of different microbial communities to labile C and N inputs indicate complex relationships between priming and microbial function. In general, we demonstrate that energy inputs can quickly accelerate SOM decomposition whereas extra N input can slow this process, though both produced similar microbial community responses.

  7. Hybrid robust model based on an improved functional link neural network integrating with partial least square (IFLNN-PLS) and its application to predicting key process variables.

    PubMed

    He, Yan-Lin; Xu, Yuan; Geng, Zhi-Qiang; Zhu, Qun-Xiong

    2016-03-01

    In this paper, a hybrid robust model based on an improved functional link neural network integrated with partial least squares (IFLNN-PLS) is proposed. First, an improved functional link neural network with a small norm of expanded weights and high input-output correlation (SNEWHIOC-FLNN) is proposed to enhance the generalization performance of the FLNN. Unlike in the traditional FLNN, the expanded variables of the original inputs are not used directly as the inputs in the proposed SNEWHIOC-FLNN model. Instead, the original inputs are attached to expansion weights of small norm. As a result, the correlation coefficient between some of the expanded variables and the outputs is enhanced; the larger the correlation coefficient, the more relevant the expanded variables tend to be. In the end, the expanded variables with larger correlation coefficients are selected as the inputs to improve the performance of the traditional FLNN. To test the proposed SNEWHIOC-FLNN model, three UCI (University of California, Irvine) regression datasets, named Housing, Concrete Compressive Strength (CCS), and Yacht Hydro Dynamics (YHD), are selected. A hybrid model based on the improved FLNN integrated with partial least squares (IFLNN-PLS) is then built. In the IFLNN-PLS model, the connection weights are calculated using the partial least squares method rather than the error back-propagation algorithm. Lastly, IFLNN-PLS is developed as an intelligent measurement model for accurately predicting the key variables in the Purified Terephthalic Acid (PTA) process and the High Density Polyethylene (HDPE) process. Simulation results illustrate that IFLNN-PLS can significantly improve the prediction performance. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.

  8. Handling of computational in vitro/in vivo correlation problems by Microsoft Excel: III. Convolution and deconvolution.

    PubMed

    Langenbucher, Frieder

    2003-11-01

    Convolution and deconvolution are the classical in vitro-in vivo correlation tools used to describe the relationship between input and weighting/response in a linear system, where the input represents drug release in vitro and the weighting/response represents any body response in vivo. While functional treatment, e.g. in terms of a polyexponential or Weibull distribution, is more appropriate for general surveys or prediction, numerical algorithms are useful for treating actual experimental data. Deconvolution is not considered an algorithm in its own right, but rather the inversion of a corresponding convolution. MS Excel is shown to be a useful tool for all these applications.
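
    A numerical sketch of the convolution/deconvolution pair discussed above, in the spirit of the spreadsheet algorithms: the in vivo response is the discrete convolution of the in vitro input rate with a unit-impulse weighting function, and deconvolution inverts that convolution (here via a triangular solve). The monoexponential weighting function and release profile are illustrative assumptions.

```python
import numpy as np

# Discrete convolution and its inversion (numerical deconvolution).
# Weighting function, release profile and sampling interval are illustrative.

dt = 0.5                                         # sampling interval (h)
t = np.arange(0, 24, dt)
weighting = np.exp(-0.3 * t)                     # toy unit-impulse (weighting) function
release = 1.0 - np.exp(-0.25 * t)                # cumulative in-vitro release
input_rate = np.gradient(release, dt)            # in-vitro input rate

# Convolution: response[i] = sum_j input_rate[j] * weighting[i - j] * dt
response = np.convolve(input_rate, weighting)[:len(t)] * dt

# Deconvolution: invert the lower-triangular convolution matrix.
n = len(t)
C = np.array([[weighting[i - j] * dt if i >= j else 0.0
               for j in range(n)] for i in range(n)])
recovered_rate = np.linalg.solve(C, response)

print("max reconstruction error:", np.max(np.abs(recovered_rate - input_rate)))
```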

  9. NERVA dynamic analysis methodology, SPRVIB

    NASA Technical Reports Server (NTRS)

    Vronay, D. F.

    1972-01-01

    The general dynamic computer code called SPRVIB (Spring Vib) developed in support of the NERVA (nuclear engine for rocket vehicle application) program is described. Using normal mode techniques, the program computes kinematical responses of a structure caused by various combinations of harmonic and elliptic forcing functions or base excitations. Provision is made for a graphical type of force or base excitation input to the structure. A description of the required input format and a listing of the program are presented, along with several examples illustrating the use of the program. SPRVIB is written in FORTRAN 4 computer language for use on the CDC 6600 or the IBM 360/75 computers.

  10. Twisted quantum double model of topological order with boundaries

    NASA Astrophysics Data System (ADS)

    Bullivant, Alex; Hu, Yuting; Wan, Yidun

    2017-10-01

    We generalize the twisted quantum double model of topological orders in two dimensions to the case with boundaries by systematically constructing the boundary Hamiltonians. Given the bulk Hamiltonian defined by a gauge group G and a 3-cocycle in the third cohomology group of G over U(1), a boundary Hamiltonian can be defined by a subgroup K of G and a 2-cochain in the second cochain group of K over U(1). The consistency between the bulk and boundary Hamiltonians is dictated by what we call the Frobenius condition, which constrains the 2-cochain given the 3-cocycle. We offer a closed-form formula computing the ground-state degeneracy of the model on a cylinder in terms of the input data only, which can be naturally generalized to surfaces with more boundaries. We also explicitly write down the ground-state wave function of the model on a disk, again in terms of the input data only.

  11. Comparison of SOM point densities based on different criteria.

    PubMed

    Kohonen, T

    1999-11-15

    Point densities of model (codebook) vectors in self-organizing maps (SOMs) are evaluated in this article. For a few one-dimensional SOMs with finite grid lengths and a given probability density function of the input, the numerically exact point densities have been computed. The point density derived from the SOM algorithm turned out to be different from that minimizing the SOM distortion measure, showing that the model vectors produced by the basic SOM algorithm in general do not exactly coincide with the optimum of the distortion measure. A new computing technique based on the calculus of variations has been introduced. It was applied to the computation of point densities derived from the distortion measure for both the classical vector quantization and the SOM with general but equal dimensionality of the input vectors and the grid, respectively. The power laws in the continuum limit obtained in these cases were found to be identical.

  12. Improved Neural Networks with Random Weights for Short-Term Load Forecasting

    PubMed Central

    Lang, Kun; Zhang, Mingyuan; Yuan, Yongbo

    2015-01-01

    An effective forecasting model for short-term load plays a significant role in promoting the management efficiency of an electric power system. This paper proposes a new forecasting model based on improved neural networks with random weights (INNRW). The key is to introduce a weighting technique for the inputs of the model and use a novel neural network to forecast the daily maximum load. Eight factors are selected as the inputs. A mutual information weighting algorithm is then used to allocate different weights to the inputs. The neural network with random weights and kernels (KNNRW) is applied to approximate the nonlinear function between the selected inputs and the daily maximum load, owing to its fast learning speed and good generalization performance. In an application to the daily load in Dalian, the result of the proposed INNRW is compared with several previously developed forecasting models. The simulation experiment shows that the proposed model performs the best overall in short-term load forecasting. PMID:26629825

  13. Improved Neural Networks with Random Weights for Short-Term Load Forecasting.

    PubMed

    Lang, Kun; Zhang, Mingyuan; Yuan, Yongbo

    2015-01-01

    An effective forecasting model for short-term load plays a significant role in promoting the management efficiency of an electric power system. This paper proposes a new forecasting model based on improved neural networks with random weights (INNRW). The key is to introduce a weighting technique for the inputs of the model and use a novel neural network to forecast the daily maximum load. Eight factors are selected as the inputs. A mutual information weighting algorithm is then used to allocate different weights to the inputs. The neural network with random weights and kernels (KNNRW) is applied to approximate the nonlinear function between the selected inputs and the daily maximum load, owing to its fast learning speed and good generalization performance. In an application to the daily load in Dalian, the result of the proposed INNRW is compared with several previously developed forecasting models. The simulation experiment shows that the proposed model performs the best overall in short-term load forecasting.
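
    A hedged sketch of a neural network with random weights of the kind used above: hidden-layer weights are drawn at random and fixed, and only the linear output weights are learned, here by regularized least squares. The input weighting, network size, and toy data are illustrative stand-ins for the mutual-information-weighted inputs and load data of the paper.

```python
import numpy as np

# Random-weight network: random fixed hidden layer, least-squares output weights.
# Data, input weighting and hyperparameters are illustrative assumptions.

rng = np.random.default_rng(4)
n, d, hidden = 400, 8, 100                      # samples, input factors, hidden units

X = rng.normal(size=(n, d))                     # eight (toy) input factors
input_weights = rng.uniform(0.2, 1.0, size=d)   # stands in for MI-based input weighting
y = np.sin(X @ rng.normal(size=d)) + 0.1 * rng.normal(size=n)   # toy target load

W = rng.normal(size=(d, hidden))                # random, untrained hidden weights
b = rng.normal(size=hidden)
H = np.tanh((X * input_weights) @ W + b)        # random hidden features

lam = 1e-3                                      # ridge regularization
beta = np.linalg.solve(H.T @ H + lam * np.eye(hidden), H.T @ y)   # output weights

pred = H @ beta
print("training RMSE: %.3f" % np.sqrt(np.mean((pred - y) ** 2)))
```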

  14. A Comprehensive Study of Gridding Methods for GPS Horizontal Velocity Fields

    NASA Astrophysics Data System (ADS)

    Wu, Yanqiang; Jiang, Zaisen; Liu, Xiaoxia; Wei, Wenxin; Zhu, Shuang; Zhang, Long; Zou, Zhenyu; Xiong, Xiaohui; Wang, Qixin; Du, Jiliang

    2017-03-01

    Four gridding methods for GPS velocities are compared in terms of their precision, applicability and robustness by analyzing simulated data with uncertainties from 0.0 to ±3.0 mm/a. When the input data are sampled on a 1° × 1° grid and the uncertainty of the additional error is greater than ±1.0 mm/a, the gridding results show that the least-squares collocation method is highly robust while the robustness of the Kriging method is low. In contrast, the spherical harmonics and multi-surface function methods are moderately robust, and the regional singular values for the multi-surface function method and the edge effects for the spherical harmonics method become more significant with increasing uncertainty of the input data. When the input data (with additional errors of ±2.0 mm/a) are decimated by 50% from the 1° × 1° grid data and then erased in three 6° × 12° regions, the gridding results in these three regions indicate that the least-squares collocation and spherical harmonics methods perform well, while the multi-surface function and Kriging methods may lead to singular values. The gridding techniques are also applied to GPS horizontal velocities with an average error of ±0.8 mm/a over the Chinese mainland and the surrounding areas, and the results show that the least-squares collocation method has the best performance, followed by the Kriging and multi-surface function methods. Furthermore, the edge effects of the spherical harmonics method are significantly affected by the sparseness and geometric distribution of the input data. In general, the least-squares collocation method is superior in terms of its robustness, edge effect, error distribution and stability, while the other methods each have several positive features.
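
    An illustrative sketch of one of the gridding methods compared above, the multi-surface (Hardy multiquadric) function method: each velocity component is modelled as a weighted sum of multiquadric kernels centred on the stations, with weights obtained from a linear solve. The station layout, velocities, and smoothing parameter are synthetic; the other methods (least-squares collocation, Kriging, spherical harmonics) are not reproduced here.

```python
import numpy as np

# Multi-surface (multiquadric) gridding of a scattered velocity component.
# Station positions, velocities and the shape parameter c are synthetic assumptions.

rng = np.random.default_rng(5)
sites = rng.uniform(0, 10, size=(60, 2))                       # scattered station positions
v_east = 2.0 + 0.3 * sites[:, 0] + 0.1 * rng.normal(size=60)   # toy east velocities (mm/a)

def multiquadric(a, b, c=1.0):
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.sqrt(d2 + c ** 2)

A = multiquadric(sites, sites)
w = np.linalg.solve(A, v_east)                                 # kernel weights

gx, gy = np.meshgrid(np.linspace(0, 10, 11), np.linspace(0, 10, 11))
grid = np.column_stack([gx.ravel(), gy.ravel()])
v_grid = multiquadric(grid, sites) @ w                         # gridded east component

print("gridded east-velocity range: %.2f to %.2f mm/a" % (v_grid.min(), v_grid.max()))
```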

  15. ANTONIA perfusion and stroke. A software tool for the multi-purpose analysis of MR perfusion-weighted datasets and quantitative ischemic stroke assessment.

    PubMed

    Forkert, N D; Cheng, B; Kemmling, A; Thomalla, G; Fiehler, J

    2014-01-01

    The objective of this work is to present the software tool ANTONIA, which has been developed to facilitate a quantitative analysis of perfusion-weighted MRI (PWI) datasets in general as well as the subsequent multi-parametric analysis of additional datasets for the specific purpose of acute ischemic stroke patient dataset evaluation. Three different methods for the analysis of DSC or DCE PWI datasets are currently implemented in ANTONIA, which can be case-specifically selected based on the study protocol. These methods comprise a curve fitting method as well as a deconvolution-based and deconvolution-free method integrating a previously defined arterial input function. The perfusion analysis is extended for the purpose of acute ischemic stroke analysis by additional methods that enable an automatic atlas-based selection of the arterial input function, an analysis of the perfusion-diffusion and DWI-FLAIR mismatch as well as segmentation-based volumetric analyses. For reliability evaluation, the described software tool was used by two observers for quantitative analysis of 15 datasets from acute ischemic stroke patients to extract the acute lesion core volume, FLAIR ratio, perfusion-diffusion mismatch volume with manually as well as automatically selected arterial input functions, and follow-up lesion volume. The results of this evaluation revealed that the described software tool leads to highly reproducible results for all parameters if the automatic arterial input function selection method is used. Due to the broad selection of processing methods that are available in the software tool, ANTONIA is especially helpful to support image-based perfusion and acute ischemic stroke research projects.

  16. Model's sparse representation based on reduced mixed GMsFE basis methods

    NASA Astrophysics Data System (ADS)

    Jiang, Lijian; Li, Qiuqi

    2017-06-01

    In this paper, we propose a model's sparse representation based on reduced mixed generalized multiscale finite element (GMsFE) basis methods for elliptic PDEs with random inputs. A typical application of such elliptic PDEs is flow in heterogeneous random porous media. The mixed generalized multiscale finite element method (GMsFEM) is one of the accurate and efficient approaches to solving the flow problem on a coarse grid and obtaining the velocity with local mass conservation. When the inputs of the PDEs are parameterized by random variables, the GMsFE basis functions usually depend on the random parameters. This leads to a large number of degrees of freedom for the mixed GMsFEM and substantially impacts the computational efficiency. In order to overcome this difficulty, we develop reduced mixed GMsFE basis methods such that the multiscale basis functions are independent of the random parameters and span a low-dimensional space. To this end, a greedy algorithm is used to find a set of optimal samples from a training set scattered in the parameter space. Reduced mixed GMsFE basis functions are constructed based on the optimal samples using two optimal sampling strategies: basis-oriented cross-validation and proper orthogonal decomposition. Although the dimension of the space spanned by the reduced mixed GMsFE basis functions is much smaller than the dimension of the original full order model, the online computation still depends on the number of coarse degrees of freedom. To significantly improve the online computation, we integrate the reduced mixed GMsFE basis methods with sparse tensor approximation and obtain a sparse representation for the model's outputs. The sparse representation is very efficient for evaluating the model's outputs for many instances of parameters. To illustrate the efficacy of the proposed methods, we present a few numerical examples for elliptic PDEs with multiscale and random inputs. In particular, a two-phase flow model in random porous media is simulated by the proposed sparse representation method.

  17. Model's sparse representation based on reduced mixed GMsFE basis methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang, Lijian, E-mail: ljjiang@hnu.edu.cn; Li, Qiuqi, E-mail: qiuqili@hnu.edu.cn

    2017-06-01

    In this paper, we propose a model's sparse representation based on reduced mixed generalized multiscale finite element (GMsFE) basis methods for elliptic PDEs with random inputs. A typical application of such elliptic PDEs is flow in heterogeneous random porous media. The mixed generalized multiscale finite element method (GMsFEM) is one of the accurate and efficient approaches to solving the flow problem on a coarse grid and obtaining the velocity with local mass conservation. When the inputs of the PDEs are parameterized by random variables, the GMsFE basis functions usually depend on the random parameters. This leads to a large number of degrees of freedom for the mixed GMsFEM and substantially impacts the computational efficiency. In order to overcome this difficulty, we develop reduced mixed GMsFE basis methods such that the multiscale basis functions are independent of the random parameters and span a low-dimensional space. To this end, a greedy algorithm is used to find a set of optimal samples from a training set scattered in the parameter space. Reduced mixed GMsFE basis functions are constructed based on the optimal samples using two optimal sampling strategies: basis-oriented cross-validation and proper orthogonal decomposition. Although the dimension of the space spanned by the reduced mixed GMsFE basis functions is much smaller than the dimension of the original full order model, the online computation still depends on the number of coarse degrees of freedom. To significantly improve the online computation, we integrate the reduced mixed GMsFE basis methods with sparse tensor approximation and obtain a sparse representation for the model's outputs. The sparse representation is very efficient for evaluating the model's outputs for many instances of parameters. To illustrate the efficacy of the proposed methods, we present a few numerical examples for elliptic PDEs with multiscale and random inputs. In particular, a two-phase flow model in random porous media is simulated by the proposed sparse representation method.

  18. Perceptual Decoding Processes for Language in a Visual Mode and for Language in an Auditory Mode.

    ERIC Educational Resources Information Center

    Myerson, Rosemarie Farkas

    The purpose of this paper is to gain insight into the nature of the reading process through an understanding of the general nature of sensory processing mechanisms which reorganize and restructure input signals for central recognition, and an understanding of how the grammar of the language functions in defining the set of possible sentences in…

  19. Master control data handling program uses automatic data input

    NASA Technical Reports Server (NTRS)

    Alliston, W.; Daniel, J.

    1967-01-01

    General purpose digital computer program is applicable for use with analysis programs that require basic data and calculated parameters as input. It is designed to automate input data preparation for flight control computer programs, but it is general enough to permit application in other areas.

  20. Estimation of the lower and upper bounds on the probability of failure using subset simulation and random set theory

    NASA Astrophysics Data System (ADS)

    Alvarez, Diego A.; Uribe, Felipe; Hurtado, Jorge E.

    2018-02-01

    Random set theory is a general framework which comprises uncertainty in the form of probability boxes, possibility distributions, cumulative distribution functions, Dempster-Shafer structures or intervals; in addition, the dependence between the input variables can be expressed using copulas. In this paper, the lower and upper bounds on the probability of failure are calculated by means of random set theory. In order to accelerate the calculation, a well-known and efficient probability-based reliability method known as subset simulation is employed. This method is especially useful for finding small failure probabilities in both low- and high-dimensional spaces, disjoint failure domains and nonlinear limit state functions. The proposed methodology represents a drastic reduction of the computational labor implied by plain Monte Carlo simulation for problems defined with a mixture of representations for the input variables, while delivering similar results. Numerical examples illustrate the efficiency of the proposed approach.
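
    A minimal sketch of the random-set bounding idea described above: one input is a precise random variable, the other is known only as an interval around a random centre (a simple random set). For each sample, the limit state is evaluated at the interval endpoints, giving lower and upper failure indicators and hence bounds on the failure probability. Plain Monte Carlo is used here instead of the subset simulation accelerator of the paper, and the limit state g is an arbitrary example.

```python
import numpy as np

# Lower/upper failure-probability bounds from interval-valued (random set) inputs.
# Limit state, distributions and interval width are illustrative assumptions.

rng = np.random.default_rng(6)
n = 200_000

def g(x1, x2):                       # failure when g <= 0 (toy limit state)
    return 5.0 - x1 - x2

x1 = rng.normal(1.0, 1.0, size=n)                 # precise probabilistic input
x2_centre = rng.normal(1.0, 0.5, size=n)          # imprecise input: interval +/- 0.5
x2_lo, x2_hi = x2_centre - 0.5, x2_centre + 0.5

# g is decreasing in x2, so its extremes over each focal interval sit at the endpoints.
g_min, g_max = g(x1, x2_hi), g(x1, x2_lo)

p_lower = np.mean(g_max <= 0)        # failure over the whole focal set (belief)
p_upper = np.mean(g_min <= 0)        # failure somewhere in the focal set (plausibility)
print(f"failure probability bounds: [{p_lower:.5f}, {p_upper:.5f}]")
```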

  1. Model reduction of nonsquare linear MIMO systems using multipoint matrix continued-fraction expansions

    NASA Technical Reports Server (NTRS)

    Guo, Tong-Yi; Hwang, Chyi; Shieh, Leang-San

    1994-01-01

    This paper deals with the multipoint Cauer matrix continued-fraction expansion (MCFE) for model reduction of linear multi-input multi-output (MIMO) systems with various numbers of inputs and outputs. A salient feature of the proposed MCFE approach to model reduction of MIMO systems with square transfer matrices is its equivalence to the matrix Pade approximation approach. The Cauer second form of the ordinary MCFE for a square transfer function matrix is generalized in this paper to a multipoint and nonsquare-matrix version. An interesting connection of the multipoint Cauer MCFE method to the multipoint matrix Pade approximation method is established. Also, algorithms for obtaining the reduced-degree matrix-fraction descriptions and reduced-dimensional state-space models from a transfer function matrix via the multipoint Cauer MCFE algorithm are presented. Practical advantages of using the multipoint Cauer MCFE are discussed and a numerical example is provided to illustrate the algorithms.

  2. Domain-General Brain Regions Do Not Track Linguistic Input as Closely as Language-Selective Regions.

    PubMed

    Blank, Idan A; Fedorenko, Evelina

    2017-10-11

    Language comprehension engages a cortical network of left frontal and temporal regions. Activity in this network is language-selective, showing virtually no modulation by nonlinguistic tasks. In addition, language comprehension engages a second network consisting of bilateral frontal, parietal, cingulate, and insular regions. Activity in this "multiple demand" (MD) network scales with comprehension difficulty, but also with cognitive effort across a wide range of nonlinguistic tasks in a domain-general fashion. Given the functional dissociation between the language and MD networks, their respective contributions to comprehension are likely distinct, yet such differences remain elusive. Prior neuroimaging studies have suggested that activity in each network covaries with some linguistic features that, behaviorally, influence on-line processing and comprehension. This sensitivity of the language and MD networks to local input characteristics has often been interpreted, implicitly or explicitly, as evidence that both networks track linguistic input closely, and in a manner consistent across individuals. Here, we used fMRI to directly test this assumption by comparing the BOLD signal time courses in each network across different people ( n = 45, men and women) listening to the same story. Language network activity showed fewer individual differences, indicative of closer input tracking, whereas MD network activity was more idiosyncratic and, moreover, showed lower reliability within an individual across repetitions of a story. These findings constrain cognitive models of language comprehension by suggesting a novel distinction between the processes implemented in the language and MD networks. SIGNIFICANCE STATEMENT Language comprehension recruits both language-specific mechanisms and domain-general mechanisms that are engaged in many cognitive processes. In the human cortex, language-selective mechanisms are implemented in the left-lateralized "core language network", whereas domain-general mechanisms are implemented in the bilateral "multiple demand" (MD) network. Here, we report the first direct comparison of the respective contributions of these networks to naturalistic story comprehension. Using a novel combination of neuroimaging approaches we find that MD regions track stories less closely than language regions. This finding constrains the possible contributions of the MD network to comprehension, contrasts with accounts positing that this network has continuous access to linguistic input, and suggests a new typology of comprehension processes based on their extent of input tracking. Copyright © 2017 the authors 0270-6474/17/3710000-13$15.00/0.

  3. Domain-General Brain Regions Do Not Track Linguistic Input as Closely as Language-Selective Regions

    PubMed Central

    Fedorenko, Evelina

    2017-01-01

    Language comprehension engages a cortical network of left frontal and temporal regions. Activity in this network is language-selective, showing virtually no modulation by nonlinguistic tasks. In addition, language comprehension engages a second network consisting of bilateral frontal, parietal, cingulate, and insular regions. Activity in this “multiple demand” (MD) network scales with comprehension difficulty, but also with cognitive effort across a wide range of nonlinguistic tasks in a domain-general fashion. Given the functional dissociation between the language and MD networks, their respective contributions to comprehension are likely distinct, yet such differences remain elusive. Prior neuroimaging studies have suggested that activity in each network covaries with some linguistic features that, behaviorally, influence on-line processing and comprehension. This sensitivity of the language and MD networks to local input characteristics has often been interpreted, implicitly or explicitly, as evidence that both networks track linguistic input closely, and in a manner consistent across individuals. Here, we used fMRI to directly test this assumption by comparing the BOLD signal time courses in each network across different people (n = 45, men and women) listening to the same story. Language network activity showed fewer individual differences, indicative of closer input tracking, whereas MD network activity was more idiosyncratic and, moreover, showed lower reliability within an individual across repetitions of a story. These findings constrain cognitive models of language comprehension by suggesting a novel distinction between the processes implemented in the language and MD networks. SIGNIFICANCE STATEMENT Language comprehension recruits both language-specific mechanisms and domain-general mechanisms that are engaged in many cognitive processes. In the human cortex, language-selective mechanisms are implemented in the left-lateralized “core language network”, whereas domain-general mechanisms are implemented in the bilateral “multiple demand” (MD) network. Here, we report the first direct comparison of the respective contributions of these networks to naturalistic story comprehension. Using a novel combination of neuroimaging approaches we find that MD regions track stories less closely than language regions. This finding constrains the possible contributions of the MD network to comprehension, contrasts with accounts positing that this network has continuous access to linguistic input, and suggests a new typology of comprehension processes based on their extent of input tracking. PMID:28871034

  4. Gradient-based adaptation of general gaussian kernels.

    PubMed

    Glasmachers, Tobias; Igel, Christian

    2005-10-01

    Gradient-based optimization of Gaussian kernel functions is considered. The gradient for the adaptation of scaling and rotation of the input space is computed so as to achieve invariance against linear transformations. This is done by using the exponential map as a parameterization of the kernel parameter manifold. By restricting the optimization to a constant-trace subspace, the kernel size can be controlled. This is, for example, useful for preventing overfitting when minimizing radius-margin generalization performance measures. The concepts are demonstrated by training hard-margin support vector machines on toy data.
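
    A hedged sketch of the parameterization idea described above: a general Gaussian kernel k(x, y) = exp(-(x - y)^T M (x - y)) is kept positive definite by writing M as the matrix exponential of a symmetric matrix B, so that unconstrained updates on B adapt scaling and rotation of the input space. The toy objective (kernel-target alignment) and the finite-difference gradient are illustrative simplifications of the paper's analytic gradients and SVM-based criteria.

```python
import numpy as np
import numpy.linalg as la

# General Gaussian kernel with M = expm(B), B symmetric (exponential map).
# Objective and gradient estimation are illustrative simplifications.

rng = np.random.default_rng(7)

def expm_sym(B):                       # matrix exponential of a symmetric matrix
    w, Q = la.eigh(B)
    return (Q * np.exp(w)) @ Q.T

def kernel_matrix(X, B):
    M = expm_sym(B)
    diff = X[:, None, :] - X[None, :, :]
    return np.exp(-np.einsum('ijk,kl,ijl->ij', diff, M, diff))

def alignment(X, y, B):                # simple kernel-target alignment objective
    K, Y = kernel_matrix(X, B), np.outer(y, y)
    return (K * Y).sum() / (la.norm(K) * la.norm(Y))

X = rng.normal(size=(60, 2))
y = np.sign(X[:, 0] + 0.2 * X[:, 1])
B = np.zeros((2, 2))
for _ in range(20):                    # crude finite-difference ascent on symmetric B
    G, eps = np.zeros_like(B), 1e-4
    for i in range(2):
        for j in range(i, 2):
            E = np.zeros_like(B); E[i, j] = E[j, i] = eps
            G[i, j] = G[j, i] = (alignment(X, y, B + E) - alignment(X, y, B - E)) / (2 * eps)
    B += 0.5 * G

print("final alignment: %.3f" % alignment(X, y, B))
```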

  5. General Nonlinear Ferroelectric Model v. Beta

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dong, Wen; Robbins, Josh

    2017-03-14

    The purpose of this software is to function as a generalized ferroelectric material model. The material model is designed to work with existing finite element packages by providing updated information on material properties that are nonlinear and dependent on loading history. The two major nonlinear phenomena this model captures are domain-switching and phase transformation. The software itself does not contain potentially sensitive material information and instead provides a framework for different physical phenomena observed within ferroelectric materials. The model is calibrated to a specific ferroelectric material through input parameters provided by the user.

  6. General-Purpose Serial Interface For Remote Control

    NASA Technical Reports Server (NTRS)

    Busquets, Anthony M.; Gupton, Lawrence E.

    1990-01-01

    Computer controls remote television camera. General-purpose controller developed to serve as interface between host computer and pan/tilt/zoom/focus functions on series of automated video cameras. Interface port based on 8251 programmable communications-interface circuit configured for tristated outputs, and connects controller system to any host computer with RS-232 input/output (I/O) port. Accepts byte-coded data from host, compares them with prestored codes in read-only memory (ROM), and closes or opens appropriate switches. Six output ports control opening and closing of as many as 48 switches. Operator controls remote television camera by speaking commands, in system including general-purpose controller.

  7. Getting quantitative about consequences of cross-ecosystem resource subsidies on recipient consumers

    USGS Publications Warehouse

    Richardson, John S.; Wipfli, Mark S.

    2016-01-01

    Most studies of cross-ecosystem resource subsidies have demonstrated positive effects on recipient consumer populations, often with very large effect sizes. However, it is important to move beyond these initial addition–exclusion experiments to consider the quantitative consequences for populations across gradients in the rates and quality of resource inputs. In our introduction to this special issue, we describe at least four potential models of the functional relationship between subsidy input rates and consumer responses, most of them asymptotic. Here we aim to advance our quantitative understanding of how subsidy inputs influence recipient consumers and their communities. In the papers that follow, fish are either the recipient consumers or the subsidy itself, in the form of carcasses of anadromous species. Advancing general, predictive models will enable us to further consider which other factors are potentially co-limiting (e.g., nutrients, other population interactions, physical habitat) and to better integrate resource subsidies into consumer–resource, biophysical dynamics models.

  8. Distributed approximating functional fit of the H3 ab initio potential-energy data of Liu and Siegbahn

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Frishman, A.; Hoffman, D.K.; Kouri, D.J.

    1997-07-01

    We report a distributed approximating functional (DAF) fit of the ab initio potential-energy data of Liu [J. Chem. Phys. 58, 1925 (1973)] and Siegbahn and Liu [ibid. 68, 2457 (1978)]. The DAF-fit procedure is based on a variational principle, and is systematic and general. Only two adjustable parameters occur in the DAF, leading to a fit which is both accurate (to the level inherent in the input data; RMS error of 0.2765 kcal/mol) and smooth ("well-tempered," in DAF terminology). In addition, the LSTH surface of Truhlar and Horowitz based on this same data [J. Chem. Phys. 68, 2466 (1978)] is itself approximated using only the values of the LSTH surface on the same grid coordinate points as the ab initio data, and the same DAF parameters. The purpose of this exercise is to demonstrate that the DAF delivers a well-tempered approximation to a known function that closely mimics the true potential-energy surface. As is to be expected, since there is only roundoff error present in the LSTH input data, even more significant figures of fitting accuracy are obtained. The RMS error of the DAF fit of the LSTH surface at the input points is 0.0274 kcal/mol, and a smooth fit, accurate to better than 1 cm^-1, can be obtained using more than 287 input data points. © 1997 American Institute of Physics.

  9. Method for guessing the response of a physical system to an arbitrary input

    DOEpatents

    Wolpert, David H.

    1996-01-01

    Stacked generalization is used to minimize the generalization errors of one or more generalizers acting on a known set of input values and output values representing a physical manifestation and a transformation of that manifestation, e.g., hand-written characters to ASCII characters, spoken speech to computer command, etc. Stacked generalization acts to deduce the biases of the generalizer(s) with respect to a known learning set and then correct for those biases. This deduction proceeds by generalizing in a second space whose inputs are the guesses of the original generalizers when taught with part of the learning set and trying to guess the rest of it, and whose output is the correct guess. Stacked generalization can be used to combine multiple generalizers or to provide a correction to a guess from a single generalizer.
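
    A minimal sketch of stacked generalization as described above: level-0 generalizers are trained on parts of the learning set and asked to guess the held-out part; a level-1 generalizer then learns, from those out-of-fold guesses, how to correct the level-0 biases. The two level-0 learners (linear least squares and 1-nearest-neighbour) and the toy regression data are illustrative choices, not those of the patent.

```python
import numpy as np

# Stacked generalization sketch: level-1 learner combines out-of-fold level-0 guesses.
# Level-0 learners and data are illustrative assumptions.

rng = np.random.default_rng(8)
X = rng.uniform(-2, 2, size=(200, 1))
y = np.sin(2 * X[:, 0]) + 0.1 * rng.normal(size=200)

def fit_linear(Xtr, ytr):
    A = np.column_stack([Xtr, np.ones(len(Xtr))])
    w, *_ = np.linalg.lstsq(A, ytr, rcond=None)
    return lambda Xte: np.column_stack([Xte, np.ones(len(Xte))]) @ w

def fit_1nn(Xtr, ytr):
    return lambda Xte: ytr[np.argmin(np.abs(Xte - Xtr.T), axis=1)]

level0 = [fit_linear, fit_1nn]
folds = np.array_split(rng.permutation(len(X)), 5)

# Level-1 training set: out-of-fold guesses of the level-0 generalizers.
Z = np.zeros((len(X), len(level0)))
for k, test_idx in enumerate(folds):
    train_idx = np.hstack([f for i, f in enumerate(folds) if i != k])
    for j, fit in enumerate(level0):
        Z[test_idx, j] = fit(X[train_idx], y[train_idx])(X[test_idx])

# Level-1 generalizer: linear combination of the level-0 guesses.
A = np.column_stack([Z, np.ones(len(Z))])
w1, *_ = np.linalg.lstsq(A, y, rcond=None)

# Final predictor: level-0 learners refit on all data, combined by level-1 weights.
predictors = [fit(X, y) for fit in level0]
def stacked_predict(Xte):
    Zte = np.column_stack([p(Xte) for p in predictors])
    return np.column_stack([Zte, np.ones(len(Zte))]) @ w1

Xte = np.linspace(-2, 2, 5)[:, None]
print(np.round(stacked_predict(Xte), 3))
```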

  10. Identification of single-input-single-output quantum linear systems

    NASA Astrophysics Data System (ADS)

    Levitt, Matthew; Guţă, Mădălin

    2017-03-01

    The purpose of this paper is to investigate system identification for single-input-single-output general (active or passive) quantum linear systems. For a given input, we address the following questions: (1) Which parameters can be identified by measuring the output? (2) How can we construct a system realization from sufficient input-output data? We show that for time-dependent inputs, the systems which cannot be distinguished are related by symplectic transformations acting on the space of system modes. This complements a previous result of Guţă and Yamamoto [IEEE Trans. Autom. Control 61, 921 (2016), 10.1109/TAC.2015.2448491] for passive linear systems. In the regime of stationary quantum noise input, the output is completely determined by the power spectrum. We define the notion of global minimality for a given power spectrum, and characterize globally minimal systems as those with a fully mixed stationary state. We show that in the case of systems with a cascade realization, the power spectrum completely fixes the transfer function, so the system can be identified up to a symplectic transformation. We give a method for constructing a globally minimal subsystem directly from the power spectrum. Restricting to passive systems, the analysis simplifies so that identifiability may be completely understood from the eigenvalues of a particular system matrix.

  11. Computing Functions by Approximating the Input

    ERIC Educational Resources Information Center

    Goldberg, Mayer

    2012-01-01

    In computing real-valued functions, it is ordinarily assumed that the input to the function is known, and it is the output that we need to approximate. In this work, we take the opposite approach: we show how to compute the values of some transcendental functions by approximating the input to these functions, and obtaining exact answers for their…

  12. Feedforward Inhibition and Synaptic Scaling – Two Sides of the Same Coin?

    PubMed Central

    Lücke, Jörg

    2012-01-01

    Feedforward inhibition and synaptic scaling are important adaptive processes that control the total input a neuron can receive from its afferents. While often studied in isolation, the two have been reported to co-occur in various brain regions. The functional implications of their interactions remain unclear, however. Based on a probabilistic modeling approach, we show here that fast feedforward inhibition and synaptic scaling interact synergistically during unsupervised learning. In technical terms, we model the input to a neural circuit using a normalized mixture model with Poisson noise. We demonstrate analytically and numerically that, in the presence of lateral inhibition introducing competition between different neurons, Hebbian plasticity and synaptic scaling approximate the optimal maximum likelihood solutions for this model. Our results suggest that, beyond its conventional use as a mechanism to remove undesired pattern variations, input normalization can make typical neural interaction and learning rules optimal on the stimulus subspace defined through feedforward inhibition. Furthermore, learning within this subspace is more efficient in practice, as it helps avoid locally optimal solutions. Our results suggest a close connection between feedforward inhibition and synaptic scaling which may have important functional implications for general cortical processing. PMID:22457610

  13. Feedforward inhibition and synaptic scaling--two sides of the same coin?

    PubMed

    Keck, Christian; Savin, Cristina; Lücke, Jörg

    2012-01-01

    Feedforward inhibition and synaptic scaling are important adaptive processes that control the total input a neuron can receive from its afferents. While often studied in isolation, the two have been reported to co-occur in various brain regions. The functional implications of their interactions remain unclear, however. Based on a probabilistic modeling approach, we show here that fast feedforward inhibition and synaptic scaling interact synergistically during unsupervised learning. In technical terms, we model the input to a neural circuit using a normalized mixture model with Poisson noise. We demonstrate analytically and numerically that, in the presence of lateral inhibition introducing competition between different neurons, Hebbian plasticity and synaptic scaling approximate the optimal maximum likelihood solutions for this model. Our results suggest that, beyond its conventional use as a mechanism to remove undesired pattern variations, input normalization can make typical neural interaction and learning rules optimal on the stimulus subspace defined through feedforward inhibition. Furthermore, learning within this subspace is more efficient in practice, as it helps avoid locally optimal solutions. Our results suggest a close connection between feedforward inhibition and synaptic scaling which may have important functional implications for general cortical processing.
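
    The following is a schematic numpy sketch of the kind of circuit these abstracts describe (the dimensions, learning rate, and softmax competition are our assumptions, not the authors' exact model): inputs are divisively normalized to a fixed total activity (standing in for feedforward inhibition), units compete via a softmax over Poisson log-likelihoods (lateral inhibition), and a Hebbian update followed by row normalization (synaptic scaling) drives the weights toward the mixture parameters:

```python
import numpy as np

rng = np.random.default_rng(1)
D, C, N = 16, 4, 2000                     # inputs, hidden units, samples

# Ground-truth prototype rate patterns and Poisson-noisy observations.
protos = rng.uniform(0.5, 3.0, size=(C, D))
Y = rng.poisson(protos[rng.integers(0, C, size=N)])

# "Feedforward inhibition": divisive normalization to a fixed total activity A.
A = 50.0
Yn = A * Y / np.maximum(Y.sum(axis=1, keepdims=True), 1)

W = rng.uniform(0.5, 1.5, size=(C, D))
W *= A / W.sum(axis=1, keepdims=True)     # initial synaptic scaling
eta = 0.01

for y in Yn:
    # Lateral inhibition: softmax competition over Poisson log-likelihoods.
    ll = y @ np.log(W).T - W.sum(axis=1)
    s = np.exp(ll - ll.max())
    s /= s.sum()
    # Hebbian update, then synaptic scaling (keep each row's total fixed).
    W += eta * s[:, None] * (y[None, :] - W)
    W *= A / W.sum(axis=1, keepdims=True)

# Rows of W should roughly align (up to permutation) with the prototypes.
print(np.round(W / W.sum(axis=1, keepdims=True), 2))
print(np.round(protos / protos.sum(axis=1, keepdims=True), 2))
```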

  14. The (virtual) conceptual necessity of quantum probabilities in cognitive psychology.

    PubMed

    Blutner, Reinhard; beim Graben, Peter

    2013-06-01

    We propose a way in which Pothos & Busemeyer (P&B) could strengthen their position. Taking a dynamic stance, we consider cognitive tests as functions that transfer a given input state into the state after testing. Under very general conditions, it can be shown that testable properties in cognition form an orthomodular lattice. Gleason's theorem then yields the conceptual necessity of quantum probabilities (QP).
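
    For reference, the mathematical step being invoked (notation ours, not the commentary's): Gleason's theorem states that, on a separable Hilbert space of dimension at least three, every countably additive probability measure on the lattice of orthogonal projections P has the form

```latex
\mu(P) = \operatorname{tr}(\rho P), \qquad \rho \ge 0, \quad \operatorname{tr}\rho = 1,
```

    so that once the testable cognitive properties are represented as projections on a suitable Hilbert space, the density-operator (quantum) probability rule follows.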

  15. USSR and Eastern Europe Scientific Abstracts, Electronics and Electrical Engineering, Number 27

    DTIC Science & Technology

    1977-02-10

    input and output conditions. The power section of the circuit is modified to permit triacs and thyristors, respectively, to function. The purpose of the... electronic materials, components, and devices, on circuit theory, pulse techniques, electromagnetic wave propagation, radar, quantum electronic theory... Lasers, Masers, Holography, Quasi-Optical; Microelectronics and General Circuit Theory and Information; Radars and Radio Navigation

  16. Optimal Output Trajectory Redesign for Invertible Systems

    NASA Technical Reports Server (NTRS)

    Devasia, S.

    1996-01-01

    Given a desired output trajectory, inversion-based techniques find input-state trajectories required to exactly track the output. These inversion-based techniques have been successfully applied to the endpoint tracking control of multijoint flexible manipulators and to aircraft control. The specified output trajectory uniquely determines the required input and state trajectories that are found through inversion. These input-state trajectories exactly track the desired output; however, they might not meet acceptable performance requirements. For example, during slewing maneuvers of flexible structures, the structural deformations, which depend on the required state trajectories, may be unacceptably large. Further, the required inputs might cause actuator saturation during an exact tracking maneuver, for example, in the flight control of conventional takeoff and landing aircraft. In such situations, a compromise is desired between the tracking requirement and other goals such as reduction of internal vibrations and prevention of actuator saturation; the desired output trajectory needs to be redesigned. Here, we pose the trajectory redesign problem as an optimization of a general quadratic cost function and solve it in the context of linear systems. The solution is obtained as an off-line prefilter of the desired output trajectory. An advantage of our technique is that the prefilter is independent of the particular trajectory. The prefilter can therefore be precomputed, which is a major advantage over other optimization approaches. Previous works have addressed the issue of preshaping inputs to minimize residual and in-maneuver vibrations for flexible structures, with the command preshaping computed off-line. Minimization of quadratic cost functions has also previously been used to preshape command inputs for disturbance rejection. All of these approaches are applicable when the inputs to the system are known a priori. Typically, however, outputs (not inputs) are specified in tracking problems, and hence the input trajectories have to be computed. The inputs to the system are difficult to determine for non-minimum phase systems such as flexible structures. One approach to this problem is to (1) choose a tracking controller (the desired output trajectory is now an input to the closed-loop system) and (2) redesign this input to the closed-loop system. Thus we effectively perform output redesign. These redesigns are, however, dependent on the choice of the tracking controller. Thus the controller optimization and trajectory redesign problems become coupled; this coupled optimization is still an open problem. In contrast, we decouple the trajectory redesign problem from the choice of feedback-based tracking controller. It is noted that our approach remains valid when a particular tracking controller is chosen. In addition, the formulation of our problem not only allows for the minimization of residual vibration, as in available techniques, but also allows for the optimal reduction of vibrations during the maneuver, e.g., in the attitude control of flexible spacecraft. We begin by formulating the optimal output trajectory redesign problem and then solve it in the context of general linear systems. This theory is then applied to an example flexible structure, and simulation results are provided.
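
    One generic form such a quadratic cost can take (our notation; the exact weighting used in the paper may differ) penalizes the deviation of the redesigned output from the desired one together with the inversion-based state and input trajectories needed to track it:

```latex
J\big[y_{d}^{\mathrm{new}}\big]
  = \int \Big[\big(y_{d}^{\mathrm{new}}-y_{d}\big)^{\!\top} Q\,\big(y_{d}^{\mathrm{new}}-y_{d}\big)
  + x_{\mathrm{inv}}^{\top} R\, x_{\mathrm{inv}}
  + u_{\mathrm{inv}}^{\top} S\, u_{\mathrm{inv}}\Big]\, dt,
```

    where Q trades off tracking fidelity against structural deformation (through R) and actuator effort (through S); because the plant is linear, minimizing such a cost yields a linear, trajectory-independent prefilter.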

  17. Trajectory Software With Upper Atmosphere Model

    NASA Technical Reports Server (NTRS)

    Barrett, Charles

    2012-01-01

    The Trajectory Software Applications 6.0 package for the DEC Alpha platform includes an implementation of the Jacchia-Lineberry Upper Atmosphere Density Model used in the Mission Control Center for International Space Station support. Previous trajectory software required an upper atmosphere model to support atmospheric drag calculations in the Mission Control Center. The functional operation will differ depending on the end use of the module. In general, the calling routine uses function-calling arguments to specify input to the processor. The atmosphere model then computes and returns the atmospheric density at the time of interest.

  18. Task-specific reorganization of the auditory cortex in deaf humans

    PubMed Central

    Bola, Łukasz; Zimmermann, Maria; Mostowski, Piotr; Jednoróg, Katarzyna; Marchewka, Artur; Rutkowski, Paweł; Szwed, Marcin

    2017-01-01

    The principles that guide large-scale cortical reorganization remain unclear. In the blind, several visual regions preserve their task specificity; ventral visual areas, for example, become engaged in auditory and tactile object-recognition tasks. It remains open whether task-specific reorganization is unique to the visual cortex or, alternatively, whether this kind of plasticity is a general principle applying to other cortical areas. Auditory areas can become recruited for visual and tactile input in the deaf. Although nonhuman data suggest that this reorganization might be task specific, human evidence has been lacking. Here we enrolled 15 deaf and 15 hearing adults into a functional MRI experiment during which they discriminated between temporally complex sequences of stimuli (rhythms). Both deaf and hearing subjects performed the task visually, in the central visual field. In addition, hearing subjects performed the same task in the auditory modality. We found that the visual task robustly activated the auditory cortex in deaf subjects, peaking in the posterior–lateral part of high-level auditory areas. This activation pattern was strikingly similar to the pattern found in hearing subjects performing the auditory version of the task. Although performing the visual task in deaf subjects induced an increase in functional connectivity between the auditory cortex and the dorsal visual cortex, no such effect was found in hearing subjects. We conclude that in deaf humans the high-level auditory cortex switches its input modality from sound to vision but preserves its task-specific activation pattern independent of input modality. Task-specific reorganization thus might be a general principle that guides cortical plasticity in the brain. PMID:28069964

  19. Task-specific reorganization of the auditory cortex in deaf humans.

    PubMed

    Bola, Łukasz; Zimmermann, Maria; Mostowski, Piotr; Jednoróg, Katarzyna; Marchewka, Artur; Rutkowski, Paweł; Szwed, Marcin

    2017-01-24

    The principles that guide large-scale cortical reorganization remain unclear. In the blind, several visual regions preserve their task specificity; ventral visual areas, for example, become engaged in auditory and tactile object-recognition tasks. It remains open whether task-specific reorganization is unique to the visual cortex or, alternatively, whether this kind of plasticity is a general principle applying to other cortical areas. Auditory areas can become recruited for visual and tactile input in the deaf. Although nonhuman data suggest that this reorganization might be task specific, human evidence has been lacking. Here we enrolled 15 deaf and 15 hearing adults into a functional MRI experiment during which they discriminated between temporally complex sequences of stimuli (rhythms). Both deaf and hearing subjects performed the task visually, in the central visual field. In addition, hearing subjects performed the same task in the auditory modality. We found that the visual task robustly activated the auditory cortex in deaf subjects, peaking in the posterior-lateral part of high-level auditory areas. This activation pattern was strikingly similar to the pattern found in hearing subjects performing the auditory version of the task. Although performing the visual task in deaf subjects induced an increase in functional connectivity between the auditory cortex and the dorsal visual cortex, no such effect was found in hearing subjects. We conclude that in deaf humans the high-level auditory cortex switches its input modality from sound to vision but preserves its task-specific activation pattern independent of input modality. Task-specific reorganization thus might be a general principle that guides cortical plasticity in the brain.

  20. Tools for Brain-Computer Interaction: A General Concept for a Hybrid BCI

    PubMed Central

    Müller-Putz, Gernot R.; Breitwieser, Christian; Cincotti, Febo; Leeb, Robert; Schreuder, Martijn; Leotta, Francesco; Tavella, Michele; Bianchi, Luigi; Kreilinger, Alex; Ramsay, Andrew; Rohm, Martin; Sagebaum, Max; Tonin, Luca; Neuper, Christa; Millán, José del R.

    2011-01-01

    The aim of this work is to present the development of a hybrid Brain-Computer Interface (hBCI) which combines existing input devices with a BCI. In this approach, the BCI should be available if the user wishes to extend the types of inputs available to an assistive technology system, but the user can also choose not to use the BCI at all; the BCI remains active in the background. The hBCI might decide on the one hand which input channel(s) offer the most reliable signal(s) and switch between input channels to improve information transfer rate, usability, or other factors, or on the other hand fuse various input channels. One major goal therefore is to bring the BCI technology to a level where it can be used in a maximum number of scenarios in a simple way. To achieve this, it is of great importance that the hBCI is able to operate reliably for long periods, recognizing and adapting to changes as it does so. This goal is only possible if many different subsystems in the hBCI can work together. Since one research institute alone cannot provide such different functionality, collaboration between institutes is necessary. To allow for such a collaboration, a new concept and common software framework is introduced. It consists of four interfaces connecting the classical BCI modules: signal acquisition, preprocessing, feature extraction, classification, and the application. It also provides the concepts of fusion and shared control. In a proof of concept, the functionality of the proposed system was demonstrated. PMID:22131973

  1. Transforming the Way We Teach Function Transformations

    ERIC Educational Resources Information Center

    Faulkenberry, Eileen Durand; Faulkenberry, Thomas J.

    2010-01-01

    In this article, the authors discuss "function," a well-defined rule that relates inputs to outputs. They have found that by using the input-output definition of "function," they can examine transformations of functions simply by looking at changes to input or output and the respective changes to the graph. Applying transformations to the input…

  2. Generic functional requirements for a NASA general-purpose data base management system

    NASA Technical Reports Server (NTRS)

    Lohman, G. M.

    1981-01-01

    Generic functional requirements for a general-purpose, multi-mission data base management system (DBMS) for application to remotely sensed scientific data bases are detailed. The motivation for utilizing DBMS technology in this environment is explained. The major requirements include: (1) a DBMS for scientific observational data; (2) a multi-mission capability; (3) user-friendliness; (4) extensive and integrated information about data; (5) robust languages for defining data structures and formats; (6) scientific data types and structures; (7) flexible physical access mechanisms; (8) ways of representing spatial relationships; (9) a high level nonprocedural interactive query and data manipulation language; (10) data base maintenance utilities; (11) high rate input/output and large data volume storage; and (12) adaptability to a distributed data base and/or data base machine configuration. Detailed functions are specified in a top-down hierarchic fashion. Implementation, performance, and support requirements are also given.

  3. Optimal inverse functions created via population-based optimization.

    PubMed

    Jennings, Alan L; Ordóñez, Raúl

    2014-06-01

    Finding optimal inputs for a multiple-input, single-output system is taxing for a system operator. Population-based optimization is used to create sets of functions that produce a locally optimal input based on a desired output. An operator or higher level planner could use one of the functions in real time. For the optimization, each agent in the population uses the cost and output gradients to take steps lowering the cost while maintaining their current output. When an agent reaches an optimal input for its current output, additional agents are generated in the output gradient directions. The new agents then settle to the local optima for the new output values. The set of associated optimal points forms an inverse function, via spline interpolation, from a desired output to an optimal input. In this manner, multiple locally optimal functions can be created. These functions are naturally clustered in input and output spaces allowing for a continuous inverse function. The operator selects the best cluster over the anticipated range of desired outputs and adjusts the set point (desired output) while maintaining optimality. This reduces the demand from controlling multiple inputs, to controlling a single set point with no loss in performance. Results are demonstrated on a sample set of functions and on a robot control problem.
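
    A minimal sketch of the settling step described above (the toy two-input, single-output system, cost, and step sizes are our own, not the authors' benchmark): each agent lowers the cost by following the cost gradient projected onto the subspace that leaves the output unchanged, and the resulting (output, optimal input) pairs sample the inverse function:

```python
import numpy as np

f = lambda u: u[0] + 2.0 * u[1]                 # toy system output
c = lambda u: (u[0] - 1.0) ** 2 + u[1] ** 2     # toy cost to minimize

def grad(fun, u, h=1e-6):
    # Central-difference gradient of a scalar function.
    g = np.zeros_like(u)
    for i in range(len(u)):
        d = np.zeros_like(u); d[i] = h
        g[i] = (fun(u + d) - fun(u - d)) / (2 * h)
    return g

def settle(u, steps=2000, eta=0.01):
    """Lower the cost while (approximately) holding the output fixed."""
    for _ in range(steps):
        gy, gc = grad(f, u), grad(c, u)
        step = gc - (gc @ gy) / (gy @ gy) * gy   # project out the output direction
        u = u - eta * step
    return u

samples = []
u = np.array([0.0, 0.0])
for _ in range(6):
    u = settle(u)                                # agent settles to a local optimum
    samples.append((f(u), u.copy()))
    gy = grad(f, u)
    u = u + 0.2 * gy / np.linalg.norm(gy)        # spawn a new agent at a higher output

# Interpolating through these (output, optimal input) pairs yields the inverse
# function from a desired set point to an optimal input.
for out, uopt in samples:
    print(f"output {out:6.3f} -> optimal input {np.round(uopt, 3)}")
```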

  4. Kinetic Energy of Hydrocarbons as a Function of Electron Density and Convolutional Neural Networks.

    PubMed

    Yao, Kun; Parkhill, John

    2016-03-08

    We demonstrate a convolutional neural network trained to reproduce the Kohn-Sham kinetic energy of hydrocarbons from an input electron density. The output of the network is used as a nonlocal correction to conventional local and semilocal kinetic functionals. We show that this approximation qualitatively reproduces Kohn-Sham potential energy surfaces when used with conventional exchange correlation functionals. The density which minimizes the total energy given by the functional is examined in detail. We identify several avenues to improve on this exploratory work, by reducing numerical noise and changing the structure of our functional. Finally we examine the features in the density learned by the neural network to anticipate the prospects of generalizing these models.
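
    Schematically, a network of this kind maps a discretized electron density to a scalar energy correction; the PyTorch sketch below is illustrative only (the layer sizes, grid resolution, and training step are our assumptions, not the architecture used by Yao and Parkhill):

```python
import torch
import torch.nn as nn

class KineticEnergyCNN(nn.Module):
    """Toy 3-D CNN from a one-channel density grid to a scalar energy."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),                    # global pooling
        )
        self.head = nn.Linear(16, 1)                    # scalar output

    def forward(self, density):                         # density: (B, 1, N, N, N)
        h = self.features(density).flatten(1)
        return self.head(h).squeeze(-1)

model = KineticEnergyCNN()
rho = torch.rand(4, 1, 16, 16, 16)        # batch of toy density grids
target = torch.rand(4)                    # toy reference kinetic energies
loss = nn.functional.mse_loss(model(rho), target)
loss.backward()                           # one supervised training step
print(float(loss))
```

    The learned output would then be added as a nonlocal correction to a conventional local or semilocal kinetic functional, as the abstract describes.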

  5. Study on general theory of kinematics and dynamics of wheeled mobile robots

    NASA Astrophysics Data System (ADS)

    Tsukishima, Takahiro; Sasaki, Ken; Takano, Masaharu; Inoue, Kenji

    1992-03-01

    This paper proposes a general theory of kinematics and dynamics of wheeled mobile robots (WMRs). Unlike robotic manipulators, which are modeled as 3-dimensional serial link mechanisms, WMRs are modeled as planar linkage mechanisms with multiple links branching out from the base and/or another link. Since this model resembles a tree with branches, it is called a 'tree-structured link'. The end of each link corresponds to a wheel in contact with the floor. In the dynamics of a WMR, the equation of motion is derived from joint input torques, incorporating wheel dynamics. The wheel dynamics determine the forces and moments acting on the wheels as a function of slip velocity. This slippage of the wheels is essential to WMR dynamics. It is also shown that the dynamics of a WMR reduce to kinematics when wheel slippage is neglected. Furthermore, the equation of dynamics is rewritten in velocity-input form, since most industrial motors are velocity controlled.

  6. Hierarchically clustered adaptive quantization CMAC and its learning convergence.

    PubMed

    Teddy, S D; Lai, E M K; Quek, C

    2007-11-01

    The cerebellar model articulation controller (CMAC) neural network (NN) is a well-established computational model of the human cerebellum. Nevertheless, there are two major drawbacks associated with the uniform quantization scheme of the CMAC network: (1) a constant output resolution associated with the entire input space and (2) the generalization-accuracy dilemma. Moreover, the size of the CMAC network is an exponential function of the number of inputs. Depending on the characteristics of the training data, only a small percentage of the entire set of CMAC memory cells is utilized. Therefore, the efficient utilization of the CMAC memory is a crucial issue. One approach is to quantize the input space nonuniformly. For existing nonuniformly quantized CMAC systems, there is a tradeoff between memory efficiency and computational complexity. Inspired by the underlying organizational mechanism of the human brain, this paper presents a novel CMAC architecture named hierarchically clustered adaptive quantization CMAC (HCAQ-CMAC). HCAQ-CMAC employs hierarchical clustering for the nonuniform quantization of the input space to identify significant input segments and subsequently allocate more memory cells to these regions. The stability of the HCAQ-CMAC network is theoretically guaranteed by the proof of its learning convergence. The performance of the proposed network is subsequently benchmarked against the original CMAC network, as well as two other existing CMAC variants, on two real-life applications, namely, automated control of car maneuvers and modeling of human blood glucose dynamics. The experimental results demonstrate that the HCAQ-CMAC network offers an efficient memory allocation scheme and improves the generalization and accuracy of the network output, achieving better or comparable performance with smaller memory usage. Index Terms: cerebellar model articulation controller (CMAC), hierarchical clustering, hierarchically clustered adaptive quantization CMAC (HCAQ-CMAC), learning convergence, nonuniform quantization.

  7. Passive dendrites enable single neurons to compute linearly non-separable functions.

    PubMed

    Cazé, Romain Daniel; Humphries, Mark; Gutkin, Boris

    2013-01-01

    Local supra-linear summation of excitatory inputs occurring in pyramidal cell dendrites, the so-called dendritic spikes, results in independent spiking dendritic sub-units, which turn pyramidal neurons into two-layer neural networks capable of computing linearly non-separable functions, such as the exclusive OR. Other neuron classes, such as interneurons, may possess only a few independent dendritic sub-units, or only passive dendrites where input summation is purely sub-linear, and where dendritic sub-units are only saturating. To determine if such neurons can also compute linearly non-separable functions, we enumerate, for a given parameter range, the Boolean functions implementable by a binary neuron model with a linear sub-unit and either a single spiking or a saturating dendritic sub-unit. We then analytically generalize these numerical results to an arbitrary number of non-linear sub-units. First, we show that a single non-linear dendritic sub-unit, in addition to the somatic non-linearity, is sufficient to compute linearly non-separable functions. Second, we analytically prove that, with a sufficient number of saturating dendritic sub-units, a neuron can compute all functions computable with purely excitatory inputs. Third, we show that these linearly non-separable functions can be implemented with at least two strategies: one where a dendritic sub-unit is sufficient to trigger a somatic spike; another where somatic spiking requires the cooperation of multiple dendritic sub-units. We formally prove that implementing the latter architecture is possible with both types of dendritic sub-units whereas the former is only possible with spiking dendrites. Finally, we show how linearly non-separable functions can be computed by a generic two-compartment biophysical model and a realistic neuron model of the cerebellar stellate cell interneuron. Taken together our results demonstrate that passive dendrites are sufficient to enable neurons to compute linearly non-separable functions.

  8. Passive Dendrites Enable Single Neurons to Compute Linearly Non-separable Functions

    PubMed Central

    Cazé, Romain Daniel; Humphries, Mark; Gutkin, Boris

    2013-01-01

    Local supra-linear summation of excitatory inputs occurring in pyramidal cell dendrites, the so-called dendritic spikes, results in independent spiking dendritic sub-units, which turn pyramidal neurons into two-layer neural networks capable of computing linearly non-separable functions, such as the exclusive OR. Other neuron classes, such as interneurons, may possess only a few independent dendritic sub-units, or only passive dendrites where input summation is purely sub-linear, and where dendritic sub-units are only saturating. To determine if such neurons can also compute linearly non-separable functions, we enumerate, for a given parameter range, the Boolean functions implementable by a binary neuron model with a linear sub-unit and either a single spiking or a saturating dendritic sub-unit. We then analytically generalize these numerical results to an arbitrary number of non-linear sub-units. First, we show that a single non-linear dendritic sub-unit, in addition to the somatic non-linearity, is sufficient to compute linearly non-separable functions. Second, we analytically prove that, with a sufficient number of saturating dendritic sub-units, a neuron can compute all functions computable with purely excitatory inputs. Third, we show that these linearly non-separable functions can be implemented with at least two strategies: one where a dendritic sub-unit is sufficient to trigger a somatic spike; another where somatic spiking requires the cooperation of multiple dendritic sub-units. We formally prove that implementing the latter architecture is possible with both types of dendritic sub-units whereas the former is only possible with spiking dendrites. Finally, we show how linearly non-separable functions can be computed by a generic two-compartment biophysical model and a realistic neuron model of the cerebellar stellate cell interneuron. Taken together our results demonstrate that passive dendrites are sufficient to enable neurons to compute linearly non-separable functions. PMID:23468600
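
    A toy sketch of the second (cooperation-based) strategy with purely saturating dendrites follows (the input grouping, saturation level, and somatic threshold are our own choices, not taken from the paper): the neuron fires only when activity is spread across both dendritic sub-units, a positive (monotone) input-output mapping that no single linear threshold unit on the same four inputs can reproduce.

```python
import itertools

def neuron(x1, x2, x3, x4, sat=1.0, theta=2.0):
    # Two saturating (sub-linear) dendritic sub-units feeding a somatic threshold.
    dend_a = min(x1 + x2, sat)             # dendrite A saturates at `sat`
    dend_b = min(x3 + x4, sat)             # dendrite B saturates at `sat`
    return int(dend_a + dend_b >= theta)   # soma needs both dendrites active

for x in itertools.product([0, 1], repeat=4):
    print(x, neuron(*x))

# The neuron fires for "scattered" inputs such as (1,0,1,0) but not for
# "clustered" ones such as (1,1,0,0) or (0,0,1,1). A single linear threshold
# unit would need w1+w3 >= t, w2+w4 >= t, w1+w2 < t, and w3+w4 < t
# simultaneously, which is impossible, so the function is linearly non-separable.
```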

  9. A class of all digital phase locked loops - Modeling and analysis

    NASA Technical Reports Server (NTRS)

    Reddy, C. P.; Gupta, S. C.

    1973-01-01

    An all-digital phase locked loop which tracks the phase of the incoming signal once per carrier cycle is proposed. The different elements and their functions, and the phase-lock operation, are explained in detail. The general digital loop operation is governed by a nonlinear difference equation, from which a suitable model is developed. The lock range for the general model is derived. The performance of the digital loop for phase-step and frequency-step inputs is studied for different levels of quantization, without a loop filter. The analytical results are checked by simulating the actual system on a digital computer.

  10. Convergence and Periodic Solutions for the Input Impedance of a Standard Ladder Network

    ERIC Educational Resources Information Center

    Ucak, C.; Acar, C.

    2007-01-01

    The input impedance of an infinite ladder network is computed by using the recursive relation and by assuming that the input impedance does not change when a new block is added to the network. However, this assumption is not true in general and standard textbooks do not always treat these networks correctly. This paper develops a general solution…
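
    A small numerical illustration of the recursion in question (element values are arbitrary and not taken from the article): for a resistive ladder with series element a and shunt element b, the section-by-section input impedance does converge to the fixed point of the recursion, whereas for reactive (e.g., L-C) ladders the same iteration can fail to converge or can cycle, which is exactly why the textbook shortcut of assuming the impedance is unchanged by adding a section needs care.

```python
# Input impedance of a ladder built one section at a time:
#     Z_{n+1} = a + (b * Z_n) / (b + Z_n)   (series a, then shunt b).
a, b = 1.0, 1.0
Z = b                      # termination; the resistive limit does not depend on it
for n in range(20):
    Z = a + (b * Z) / (b + Z)
    print(n + 1, round(Z, 6))

# The purely resistive sequence converges to the positive root of
# Z**2 - a*Z - a*b = 0, i.e. the golden ratio for a = b = 1.
print("fixed point:", (a + (a * a + 4 * a * b) ** 0.5) / 2)
```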

  11. Shocks and storm sudden commencements

    NASA Technical Reports Server (NTRS)

    Smith, E. J.; Slavin, J. A.; Zwickl, R. D.; Bame, S. J.

    1986-01-01

    Recent gains in understanding the relationship between shocks and storm sudden commencements (SSCs) are reviewed with emphasis on spacecraft observations in general and ISEE-3 observations in particular. The topics discussed include the relation of SSC amplitude to increase in solar wind pressure, the inference of shock properties from SSC amplitudes, SSCs as representative of the transient response of the magnetosphere to a step function input, and magnetic storms accompanying shocks.

  12. Capacity of the generalized PPM channel

    NASA Technical Reports Server (NTRS)

    Hamkins, Jon; Klimesh, Matt; McEliece, Bob; Moision, Bruce

    2004-01-01

    We show that the capacity of a generalized pulse-position-modulation (PPM) channel, where the input vectors may be any set that allows a transitive group of coordinate permutations, is achieved by a uniform input distribution.

  13. Seismic data classification and artificial neural networks: can software replace eyeballs?

    NASA Astrophysics Data System (ADS)

    Reusch, D. B.; Larson, A. M.

    2006-05-01

    Modern seismic datasets are providing many new opportunities for furthering our understanding of our planet, ranging from the deep earth to the sub-ice sheet interface. With many geophysical applications, the large volume of these datasets raises issues of manageability in areas such as quality control (QC) and event identification (EI). While not universally true, QC can be a labor intensive, subjective (and thus not entirely reproducible) and uninspiring task when such datasets are involved. The EI process shares many of these drawbacks but has the benefit of (usually) being closer to interesting science-based questions. Here we explore two techniques from the field of artificial neural networks (ANNs) that seek to reduce the time requirements and increase the objectivity of QC and EI on seismic datasets. In particular, we focus on QC of receiver functions from broadband seismic data collected by the 2000-2003 Transantarctic Mountains Seismic Experiment (TAMSEIS). Self-organizing maps (SOMs) enable unsupervised classification of large, complex geophysical data sets (e.g., time series of the atmospheric circulation) into a fixed number of distinct generalized patterns or modes representing the probability distribution function of the input data. These patterns are organized spatially as a two-dimensional grid such that distances represent similarity (adjacent patterns will be most similar). After training, input data are matched to their most similar generalized pattern to produce frequency maps, i.e., what fraction of the data is represented best by each individual SOM pattern. Given a priori information on data quality (from previous manual grading) or event type, a probabilistic classification can be developed that gives a likelihood for each category of interest for each SOM pattern. New data are classified by identifying the closest matching pattern (without retraining) and examining the associated probabilities. Feed-forward ANNs (FFNNs) are a supervised classification tool that has been successfully used in a number of seismic applications (e.g., Langer et al, 2003; Del Pezzo et al 2003). FFNNs require a correct answer for each training record so that the transfer functions between input predictors and output predictions can be developed during training. After training, applying new input data to the FFNNs classifies the input based on the existing transfer functions. Key to the success of both approaches is the selection of proper predictor variables that reflect, to varying degrees, the criteria humans use when doing these tasks manually. SOMs also have the potential to assist in this selection process. Because SOMs and FFNNs are used in different ways, they can address different aspects of the overall data classification problem in complementary ways. While not the first application of computers to these problems, ANN-based tools bring unique characteristics to the problem of capturing human decision-making processes.
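
    For concreteness, a minimal self-organizing map training loop of the kind described is sketched below (the grid size, decay schedules, and synthetic predictors are assumptions; this is not the authors' code): each record is matched to its best matching unit, the neighbourhood around that unit is pulled toward the record, and after training the assignment counts per node give the frequency maps mentioned above.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.standard_normal((500, 8))          # toy stand-in for QC predictors

rows, cols, dim = 6, 6, data.shape[1]
weights = rng.standard_normal((rows * cols, dim)) * 0.1
grid = np.array([(r, c) for r in range(rows) for c in range(cols)], float)

epochs = 20
for epoch in range(epochs):
    lr = 0.5 * (1 - epoch / epochs) + 0.01       # decaying learning rate
    sigma = 3.0 * (1 - epoch / epochs) + 0.5     # decaying neighbourhood width
    for x in rng.permutation(data):
        bmu = np.argmin(((weights - x) ** 2).sum(axis=1))   # best matching unit
        dist2 = ((grid - grid[bmu]) ** 2).sum(axis=1)       # distances on the map
        h = np.exp(-dist2 / (2 * sigma ** 2))               # neighbourhood function
        weights += lr * h[:, None] * (x - weights)

# Frequency map: how many records fall on each generalized pattern.
assignments = np.argmin(((data[:, None, :] - weights[None]) ** 2).sum(-1), axis=1)
print(np.bincount(assignments, minlength=rows * cols).reshape(rows, cols))
```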

  14. Evaluation of automated decisionmaking methodologies and development of an integrated robotic system simulation, volume 2, part 1. Appendix A: Software documentation

    NASA Technical Reports Server (NTRS)

    Lowrie, J. W.; Fermelia, A. J.; Haley, D. C.; Gremban, K. D.; Vanbaalen, J.; Walsh, R. W.

    1982-01-01

    Documentation of the preliminary software developed as a framework for a generalized integrated robotic system simulation is presented. The program structure is composed of three major functions controlled by a program executive. The three major functions are: system definition, analysis tools, and post processing. The system definition function handles user input of system parameters and definition of the manipulator configuration. The analysis tools function handles the computational requirements of the program. The post processing function allows for more detailed study of the results of analysis tool function executions. Also documented is the manipulator joint model software to be used as the basis of the manipulator simulation which will be part of the analysis tools capability.

  15. Time Scale Hierarchies in the Functional Organization of Complex Behaviors

    PubMed Central

    Perdikis, Dionysios; Huys, Raoul; Jirsa, Viktor K.

    2011-01-01

    Traditional approaches to cognitive modelling generally portray cognitive events in terms of ‘discrete’ states (point attractor dynamics) rather than in terms of processes, thereby neglecting the time structure of cognition. In contrast, more recent approaches explicitly address this temporal dimension, but typically provide no entry points into cognitive categorization of events and experiences. With the aim of incorporating both these aspects, we propose a framework for functional architectures. Our approach is grounded in the notion that arbitrarily complex (human) behaviour is decomposable into functional modes (elementary units), which we conceptualize as low-dimensional dynamical objects (structured flows on manifolds). The ensemble of modes at an agent’s disposal constitutes his/her functional repertoire. The modes may be subjected to additional dynamics (termed operational signals), in particular, instantaneous inputs, and a mechanism that sequentially selects a mode so that it temporarily dominates the functional dynamics. The inputs and selection mechanisms act on faster and slower time scales than that inherent to the modes, respectively. The dynamics across the three time scales are coupled via feedback, rendering the entire architecture autonomous. We illustrate the functional architecture in the context of serial behaviour, namely cursive handwriting. Subsequently, we investigate the possibility of recovering the contributions of functional modes and operational signals from the output, which appears to be possible only when examining the output phase flow (i.e., not from trajectories in phase space or time). PMID:21980278

  16. The advantage of flexible neuronal tunings in neural network models for motor learning

    PubMed Central

    Marongelli, Ellisha N.; Thoroughman, Kurt A.

    2013-01-01

    Human motor adaptation to novel environments is often modeled by a basis function network that transforms desired movement properties into estimated forces. This network employs a layer of nodes that have fixed broad tunings that generalize across the input domain. Learning is achieved by updating the weights of these nodes in response to training experience. This conventional model is unable to account for rapid flexibility observed in human spatial generalization during motor adaptation. However, added plasticity in the widths of the basis function tunings can achieve this flexibility, and several neurophysiological experiments have revealed flexibility in tunings of sensorimotor neurons. We found a model, Locally Weighted Projection Regression (LWPR), which uniquely possesses the structure of a basis function network in which both the weights and tuning widths of the nodes are updated incrementally during adaptation. We presented this LWPR model with training functions of different spatial complexities and monitored incremental updates to receptive field widths. An inverse pattern of dependence of receptive field adaptation on experienced error became evident, underlying both a relationship between generalization and complexity, and a unique behavior in which generalization always narrows after a sudden switch in environmental complexity. These results implicate a model that is flexible in both basis function widths and weights, like LWPR, as a viable alternative model for human motor adaptation that can account for previously observed plasticity in spatial generalization. This theory can be tested by using the behaviors observed in our experiments as novel hypotheses in human studies. PMID:23888141

  17. Programmable in vivo selection of arbitrary DNA sequences.

    PubMed

    Ben Yehezkel, Tuval; Biezuner, Tamir; Linshiz, Gregory; Mazor, Yair; Shapiro, Ehud

    2012-01-01

    The extraordinary fidelity, sensory and regulatory capacity of natural intracellular machinery is generally confined to their endogenous environment. Nevertheless, synthetic bio-molecular components have been engineered to interface with the cellular transcription, splicing and translation machinery in vivo by embedding functional features such as promoters, introns and ribosome binding sites, respectively, into their design. Tapping and directing the power of intracellular molecular processing towards synthetic bio-molecular inputs is potentially a powerful approach, albeit limited by our ability to streamline the interface of synthetic components with the intracellular machinery in vivo. Here we show how a library of synthetic DNA devices, each bearing an input DNA sequence and a logical selection module, can be designed to direct its own probing and processing by interfacing with the bacterial DNA mismatch repair (MMR) system in vivo and selecting for the most abundant variant, regardless of its function. The device provides proof of concept for programmable, function-independent DNA selection in vivo and provides a unique example of a logical-functional interface of an engineered synthetic component with a complex endogenous cellular system. Further research into the design, construction and operation of synthetic devices in vivo may lead to other functional devices that interface with other complex cellular processes for both research and applied purposes.

  18. Flexible Peripheral Component Interconnect Input/Output Card

    NASA Technical Reports Server (NTRS)

    Bigelow, Kirk K.; Jerry, Albert L.; Baricio, Alisha G.; Cummings, Jon K.

    2010-01-01

    The Flexible Peripheral Component Interconnect (PCI) Input/Output (I/O) Card is an innovative circuit board that provides functionality to interface between a variety of devices. It supports user-defined interrupts for interface synchronization, tracks system faults and failures, and includes checksum and parity evaluation of interface data. The card supports up to 16 channels of high-speed, half-duplex, low-voltage differential signaling (LVDS) serial data, and can interface combinations of serial and parallel devices. Placement of a processor within the field programmable gate array (FPGA) controls an embedded application with links to host memory over its PCI bus. The FPGA also provides protocol stacking and quick digital signal processor (DSP) functions to improve host performance. Hardware timers, counters, state machines, and other glue logic support interface communications. The Flexible PCI I/O Card provides an interface for a variety of dissimilar computer systems, featuring direct memory access functionality. The card has the following attributes: 8/16/32-bit, 33-MHz PCI r2.2 compliance; configurable for universal 3.3V/5V interface slots; a PCI interface based on PLX Technology's PCI9056 ASIC; general-use 512K x 16 SDRAM memory; general-use 1M x 16 Flash memory; an FPGA with 3K to 56K logical cells and embedded 27K to 198K bits of RAM; an I/O interface of 32-channel LVDS differential transceivers configured in eight 4-bit banks, with signaling rates to 200 MHz per channel; and a common SCSI-3, 68-pin interface connector.

  19. Information to cerebellum on spinal motor networks mediated by the dorsal spinocerebellar tract

    PubMed Central

    Stecina, Katinka; Fedirchuk, Brent; Hultborn, Hans

    2013-01-01

    The main objective of this review is to re-examine the type of information transmitted by the dorsal and ventral spinocerebellar tracts (DSCT and VSCT respectively) during rhythmic motor actions such as locomotion. Based on experiments in the 1960s and 1970s, the DSCT was viewed as a relay of peripheral sensory input to the cerebellum in general, and during rhythmic movements such as locomotion and scratch. In contrast, the VSCT was seen as conveying a copy of the output of spinal neuronal circuitry, including those circuits generating rhythmic motor activity (the spinal central pattern generator, CPG). Emerging anatomical and electrophysiological information on the putative subpopulations of DSCT and VSCT neurons suggest differentiated functions for some of the subpopulations. Multiple lines of evidence support the notion that sensory input is not the only source driving DSCT neurons and, overall, there is a greater similarity between DSCT and VSCT activity than previously acknowledged. Indeed the majority of DSCT cells can be driven by spinal CPGs for locomotion and scratch without phasic sensory input. It thus seems natural to propose the possibility that CPG input to some of these neurons may contribute to distinguishing sensory inputs that are a consequence of the active locomotion from those resulting from perturbations in the external world. PMID:23613538

  20. Analysis and synthesis of abstract data types through generalization from examples

    NASA Technical Reports Server (NTRS)

    Wild, Christian

    1987-01-01

    The discovery of general patterns of behavior from a set of input/output examples can be a useful technique in the automated analysis and synthesis of software systems. These generalized descriptions of the behavior form a set of assertions which can be used for validation, program synthesis, program testing, and run-time monitoring. Describing the behavior is treated as a learning process in which general patterns can be readily characterized. The learning algorithm must choose a transform function and define a subset of the transform space which is related to equivalence classes of behavior in the original domain. An algorithm for analyzing the behavior of abstract data types is presented and several examples are given. The use of the analysis for purposes of program synthesis is also discussed.

  1. Analysis of positron lifetime spectra in polymers

    NASA Technical Reports Server (NTRS)

    Singh, Jag J.; Mall, Gerald H.; Sprinkle, Danny R.

    1988-01-01

    A new procedure for analyzing multicomponent positron lifetime spectra in polymers was developed. It requires initial estimates of the lifetimes and intensities of the various components, which are readily obtainable by a standard spectrum-stripping process. These initial estimates, after convolution with the timing system resolution function, are then used as the inputs for a nonlinear least-squares analysis to compute the estimates that conform to a global error-minimization criterion. The convolution integral uses the full experimental resolution function, in contrast to previous studies where analytical approximations of it were utilized. These concepts were incorporated into a generalized Computer Program for Analyzing Positron Lifetime Spectra (PAPLS) in polymers. Its validity was tested using several artificially generated data sets. These data sets were also analyzed using the widely used POSITRONFIT program. In almost all cases, the PAPLS program gives a closer fit to the input values. The new procedure was applied to the analysis of several lifetime spectra measured in metal-ion-containing Epon-828 samples. The results are described.
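
    The underlying fit model is the standard multicomponent one (our notation): the measured spectrum is the timing resolution function convolved with a sum of decaying exponentials plus a background,

```latex
N(t) = B + R(t) \ast \sum_{i=1}^{n} \frac{I_i}{\tau_i}\, e^{-t/\tau_i},
```

    and the nonlinear least-squares stage adjusts the lifetimes \tau_i and intensities I_i, seeded by the spectrum-stripping estimates, with the full experimental R(t) used in the convolution rather than an analytical approximation.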

  2. The origins of metamodality in visual object area LO: Bodily topographical biases and increased functional connectivity to S1

    PubMed Central

    Tal, Zohar; Geva, Ran; Amedi, Amir

    2016-01-01

    Recent evidence from blind participants suggests that visual areas are task-oriented and sensory modality input independent rather than sensory-specific to vision. Specifically, visual areas are thought to retain their functional selectivity when using non-visual inputs (touch or sound) even without having any visual experience. However, this theory is still controversial since it is not clear whether this also characterizes the sighted brain, and whether the reported results in the sighted reflect basic fundamental a-modal processes or are an epiphenomenon to a large extent. In the current study, we addressed these questions using a series of fMRI experiments aimed to explore visual cortex responses to passive touch on various body parts and the coupling between the parietal and visual cortices as manifested by functional connectivity. We show that passive touch robustly activated the object selective parts of the lateral–occipital (LO) cortex while deactivating almost all other occipital–retinotopic-areas. Furthermore, passive touch responses in the visual cortex were specific to hand and upper trunk stimulations. Psychophysiological interaction (PPI) analysis suggests that LO is functionally connected to the hand area in the primary somatosensory homunculus (S1), during hand and shoulder stimulations but not to any of the other body parts. We suggest that LO is a fundamental hub that serves as a node between visual-object selective areas and S1 hand representation, probably due to the critical evolutionary role of touch in object recognition and manipulation. These results might also point to a more general principle suggesting that recruitment or deactivation of the visual cortex by other sensory input depends on the ecological relevance of the information conveyed by this input to the task/computations carried out by each area or network. This is likely to rely on the unique and differential pattern of connectivity for each visual area with the rest of the brain. PMID:26673114

  3. Speech versus manual control of camera functions during a telerobotic task

    NASA Technical Reports Server (NTRS)

    Bierschwale, John M.; Sampaio, Carlos E.; Stuart, Mark A.; Smith, Randy L.

    1989-01-01

    Voice input for control of camera functions was investigated in this study. Objectives were to (1) assess the feasibility of a voice-commanded camera control system, and (2) identify factors that differ between voice and manual control of camera functions. Subjects participated in a remote manipulation task that required extensive camera-aided viewing. Each subject was exposed to two conditions, voice and manual input, with a counterbalanced administration order. Voice input was found to be significantly slower than manual input for this task. However, in terms of remote manipulator performance errors and subject preference, there was no difference between modalities. Voice control of continuous camera functions is not recommended. It is believed that the use of voice input for discrete functions, such as multiplexing or camera switching, could aid performance. Hybrid mixes of voice and manual input may provide the best use of both modalities. This report contributes to a better understanding of the issues that affect the design of an efficient human/telerobot interface.

  4. GET electronics samples data analysis

    NASA Astrophysics Data System (ADS)

    Giovinazzo, J.; Goigoux, T.; Anvar, S.; Baron, P.; Blank, B.; Delagnes, E.; Grinyer, G. F.; Pancin, J.; Pedroza, J. L.; Pibernat, J.; Pollacco, E.; Rebii, A.; Roger, T.; Sizun, P.

    2016-12-01

    The General Electronics for TPCs (GET) has been developed to equip a generation of time projection chamber detectors for nuclear physics, and may also be used for a wider range of detector types. The goal of this paper is to propose initial analysis procedures to be applied to raw data samples from the GET system, in order to correct for systematic effects observed in test measurements. We also present a method to estimate the response function of the GET system channels. The response function is required in analyses where the input signal needs to be reconstructed, in terms of its time distribution, from the registered output samples.
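
    Where the input signal has to be reconstructed from the recorded samples, a regularized frequency-domain deconvolution with the estimated channel response is one standard option; the sketch below is illustrative only (the synthetic response shape, pulse positions, and regularization constant are assumptions, not the GET analysis code):

```python
import numpy as np

def deconvolve(y, h, eps=1e-2):
    """Wiener-style deconvolution of a recorded sample y by a response h."""
    n = len(y)
    H = np.fft.rfft(h, n)
    Y = np.fft.rfft(y, n)
    X = Y * np.conj(H) / (np.abs(H) ** 2 + eps)   # regularized inverse filter
    return np.fft.irfft(X, n)

# Toy demonstration with a synthetic shaper-like response and two input pulses.
t = np.arange(512)
h = np.exp(-t / 20.0) * (t / 20.0)
x = np.zeros(512); x[100] = 1.0; x[160] = 0.5
y = np.convolve(x, h)[:512] + 0.01 * np.random.default_rng(0).standard_normal(512)
x_rec = deconvolve(y, h)
print(np.argsort(x_rec)[-2:])   # should point near samples 100 and 160
```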

  5. Safe Upper-Bounds Inference of Energy Consumption for Java Bytecode Applications

    NASA Technical Reports Server (NTRS)

    Navas, Jorge; Mendez-Lojo, Mario; Hermenegildo, Manuel V.

    2008-01-01

    Many space applications such as sensor networks, on-board satellite-based platforms, on-board vehicle monitoring systems, etc. handle large amounts of data and analysis of such data is often critical for the scientific mission. Transmitting such large amounts of data to the remote control station for analysis is usually too expensive for time-critical applications. Instead, modern space applications are increasingly relying on autonomous on-board data analysis. All these applications face many resource constraints. A key requirement is to minimize energy consumption. Several approaches have been developed for estimating the energy consumption of such applications (e.g. [3, 1]) based on measuring actual consumption at run-time for large sets of random inputs. However, this approach has the limitation that it is in general not possible to cover all possible inputs. Using formal techniques offers the potential for inferring safe energy consumption bounds, thus being especially interesting for space exploration and safety-critical systems. We have proposed and implemented a general framework for resource usage analysis of Java bytecode [2]. The user defines a set of resource(s) of interest to be tracked and some annotations that describe the cost of elementary elements of the program for those resources. These values can be constants or, more generally, functions of the input data sizes. The analysis then statically derives an upper bound on the amount of those resources that the program as a whole will consume or provide, also as functions of the input data sizes. This article develops a novel application of the analysis of [2] to inferring safe upper bounds on the energy consumption of Java bytecode applications. We first use a resource model that describes the cost of each bytecode instruction in terms of the joules it consumes. With this resource model, we then generate energy consumption cost relations, which are then used to infer safe upper bounds. How energy consumption for each bytecode instruction is measured is beyond the scope of this paper. Instead, this paper is about how to infer safe energy consumption estimations assuming that those energy consumption costs are provided. For concreteness, we use a simplified version of an existing resource model [1] in which an energy consumption cost for individual Java opcodes is defined.

  6. Optimization with Fuzzy Data via Evolutionary Algorithms

    NASA Astrophysics Data System (ADS)

    Kosiński, Witold

    2010-09-01

    Ordered fuzzy numbers (OFN), which make it possible to deal with fuzzy inputs quantitatively, exactly in the same way as with real numbers, have recently been defined by the author and his two coworkers. The set of OFN forms a normed space and is a partially ordered ring. The case when the numbers are represented in the form of step functions, with finite resolution, simplifies all operations and the representation of defuzzification functionals. A general optimization problem with fuzzy data is formulated. Its fitness function attains fuzzy values. Since the adjoint space to the space of OFN is finite dimensional, a convex combination of all linear defuzzification functionals may be used to introduce a total order and a real-valued fitness function. Genetic operations on individuals representing fuzzy data are defined.

  7. Noninvasive quantification of cerebral metabolic rate for glucose in rats using 18F-FDG PET and standard input function

    PubMed Central

    Hori, Yuki; Ihara, Naoki; Teramoto, Noboru; Kunimi, Masako; Honda, Manabu; Kato, Koichi; Hanakawa, Takashi

    2015-01-01

    Measurement of arterial input function (AIF) for quantitative positron emission tomography (PET) studies is technically challenging. The present study aimed to develop a method based on a standard arterial input function (SIF) to estimate input function without blood sampling. We performed 18F-fluorodeoxyglucose studies accompanied by continuous blood sampling for measurement of AIF in 11 rats. Standard arterial input function was calculated by averaging AIFs from eight anesthetized rats, after normalization with body mass (BM) and injected dose (ID). Then, the individual input function was estimated using two types of SIF: (1) SIF calibrated by the individual's BM and ID (estimated individual input function, EIFNS) and (2) SIF calibrated by a single blood sampling as proposed previously (EIF1S). No significant differences in area under the curve (AUC) or cerebral metabolic rate for glucose (CMRGlc) were found across the AIF-, EIFNS-, and EIF1S-based methods using repeated measures analysis of variance. In the correlation analysis, AUC or CMRGlc derived from EIFNS was highly correlated with those derived from AIF and EIF1S. Preliminary comparison between AIF and EIFNS in three awake rats supported the idea that the method might be applicable to behaving animals. The present study suggests that the EIFNS method might serve as a noninvasive substitute for individual AIF measurement. PMID:25966947

  8. Noninvasive quantification of cerebral metabolic rate for glucose in rats using (18)F-FDG PET and standard input function.

    PubMed

    Hori, Yuki; Ihara, Naoki; Teramoto, Noboru; Kunimi, Masako; Honda, Manabu; Kato, Koichi; Hanakawa, Takashi

    2015-10-01

    Measurement of arterial input function (AIF) for quantitative positron emission tomography (PET) studies is technically challenging. The present study aimed to develop a method based on a standard arterial input function (SIF) to estimate input function without blood sampling. We performed (18)F-fluorodeoxyglucose studies accompanied by continuous blood sampling for measurement of AIF in 11 rats. Standard arterial input function was calculated by averaging AIFs from eight anesthetized rats, after normalization with body mass (BM) and injected dose (ID). Then, the individual input function was estimated using two types of SIF: (1) SIF calibrated by the individual's BM and ID (estimated individual input function, EIF(NS)) and (2) SIF calibrated by a single blood sampling as proposed previously (EIF(1S)). No significant differences in area under the curve (AUC) or cerebral metabolic rate for glucose (CMRGlc) were found across the AIF-, EIF(NS)-, and EIF(1S)-based methods using repeated measures analysis of variance. In the correlation analysis, AUC or CMRGlc derived from EIF(NS) was highly correlated with those derived from AIF and EIF(1S). Preliminary comparison between AIF and EIF(NS) in three awake rats supported the idea that the method might be applicable to behaving animals. The present study suggests that the EIF(NS) method might serve as a noninvasive substitute for individual AIF measurement.
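
    On one plausible reading of the calibration described above (the abstracts do not spell out the normalization convention, so the expressions below are our assumption), the population template is rescaled by the individual's injected dose and body mass, or anchored to a single blood sample drawn at time t_s:

```latex
\mathrm{EIF}_{\mathrm{NS}}(t) = \mathrm{SIF}(t)\,\frac{\mathrm{ID}}{\mathrm{BM}},
\qquad
\mathrm{EIF}_{\mathrm{1S}}(t) = \mathrm{SIF}(t)\,\frac{C_{\mathrm{blood}}(t_s)}{\mathrm{SIF}(t_s)},
```

    after which the estimated curve replaces the sampled AIF in the kinetic analysis used to compute CMRGlc.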

  9. Fusion of Asynchronous, Parallel, Unreliable Data Streams

    DTIC Science & Technology

    2010-09-01

    channels that might be used. The two channels chosen for this study, galvanic skin response (GSR) and pulse rate, are convenient and reasonably well...vector as NA. The MDS software tool, PERMAP, uses this same abbreviation. The impact of the lack of information may vary depending on the situation...of how PERMAP (and MDS in general) functions when the input parameters are varied. That is outlined in this section; the impact of those choices is

  10. Theory, Methods, and Applications of Nonlinear Control

    DTIC Science & Technology

    2012-08-29

    an application to Lotka - Volterra systems,” in Proceedings of the American Control Conference (St. Louis, MO, 10-12 June 2009), pp. 96-101. [MM10a...Mazenc, F., and M. Malisoff, “Strict Lyapunov function constructions under LaSalle conditions with an application to Lotka - Volterra systems,” IEEE...the tracking dynamics, (d) the applicability of the theory to a very general class of reference trajectories, and (e) the use of input-to-state

  11. Highly fault-tolerant parallel computation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spielman, D.A.

    We re-introduce the coded model of fault-tolerant computation in which the input and output of a computational device are treated as words in an error-correcting code. A computational device correctly computes a function in the coded model if its input and output, once decoded, are a valid input and output of the function. In the coded model, it is reasonable to hope to simulate all computational devices by devices whose size is greater by a constant factor but which are exponentially reliable even if each of their components can fail with some constant probability. We consider fine-grained parallel computations in which each processor has a constant probability of producing the wrong output at each time step. We show that any parallel computation that runs for time t on w processors can be performed reliably on a faulty machine in the coded model using w log^{O(1)} w processors and time t log^{O(1)} w. The failure probability of the computation will be at most t · exp(-w^{1/4}). The codes used to communicate with our fault-tolerant machines are generalized Reed-Solomon codes and can thus be encoded and decoded in O(n log^{O(1)} n) sequential time and are independent of the machine they are used to communicate with. We also show how coded computation can be used to self-correct many linear functions in parallel with arbitrarily small overhead.

  12. Fuzzy Neuron: Method and Hardware Realization

    NASA Technical Reports Server (NTRS)

    Krasowski, Michael J.; Prokop, Norman F.

    2014-01-01

    This innovation represents a method by which single-to-multi-input, single-to-many-output system transfer functions can be estimated from input/output data sets. This innovation can be run in the background while a system is operating under other means (e.g., through human operator effort), or may be utilized offline using data sets created from observations of the estimated system. It utilizes a set of fuzzy membership functions spanning the input space for each input variable. Linear combiners associated with combinations of input membership functions are used to create the output(s) of the estimator. Coefficients are adjusted online through the use of learning algorithms.
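
    As a rough illustration of the scheme described above (membership functions spanning each input, a linear combiner per membership combination, and online coefficient adaptation), the following sketch uses triangular memberships and an LMS update. All names, shapes, and learning-rate choices are assumptions for illustration, not the NASA implementation.

```python
import numpy as np

def tri_memberships(x, centers, width):
    """Triangular membership degrees of scalar x for the given centers."""
    return np.clip(1.0 - np.abs(x - centers) / width, 0.0, None)

class FuzzyEstimator:
    def __init__(self, centers_per_input, width, lr=0.1):
        self.centers = centers_per_input          # list of center arrays, one per input
        self.width = width
        self.lr = lr
        shape = tuple(len(c) for c in centers_per_input)
        self.coef = np.zeros(shape)               # one combiner weight per rule

    def _firing(self, x):
        mems = [tri_memberships(xi, c, self.width) for xi, c in zip(x, self.centers)]
        fire = np.ones(self.coef.shape)
        for idx in np.ndindex(self.coef.shape):   # rule strength = product of its memberships
            fire[idx] = np.prod([m[i] for m, i in zip(mems, idx)])
        return fire / max(fire.sum(), 1e-12)      # normalized firing strengths

    def predict(self, x):
        return float(np.sum(self._firing(x) * self.coef))

    def update(self, x, y):                       # one online LMS step
        fire = self._firing(x)
        err = y - np.sum(fire * self.coef)
        self.coef += self.lr * err * fire

# Learn y = x0 + 2*x1 on [0,1]^2 from streaming samples
est = FuzzyEstimator([np.linspace(0, 1, 5)] * 2, width=0.25)
rng = np.random.default_rng(0)
for _ in range(5000):
    x = rng.random(2)
    est.update(x, x[0] + 2 * x[1])
print(est.predict([0.3, 0.7]))   # should be close to 1.7
```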

  13. Responses of tree and insect herbivores to elevated nitrogen inputs: A meta-analysis

    NASA Astrophysics Data System (ADS)

    Li, Furong; Dudley, Tom L.; Chen, Baoming; Chang, Xiaoyu; Liang, Liyin; Peng, Shaolin

    2016-11-01

    Increasing atmospheric nitrogen (N) inputs have the potential to alter terrestrial ecosystem function through impacts on plant-herbivore interactions. The goal of our study is to search for a general pattern in the responses of tree characteristics important for herbivores, and of insect herbivore performance, to elevated N inputs. We conducted a meta-analysis based on 109 papers describing impacts of nitrogen inputs on tree characteristics and 16 papers on insect performance. Differences in plant characteristics and insect performance between broadleaves and conifers were also explored. Tree aboveground biomass, leaf biomass and leaf N concentration increased significantly under elevated N inputs. Elevated N inputs had no significant overall effect on concentrations of phenolic compounds and lignin but reduced tannin, a defensive chemical against insect herbivores. Additionally, overall insect herbivore performance (including development time, insect biomass, relative growth rate, and so on) was significantly increased by elevated N inputs. Given the inconsistent responses of broadleaves and conifers, broadleaves appear more likely than conifers to respond to elevated N inputs by increasing growth through light interception and photosynthesis rather than by producing more defensive chemicals. Moreover, the overall carbohydrate concentration was significantly reduced by 13.12% in broadleaves while it increased slightly in conifers. The overall tannin concentration decreased significantly, by 39.21%, in broadleaves, whereas a 5.8% decrease in conifers was not significant. The results of the analysis indicate that elevated N inputs would provide more food sources and increase tree palatability for insects, while the resistance of trees against their insect herbivores would be weakened, especially for broadleaves. Thus, global forest insect pest problems would be aggravated by elevated N inputs. As N inputs continue to rise, forest ecosystem management should pay more attention to insect pests, especially in regions dominated by broadleaves.

  14. Layer-specific input to distinct cell types in layer 6 of monkey primary visual cortex.

    PubMed

    Briggs, F; Callaway, E M

    2001-05-15

    Layer 6 of monkey V1 contains a physiologically and anatomically diverse population of excitatory pyramidal neurons. Distinctive arborization patterns of axons and dendrites within the functionally specialized cortical layers define eight types of layer 6 pyramidal neurons and suggest unique information processing roles for each cell type. To address how input sources contribute to cellular function, we examined the laminar sources of functional excitatory input onto individual layer 6 pyramidal neurons using scanning laser photostimulation. We find that excitatory input sources correlate with cell type. Class I neurons with axonal arbors selectively targeting magnocellular (M) recipient layer 4Calpha receive input from M-dominated layer 4B, whereas class I neurons whose axonal arbors target parvocellular (P) recipient layer 4Cbeta receive input from P-dominated layer 2/3. Surprisingly, these neuronal types do not differ significantly in the inputs they receive directly from layers 4Calpha or 4Cbeta. Class II cells, which lack dense axonal arbors within layer 4C, receive excitatory input from layers targeted by their local axons. Specifically, type IIA cells project axons to and receive input from the deep but not superficial layers. Type IIB neurons project to and receive input from the deepest and most superficial, but not middle layers. Type IIC neurons arborize throughout the cortical layers and tend to receive inputs from all cortical layers. These observations have implications for the functional roles of different layer 6 cell types in visual information processing.

  15. Standardization of a Hierarchical Transactive Control System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hammerstrom, Donald J.; Oliver, Terry V.; Melton, Ronald B.

    2010-12-03

    The authors describe work they have conducted toward the generalization and standardization of the transactive control approach that was first demonstrated in the Olympic Peninsula Project for the management of a transmission constraint. The newly generalized approach addresses several potential shortfalls of the prior approach: First, the authors have formalized a hierarchical node structure which defines the nodes and the functional signal pathways between these nodes. Second, by fully generalizing the inputs, outputs, and functional responsibilities of each node, the authors make the approach available to a much wider set of responsive assets and operational objectives. Third, the new, generalized approach defines transactive signals that include the predicted day-ahead future. This predictive feature allows the market-like bids and offers to become resolved iteratively over time, thus allowing the behaviors of responsive assets to be called upon both for the present and as future dispatch decisions are being made. The hierarchical transactive control approach is a key feature of a proposed Pacific Northwest smart grid demonstration.

  16. A class of all digital phase locked loops - Modelling and analysis.

    NASA Technical Reports Server (NTRS)

    Reddy, C. P.; Gupta, S. C.

    1972-01-01

    An all-digital phase-locked loop which tracks the phase of the incoming signal once per carrier cycle is proposed. The different elements and their functions, and the phase-lock operation, are explained in detail. The general digital loop operation is governed by a nonlinear difference equation from which a suitable model is developed. The lock range for the general model is derived. The performance of the digital loop for phase-step and frequency-step inputs, for different levels of quantization and without a loop filter, is studied. The analytical results are checked by simulating the actual system on a digital computer.
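
    To make the nonlinear difference-equation view concrete, the sketch below simulates a generic first-order digital loop with a quantized phase detector responding to a phase-step input. The update law, gain, and quantizer resolution are illustrative assumptions and are not taken from the paper.

```python
import numpy as np

# Generic first-order digital PLL sketch (not the specific loop of the paper):
# the loop corrects its phase estimate once per carrier cycle using a
# quantized phase-detector output, i.e. a simple nonlinear difference equation.
def simulate_dpll(phase_step=0.8, gain=0.3, q_levels=16, n_cycles=60):
    phi_err = phase_step                 # initial phase error after a phase-step input (rad)
    q = 2 * np.pi / q_levels             # quantizer resolution of the phase detector
    history = []
    for _ in range(n_cycles):
        detected = q * np.round(np.sin(phi_err) / q)   # quantized sinusoidal phase detector
        phi_err -= gain * detected                      # loop update, once per carrier cycle
        history.append(phi_err)
    return np.array(history)

err = simulate_dpll()
print(err[:5], err[-1])   # error shrinks toward the quantizer dead zone
```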

  17. Harmonize input selection for sediment transport prediction

    NASA Astrophysics Data System (ADS)

    Afan, Haitham Abdulmohsin; Keshtegar, Behrooz; Mohtar, Wan Hanna Melini Wan; El-Shafie, Ahmed

    2017-09-01

    In this paper, three modeling approaches, a Neural Network (NN), the Response Surface Method (RSM) and a response surface method based on Global Harmony Search (GHS), are applied to predict the daily time series of suspended sediment load. Generally, the input variables for forecasting the suspended sediment load are selected manually, based on the maximum correlations of the input variables, in the modeling approaches based on NN and RSM. Here, the RSM is improved so that the input variables are selected using the error terms of the training data via the GHS, giving the response surface method and global harmony search (RSM-GHS) modeling method. A second-order polynomial function with cross terms is applied to calibrate the time series of suspended sediment load with three, four and five input variables in the proposed RSM-GHS. The linear, square and cross terms of twenty input variables of antecedent values of suspended sediment load and water discharge are investigated to achieve the best predictions of the RSM based on the GHS method. The performances of the NN, RSM and proposed RSM-GHS, including both accuracy and simplicity, are compared through several comparative prediction and error statistics. The results illustrate that the proposed RSM-GHS is as uncomplicated as the RSM but performs better, with fewer errors and better correlation (R = 0.95, MAE = 18.09 (ton/day), RMSE = 25.16 (ton/day)) compared to the ANN (R = 0.91, MAE = 20.17 (ton/day), RMSE = 33.09 (ton/day)) and RSM (R = 0.91, MAE = 20.06 (ton/day), RMSE = 31.92 (ton/day)) for all types of input variables.
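
    The core regression step, a second-order polynomial response surface with cross terms fitted by least squares to a chosen set of input variables, can be sketched as follows. The synthetic data and the fixed three-input choice are assumptions; the GHS-based input selection itself is not reproduced.

```python
import numpy as np
from itertools import combinations_with_replacement

# Generic second-order response surface with cross terms, fit by least squares.
# Antecedent sediment loads and discharges would form the candidate inputs;
# the GHS-based selection of which lags to keep is not reproduced here.
def rsm_design(X):
    """Columns: 1, x_i, and all products x_i*x_j (squares and cross terms)."""
    n, d = X.shape
    cols = [np.ones(n)] + [X[:, i] for i in range(d)]
    cols += [X[:, i] * X[:, j] for i, j in combinations_with_replacement(range(d), 2)]
    return np.column_stack(cols)

rng = np.random.default_rng(1)
X = rng.random((200, 3))                       # e.g. three antecedent-value inputs
y = 2 + X[:, 0] + 0.5 * X[:, 1] * X[:, 2] + 0.05 * rng.standard_normal(200)
beta, *_ = np.linalg.lstsq(rsm_design(X), y, rcond=None)
pred = rsm_design(X) @ beta
rmse = np.sqrt(np.mean((pred - y) ** 2))
print(f"RMSE = {rmse:.3f}")
```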

  18. Regional or general anesthesia for fast-track hip and knee replacement - what is the evidence?

    PubMed Central

    Kehlet, Henrik; Aasvang, Eske Kvanner

    2015-01-01

    Regional anesthesia for knee and hip arthroplasty may have favorable outcome effects compared with general anesthesia by effectively blocking afferent input, providing initial postoperative analgesia, reducing endocrine metabolic responses, and providing sympathetic blockade with reduced bleeding and less risk of thromboembolic complications but with undesirable effects on lower limb motor and urinary bladder function. Old randomized studies supported the use of regional anesthesia with fewer postoperative pulmonary and thromboembolic complications, and this has been supported by recent large non-randomized epidemiological database cohort studies. In contrast, the data from newer randomized trials are conflicting, and recent studies using modern general anesthetic techniques may potentially support the use of general versus spinal anesthesia. In summary, the lack of properly designed large randomized controlled trials comparing modern general anesthesia and spinal anesthesia for knee and hip arthroplasty prevents final recommendations and calls for prospective detailed studies in this clinically important field. PMID:26918127

  19. Analysis and selection of optimal function implementations in massively parallel computer

    DOEpatents

    Archer, Charles Jens [Rochester, MN; Peters, Amanda [Rochester, MN; Ratterman, Joseph D [Rochester, MN

    2011-05-31

    An apparatus, program product and method optimize the operation of a parallel computer system by, in part, collecting performance data for a set of implementations of a function capable of being executed on the parallel computer system based upon the execution of the set of implementations under varying input parameters in a plurality of input dimensions. The collected performance data may be used to generate selection program code that is configured to call selected implementations of the function in response to a call to the function under varying input parameters. The collected performance data may be used to perform more detailed analysis to ascertain the comparative performance of the set of implementations of the function under the varying input parameters.

  20. Metabolic liver function measured in vivo by dynamic (18)F-FDGal PET/CT without arterial blood sampling.

    PubMed

    Horsager, Jacob; Munk, Ole Lajord; Sørensen, Michael

    2015-01-01

    Metabolic liver function can be measured by dynamic PET/CT with the radio-labelled galactose-analogue 2-[(18)F]fluoro-2-deoxy-D-galactose ((18)F-FDGal) in terms of hepatic systemic clearance of (18)F-FDGal (K, ml blood/ml liver tissue/min). The method requires arterial blood sampling from a radial artery (arterial input function), and the aim of this study was to develop a method for extracting an image-derived, non-invasive input function from a volume of interest (VOI). Dynamic (18)F-FDGal PET/CT data from 16 subjects without liver disease (healthy subjects) and 16 patients with liver cirrhosis were included in the study. Five different input VOIs were tested: four in the abdominal aorta and one in the left ventricle of the heart. Arterial input function from manual blood sampling was available for all subjects. K*-values were calculated using time-activity curves (TACs) from each VOI as input and compared to the K-value calculated using arterial blood samples as input. Each input VOI was tested on PET data reconstructed with and without resolution modelling. All five image-derived input VOIs yielded K*-values that correlated significantly with K calculated using arterial blood samples. Furthermore, TACs from two different VOIs yielded K*-values that did not statistically deviate from K calculated using arterial blood samples. A semicircle drawn in the posterior part of the abdominal aorta was the only VOI that was successful for both healthy subjects and patients as well as for PET data reconstructed with and without resolution modelling. Metabolic liver function using (18)F-FDGal PET/CT can be measured without arterial blood samples by using input data from a semicircle VOI drawn in the posterior part of the abdominal aorta.

  1. A nonlinear autoregressive Volterra model of the Hodgkin-Huxley equations.

    PubMed

    Eikenberry, Steffen E; Marmarelis, Vasilis Z

    2013-02-01

    We propose a new variant of the Volterra-type model with a nonlinear auto-regressive (NAR) component that is a suitable framework for describing the process of AP generation by the neuron membrane potential, and we apply it to input-output data generated by the Hodgkin-Huxley (H-H) equations. Volterra models use a functional series expansion to describe the input-output relation for most nonlinear dynamic systems, and are applicable to a wide range of physiologic systems. It is difficult, however, to apply the Volterra methodology to the H-H model because it is characterized by distinct subthreshold and suprathreshold dynamics. When threshold is crossed, an autonomous action potential (AP) is generated, the output becomes temporarily decoupled from the input, and the standard Volterra model fails. Therefore, in our framework, whenever the membrane potential exceeds some threshold, it is taken as a second input to a dual-input Volterra model. This model correctly predicts membrane voltage deflection both within the subthreshold region and during APs. Moreover, the model naturally generates a post-AP afterpotential and refractory period. It is known that the H-H model converges to a limit cycle in response to a constant current injection. This behavior is correctly predicted by the proposed model, while the standard Volterra model is incapable of generating such limit-cycle behavior. The inclusion of cross-kernels, which describe the nonlinear interactions between the exogenous and autoregressive inputs, is found to be absolutely necessary. The proposed model is general, non-parametric, and data-derived.

  2. AESOP- INTERACTIVE DESIGN OF LINEAR QUADRATIC REGULATORS AND KALMAN FILTERS

    NASA Technical Reports Server (NTRS)

    Lehtinen, B.

    1994-01-01

    AESOP was developed to solve a number of problems associated with the design of controls and state estimators for linear time-invariant systems. The systems considered are modeled in state-variable form by a set of linear differential and algebraic equations with constant coefficients. Two key problems solved by AESOP are the linear quadratic regulator (LQR) design problem and the steady-state Kalman filter design problem. AESOP is designed to be used in an interactive manner. The user can solve design problems and analyze the solutions in a single interactive session. Both numerical and graphical information are available to the user during the session. The AESOP program is structured around a list of predefined functions. Each function performs a single computation associated with control, estimation, or system response determination. AESOP contains over sixty functions and permits the easy inclusion of user defined functions. The user accesses these functions either by inputting a list of desired functions in the order they are to be performed, or by specifying a single function to be performed. The latter case is used when the choice of function and function order depends on the results of previous functions. The available AESOP functions are divided into several general areas including: 1) program control, 2) matrix input and revision, 3) matrix formation, 4) open-loop system analysis, 5) frequency response, 6) transient response, 7) transient function zeros, 8) LQR and Kalman filter design, 9) eigenvalues and eigenvectors, 10) covariances, and 11) user-defined functions. The most important functions are those that design linear quadratic regulators and Kalman filters. The user interacts with AESOP when using these functions by inputting design weighting parameters and by viewing displays of designed system response. Support functions obtain system transient and frequency responses, transfer functions, and covariance matrices. AESOP can also provide the user with open-loop system information including stability, controllability, and observability. The AESOP program is written in FORTRAN IV for interactive execution and has been implemented on an IBM 3033 computer using TSS 370. As currently configured, AESOP has a central memory requirement of approximately 2 Megs of 8 bit bytes. Memory requirements can be reduced by redimensioning arrays in the AESOP program. Graphical output requires adaptation of the AESOP plot routines to whatever device is available. The AESOP program was developed in 1984.
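
    The central computation behind AESOP's LQR design function can be reproduced with modern tools by solving the continuous-time algebraic Riccati equation, as sketched below. The double-integrator system and the weighting matrices are illustrative choices, not an AESOP example.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# The core LQR computation AESOP performs, expressed with modern tools:
# solve the continuous-time algebraic Riccati equation and form the gain.
# The double-integrator system and weights below are illustrative only.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.diag([10.0, 1.0])      # state weighting chosen by the designer
R = np.array([[1.0]])         # control weighting

P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)          # optimal state-feedback gain, u = -K x
closed_loop_eigs = np.linalg.eigvals(A - B @ K)
print("K =", K)
print("closed-loop eigenvalues:", closed_loop_eigs)
```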

  3. Uncertainty in modeled upper ocean heat content change

    NASA Astrophysics Data System (ADS)

    Tokmakian, Robin; Challenor, Peter

    2014-02-01

    This paper examines the uncertainty in the change in the heat content in the ocean component of a general circulation model. We describe the design and implementation of our statistical methodology. Using an ensemble of model runs and an emulator, we produce an estimate of the full probability distribution function (PDF) for the change in upper ocean heat in an Atmosphere/Ocean General Circulation Model, the Community Climate System Model v. 3, across a multi-dimensional input space. We show how the emulator of the GCM's heat content change and hence, the PDF, can be validated and how implausible outcomes from the emulator can be identified when compared to observational estimates of the metric. In addition, the paper describes how the emulator outcomes and related uncertainty information might inform estimates of the same metric from a multi-model Coupled Model Intercomparison Project phase 3 ensemble. We illustrate how to (1) construct an ensemble based on experiment design methods, (2) construct and evaluate an emulator for a particular metric of a complex model, (3) validate the emulator using observational estimates and explore the input space with respect to implausible outcomes and (4) contribute to the understanding of uncertainties within a multi-model ensemble. Finally, we estimate the most likely value for heat content change and its uncertainty for the model, with respect to both observations and the uncertainty in the value for the input parameters.
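
    A minimal version of the emulation step, training a Gaussian process on a small ensemble of (input parameters, scalar metric) pairs and predicting the metric with uncertainty elsewhere in the input space, is sketched below. A synthetic function stands in for the climate model output; the kernel and design sizes are assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# Toy emulator sketch: a Gaussian process is trained on a small "ensemble"
# of (input parameters -> scalar metric) pairs and then predicts the metric,
# with uncertainty, anywhere in the input space. A synthetic function stands
# in for the climate model's heat-content change.
rng = np.random.default_rng(42)
X_design = rng.uniform(0, 1, size=(30, 4))              # 30 ensemble members, 4 input parameters
def model_metric(x):                                     # stand-in for the GCM output
    return np.sin(3 * x[:, 0]) + x[:, 1] ** 2 + 0.5 * x[:, 2] * x[:, 3]
y_design = model_metric(X_design)

gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(length_scale=[0.3] * 4),
                              normalize_y=True)
gp.fit(X_design, y_design)

X_test = rng.uniform(0, 1, size=(200, 4))
mean, std = gp.predict(X_test, return_std=True)          # emulator predictions and uncertainty
print("max |emulator - model| on test inputs:", np.max(np.abs(mean - model_metric(X_test))))
print("typical predictive std:", std.mean())
```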

  4. Identification of Linear and Nonlinear Aerodynamic Impulse Responses Using Digital Filter Techniques

    NASA Technical Reports Server (NTRS)

    Silva, Walter A.

    1997-01-01

    This paper discusses the mathematical existence and the numerically-correct identification of linear and nonlinear aerodynamic impulse response functions. Differences between continuous-time and discrete-time system theories, which permit the identification and efficient use of these functions, will be detailed. Important input/output definitions and the concept of linear and nonlinear systems with memory will also be discussed. It will be shown that indicial (step or steady) responses (such as Wagner's function), forced harmonic responses (such as Theodorsen's function or those from doublet lattice theory), and responses to random inputs (such as gusts) can all be obtained from an aerodynamic impulse response function. This paper establishes the aerodynamic impulse response function as the most fundamental, and, therefore, the most computationally efficient, aerodynamic function that can be extracted from any given discrete-time, aerodynamic system. The results presented in this paper help to unify the understanding of classical two-dimensional continuous-time theories with modern three-dimensional, discrete-time theories. First, the method is applied to the nonlinear viscous Burger's equation as an example. Next the method is applied to a three-dimensional aeroelastic model using the CAP-TSD (Computational Aeroelasticity Program - Transonic Small Disturbance) code and then to a two-dimensional model using the CFL3D Navier-Stokes code. Comparisons of accuracy and computational cost savings are presented. Because of its mathematical generality, an important attribute of this methodology is that it is applicable to a wide range of nonlinear, discrete-time problems.
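
    For the linear case, the notion that an impulse response identified from discrete-time input/output records also yields indicial and other responses can be illustrated with a simple least-squares deconvolution, as below. The synthetic kernel and noise level are assumptions; the paper's CFD-based nonlinear identification is not reproduced.

```python
import numpy as np
from scipy.linalg import toeplitz

# Linear discrete-time illustration of the idea: recover an impulse response
# h from recorded input/output data by least squares on the convolution matrix.
rng = np.random.default_rng(0)
n, m = 400, 30                                   # record length, kernel memory
h_true = np.exp(-np.arange(m) / 6.0) * np.cos(0.4 * np.arange(m))
u = rng.standard_normal(n)                       # broadband input (e.g. a random gust)
y = np.convolve(u, h_true)[:n] + 0.01 * rng.standard_normal(n)

U = toeplitz(u, np.r_[u[0], np.zeros(m - 1)])    # n x m causal convolution matrix
h_est, *_ = np.linalg.lstsq(U, y, rcond=None)

step_response = np.cumsum(h_est)                 # indicial (Wagner-like) response follows directly
print("kernel error:", np.max(np.abs(h_est - h_true)))
```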

  6. Variance-based interaction index measuring heteroscedasticity

    NASA Astrophysics Data System (ADS)

    Ito, Keiichi; Couckuyt, Ivo; Poles, Silvia; Dhaene, Tom

    2016-06-01

    This work is motivated by the need to deal with models with high-dimensional input spaces of real variables. One way to tackle high-dimensional problems is to identify interaction or non-interaction among input parameters. We propose a new variance-based sensitivity interaction index that can detect and quantify interactions among the input variables of mathematical functions and computer simulations. The computation is very similar to that of Sobol's first-order sensitivity indices. The proposed interaction index can quantify the relative importance of input variables in interaction. Furthermore, detection of non-interaction for screening can be done with as few as 4n + 2 function evaluations, where n is the number of input variables. Using the interaction indices based on heteroscedasticity, the original function may be decomposed into a set of lower-dimensional functions which may then be analyzed separately.
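
    The first-order Sobol' indices that the proposed interaction index builds on can be estimated by Monte Carlo as sketched below (a Saltelli-type estimator). The Ishigami-style test function and sample sizes are illustrative; the heteroscedasticity-based interaction index itself is not implemented here.

```python
import numpy as np

# Monte Carlo estimate of first-order Sobol' indices (Saltelli-type estimator),
# the building block the proposed heteroscedasticity-based interaction index
# extends. The Ishigami test function below is only an example.
def f(X):
    return np.sin(X[:, 0]) + 7 * np.sin(X[:, 1]) ** 2 + 0.1 * X[:, 2] ** 4 * np.sin(X[:, 0])

rng = np.random.default_rng(7)
N, d = 20000, 3
A = rng.uniform(-np.pi, np.pi, (N, d))
B = rng.uniform(-np.pi, np.pi, (N, d))
fA, fB = f(A), f(B)
var = np.var(np.r_[fA, fB])

for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                       # replace column i of A with B's column i
    Si = np.mean(fB * (f(ABi) - fA)) / var    # Saltelli (2010) first-order estimator
    print(f"S_{i+1} ≈ {Si:.3f}")
```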

  7. 7 CFR 1424.4 - General eligibility rules.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... OF AGRICULTURE LOANS, PURCHASES, AND OTHER OPERATIONS BIOENERGY PROGRAM § 1424.4 General eligibility.... (d) For producers not purchasing raw commodity inputs, the production must equal or exceed that amount of production that would be calculated using the raw commodity inputs and the conversion factor...

  8. The Impacts and Economic Costs of Climate Change in Agriculture and the Costs and Benefits of Adaptation

    NASA Astrophysics Data System (ADS)

    Iglesias, A.; Quiroga, S.; Garrote, L.; Cunningham, R.

    2012-04-01

    This paper provides monetary estimates of the effects of agricultural adaptation to climate change in Europe. The model computes spatial crop productivity changes as a response to climate change, linking biophysical and socioeconomic components. It combines available data sets of crop productivity changes under climate change (Iglesias et al 2011, Ciscar et al 2011), statistical functions of productivity response to water and nitrogen inputs, catchment-level water availability, and environmental policy scenarios. Future global change scenarios are derived from several socio-economic futures of representative concentration pathways and regional climate models. The economic valuation is conducted using the GTAP general equilibrium model. The marginal productivity changes are used as an input to the economic general equilibrium model in order to analyse the worldwide economic impact of the agricultural changes induced by climate change. The study also includes the analysis of an adaptive capacity index computed using the socio-economic results of GTAP. The results are combined to prioritize agricultural adaptation policy needs in Europe.

  9. Dendritic sodium channels promote active decorrelation and reduce phase locking to parkinsonian input oscillations in model globus pallidus neurons

    PubMed Central

    Edgerton, Jeremy R.; Jaeger, Dieter

    2011-01-01

    Correlated firing among populations of neurons is present throughout the brain and is often rhythmic in nature, observable as an oscillatory fluctuation in the local field potential. Although rhythmic population activity is believed to be critical for normal function in many brain areas, synchronized neural oscillations are associated with disease states in other cases. In the globus pallidus (GP in rodents, homolog of the primate GPe), pairs of neurons generally have uncorrelated firing in normal animals despite an anatomical organization suggesting that they should receive substantial common input. By contrast, correlated and rhythmic GP firing is observed in animal models of Parkinson's disease (PD). Based in part on these findings it has been proposed that an important part of basal ganglia function is active decorrelation, whereby redundant information is compressed. Mechanisms that implement active decorrelation, and changes that cause it to fail in PD, are subjects of great interest. Rat GP neurons express fast, transient voltage-dependent sodium channels (NaF channels) in their dendrites, with the expression level being highest near asymmetric synapses. We recently showed that the dendritic NaF density strongly influences the responsiveness of model GP neurons to synchronous excitatory inputs. In the present study we use rat GP neuron models to show that dendritic NaF channel expression is a potential cellular mechanism of active decorrelation. We further show that model neurons with lower dendritic NaF channel expression have a greater tendency to phase lock with oscillatory synaptic input patterns like those observed in PD. PMID:21795543

  10. Genetic algorithm based input selection for a neural network function approximator with applications to SSME health monitoring

    NASA Technical Reports Server (NTRS)

    Peck, Charles C.; Dhawan, Atam P.; Meyer, Claudia M.

    1991-01-01

    A genetic algorithm is used to select the inputs to a neural network function approximator. In the application considered, modeling critical parameters of the space shuttle main engine (SSME), the functional relationship between measured parameters is unknown and complex. Furthermore, the number of possible input parameters is quite large. Many approaches have been used for input selection, but they are either subjective or do not consider the complex multivariate relationships between parameters. Due to the optimization and space searching capabilities of genetic algorithms they were employed to systematize the input selection process. The results suggest that the genetic algorithm can generate parameter lists of high quality without the explicit use of problem domain knowledge. Suggestions for improving the performance of the input selection process are also provided.
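
    A compact sketch of the idea, a genetic algorithm searching over binary input masks scored by validation error, is given below. Synthetic data and a ridge-regression scorer stand in for the SSME parameters and the neural network approximator; the population size, mutation rate, and size penalty are illustrative assumptions.

```python
import numpy as np

# Compact GA for input selection (binary chromosome = which inputs to use),
# scored by validation error of a simple linear approximator. Synthetic data
# and ridge regression stand in for the SSME parameters and neural network.
rng = np.random.default_rng(3)
n, d = 300, 12
X = rng.standard_normal((n, d))
y = X[:, 0] - 2 * X[:, 3] + 0.5 * X[:, 7] + 0.1 * rng.standard_normal(n)
Xtr, Xva, ytr, yva = X[:200], X[200:], y[:200], y[200:]

def fitness(mask):
    if not mask.any():
        return -np.inf
    A = Xtr[:, mask]
    w = np.linalg.solve(A.T @ A + 1e-3 * np.eye(A.shape[1]), A.T @ ytr)  # ridge fit
    err = np.mean((Xva[:, mask] @ w - yva) ** 2)
    return -err - 0.01 * mask.sum()            # penalize long input lists

pop = rng.random((30, d)) < 0.5                # initial population of input masks
for gen in range(40):
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[::-1][:10]]   # truncation selection
    children = []
    for _ in range(len(pop)):
        p1, p2 = parents[rng.integers(10)], parents[rng.integers(10)]
        cut = rng.integers(1, d)
        child = np.r_[p1[:cut], p2[cut:]]      # one-point crossover
        child ^= rng.random(d) < 0.05          # bit-flip mutation
        children.append(child)
    pop = np.array(children)

best = pop[np.argmax([fitness(m) for m in pop])]
print("selected inputs:", np.flatnonzero(best))   # ideally {0, 3, 7}
```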

  11. Understanding How Kurtosis Is Transferred from Input Acceleration to Stress Response and Its Influence on Fatigue Life

    NASA Technical Reports Server (NTRS)

    Kihm, Frederic; Rizzi, Stephen A.; Ferguson, Neil S.; Halfpenny, Andrew

    2013-01-01

    High cycle fatigue of metals typically occurs through long term exposure to time varying loads which, although modest in amplitude, give rise to microscopic cracks that can ultimately propagate to failure. The fatigue life of a component is primarily dependent on the stress amplitude response at critical failure locations. For most vibration tests, it is common to assume a Gaussian distribution of both the input acceleration and stress response. In real life, however, it is common to experience non-Gaussian acceleration input, and this can cause the response to be non-Gaussian. Examples of non-Gaussian loads include road irregularities such as potholes in the automotive world or turbulent boundary layer pressure fluctuations for the aerospace sector or more generally wind, wave or high amplitude acoustic loads. The paper first reviews some of the methods used to generate non-Gaussian excitation signals with a given power spectral density and kurtosis. The kurtosis of the response is examined once the signal is passed through a linear time invariant system. Finally an algorithm is presented that determines the output kurtosis based upon the input kurtosis, the input power spectral density and the frequency response function of the system. The algorithm is validated using numerical simulations. Direct applications of these results include improved fatigue life estimations and a method to accelerate shaker tests by generating high kurtosis, non-Gaussian drive signals.
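
    The phenomenon the algorithm addresses can be demonstrated numerically: a leptokurtic drive passed through a lightly damped linear time-invariant resonance produces a response whose kurtosis is pulled back toward the Gaussian value. The filter, sample rate, and burst model below are illustrative assumptions, not the paper's test case.

```python
import numpy as np
from scipy.signal import lfilter
from scipy.stats import kurtosis

# Numerical illustration of kurtosis transfer through a linear time-invariant
# system: a high-kurtosis drive is filtered by a lightly damped resonance and
# the response kurtosis is compared with the input's.
rng = np.random.default_rng(0)
n = 200_000
x = rng.standard_normal(n) * rng.choice([0.3, 3.0], size=n, p=[0.9, 0.1])  # bursty, leptokurtic drive

fs, fn, zeta = 2048.0, 80.0, 0.03          # sample rate, resonance frequency, damping ratio
w = 2 * np.pi * fn / fs
# discrete second-order resonator (approximate single-degree-of-freedom response)
a = [1.0, -2 * np.exp(-zeta * w) * np.cos(w * np.sqrt(1 - zeta**2)), np.exp(-2 * zeta * w)]
y = lfilter([1.0], a, x)

print("input kurtosis (excess):", kurtosis(x))
print("response kurtosis (excess):", kurtosis(y))   # pulled toward 0 (Gaussian) by the narrow-band filter
```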

  12. Brownian systems with spatially inhomogeneous activity

    NASA Astrophysics Data System (ADS)

    Sharma, A.; Brader, J. M.

    2017-09-01

    We generalize the Green-Kubo approach, previously applied to bulk systems of spherically symmetric active particles [J. Chem. Phys. 145, 161101 (2016), 10.1063/1.4966153], to include spatially inhomogeneous activity. The method is applied to predict the spatial dependence of the average orientation per particle and the density. The average orientation is given by an integral over the self part of the Van Hove function and a simple Gaussian approximation to this quantity yields an accurate analytical expression. Taking this analytical result as input to a dynamic density functional theory approximates the spatial dependence of the density in good agreement with simulation data. All theoretical predictions are validated using Brownian dynamics simulations.

  13. Input Range Testing for the General Mission Analysis Tool (GMAT)

    NASA Technical Reports Server (NTRS)

    Hughes, Steven P.

    2007-01-01

    This document contains a test plan for testing input values to the General Mission Analysis Tool (GMAT). The plan includes four primary types of information, which rigorously define all tests that should be performed to validate that GMAT will accept allowable inputs and deny disallowed inputs. The first is a complete list of all allowed object fields in GMAT. The second type of information is the test input to be attempted for each field. The third type of information is the allowable input values for all object fields in GMAT. The final piece of information is how GMAT should respond to both valid and invalid input. It is very important to note that the tests must be performed for both the graphical user interface and the script. The examples are illustrated from a scripting perspective because it is simpler to write up; however, the tests must be performed for both interfaces to GMAT.

  14. Compact universal logic gates realized using quantization of current in nanodevices.

    PubMed

    Zhang, Wancheng; Wu, Nan-Jian; Yang, Fuhua

    2007-12-12

    This paper proposes novel universal logic gates using the current quantization characteristics of nanodevices. In nanodevices like the electron waveguide (EW) and single-electron (SE) turnstile, the channel current is a staircase quantized function of its control voltage. We use this unique characteristic to compactly realize Boolean functions. First we present the concept of the periodic-threshold threshold logic gate (PTTG), and we build a compact PTTG using EW and SE turnstiles. We show that an arbitrary three-input Boolean function can be realized with a single PTTG, and an arbitrary four-input Boolean function can be realized by using two PTTGs. We then use one PTTG to build a universal programmable two-input logic gate which can be used to realize all two-input Boolean functions. We also build a programmable three-input logic gate by using one PTTG. Compared with linear threshold logic gates, with the PTTG one can build digital circuits more compactly. The proposed PTTGs are promising for future smart nanoscale digital system use.
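
    A behavioral, logic-level sketch of a periodic-threshold gate, where the output depends on which periodic window the weighted input sum falls in, shows how a single such gate realizes parity (XOR), something a single conventional linear-threshold gate cannot do. The weights and period are illustrative; no attempt is made to model the EW or SE turnstile physics.

```python
# Behavioral sketch of a periodic-threshold threshold logic gate (PTTG):
# the output depends on which periodic window the weighted input sum falls in,
# mimicking the staircase current-voltage characteristic of the nanodevices.
# This is a logic-level illustration, not a physical model of EW/SE turnstiles.
from itertools import product

def pttg(inputs, weights, period=1.0):
    s = sum(w * x for w, x in zip(weights, inputs))
    return int(s // period) % 2          # alternating output windows

# A single PTTG with unit weights computes parity (XOR for two inputs),
# which an ordinary linear-threshold gate cannot realize with one gate.
for bits in product([0, 1], repeat=3):
    print(bits, "->", pttg(bits, weights=[1, 1, 1]))
```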

  15. Development and Characterization of a Rate-Dependent Three-Dimensional Macroscopic Plasticity Model Suitable for Use in Composite Impact Problems

    NASA Technical Reports Server (NTRS)

    Goldberg, Robert K.; Carney, Kelly S.; DuBois, Paul; Hoffarth, Canio; Rajan, Subramaniam; Blankenhorn, Gunther

    2015-01-01

    Several key capabilities have been identified by the aerospace community as lacking in the material models for composite materials currently available within commercial transient dynamic finite element codes such as LS-DYNA. Some of the specific desired features that have been identified include the incorporation of both plasticity and damage within the material model, the capability of using the material model to analyze the response of both three-dimensional solid elements and two-dimensional shell elements, and the ability to simulate the response of composites composed of a variety of composite architectures, including laminates, weaves and braids. In addition, a need has been expressed to have a material model that utilizes tabulated, experimentally based input to define the evolution of plasticity and damage, as opposed to utilizing discrete input parameters (such as modulus and strength) and analytical functions based on curve fitting. To begin to address these needs, an orthotropic macroscopic plasticity-based model suitable for implementation within LS-DYNA has been developed. Specifically, the Tsai-Wu composite failure model has been generalized and extended to a strain-hardening-based orthotropic plasticity model with a non-associative flow rule. The coefficients in the yield function are determined based on tabulated stress-strain curves in the various normal and shear directions, along with selected off-axis curves. Incorporating rate dependence into the yield function is achieved by using a series of tabulated input curves, each at a different constant strain rate. The non-associative flow rule is used to compute the evolution of the effective plastic strain. Systematic procedures have been developed to determine the values of the various coefficients in the yield function and the flow rule based on the tabulated input data. An algorithm based on the radial return method has been developed to facilitate the numerical implementation of the material model. This paper presents in detail the development of the orthotropic plasticity model and the procedures used to obtain the required material parameters. Methods in which actual testing and selective numerical testing can be combined to yield the appropriate input data for the model are also described. A specific laminated polymer matrix composite is examined to demonstrate the application of the model.
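
    The algorithmic skeleton of the radial return step (elastic predictor, yield check against a tabulated hardening curve, plastic corrector) can be illustrated in one dimension as below. The moduli, the hardening table, and the single-pass corrector are assumptions for illustration; the orthotropic, rate-dependent Tsai-Wu-based yield function and non-associative flow rule of the actual model are not reproduced.

```python
import numpy as np

# One-dimensional return-mapping sketch with tabulated (piecewise-linear)
# hardening: elastic predictor followed by a plastic corrector. This only
# shows the algorithmic skeleton, not the orthotropic, rate-dependent model.
E = 70e3                                            # elastic modulus (MPa), illustrative
eps_p_table = np.array([0.0, 0.01, 0.05, 0.10])     # tabulated plastic strain
sig_y_table = np.array([200., 230., 260., 270.])    # tabulated yield stress (MPa)

def yield_stress(eps_p):
    return np.interp(eps_p, eps_p_table, sig_y_table)

def return_map(strain_inc, sigma, eps_p):
    sigma_trial = sigma + E * strain_inc            # elastic predictor
    f = abs(sigma_trial) - yield_stress(eps_p)
    if f <= 0.0:
        return sigma_trial, eps_p                   # step is purely elastic
    # plastic corrector: local hardening slope from the table
    # (single correction; a production code would iterate across table breakpoints)
    i = min(np.searchsorted(eps_p_table, eps_p, side="right") - 1, len(eps_p_table) - 2)
    H = (sig_y_table[i + 1] - sig_y_table[i]) / (eps_p_table[i + 1] - eps_p_table[i])
    dgamma = f / (E + H)                            # 1D consistency condition
    sigma_new = sigma_trial - np.sign(sigma_trial) * E * dgamma
    return sigma_new, eps_p + dgamma

sigma, eps_p = 0.0, 0.0
for _ in range(40):                                 # monotonic straining in 0.1% increments
    sigma, eps_p = return_map(1e-3, sigma, eps_p)
print(f"stress = {sigma:.1f} MPa, plastic strain = {eps_p:.4f}")
```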

  16. Quantum Mechanical Noise in a Michelson Interferometer with Nonclassical Inputs: Nonperturbative Treatment

    NASA Technical Reports Server (NTRS)

    King, Sun-Kun

    1996-01-01

    The variances of the quantum-mechanical noise in a two-input-port Michelson interferometer within the framework of the Loudon-Ni model were solved exactly in two general cases: (1) one coherent-state input and one squeezed-state input, and (2) two photon-number-state inputs. The low-intensity limit, exponentially decaying signals, and the noise due to mixing were discussed briefly.

  17. Extended H2 synthesis for multiple degree-of-freedom controllers

    NASA Technical Reports Server (NTRS)

    Hampton, R. David; Knospe, Carl R.

    1992-01-01

    H2 synthesis techniques are developed for a general multiple-input-multiple-output (MIMO) system subject to both stochastic and deterministic disturbances. The H2 synthesis is extended by incorporating power-spectral-density information about anticipated disturbances into the controller-design process, as well as by frequency weightings of generalized coordinates and control inputs. The methodology is applied to a simple single-input-multiple-output (SIMO) problem, analogous to the type of vibration isolation problem anticipated in microgravity research experiments.

  18. Differential inputs to striatal cholinergic and parvalbumin interneurons imply functional distinctions

    PubMed Central

    Klug, Jason R; Engelhardt, Max D; Cadman, Cara N; Li, Hao; Smith, Jared B; Ayala, Sarah; Williams, Elora W; Hoffman, Hilary

    2018-01-01

    Striatal cholinergic (ChAT) and parvalbumin (PV) interneurons exert powerful influences on striatal function in health and disease, yet little is known about the organization of their inputs. Here using rabies tracing, electrophysiology and genetic tools, we compare the whole-brain inputs to these two types of striatal interneurons and dissect their functional connectivity in mice. ChAT interneurons receive a substantial cortical input from associative regions of cortex, such as the orbitofrontal cortex. Amongst subcortical inputs, a previously unknown inhibitory thalamic reticular nucleus input to striatal PV interneurons is identified. Additionally, the external segment of the globus pallidus targets striatal ChAT interneurons, which is sufficient to inhibit tonic ChAT interneuron firing. Finally, we describe a novel excitatory pathway from the pedunculopontine nucleus that innervates ChAT interneurons. These results establish the brain-wide direct inputs of two major types of striatal interneurons and allude to distinct roles in regulating striatal activity and controlling behavior. PMID:29714166

  19. Complexity and non-commutativity of learning operations on graphs.

    PubMed

    Atmanspacher, Harald; Filk, Thomas

    2006-07-01

    We present results from numerical studies of supervised learning operations in small recurrent networks considered as graphs, leading from a given set of input conditions to predetermined outputs. Graphs that have optimized their output for particular inputs with respect to predetermined outputs are asymptotically stable and can be characterized by attractors, which form a representation space for an associative multiplicative structure of input operations. As the mapping from a series of inputs onto a series of such attractors generally depends on the sequence of inputs, this structure is generally non-commutative. Moreover, the size of the set of attractors, indicating the complexity of learning, is found to behave non-monotonically as learning proceeds. A tentative relation between this complexity and the notion of pragmatic information is indicated.

  20. Quantitative myocardial perfusion from static cardiac and dynamic arterial CT

    NASA Astrophysics Data System (ADS)

    Bindschadler, Michael; Branch, Kelley R.; Alessio, Adam M.

    2018-05-01

    Quantitative myocardial blood flow (MBF) estimation by dynamic contrast enhanced cardiac computed tomography (CT) requires multi-frame acquisition of contrast transit through the blood pool and myocardium to inform the arterial input and tissue response functions. Both the input and the tissue response functions for the entire myocardium are sampled with each acquisition. However, the long breath holds and frequent sampling can result in significant motion artifacts and relatively high radiation dose. To address these limitations, we propose and evaluate a new static cardiac and dynamic arterial (SCDA) quantitative MBF approach where (1) the input function is well sampled using either prediction from pre-scan timing bolus data or measured from dynamic thin slice ‘bolus tracking’ acquisitions, and (2) the whole-heart tissue response data is limited to one contrast enhanced CT acquisition. A perfusion model uses the dynamic arterial input function to generate a family of possible myocardial contrast enhancement curves corresponding to a range of MBF values. Combined with the timing of the single whole-heart acquisition, these curves generate a lookup table relating myocardial contrast enhancement to quantitative MBF. We tested the SCDA approach in 28 patients that underwent a full dynamic CT protocol both at rest and vasodilator stress conditions. Using measured input function plus single (enhanced CT only) or plus double (enhanced and contrast free baseline CT’s) myocardial acquisitions yielded MBF estimates with root mean square (RMS) error of 1.2 ml/min/g and 0.35 ml/min/g, and radiation dose reductions of 90% and 83%, respectively. The prediction of the input function based on timing bolus data and the static acquisition had an RMS error compared to the measured input function of 26.0% which led to MBF estimation errors greater than threefold higher than using the measured input function. SCDA presents a new, simplified approach for quantitative perfusion imaging with an acquisition strategy offering substantial radiation dose and computational complexity savings over dynamic CT.

  1. Dynamic extreme learning machine and its approximation capability.

    PubMed

    Zhang, Rui; Lan, Yuan; Huang, Guang-Bin; Xu, Zong-Ben; Soh, Yeng Chai

    2013-12-01

    Extreme learning machines (ELMs) have been proposed for generalized single-hidden-layer feedforward networks which need not be neuron alike and perform well in both regression and classification applications. The problem of determining the suitable network architectures is recognized to be crucial in the successful application of ELMs. This paper first proposes a dynamic ELM (D-ELM) where the hidden nodes can be recruited or deleted dynamically according to their significance to network performance, so that not only the parameters can be adjusted but also the architecture can be self-adapted simultaneously. Then, this paper proves in theory that such D-ELM using Lebesgue p-integrable hidden activation functions can approximate any Lebesgue p-integrable function on a compact input set. Simulation results obtained over various test problems demonstrate and verify that the proposed D-ELM does a good job reducing the network size while preserving good generalization performance.
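
    For orientation, a basic fixed-architecture ELM (random hidden-layer parameters, output weights solved by regularized least squares) is sketched below. The hidden-node count, activation, and test function are assumptions; the dynamic recruitment and deletion that defines D-ELM is not implemented.

```python
import numpy as np

# Basic extreme learning machine: random hidden-layer parameters are fixed and
# only the output weights are solved by least squares. The dynamic growing and
# pruning of hidden nodes that defines D-ELM is not reproduced in this sketch.
rng = np.random.default_rng(0)

def elm_fit(X, y, n_hidden=60, reg=1e-6):
    W = rng.standard_normal((X.shape[1], n_hidden))   # random input weights
    b = rng.standard_normal(n_hidden)                 # random biases
    H = np.tanh(X @ W + b)                            # hidden-layer activations
    beta = np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ y)
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

X = rng.uniform(-3, 3, size=(400, 1))
y = np.sinc(X[:, 0])                                  # target function to approximate
W, b, beta = elm_fit(X, y)
Xtest = np.linspace(-3, 3, 200)[:, None]
err = np.max(np.abs(elm_predict(Xtest, W, b, beta) - np.sinc(Xtest[:, 0])))
print("max approximation error:", err)
```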

  2. Design of synthetic biological logic circuits based on evolutionary algorithm.

    PubMed

    Chuang, Chia-Hua; Lin, Chun-Liang; Chang, Yen-Chang; Jennawasin, Tanagorn; Chen, Po-Kuei

    2013-08-01

    The construction of an artificial biological logic circuit using a systematic strategy is recognised as one of the most important topics in the development of synthetic biology. In this study, a real-structured genetic algorithm (RSGA), which combines the general advantages of the traditional real genetic algorithm with those of the structured genetic algorithm, is proposed to deal with the biological logic circuit design problem. A general model with the cis-regulatory input function and appropriate promoter activity functions is proposed to synthesise a wide variety of fundamental logic gates such as NOT, Buffer, AND, OR, NAND, NOR and XOR. The results obtained can be extended to synthesise advanced combinational and sequential logic circuits through topologically distinct connections. The resulting optimal designs of these logic gates and circuits are established via the RSGA. The in silico, computer-based modelling approach has been verified and shows clear advantages for this purpose.

  3. Probabilistic DHP adaptive critic for nonlinear stochastic control systems.

    PubMed

    Herzallah, Randa

    2013-06-01

    Following the recently developed algorithms for fully probabilistic control design for general dynamic stochastic systems (Herzallah & Kárný, 2011; Kárný, 1996), this paper presents the solution to the probabilistic dual heuristic programming (DHP) adaptive critic method (Herzallah & Kárný, 2011) and a randomized control algorithm for stochastic nonlinear dynamical systems. The purpose of the randomized control input design is to make the joint probability density function of the closed-loop system as close as possible to a predetermined ideal joint probability density function. This paper completes the previous work (Herzallah & Kárný, 2011; Kárný, 1996) by formulating and solving the fully probabilistic control design problem for the more general case of nonlinear stochastic discrete-time systems. A simulated example is used to demonstrate the use of the algorithm, and encouraging results have been obtained. Copyright © 2013 Elsevier Ltd. All rights reserved.

  4. A General Cognitive Diagnosis Model for Expert-Defined Polytomous Attributes

    ERIC Educational Resources Information Center

    Chen, Jinsong; de la Torre, Jimmy

    2013-01-01

    Polytomous attributes, particularly those defined as part of the test development process, can provide additional diagnostic information. The present research proposes the polytomous generalized deterministic inputs, noisy, "and" gate (pG-DINA) model to accommodate such attributes. The pG-DINA model allows input from substantive experts…

  5. Electrometer Amplifier With Overload Protection

    NASA Technical Reports Server (NTRS)

    Woeller, F. H.; Alexander, R.

    1986-01-01

    Circuit features low noise, input offset, and high linearity. Input preamplifier includes input-overload protection and nulling circuit to subtract dc offset from output. Prototype dc amplifier designed for use with ion detector has features desirable in general laboratory and field instrumentation.

  6. Functional Gene Differences in Soil Microbial Communities from Conventional, Low-Input, and Organic Farmlands

    PubMed Central

    Xue, Kai; Wu, Liyou; Deng, Ye; He, Zhili; Van Nostrand, Joy; Robertson, Philip G.; Schmidt, Thomas M.

    2013-01-01

    Various agriculture management practices may have distinct influences on soil microbial communities and their ecological functions. In this study, we utilized GeoChip, a high-throughput microarray-based technique containing approximately 28,000 probes for genes involved in nitrogen (N)/carbon (C)/sulfur (S)/phosphorus (P) cycles and other processes, to evaluate the potential functions of soil microbial communities under conventional (CT), low-input (LI), and organic (ORG) management systems at an agricultural research site in Michigan. Compared to CT, a high diversity of functional genes was observed in LI. The functional gene diversity in ORG did not differ significantly from that of either CT or LI. Abundances of genes encoding enzymes involved in C/N/P/S cycles were generally lower in CT than in LI or ORG, with the exceptions of genes in pathways for lignin degradation, methane generation/oxidation, and assimilatory N reduction, which all remained unchanged. Canonical correlation analysis showed that selected soil (bulk density, pH, cation exchange capacity, total C, C/N ratio, NO3−, NH4+, available phosphorus content, and available potassium content) and crop (seed and whole biomass) variables could explain 69.5% of the variation of soil microbial community composition. Also, significant correlations were observed between NO3− concentration and denitrification genes, NH4+ concentration and ammonification genes, and N2O flux and denitrification genes, indicating a close linkage between soil N availability or process and associated functional genes. PMID:23241975

  7. The Frog Vestibular System as a Model for Lesion-Induced Plasticity: Basic Neural Principles and Implications for Posture Control

    PubMed Central

    Lambert, François M.; Straka, Hans

    2011-01-01

    Studies of behavioral consequences after unilateral labyrinthectomy have a long tradition in the quest of determining rules and limitations of the central nervous system (CNS) to exert plastic changes that assist the recuperation from the loss of sensory inputs. Frogs were among the first animal models to illustrate general principles of regenerative capacity and reorganizational neural flexibility after a vestibular lesion. The continuous successful use of the latter animals is in part based on the easy access and identifiability of nerve branches to inner ear organs for surgical intervention, the possibility to employ whole brain preparations for in vitro studies and the limited degree of freedom of postural reflexes for quantification of behavioral impairments and subsequent improvements. Major discoveries that increased the knowledge of post-lesional reactive mechanisms in the CNS include alterations in vestibular commissural signal processing and activation of cooperative changes in excitatory and inhibitory inputs to disfacilitated neurons. Moreover, the observed increase of synaptic efficacy in propriospinal circuits illustrates the importance of limb proprioceptive inputs for postural recovery. Accumulated evidence suggests that the lesion-induced neural plasticity is not a goal-directed process that aims toward a meaningful restoration of vestibular reflexes but rather attempts a survival of those neurons that have lost their excitatory inputs. Accordingly, the reaction mechanism causes an improvement of some components but also a deterioration of other aspects as seen by spatio-temporally inappropriate vestibulo-motor responses, similar to the consequences of plasticity processes in various sensory systems and species. The generality of the findings indicate that frogs continue to form a highly amenable vertebrate model system for exploring molecular and physiological events during cellular and network reorganization after a loss of vestibular function. PMID:22518109

  8. A Point-process Response Model for Spike Trains from Single Neurons in Neural Circuits under Optogenetic Stimulation

    PubMed Central

    Luo, X.; Gee, S.; Sohal, V.; Small, D.

    2015-01-01

    Optogenetics is a new tool to study neuronal circuits that have been genetically modified to allow stimulation by flashes of light. We study recordings from single neurons within neural circuits under optogenetic stimulation. The data from these experiments present a statistical challenge of modeling a high frequency point process (neuronal spikes) while the input is another high frequency point process (light flashes). We further develop a generalized linear model approach to model the relationships between two point processes, employing additive point-process response functions. The resulting model, Point-process Responses for Optogenetics (PRO), provides explicit nonlinear transformations to link the input point process with the output one. Such response functions may provide important and interpretable scientific insights into the properties of the biophysical process that governs neural spiking in response to optogenetic stimulation. We validate and compare the PRO model using a real dataset and simulations, and our model yields a superior area-under-the-curve value as high as 93% for predicting every future spike. For our experiment on the recurrent layer V circuit in the prefrontal cortex, the PRO model provides evidence that neurons integrate their inputs in a sophisticated manner. Another use of the model is that it enables understanding how neural circuits are altered under various disease conditions and/or experimental conditions by comparing the PRO parameters. PMID:26411923
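
    A discrete-time approximation of the underlying idea, regressing binned spike counts on lagged indicators of the light-flash point process with a Poisson GLM so that the lag coefficients play the role of an additive response function, is sketched below on simulated data. The bin size, kernel, and regularization are assumptions; the full PRO model, with its explicit nonlinear transformations, is richer than this.

```python
import numpy as np
from sklearn.linear_model import PoissonRegressor

# Discrete-time sketch of the idea behind PRO: bin both point processes and
# regress spike counts on lagged light-flash indicators with a Poisson GLM,
# so the fitted lag coefficients approximate an additive response function.
rng = np.random.default_rng(1)
T, n_lags = 20_000, 25                     # number of 1 ms bins, response-function length
flashes = (rng.random(T) < 0.01).astype(float)

true_kernel = 1.5 * np.exp(-np.arange(n_lags) / 5.0)     # ground-truth response (simulation only)
drive = np.convolve(flashes, true_kernel)[:T]
spikes = rng.poisson(np.exp(-3.0 + drive))               # simulated spike counts per bin

# design matrix of lagged flash indicators
X = np.column_stack([np.r_[np.zeros(k), flashes[:T - k]] for k in range(n_lags)])
glm = PoissonRegressor(alpha=1e-4, max_iter=1000).fit(X, spikes)

print("estimated response at lags 0-4:", np.round(glm.coef_[:5], 2))
print("true kernel at lags 0-4:      ", np.round(true_kernel[:5], 2))
```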

  9. Family medicine outpatient encounters are more complex than those of cardiology and psychiatry.

    PubMed

    Katerndahl, David; Wood, Robert; Jaén, Carlos Roberto

    2011-01-01

    Comparison studies suggest that the guideline-concordant care provided for specific medical conditions is less optimal in primary care compared with cardiology and psychiatry settings. The purpose of this study is to estimate the relative complexity of patient encounters in general/family practice, cardiology, and psychiatry settings. Secondary analysis of the 2000 National Ambulatory Medical Care Survey data for ambulatory patients seen in general/family practice, cardiology, and psychiatry settings was performed. The complexity for each variable was estimated as the quantity weighted by variability and diversity. There is minimal difference in the unadjusted input and total encounter complexity of general/family practice and cardiology; psychiatry's input is less complex. Cardiology encounters involved more input quantitatively, but the diversity of general/family practice input eliminated the difference. Cardiology also involved more complex output. However, when the duration of visit is factored in, the complexity of care provided per hour in general/family practice is 33% more relative to cardiology and 5 times more relative to psychiatry. Care during family physician visits is more complex per hour than the care during visits to cardiologists or psychiatrists. This may account for a lower rate of completion of process items measured for quality of care.

  10. Transfer Function Control for Biometric Monitoring System

    NASA Technical Reports Server (NTRS)

    Chmiel, Alan J. (Inventor); Grodinsky, Carlos M. (Inventor); Humphreys, Bradley T. (Inventor)

    2015-01-01

    A modular apparatus for acquiring biometric data may include circuitry operative to receive an input signal indicative of a biometric condition, the circuitry being configured to process the input signal according to a transfer function thereof and to provide a corresponding processed input signal. A controller is configured to provide at least one control signal to the circuitry to programmatically modify the transfer function of the modular system to facilitate acquisition of the biometric data.
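
    The abstract describes circuitry whose transfer function is reprogrammed by a control signal. The toy below illustrates that idea with a first-order digital filter whose coefficients are changed at run time; the filter form and parameter names are hypothetical, not the patented design.

      # Toy sketch (not the patented circuit): a first-order digital filter whose
      # transfer function H(z) = b / (1 - a z^-1) is re-programmed by a "control signal".
      import numpy as np

      class ProgrammableFilter:
          def __init__(self, a=0.9, b=0.1):
              self.a, self.b, self.state = a, b, 0.0

          def set_transfer_function(self, a, b):
              # plays the role of the controller's control signal
              self.a, self.b = a, b

          def process(self, x):
              y = np.empty_like(x, dtype=float)
              for i, xi in enumerate(x):
                  self.state = self.a * self.state + self.b * xi
                  y[i] = self.state
              return y

      biometric_input = np.random.default_rng(1).normal(size=100)  # surrogate sensor signal
      filt = ProgrammableFilter()
      smooth = filt.process(biometric_input)
      filt.set_transfer_function(a=0.5, b=0.5)      # controller widens the bandwidth
      less_smooth = filt.process(biometric_input)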

  11. Awake vs. anesthetized: layer-specific sensory processing in visual cortex and functional connectivity between cortical areas

    PubMed Central

    Sellers, Kristin K.; Bennett, Davis V.; Hutt, Axel; Williams, James H.

    2015-01-01

    During general anesthesia, global brain activity and behavioral state are profoundly altered. Yet it remains mostly unknown how anesthetics alter sensory processing across cortical layers and modulate functional cortico-cortical connectivity. To address this gap in knowledge of the micro- and mesoscale effects of anesthetics on sensory processing in the cortical microcircuit, we recorded multiunit activity and local field potential in awake and anesthetized ferrets (Mustela putorius furo) during sensory stimulation. To understand how anesthetics alter sensory processing in a primary sensory area and the representation of sensory input in higher-order association areas, we studied the local sensory responses and long-range functional connectivity of primary visual cortex (V1) and prefrontal cortex (PFC). Isoflurane combined with xylazine provided general anesthesia for all anesthetized recordings. We found that anesthetics altered the duration of sensory-evoked responses, disrupted the response dynamics across cortical layers, suppressed both multimodal interactions in V1 and sensory responses in PFC, and reduced functional cortico-cortical connectivity between V1 and PFC. Together, the present findings demonstrate altered sensory responses and impaired functional network connectivity during anesthesia at the level of multiunit activity and local field potential across cortical layers. PMID:25833839

  12. Design of fuzzy systems using neurofuzzy networks.

    PubMed

    Figueiredo, M; Gomide, F

    1999-01-01

    This paper introduces a systematic approach for fuzzy system design based on a class of neural fuzzy networks built upon a general neuron model. The network structure is such that it encodes the knowledge learned in the form of if-then fuzzy rules and processes data following fuzzy reasoning principles. The technique provides a mechanism to obtain rules covering the whole input/output space as well as the membership functions (including their shapes) for each input variable. Such characteristics are of utmost importance in fuzzy systems design and application. In addition, after learning, it is very simple to extract fuzzy rules in the linguistic form. The network has universal approximation capability, a property very useful in, e.g., modeling and control applications. Here we focus on function approximation problems as a vehicle to illustrate its usefulness and to evaluate its performance. Comparisons with alternative approaches are also included. Both noise-free and noisy data have been considered in the computational experiments. The neural fuzzy network developed here, and consequently the underlying approach, has been shown to provide good results from the accuracy, complexity, and system design points of view.
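
    To make the rule and membership-function vocabulary concrete, here is a minimal zero-order fuzzy inference sketch with Gaussian membership functions covering the input space; it illustrates the flavor of if-then fuzzy reasoning, not the paper's neurofuzzy network or its learning procedure.

      # Minimal fuzzy-inference sketch (not the paper's neurofuzzy network):
      # Gaussian membership functions, rule firing strengths, weighted-average output.
      import numpy as np

      centers = np.linspace(-1.0, 1.0, 5)      # rule antecedent centers covering the input space
      sigma = 0.4                              # membership function width
      consequents = np.sin(np.pi * centers)    # "then" part of each if-then rule

      def fuzzy_eval(x):
          firing = np.exp(-0.5 * ((x - centers) / sigma) ** 2)   # rule firing strengths
          return np.sum(firing * consequents) / np.sum(firing)   # defuzzified output

      xs = np.linspace(-1, 1, 9)
      print([round(fuzzy_eval(x), 3) for x in xs])   # crude approximation of sin(pi*x)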

  13. Factor demand in Swedish manufacturing industry with special reference to the demand for energy. Instantaneous adjustment models; some results

    NASA Astrophysics Data System (ADS)

    Sjoeholm, K. R.

    1981-02-01

    The dual approach to the theory of production is used to estimate factor demand functions of the Swedish manufacturing industry. Two approximations of the cost function, the translog and the generalized Leontief models, are used. The price elasticities of the factor demand do not seem to depend on the choice of model. This is at least true for the sign pattern and for the inputs capital, labor, total energy, and other materials. Total energy is separated into solid fuels, gasoline, fuel oil, electricity and a residual. Fuel oil and electricity are found to be substitutes by both models. Capital and energy are shown to be substitutes. This implies that Swedish industry will save more energy if the capital cost can be reduced. In their best versions, both models are able to detect an inappropriate variable. The assumption of perfect competition on the product market is shown to be inadequate by both models. When this assumption is relaxed, the normal substitution pattern among the inputs is recovered.

  14. SCI Identification (SCIDNT) program user's guide. [maximum likelihood method for linear rotorcraft models

    NASA Technical Reports Server (NTRS)

    1979-01-01

    The computer program Linear SCIDNT which evaluates rotorcraft stability and control coefficients from flight or wind tunnel test data is described. It implements the maximum likelihood method to maximize the likelihood function of the parameters based on measured input/output time histories. Linear SCIDNT may be applied to systems modeled by linear constant-coefficient differential equations. This restriction in scope allows the application of several analytical results which simplify the computation and improve its efficiency over the general nonlinear case.
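
    A minimal sketch of the underlying idea (not SCIDNT itself): fit a linear constant-coefficient model to measured input/output time histories by maximizing a Gaussian likelihood, which for the simple one-step prediction error below reduces to least squares. System, noise level, and parameters are illustrative.

      # Sketch only (not SCIDNT): fit dx/dt = a*x + b*u to measured input/output
      # histories by maximizing a Gaussian likelihood.
      import numpy as np
      from scipy.optimize import minimize

      dt, n = 0.01, 2000
      rng = np.random.default_rng(2)
      u = np.sin(0.5 * np.arange(n) * dt)                 # measured input history
      a_true, b_true = -1.5, 2.0
      x = np.zeros(n)
      for k in range(n - 1):                              # simulate the "measured" output
          x[k + 1] = x[k] + dt * (a_true * x[k] + b_true * u[k])
      x_meas = x + 0.01 * rng.normal(size=n)

      def neg_log_likelihood(theta):
          a, b, log_sigma = theta
          pred = x_meas[:-1] + dt * (a * x_meas[:-1] + b * u[:-1])
          resid = x_meas[1:] - pred
          return 0.5 * np.sum(resid**2) / np.exp(2 * log_sigma) + (n - 1) * log_sigma

      est = minimize(neg_log_likelihood, x0=[-1.0, 1.0, -3.0])
      print(est.x[:2])   # estimates of a and b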

  15. Theory of fiber-optic, evanescent-wave spectroscopy and sensors

    NASA Astrophysics Data System (ADS)

    Messica, A.; Greenstein, A.; Katzir, A.

    1996-05-01

    A general theory for fiber-optic, evanescent-wave spectroscopy and sensors is presented for straight, uncladded, step-index, multimode fibers. A three-dimensional model is formulated within the framework of geometric optics. The model includes various launching conditions, input and output end-face Fresnel transmission losses, multiple Fresnel reflections, bulk absorption, and evanescent-wave absorption. An evanescent-wave sensor response is analyzed as a function of externally controlled parameters such as coupling angle, f number, fiber length, and diameter. Conclusions are drawn for several experimental apparatuses.

  16. When the feasibility of an ecosystem is sufficient for global stability?

    PubMed

    Porati, A; Granero, M I

    2000-01-01

    We show via a Liapunov function that in every model ecosystem governed by generalized Lotka-Volterra equations, a feasible steady state is globally asymptotically stable if the number of interaction branches equals n-1, where n is the number of species. This means that the representative graph for which the theorem holds is a 'tree' and not only an alimentary chain. Our result is valid also in the case of non-homogeneous systems, which model situations in which input fluxes are present.

  17. Effects of Meteorological Data Quality on Snowpack Modeling

    NASA Astrophysics Data System (ADS)

    Havens, S.; Marks, D. G.; Robertson, M.; Hedrick, A. R.; Johnson, M.

    2017-12-01

    Detailed quality control of meteorological inputs is the most time-intensive component of running the distributed, physically-based iSnobal snow model, and the effect of data quality of the inputs on the model is unknown. The iSnobal model has been run operationally since WY2013, and is currently run in several basins in Idaho and California. The largest amount of user input during modeling is for the quality control of precipitation, temperature, relative humidity, solar radiation, wind speed and wind direction inputs. Precipitation inputs require detailed user input and are crucial to correctly model the snowpack mass. This research applies a range of quality control methods to meteorological input, from raw input with minimal cleaning, to complete user-applied quality control. The meteorological input cleaning generally falls into two categories. The first is global minimum/maximum and missing value correction that could be corrected and/or interpolated with automated processing. The second category is quality control for inputs that are not globally erroneous, yet are still unreasonable and generally indicate malfunctioning measurement equipment, such as temperature or relative humidity that remains constant, or does not correlate with daily trends observed at nearby stations. This research will determine how sensitive model outputs are to different levels of quality control and guide future operational applications.
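
    A small sketch of the first, automatable category of cleaning described above: global minimum/maximum screening followed by interpolation of short gaps. The variable names and bounds are placeholders, not the operational iSnobal workflow.

      # Sketch of automatable QC: global min/max screening plus gap interpolation.
      # Bounds and gap limits are illustrative only.
      import numpy as np
      import pandas as pd

      bounds = {"air_temp_C": (-40.0, 45.0), "rel_humidity_pct": (0.0, 100.0)}

      def basic_qc(series: pd.Series, lo: float, hi: float) -> pd.Series:
          cleaned = series.where((series >= lo) & (series <= hi))  # out-of-range -> NaN
          return cleaned.interpolate(limit=6)                      # fill short gaps only

      raw = pd.Series(5 + 3 * np.sin(np.arange(48) / 7.0))  # surrogate hourly temperature
      raw.iloc[10] = -9999.0        # typical logger error code
      raw.iloc[20:23] = np.nan      # missing records
      clean = basic_qc(raw, *bounds["air_temp_C"])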

  18. Gaussian-input Gaussian mixture model for representing density maps and atomic models.

    PubMed

    Kawabata, Takeshi

    2018-07-01

    A new Gaussian mixture model (GMM) has been developed for better representations of both atomic models and electron microscopy 3D density maps. The standard GMM algorithm employs an EM algorithm to determine the parameters. It accepts a set of 3D points with weights, corresponding to voxel or atomic centers. Although the standard algorithm worked reasonably well, it had three problems. First, it ignored the size (voxel width or atomic radius) of the input, and thus it could lead to a GMM with a smaller spread than the input. Second, the algorithm had a singularity problem, as it sometimes stopped the iterative procedure due to a Gaussian function with almost zero variance. Third, a map with a large number of voxels required a long computation time for conversion to a GMM. To solve these problems, we have introduced a Gaussian-input GMM algorithm, which considers the input atoms or voxels as a set of Gaussian functions. The standard EM algorithm of GMM was extended to optimize the new GMM. The new GMM has a radius of gyration identical to that of the input, and does not suddenly stop due to the singularity problem. For fast computation, we have introduced down-sampled Gaussian functions (DSG) by merging neighboring voxels into an anisotropic Gaussian function. This provides a GMM with thousands of Gaussian functions in a short computation time. We also have introduced a DSG-input GMM: the Gaussian-input GMM with the DSG as the input. This new algorithm is much faster than the standard algorithm. Copyright © 2018 The Author(s). Published by Elsevier Inc. All rights reserved.
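
    A one-dimensional sketch of the central idea as we read it (not Kawabata's implementation): each input point carries its own Gaussian variance, which enters the responsibilities and is added back into the M-step variance, so the fitted mixture cannot become narrower than the input and zero-variance singularities are avoided.

      # 1-D sketch of a Gaussian-input EM update (one plausible reading, assumptions ours).
      import numpy as np

      rng = np.random.default_rng(3)
      x = np.concatenate([rng.normal(-2, 0.7, 300), rng.normal(1.5, 0.4, 200)])
      s = np.full_like(x, 0.3**2)            # per-point input variance (e.g. voxel width)

      K = 2
      w = np.full(K, 1.0 / K)
      mu = np.array([-1.0, 1.0])
      var = np.array([1.0, 1.0])

      def gauss(x, m, v):
          return np.exp(-0.5 * (x - m) ** 2 / v) / np.sqrt(2 * np.pi * v)

      for _ in range(100):
          # E-step: component k "sees" each point smeared by its own variance s_i
          r = np.stack([w[k] * gauss(x, mu[k], var[k] + s) for k in range(K)])
          r /= r.sum(axis=0, keepdims=True)
          # M-step: the input width s_i is added back into the component variance
          nk = r.sum(axis=1)
          w = nk / nk.sum()
          mu = (r * x).sum(axis=1) / nk
          var = (r * ((x - mu[:, None]) ** 2 + s)).sum(axis=1) / nk

      print(w, mu, np.sqrt(var))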

  19. How much to trust the senses: Likelihood learning

    PubMed Central

    Sato, Yoshiyuki; Kording, Konrad P.

    2014-01-01

    Our brain often needs to estimate unknown variables from imperfect information. Our knowledge about the statistical distributions of quantities in our environment (called priors) and currently available information from sensory inputs (called likelihood) are the basis of all Bayesian models of perception and action. While we know that priors are learned, most studies of prior-likelihood integration simply assume that subjects know about the likelihood. However, as the quality of sensory inputs changes over time, we also need to learn about new likelihoods. Here, we show that human subjects readily learn the distribution of visual cues (likelihood function) in a way that can be predicted by models of statistically optimal learning. Using a likelihood that depended on color context, we found that a learned likelihood generalized to new priors. Thus, we conclude that subjects learn about the likelihood. PMID:25398975
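
    For concreteness, the Gaussian prior-likelihood combination underlying such models is shown below; the likelihood width sigma_like is exactly the quantity the study argues must itself be learned, and the numbers are arbitrary.

      # Illustration of Gaussian prior-likelihood integration.
      import numpy as np

      mu_prior, sigma_prior = 0.0, 2.0        # learned statistics of the environment
      sensory_cue, sigma_like = 3.0, 1.0      # current input and its (learned) reliability

      w = sigma_prior**2 / (sigma_prior**2 + sigma_like**2)
      posterior_mean = w * sensory_cue + (1 - w) * mu_prior
      posterior_var = (sigma_prior**2 * sigma_like**2) / (sigma_prior**2 + sigma_like**2)
      print(posterior_mean, np.sqrt(posterior_var))   # estimate shifts toward the cue
                                                      # in proportion to its reliability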

  20. Real-Time Feedback Control of Flow-Induced Cavity Tones. Part 1; Fixed-Gain Control

    NASA Technical Reports Server (NTRS)

    Kegerise, M. A.; Cabell, R. H.; Cattafesta, L. N., III

    2006-01-01

    A generalized predictive control (GPC) algorithm was formulated and applied to the cavity flow-tone problem. The control algorithm demonstrated multiple Rossiter-mode suppression at fixed Mach numbers ranging from 0.275 to 0.38. Controller performance was evaluated with a measure of output disturbance rejection and an input sensitivity transfer function. The results suggest that disturbances entering the cavity flow are collocated with the control input at the cavity leading edge. In that case, only tonal components of the cavity wall-pressure fluctuations can be suppressed and arbitrary broadband pressure reduction is not possible with the present sensor/actuator arrangement. In the control-algorithm development, the cavity dynamics were treated as linear and time invariant (LTI) for a fixed Mach number. The experimental results lend support to that treatment.

  1. Incorporation of Damage and Failure into an Orthotropic Elasto-Plastic Three-Dimensional Model with Tabulated Input Suitable for Use in Composite Impact Problems

    NASA Technical Reports Server (NTRS)

    Goldberg, Robert K.; Carney, Kelly S.; Dubois, Paul; Hoffarth, Canio; Khaled, Bilal; Rajan, Subramaniam; Blankenhorn, Gunther

    2016-01-01

    A material model which incorporates several key capabilities which have been identified by the aerospace community as lacking in the composite impact models currently available in LS-DYNA(Registered Trademark) is under development. In particular, the material model, which is being implemented as MAT 213 into a tailored version of LS-DYNA being jointly developed by the FAA and NASA, incorporates both plasticity and damage within the material model, utilizes experimentally based tabulated input to define the evolution of plasticity and damage as opposed to specifying discrete input parameters (such as modulus and strength), and is able to analyze the response of composites with a variety of fiber architectures. The plasticity portion of the orthotropic, three-dimensional, macroscopic composite constitutive model is based on an extension of the Tsai-Wu composite failure model into a generalized yield function with a non-associative flow rule. The capability to account for the rate and temperature dependent deformation response of composites has also been incorporated into the material model. For the damage model, a strain equivalent formulation is utilized to allow for the uncoupling of the deformation and damage analyses. In the damage model, a diagonal damage tensor is defined to account for the directionally dependent variation of damage. However, in composites it has been found that loading in one direction can lead to damage in multiple coordinate directions. To account for this phenomenon, the terms in the damage matrix are semi-coupled such that the damage in a particular coordinate direction is a function of the stresses and plastic strains in all of the coordinate directions. The onset of material failure, and thus element deletion, is being developed to be a function of the stresses and plastic strains in the various coordinate directions. Systematic procedures are being developed to generate the required input parameters based on the results of experimental tests.

  2. Multi-tissue analysis of co-expression networks by higher-order generalized singular value decomposition identifies functionally coherent transcriptional modules.

    PubMed

    Xiao, Xiaolin; Moreno-Moral, Aida; Rotival, Maxime; Bottolo, Leonardo; Petretto, Enrico

    2014-01-01

    Recent high-throughput efforts such as ENCODE have generated a large body of genome-scale transcriptional data in multiple conditions (e.g., cell-types and disease states). Leveraging these data is especially important for network-based approaches to human disease, for instance to identify coherent transcriptional modules (subnetworks) that can inform functional disease mechanisms and pathological pathways. Yet, genome-scale network analysis across conditions is significantly hampered by the paucity of robust and computationally-efficient methods. Building on the Higher-Order Generalized Singular Value Decomposition, we introduce a new algorithmic approach for efficient, parameter-free and reproducible identification of network-modules simultaneously across multiple conditions. Our method can accommodate weighted (and unweighted) networks of any size and can similarly use co-expression or raw gene expression input data, without hinging upon the definition and stability of the correlation used to assess gene co-expression. In simulation studies, we demonstrated distinctive advantages of our method over existing methods: it accurately recovered both common and condition-specific network-modules without requiring the ad-hoc input parameters needed by other approaches. We applied our method to genome-scale and multi-tissue transcriptomic datasets from rats (microarray-based) and humans (mRNA-sequencing-based) and identified several common and tissue-specific subnetworks with functional significance, which were not detected by other methods. In humans we recapitulated the crosstalk between cell-cycle progression and cell-extracellular matrix interaction processes in ventricular zones during neocortex expansion and, further, we uncovered previously unappreciated pathways related to the development of later cognitive functions in the cortical plate of the developing brain. Analyses of seven rat tissues identified a multi-tissue subnetwork of co-expressed heat shock protein (Hsp) and cardiomyopathy genes (Bag3, Cryab, Kras, Emd, Plec), which was significantly replicated using separate failing heart and liver gene expression datasets in humans, thus revealing a conserved functional role for Hsp genes in cardiovascular disease.

  3. Creating Synthetic Coronal Observational Data From MHD Models: The Forward Technique

    NASA Technical Reports Server (NTRS)

    Rachmeler, Laurel A.; Gibson, Sarah E.; Dove, James; Kucera, Therese Ann

    2010-01-01

    We present a generalized forward code for creating simulated coronal observables off the limb from numerical and analytical MHD models. This generalized forward model is capable of creating emission maps in various wavelengths for instruments such as SXT, EIT, EIS, and coronagraphs, as well as spectropolarimetric images and line profiles. The inputs to our code can be analytic models (of which four come with the code) or 2.5D and 3D numerical datacubes. We present some examples of the observable data created with our code as well as its functional capabilities. This code is currently available for beta-testing (contact authors), with the ultimate goal of release as a SolarSoft package.

  4. Quantitative verification of ab initio self-consistent laser theory.

    PubMed

    Ge, Li; Tandy, Robert J; Stone, A D; Türeci, Hakan E

    2008-10-13

    We generalize and test the recent "ab initio" self-consistent (AISC) time-independent semiclassical laser theory. This self-consistent formalism generates all the stationary lasing properties in the multimode regime (frequencies, thresholds, internal and external fields, output power and emission pattern) from simple inputs: the dielectric function of the passive cavity, the atomic transition frequency, and the transverse relaxation time of the lasing transition. We find that the theory gives excellent quantitative agreement with full time-dependent simulations of the Maxwell-Bloch equations after it has been generalized to drop the slowly-varying envelope approximation. The theory is infinite order in the non-linear hole-burning interaction; the widely used third order approximation is shown to fail badly.

  5. GD SDR Automatic Gain Control Characterization Testing

    NASA Technical Reports Server (NTRS)

    Nappier, Jennifer M.; Briones, Janette C.

    2013-01-01

    The General Dynamics (GD) S-Band software defined radio (SDR) in the Space Communications and Navigation (SCAN) Testbed on the International Space Station (ISS) will provide experimenters an opportunity to develop and demonstrate experimental waveforms in space. The GD SDR platform and initial waveform were characterized on the ground before launch and the data will be compared to the data that will be collected during on-orbit operations. A desired function of the SDR is to estimate the received signal to noise ratio (SNR), which would enable experimenters to better determine on-orbit link conditions. The GD SDR does not have an SNR estimator, but it does have an analog and a digital automatic gain control (AGC). The AGCs can be used to estimate the SDR input power which can be converted into a SNR. Tests were conducted to characterize the AGC response to changes in SDR input power and temperature. The purpose of this paper is to describe the tests that were conducted, discuss the results showing how the AGCs relate to the SDR input power, and provide recommendations for AGC testing and characterization.
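
    A hedged sketch of how a characterized AGC curve could be inverted to estimate input power and hence SNR; the calibration table and noise floor below are placeholders, not the GD SDR's measured values.

      # Sketch of using a characterized AGC curve to estimate input power and SNR.
      # Calibration values and units are hypothetical.
      import numpy as np

      # hypothetical ground-characterization data: AGC reading vs. SDR input power (dBm)
      agc_cal_words = np.array([10, 20, 30, 40, 50, 60])
      input_power_dbm = np.array([-110, -100, -90, -80, -70, -60])

      def estimate_snr(agc_word, noise_floor_dbm=-105.0):
          p_in = np.interp(agc_word, agc_cal_words, input_power_dbm)  # invert the AGC curve
          return p_in - noise_floor_dbm                               # SNR estimate in dB

      print(estimate_snr(34))   # e.g. an on-orbit AGC reading of 34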

  7. Zero-dynamics principle for perfect quantum memory in linear networks

    NASA Astrophysics Data System (ADS)

    Yamamoto, Naoki; James, Matthew R.

    2014-07-01

    In this paper, we study a general linear networked system that contains a tunable memory subsystem; that is, it is decoupled from an optical field for state transportation during the storage process, while it couples to the field during the writing or reading process. The input is given by a single photon state or a coherent state in a pulsed light field. We then completely and explicitly characterize the condition required on the pulse shape achieving the perfect state transfer from the light field to the memory subsystem. The key idea behind this result is the use of the zero-dynamics principle, which in our case means that, for perfect state transfer, the output field during the writing process must be a vacuum. A useful interpretation of the result in terms of the transfer function is also given. Moreover, a four-node network composed of atomic ensembles is studied as an example, demonstrating how the input field state is transferred to the memory subsystem and what the input pulse shape to be engineered for perfect memory looks like.

  8. Generalized constitutive equations for piezo-actuated compliant mechanism

    NASA Astrophysics Data System (ADS)

    Cao, Junyi; Ling, Mingxiang; Inman, Daniel J.; Lin, Jin

    2016-09-01

    This paper formulates analytical models to describe the static displacement and force interactions between generic serial-parallel compliant mechanisms and their loads by employing the matrix method. In keeping with the familiar piezoelectric constitutive equations, the generalized constitutive equations of compliant mechanism represent the input-output displacement and force relations in the form of a generalized Hooke’s law and as analytical functions of physical parameters. Also significantly, a new model of output displacement for compliant mechanism interacting with piezo-stacks and elastic loads is deduced based on the generalized constitutive equations. Some original findings differing from the well-known constitutive performance of piezo-stacks are also given. The feasibility of the proposed models is confirmed by finite element analysis and by experiments under various elastic loads. The analytical models can be an insightful tool for predicting and optimizing the performance of a wide class of compliant mechanisms that simultaneously consider the influence of loads and piezo-stacks.

  9. Existence conditions for unknown input functional observers

    NASA Astrophysics Data System (ADS)

    Fernando, T.; MacDougall, S.; Sreeram, V.; Trinh, H.

    2013-01-01

    This article presents necessary and sufficient conditions for the existence and design of an unknown input functional observer. The existence of the observer can be verified by computing a nullspace of a known matrix and testing some matrix rank conditions. The existence of the observer does not require the satisfaction of the observer matching condition (i.e. Equation (16) in Hou and Muller 1992, 'Design of Observers for Linear Systems with Unknown Inputs', IEEE Transactions on Automatic Control, 37, 871-875), is not limited to estimating scalar functionals and allows for arbitrary pole placement. The proposed observer always exists when a state observer exists for the unknown input system, and furthermore, the proposed observer can exist even in some instances when an unknown input state observer does not exist.
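
    To illustrate the kind of computation mentioned (a nullspace of a known matrix plus matrix rank tests), the snippet below checks the classical observer matching condition rank(CE) = rank(E) for a toy system; this is a generic example of such a rank test, not the paper's necessary and sufficient conditions.

      # Generic illustration of a nullspace computation and a rank test used in
      # unknown-input observer design, with a toy system.
      import numpy as np
      from scipy.linalg import null_space

      A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # state matrix
      E = np.array([[0.0], [1.0]])               # unknown-input distribution matrix
      C = np.array([[1.0, 0.0]])                 # output matrix

      N = null_space(C @ E)                      # nullspace of a known matrix
      matching = np.linalg.matrix_rank(C @ E) == np.linalg.matrix_rank(E)
      print(matching, N)                         # here the matching condition fails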

  10. A non-linear regression analysis program for describing electrophysiological data with multiple functions using Microsoft Excel.

    PubMed

    Brown, Angus M

    2006-04-01

    The objective of this present study was to demonstrate a method for fitting complex electrophysiological data with multiple functions using the SOLVER add-in of the ubiquitous spreadsheet Microsoft Excel. SOLVER minimizes the sum of the squared differences between the data to be fit and the function(s) describing the data, using an iterative generalized reduced gradient method. Fitting data with linear functions is straightforward, and we have previously demonstrated a method of non-linear regression analysis of experimental data based upon a single function; fitting data with multiple functions is more complex and usually requires specialized, expensive computer software. In this paper we describe an easily understood program for fitting experimentally acquired data, in this case the stimulus-evoked compound action potential from the mouse optic nerve, with multiple Gaussian functions.The program is flexible and can be applied to describe data with a wide variety of user-input functions.
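
    For readers without Excel, the same multi-function fit can be sketched in a few lines with scipy; this is an independent illustration of fitting a sum of Gaussians by least squares, not the authors' SOLVER workbook.

      # Least-squares fit of a sum of Gaussians (synthetic data, illustrative only).
      import numpy as np
      from scipy.optimize import curve_fit

      def two_gaussians(t, a1, m1, s1, a2, m2, s2):
          return (a1 * np.exp(-0.5 * ((t - m1) / s1) ** 2)
                  + a2 * np.exp(-0.5 * ((t - m2) / s2) ** 2))

      rng = np.random.default_rng(4)
      t = np.linspace(0, 10, 400)                      # e.g. time after stimulus (ms)
      y = two_gaussians(t, 1.0, 3.0, 0.5, 0.6, 5.5, 0.9) + 0.02 * rng.normal(size=t.size)

      p0 = [1, 3, 1, 0.5, 6, 1]                        # initial guesses, as SOLVER also needs
      popt, pcov = curve_fit(two_gaussians, t, y, p0=p0)
      print(np.round(popt, 2))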

  11. Maximizing lipocalin prediction through balanced and diversified training set and decision fusion.

    PubMed

    Nath, Abhigyan; Subbiah, Karthikeyan

    2015-12-01

    Lipocalins are short in sequence length and perform several important biological functions. These proteins have less than 20% sequence similarity among paralogs. Experimentally identifying them is an expensive and time-consuming process. Computational methods based on sequence similarity for allocating putative members to this family are also of limited use due to the low sequence similarity existing among the members of this family. Consequently, machine learning methods become a viable alternative for their prediction by using the underlying sequence/structurally derived features as the input. Ideally, any machine learning based prediction method must be trained with all possible variations in the input feature vector (all the sub-class input patterns) to achieve perfect learning. A near perfect learning can be achieved by training the model with diverse types of input instances belonging to the different regions of the entire input space. Furthermore, the prediction performance can be improved by balancing the training set, as imbalanced data sets tend to produce a prediction bias towards the majority class and its sub-classes. This paper aims to achieve (i) high generalization ability without any classification bias through diversified and balanced training sets as well as (ii) enhanced prediction accuracy by combining the results of individual classifiers with an appropriate fusion scheme. Instead of creating the training set randomly, we first used the unsupervised Kmeans clustering algorithm to create diversified clusters of input patterns and created the diversified and balanced training set by selecting an equal number of patterns from each of these clusters. Finally, a probability based classifier fusion scheme was applied to a boosted random forest algorithm (which produced greater sensitivity) and a K nearest neighbour algorithm (which produced greater specificity) to achieve better predictive performance than that of the individual base classifiers. The performance of the learned models trained on the Kmeans-preprocessed training set is far better than that of models trained on randomly generated training sets. The proposed method achieved a sensitivity of 90.6%, specificity of 91.4% and accuracy of 91.0% on the first test set and sensitivity of 92.9%, specificity of 96.2% and accuracy of 94.7% on the second blind test set. These results have established that diversifying the training set improves the performance of predictive models through superior generalization ability, and balancing the training set improves prediction accuracy. For smaller data sets, unsupervised Kmeans-based sampling can be a more effective technique for increasing generalization than the usual random splitting method. Copyright © 2015 Elsevier Ltd. All rights reserved.
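
    A compact sketch of the two ingredients described above: K-means-based selection of an equal number of patterns per cluster to build a diversified, balanced training set, followed by probability-level fusion of a random forest and a k-nearest-neighbour classifier. The data and parameters are illustrative only, not the lipocalin feature set.

      # Sketch of cluster-based balanced sampling plus decision fusion (illustrative data).
      import numpy as np
      from sklearn.cluster import KMeans
      from sklearn.datasets import make_classification
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.neighbors import KNeighborsClassifier

      X, y = make_classification(n_samples=2000, n_features=20, weights=[0.8, 0.2],
                                 random_state=0)

      # 1) diversify/balance: pick the same number of points from each K-means cluster
      clusters = KMeans(n_clusters=10, n_init=10, random_state=0).fit_predict(X)
      rng = np.random.default_rng(0)
      idx = np.concatenate([rng.choice(np.where(clusters == c)[0], size=50, replace=True)
                            for c in range(10)])
      X_tr, y_tr = X[idx], y[idx]

      # 2) decision fusion: average predicted class probabilities of the two learners
      rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
      knn = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)
      fused = 0.5 * (rf.predict_proba(X) + knn.predict_proba(X))
      y_pred = fused.argmax(axis=1)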

  12. Homeostasis, singularities, and networks.

    PubMed

    Golubitsky, Martin; Stewart, Ian

    2017-01-01

    Homeostasis occurs in a biological or chemical system when some output variable remains approximately constant as an input parameter [Formula: see text] varies over some interval. We discuss two main aspects of homeostasis, both related to the effect of coordinate changes on the input-output map. The first is a reformulation of homeostasis in the context of singularity theory, achieved by replacing 'approximately constant over an interval' by 'zero derivative of the output with respect to the input at a point'. Unfolding theory then classifies all small perturbations of the input-output function. In particular, the 'chair' singularity, which is especially important in applications, is discussed in detail. Its normal form and universal unfolding [Formula: see text] is derived and the region of approximate homeostasis is deduced. The results are motivated by data on thermoregulation in two species of opossum and the spiny rat. We give a formula for finding chair points in mathematical models by implicit differentiation and apply it to a model of lateral inhibition. The second asks when homeostasis is invariant under appropriate coordinate changes. This is false in general, but for network dynamics there is a natural class of coordinate changes: those that preserve the network structure. We characterize those nodes of a given network for which homeostasis is invariant under such changes. This characterization is determined combinatorially by the network topology.
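
    The chair condition described above, where the derivative of the input-output function vanishes degenerately, can be located symbolically; the input-output function below is a toy of our own choosing, not the thermoregulation model.

      # Locating a chair-like point: the output's first and second derivatives with
      # respect to the input vanish together. Toy input-output map, assumptions ours.
      import sympy as sp

      x_in, lam = sp.symbols("x_in lam", real=True)
      x_out = x_in**3 + lam * x_in      # toy input-output map with unfolding parameter lam

      d1 = sp.diff(x_out, x_in)
      d2 = sp.diff(x_out, x_in, 2)
      chair = sp.solve([d1, d2], [x_in, lam], dict=True)
      print(chair)                      # -> [{x_in: 0, lam: 0}]: chair point at the origin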

  13. Uncertainties in Galactic Chemical Evolution Models

    DOE PAGES

    Cote, Benoit; Ritter, Christian; Oshea, Brian W.; ...

    2016-06-15

    Here we use a simple one-zone galactic chemical evolution model to quantify the uncertainties generated by the input parameters in numerical predictions for a galaxy with properties similar to those of the Milky Way. We compiled several studies from the literature to gather the current constraints for our simulations regarding the typical value and uncertainty of the following seven basic parameters: the lower and upper mass limits of the stellar initial mass function (IMF), the slope of the high-mass end of the stellar IMF, the slope of the delay-time distribution function of Type Ia supernovae (SNe Ia), the number of SNe Ia per M⊙ formed, the total stellar mass formed, and the final mass of gas. We derived a probability distribution function to express the range of likely values for every parameter, which were then included in a Monte Carlo code to run several hundred simulations with randomly selected input parameters. This approach enables us to analyze the predicted chemical evolution of 16 elements in a statistical manner by identifying the most probable solutions along with their 68% and 95% confidence levels. Our results show that the overall uncertainties are shaped by several input parameters that individually contribute at different metallicities, and thus at different galactic ages. The level of uncertainty then depends on the metallicity and is different from one element to another. Among the seven input parameters considered in this work, the slope of the IMF and the number of SNe Ia are currently the two main sources of uncertainty. The thicknesses of the uncertainty bands bounded by the 68% and 95% confidence levels are generally within 0.3 and 0.6 dex, respectively. When looking at the evolution of individual elements as a function of galactic age instead of metallicity, those same thicknesses range from 0.1 to 0.6 dex for the 68% confidence levels and from 0.3 to 1.0 dex for the 95% confidence levels. The uncertainty in our chemical evolution model does not include uncertainties relating to stellar yields, star formation and merger histories, and modeling assumptions.
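
    A generic sketch of the Monte Carlo procedure described: draw each uncertain input parameter from its distribution, run the model for every draw, and summarize the spread with 68% and 95% percentile bands. The one-line "model" is a stand-in, not the galactic chemical evolution code.

      # Monte Carlo propagation of input-parameter uncertainty (toy model, illustrative).
      import numpy as np

      rng = np.random.default_rng(5)
      n_runs, ages = 500, np.linspace(0.1, 13.0, 50)     # galactic age in Gyr

      def toy_model(imf_slope, n_snia_per_msun, age):
          return -1.5 + 0.08 * age + 0.3 * (imf_slope + 2.35) + 50.0 * n_snia_per_msun

      runs = np.array([
          toy_model(rng.normal(-2.35, 0.2),              # IMF slope and its uncertainty
                    rng.normal(2e-3, 5e-4),              # SNe Ia per solar mass formed
                    ages)
          for _ in range(n_runs)
      ])

      lo68, hi68 = np.percentile(runs, [16, 84], axis=0)     # 68% band
      lo95, hi95 = np.percentile(runs, [2.5, 97.5], axis=0)  # 95% band
      median = np.median(runs, axis=0)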

  15. Employing general fit-bases for construction of potential energy surfaces with an adaptive density-guided approach

    NASA Astrophysics Data System (ADS)

    Klinting, Emil Lund; Thomsen, Bo; Godtliebsen, Ian Heide; Christiansen, Ove

    2018-02-01

    We present an approach to treat sets of general fit-basis functions in a single uniform framework, where the functional form is supplied on input, i.e., the use of different functions does not require new code to be written. The fit-basis functions can be used to carry out linear fits to the grid of single points, which are generated with an adaptive density-guided approach (ADGA). A non-linear conjugate gradient method is used to optimize non-linear parameters if such are present in the fit-basis functions. This means that a set of fit-basis functions with the same inherent shape as the potential cuts can be requested and no other choices with regards to the fit-basis functions need to be taken. The general fit-basis framework is explored in relation to anharmonic potentials for model systems, diatomic molecules, water, and imidazole. The behaviour and performance of Morse and double-well fit-basis functions are compared to that of polynomial fit-basis functions for unsymmetrical single-minimum and symmetrical double-well potentials. Furthermore, calculations for water and imidazole were carried out using both normal coordinates and hybrid optimized and localized coordinates (HOLCs). Our results suggest that choosing a suitable set of fit-basis functions can improve the stability of the fitting routine and the overall efficiency of potential construction by lowering the number of single point calculations required for the ADGA. It is possible to reduce the number of terms in the potential by choosing the Morse and double-well fit-basis functions. These effects are substantial for normal coordinates but become even more pronounced if HOLCs are used.
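
    As a small illustration of why a fit basis with the same inherent shape as the potential can be economical, the snippet below fits a synthetic single-minimum potential cut with a Morse function by non-linear least squares; it is not the authors' ADGA or fit-basis implementation.

      # Fitting a potential-energy cut with a Morse-shaped fit function (synthetic data).
      import numpy as np
      from scipy.optimize import curve_fit

      def morse(r, d_e, a, r_e, e0):
          return d_e * (1.0 - np.exp(-a * (r - r_e))) ** 2 + e0

      r = np.linspace(0.7, 3.0, 25)                       # single-point grid
      v = morse(r, 0.17, 1.2, 1.1, 0.0) + 1e-4 * np.random.default_rng(6).normal(size=r.size)

      popt, _ = curve_fit(morse, r, v, p0=[0.2, 1.0, 1.0, 0.0])
      print(np.round(popt, 3))        # a few parameters capture the unsymmetric well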

  16. INFANT HEALTH PRODUCTION FUNCTIONS: WHAT A DIFFERENCE THE DATA MAKE

    PubMed Central

    Reichman, Nancy E.; Corman, Hope; Noonan, Kelly; Dave, Dhaval

    2008-01-01

    SUMMARY We examine the extent to which infant health production functions are sensitive to model specification and measurement error. We focus on the importance of typically unobserved but theoretically important variables (typically unobserved variables, TUVs), other non-standard covariates (NSCs), input reporting, and characterization of infant health. The TUVs represent wantedness, taste for risky behavior, and maternal health endowment. The NSCs include father characteristics. We estimate the effects of prenatal drug use, prenatal cigarette smoking, and First trimester prenatal care on birth weight, low birth weight, and a measure of abnormal infant health conditions. We compare estimates using self-reported inputs versus input measures that combine information from medical records and self-reports. We find that TUVs and NSCs are significantly associated with both inputs and outcomes, but that excluding them from infant health production functions does not appreciably affect the input estimates. However, using self-reported inputs leads to overestimated effects of inputs, particularly prenatal care, on outcomes, and using a direct measure of infant health does not always yield input estimates similar to those when using birth weight outcomes. The findings have implications for research, data collection, and public health policy. PMID:18792077

  17. Factor analysis for delineation of organ structures, creation of in- and output functions, and standardization of multicenter kinetic modeling

    NASA Astrophysics Data System (ADS)

    Schiepers, Christiaan; Hoh, Carl K.; Dahlbom, Magnus; Wu, Hsiao-Ming; Phelps, Michael E.

    1999-05-01

    PET imaging can quantify metabolic processes in-vivo; this requires the measurement of an input function which is invasive and labor intensive. A non-invasive, semi-automated, image based method of input function generation would be efficient, patient friendly, and allow quantitative PET to be applied routinely. A fully automated procedure would be ideal for studies across institutions. Factor analysis (FA) was applied as processing tool for definition of temporally changing structures in the field of view. FA has been proposed earlier, but the perceived mathematical difficulty has prevented widespread use. FA was utilized to delineate structures and extract blood and tissue time-activity-curves (TACs). These TACs were used as input and output functions for tracer kinetic modeling, the results of which were compared with those from an input function obtained with serial blood sampling. Dynamic image data of myocardial perfusion studies with N-13 ammonia, O-15 water, or Rb-82, cancer studies with F-18 FDG, and skeletal studies with F-18 fluoride were evaluated. Correlation coefficients of kinetic parameters obtained with factor and plasma input functions were high. Linear regression usually furnished a slope near unity. Processing time was 7 min/patient on an UltraSPARC. Conclusion: FA can non-invasively generate input functions from image data eliminating the need for blood sampling. Output (tissue) functions can be simultaneously generated. The method is simple, requires no sophisticated operator interaction and has little inter-operator variability. FA is well suited for studies across institutions and standardized evaluations.
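
    A simplified stand-in for the factor-analysis step (not the authors' pipeline): factor a dynamic image, arranged as pixels by time frames, into a few non-negative temporal factors that play the role of blood and tissue time-activity curves.

      # Non-negative factorization of a surrogate dynamic study into temporal factors.
      import numpy as np
      from sklearn.decomposition import NMF

      rng = np.random.default_rng(7)
      t = np.linspace(0, 60, 30)                                   # frame mid-times (min)
      blood = np.exp(-t / 5.0)                                     # surrogate input function
      tissue = 1.0 - np.exp(-t / 20.0)                             # surrogate tissue uptake
      mixing = rng.uniform(0, 1, size=(400, 2))                    # 400 pixels, 2 factors
      dynamic = mixing @ np.vstack([blood, tissue]) + 0.01 * rng.random((400, t.size))

      nmf = NMF(n_components=2, init="nndsvda", max_iter=1000, random_state=0)
      pixel_weights = nmf.fit_transform(dynamic)    # factor images (pixel loadings)
      factor_tacs = nmf.components_                 # candidate blood/tissue TACs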

  18. Effect of Heterogeneity on Decorrelation Mechanisms in Spiking Neural Networks: A Neuromorphic-Hardware Study

    NASA Astrophysics Data System (ADS)

    Pfeil, Thomas; Jordan, Jakob; Tetzlaff, Tom; Grübl, Andreas; Schemmel, Johannes; Diesmann, Markus; Meier, Karlheinz

    2016-04-01

    High-level brain function, such as memory, classification, or reasoning, can be realized by means of recurrent networks of simplified model neurons. Analog neuromorphic hardware constitutes a fast and energy-efficient substrate for the implementation of such neural computing architectures in technical applications and neuroscientific research. The functional performance of neural networks is often critically dependent on the level of correlations in the neural activity. In finite networks, correlations are typically inevitable due to shared presynaptic input. Recent theoretical studies have shown that inhibitory feedback, abundant in biological neural networks, can actively suppress these shared-input correlations and thereby enable neurons to fire nearly independently. For networks of spiking neurons, the decorrelating effect of inhibitory feedback has so far been explicitly demonstrated only for homogeneous networks of neurons with linear subthreshold dynamics. Theory, however, suggests that the effect is a general phenomenon, present in any system with sufficient inhibitory feedback, irrespective of the details of the network structure or the neuronal and synaptic properties. Here, we investigate the effect of network heterogeneity on correlations in sparse, random networks of inhibitory neurons with nonlinear, conductance-based synapses. Emulations of these networks on the analog neuromorphic-hardware system Spikey allow us to test the efficiency of decorrelation by inhibitory feedback in the presence of hardware-specific heterogeneities. The configurability of the hardware substrate enables us to modulate the extent of heterogeneity in a systematic manner. We selectively study the effects of shared input and recurrent connections on correlations in membrane potentials and spike trains. Our results confirm that shared-input correlations are actively suppressed by inhibitory feedback also in highly heterogeneous networks exhibiting broad, heavy-tailed firing-rate distributions. In line with former studies, cell heterogeneities reduce shared-input correlations. Overall, however, correlations in the recurrent system can increase with the level of heterogeneity as a consequence of diminished effective negative feedback.

  19. Transverse momentum dependent (TMD) parton distribution functions generated in the modified DGLAP formalism based on the valence-like distributions

    NASA Astrophysics Data System (ADS)

    Hosseinkhani, H.; Modarres, M.; Olanj, N.

    2017-07-01

    Transverse momentum dependent (TMD) parton distributions, also referred to as unintegrated parton distribution functions (UPDFs), are produced via the Kimber-Martin-Ryskin (KMR) prescription. The GJR08 set of parton distribution functions (PDFs) which are based on the valence-like distributions is used, at the leading order (LO) and the next-to-leading order (NLO) approximations, as inputs of the KMR formalism. The general and the relative behaviors of the generated TMD PDFs at LO and NLO and their ratios in a wide range of the transverse momentum values, i.e. kt^2 = 10, 10^2, 10^4 and 10^8 GeV^2, are investigated. It is shown that the properties of the parent valence-like PDFs are imprinted on the daughter TMD PDFs. Imposing the angular ordering constraint (AOC) leads to the dynamical variable limits on the integrals which in turn increase the contributions from the lower scales at lower kt^2. The results are compared with our previous studies based on the MSTW2008 input PDFs and it is shown that the present calculation gives flatter TMD PDFs. Finally, a comparison of the longitudinal structure function (F_L) is made by using the produced TMD PDFs and those that were generated through the MSTW2008-LO PDF from our previous work and the corresponding data from the H1 and ZEUS collaborations, and a reasonable agreement is found.

  20. Significance of Input Correlations in Striatal Function

    PubMed Central

    Yim, Man Yi; Aertsen, Ad; Kumar, Arvind

    2011-01-01

    The striatum is the main input station of the basal ganglia and is strongly associated with motor and cognitive functions. Anatomical evidence suggests that individual striatal neurons are unlikely to share their inputs from the cortex. Using a biologically realistic large-scale network model of striatum and cortico-striatal projections, we provide a functional interpretation of the special anatomical structure of these projections. Specifically, we show that weak pairwise correlation within the pool of inputs to individual striatal neurons enhances the saliency of signal representation in the striatum. By contrast, correlations among the input pools of different striatal neurons render the signal representation less distinct from background activity. We suggest that for the network architecture of the striatum, there is a preferred cortico-striatal input configuration for optimal signal representation. It is further enhanced by the low-rate asynchronous background activity in striatum, supported by the balance between feedforward and feedback inhibitions in the striatal network. Thus, an appropriate combination of rates and correlations in the striatal input sets the stage for action selection presumably implemented in the basal ganglia. PMID:22125480

  1. Cellular registration without behavioral recall of olfactory sensory input under general anesthesia.

    PubMed

    Samuelsson, Andrew R; Brandon, Nicole R; Tang, Pei; Xu, Yan

    2014-04-01

    Previous studies suggest that sensory information is "received" but not "perceived" under general anesthesia. Whether and to what extent the brain continues to process sensory inputs in a drug-induced unconscious state remain unclear. One hundred seven rats were randomly assigned to 12 different anesthesia and odor exposure paradigms. The immunoreactivities of the immediate early gene products c-Fos and Egr1 as neural activity markers were combined with behavioral tests to assess the integrity and relationship of cellular and behavioral responsiveness to olfactory stimuli under a surgical plane of ketamine-xylazine general anesthesia. The olfactory sensory processing centers could distinguish the presence or absence of experimental odorants even when animals were fully anesthetized. In the anesthetized state, the c-Fos immunoreactivity in the higher olfactory cortices revealed a difference between novel and familiar odorants similar to that seen in the awake state, suggesting that the anesthetized brain functions beyond simply receiving external stimulation. Reexposing animals to odorants previously experienced only under anesthesia resulted in c-Fos immunoreactivity, which was similar to that elicited by familiar odorants, indicating that previous registration had occurred in the anesthetized brain. Despite the "cellular memory," however, odor discrimination and forced-choice odor-recognition tests showed absence of behavioral recall of the registered sensations, except for a longer latency in odor recognition tests. Histologically distinguishable registration of sensory processing continues to occur at the cellular level under ketamine-xylazine general anesthesia despite the absence of behavioral recognition, consistent with the notion that general anesthesia causes disintegration of information processing without completely blocking cellular communications.

  2. Cellular Registration Without Behavioral Recall Of Olfactory Sensory Input Under General Anesthesia

    PubMed Central

    Samuelsson, Andrew R.; Brandon, Nicole R.; Tang, Pei; Xu, Yan

    2014-01-01

    Background Previous studies suggest that sensory information is “received” but not “perceived” under general anesthesia. Whether and to what extent the brain continues to process sensory inputs in a drug-induced unconscious state remain unclear. Methods 107 rats were randomly assigned to 12 different anesthesia and odor exposure paradigms. The immunoreactivities of the immediate early gene products c-Fos and Egr1 as neural activity markers were combined with behavioral tests to assess the integrity and relationship of cellular and behavioral responsiveness to olfactory stimuli under a surgical plane of ketamine-xylazine general anesthesia. Results The olfactory sensory processing centers can distinguish the presence or absence of experimental odorants even when animals were fully anesthetized. In the anesthetized state, the c-Fos immunoreactivity in the higher olfactory cortices revealed a difference between novel and familiar odorants similar to that seen in the awake state, suggesting that the anesthetized brain functions beyond simply receiving external stimulation. Re-exposing animals to odorants previously experienced only under anesthesia resulted in c-Fos immunoreactivity similar to that elicited by familiar odorants, indicating that previous registration had occurred in the anesthetized brain. Despite the “cellular memory,” however, odor discrimination and forced-choice odor-recognition tests showed absence of behavioral recall of the registered sensations, except for a longer latency in odor recognition tests. Conclusions Histologically distinguishable registration of sensory process continues to occur at cellular level under ketamine-xylazine general anesthesia despite the absence of behavioral recognition, consistent with the notion that general anesthesia causes disintegration of information processing without completely blocking cellular communications. PMID:24694846

  3. Parallel, but Dissociable, Processing in Discrete Corticostriatal Inputs Encodes Skill Learning.

    PubMed

    Kupferschmidt, David A; Juczewski, Konrad; Cui, Guohong; Johnson, Kari A; Lovinger, David M

    2017-10-11

    Changes in cortical and striatal function underlie the transition from novel actions to refined motor skills. How discrete, anatomically defined corticostriatal projections function in vivo to encode skill learning remains unclear. Using novel fiber photometry approaches to assess real-time activity of associative inputs from medial prefrontal cortex to dorsomedial striatum and sensorimotor inputs from motor cortex to dorsolateral striatum, we show that associative and sensorimotor inputs co-engage early in action learning and disengage in a dissociable manner as actions are refined. Disengagement of associative, but not sensorimotor, inputs predicts individual differences in subsequent skill learning. Divergent somatic and presynaptic engagement in both projections during early action learning suggests potential learning-related in vivo modulation of presynaptic corticostriatal function. These findings reveal parallel processing within associative and sensorimotor circuits that challenges and refines existing views of corticostriatal function and expose neuronal projection- and compartment-specific activity dynamics that encode and predict action learning. Published by Elsevier Inc.

  4. Exploration of maximum count rate capabilities for large-area photon counting arrays based on polycrystalline silicon thin-film transistors

    NASA Astrophysics Data System (ADS)

    Liang, Albert K.; Koniczek, Martin; Antonuk, Larry E.; El-Mohri, Youcef; Zhao, Qihua

    2016-03-01

    Pixelated photon counting detectors with energy discrimination capabilities are of increasing clinical interest for x-ray imaging. Such detectors, presently in clinical use for mammography and under development for breast tomosynthesis and spectral CT, usually employ in-pixel circuits based on crystalline silicon - a semiconductor material that is generally not well-suited for economic manufacture of large-area devices. One interesting alternative semiconductor is polycrystalline silicon (poly-Si), a thin-film technology capable of creating very large-area, monolithic devices. Similar to crystalline silicon, poly-Si allows implementation of the type of fast, complex, in-pixel circuitry required for photon counting - operating at processing speeds that are not possible with amorphous silicon (the material currently used for large-area, active matrix, flat-panel imagers). The pixel circuits of two-dimensional photon counting arrays are generally comprised of four stages: amplifier, comparator, clock generator and counter. The analog front-end (in particular, the amplifier) strongly influences performance and is therefore of interest to study. In this paper, the relationship between incident and output count rate of the analog front-end is explored under diagnostic imaging conditions for a promising poly-Si based design. The input to the amplifier is modeled in the time domain assuming a realistic input x-ray spectrum. Simulations of circuits based on poly-Si thin-film transistors are used to determine the resulting output count rate as a function of input count rate, energy discrimination threshold and operating conditions.
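
    The incident-versus-output count-rate relationship explored here is often summarized with textbook dead-time models; the sketch below evaluates the standard paralyzable and non-paralyzable forms for a hypothetical resolving time, and is not a simulation of the poly-Si front-end, whose response also depends on threshold and operating conditions.

      # Textbook dead-time summaries of output vs. incident count rate (hypothetical tau).
      import numpy as np

      tau = 2e-6                                   # hypothetical per-pixel dead time (s)
      incident = np.logspace(3, 7, 100)            # incident rate per pixel (counts/s)

      non_paralyzable = incident / (1.0 + incident * tau)
      paralyzable = incident * np.exp(-incident * tau)

      peak_rate_paralyzable = 1.0 / (np.e * tau)   # maximum recordable rate, ~1/(e*tau)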

  5. Auditory scene analysis in school-aged children with developmental language disorders

    PubMed Central

    Sussman, E.; Steinschneider, M.; Lee, W.; Lawson, K.

    2014-01-01

    Natural sound environments are dynamic, with overlapping acoustic input originating from simultaneously active sources. A key function of the auditory system is to integrate sensory inputs that belong together and segregate those that come from different sources. We hypothesized that this skill is impaired in individuals with phonological processing difficulties. There is considerable disagreement about whether phonological impairments observed in children with developmental language disorders can be attributed to specific linguistic deficits or to more general acoustic processing deficits. However, most tests of general auditory abilities have been conducted with a single set of sounds. We assessed the ability of school-aged children (7–15 years) to parse complex auditory non-speech input, and determined whether the presence of phonological processing impairments was associated with stream perception performance. A key finding was that children with language impairments did not show the same developmental trajectory for stream perception as typically developing children. In addition, children with language impairments required larger frequency separations between sounds to hear distinct streams compared to age-matched peers. Furthermore, phonological processing ability was a significant predictor of stream perception measures, but only in the older age groups. No such association was found in the youngest children. These results indicate that children with language impairments have difficulty parsing speech streams, or identifying individual sound events when there are competing sound sources. We conclude that language group differences may in part reflect fundamental maturational disparities in the analysis of complex auditory scenes. PMID:24548430

  6. How to Assess the Signature of the Data: Catchments and Aquifers as Input Processing Systems

    NASA Astrophysics Data System (ADS)

    Lischeid, G.

    2010-12-01

    It has been argued recently that hydrological models should not only mimic observed data, but should reproduce the signatures of the data appropriately. However, there is no consensus on how these signatures should be assessed. In general, hydrological models aim at predicting groundwater head dynamics or hydrograph response to input signals (e.g., groundwater recharge, effective rain), based on information about structural properties of the system, such as transmissivity fields, soil hydraulic conductivity, or size of the catchment water storage. That approach usually faces substantial spatial heterogeneities and nonlinear feedbacks. Here, an alternative approach is suggested for characterizing catchments or aquifers as input signal processing systems. The concept was developed for remote areas where direct anthropogenic effects (groundwater withdrawal, injection wells, etc.), plant water uptake and evaporation from groundwater and streams are negligible. Then, any increase of groundwater head or discharge is related to a corresponding input signal, i.e., groundwater recharge or effective rainfall. That signal propagates through the system and is increasingly attenuated and decelerated with increasing flowpath length. This attenuation differs from simple low-pass filtering; for example, different input signals propagate at different velocities, depending on rainfall intensity, antecedent soil moisture, etc. The new approach is based on a principal component analysis of time series of groundwater or lake water level, soil water content, or discharge at different sites. This information is used for assessing the functional properties of the system rather than its structural heterogeneity at different measurement sites, and to assess first order controls on its spatial patterns. Thus, hydrologic measurements provide a means of measuring the functional properties of the system. It is suggested that these be used as the signatures of the data. In a next step, model structure can be optimized, focusing on representing these signatures. Furthermore, even the unknown input signal can be assessed, making the catchment or aquifer a giant effective rain sampler. Examples will be presented including heterogeneous and sparse data sets, and an extension to a more complex system with various production wells of a large waterworks.
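
    The core computational step described here, a principal component analysis of multi-site time series, can be illustrated with a minimal numpy sketch. The synthetic "heads" matrix and the damping constants are hypothetical stand-ins for measured groundwater head or discharge series; the paper's actual data handling is not reproduced.

```python
import numpy as np

# Rows: observation sites (wells, gauges); columns: time steps.
# 'heads' is synthetic here; in practice it would hold measured groundwater
# head, discharge, or soil moisture time series.
rng = np.random.default_rng(1)
t = np.linspace(0, 10, 500)
signal = np.sin(2 * np.pi * 0.3 * t)                 # common input signal
heads = np.array([np.convolve(signal, np.exp(-t / tau), mode="same")
                  + 0.05 * rng.standard_normal(t.size)
                  for tau in [0.2, 0.5, 1.0, 2.0, 4.0]])  # increasing damping

# Principal component analysis via SVD of the centred data matrix
X = heads - heads.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(X, full_matrices=False)

explained = s**2 / np.sum(s**2)
print("variance explained by first two components:", explained[:2].round(3))
# Loadings of each site on the leading components characterize how strongly
# the common input signal is attenuated/delayed along the flow path.
print("site loadings on PC1:", U[:, 0].round(3))
```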

  7. Vegetation pattern formation in a fog-dependent ecosystem.

    PubMed

    Borthagaray, Ana I; Fuentes, Miguel A; Marquet, Pablo A

    2010-07-07

    Vegetation pattern formation is a striking characteristic of several water-limited ecosystems around the world. Typically, such patterns have been described in runoff-based ecosystems, emphasizing local interactions between water, biomass interception, growth and dispersal. Here, we show that this situation is by no means general, as banded patterns in vegetation can emerge in areas without rainfall and in plants without functional roots (the bromeliad Tillandsia landbeckii), where fog is the principal source of moisture. We show that a simple model based on the advection of fog-water by wind and its interception by the vegetation can reproduce banded patterns which agree with empirical patterns observed in the Coastal Atacama Desert. Our model predicts how the parameters may affect the conditions to form the banded pattern, showing a transition from a uniform vegetated state at high water input or terrain slope, through intermediate banded states, to a desert state. Moreover, the model predicts that the pattern wavelength is a decreasing non-linear function of fog-water input and slope, and an increasing function of plant loss and fog-water flow speed. Finally, we show that the vegetation density is increased by the formation of the regular pattern compared to the density expected from the spatially homogeneous model, emphasizing the importance of self-organization in arid ecosystems. (c) 2010 Elsevier Ltd. All rights reserved.
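
    The authors' model equations are not given in the abstract, so the sketch below is only a generic advection-interception caricature in the same spirit: fog water advected downwind is intercepted by biomass, which grows on the intercepted water. All parameter values and the discretization are assumptions made for illustration, and whether bands emerge depends on the chosen parameters.

```python
import numpy as np

# Illustrative 1-D advection-interception model (not the authors' exact
# equations): fog water w is blown downwind at speed v and intercepted by
# biomass b, which grows on the intercepted water and dies at rate m.
nx, dx, dt = 200, 1.0, 0.01
v, w_in, m, c, k, d = 2.0, 1.0, 0.4, 1.0, 1.0, 0.5

rng = np.random.default_rng(2)
w = np.full(nx, w_in)
b = 0.1 + 0.01 * rng.random(nx)   # small random initial biomass

for step in range(100_000):
    uptake = k * w * b
    # upwind advection of fog water, with constant inflow at the boundary
    w_up = np.empty_like(w)
    w_up[0] = w_in
    w_up[1:] = w[:-1]
    w += dt * (-v * (w - w_up) / dx - uptake)
    # biomass: growth on intercepted water, mortality, lateral spread
    lap = np.roll(b, 1) - 2 * b + np.roll(b, -1)
    b += dt * (c * uptake - m * b + d * lap / dx**2)
    np.clip(w, 0, None, out=w)
    np.clip(b, 0, None, out=b)

print("mean biomass:", b.mean().round(3))
print("biomass profile (coarse):", b[::20].round(2))
```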

  8. Implementation of a Tabulated Failure Model Into a Generalized Composite Material Model Suitable for Use in Impact Problems

    NASA Technical Reports Server (NTRS)

    Goldberg, Robert K.; Carney, Kelly S.; Dubois, Paul; Hoffarth, Canio; Khaled, Bilal; Shyamsunder, Loukham; Rajan, Subramaniam; Blankenhorn, Gunther

    2017-01-01

    The need for accurate material models to simulate the deformation, damage and failure of polymer matrix composites under impact conditions is becoming critical as these materials are gaining increased use in the aerospace and automotive communities. The aerospace community has identified several key capabilities which are currently lacking in the available material models in commercial transient dynamic finite element codes. To attempt to improve the predictive capability of composite impact simulations, a next generation material model is being developed for incorporation within the commercial transient dynamic finite element code LS-DYNA. The material model, which incorporates plasticity, damage and failure, utilizes experimentally based tabulated input to define the evolution of plasticity and damage and the initiation of failure as opposed to specifying discrete input parameters such as modulus and strength. The plasticity portion of the orthotropic, three-dimensional, macroscopic composite constitutive model is based on an extension of the Tsai-Wu composite failure model into a generalized yield function with a non-associative flow rule. For the damage model, a strain equivalent formulation is used to allow for the uncoupling of the deformation and damage analyses. For the failure model, a tabulated approach is utilized in which a stress or strain based invariant is defined as a function of the location of the current stress state in stress space to define the initiation of failure. Failure surfaces can be defined with any arbitrary shape, unlike traditional failure models where the mathematical functions used to define the failure surface impose a specific shape on the failure surface. In the current paper, the complete development of the failure model is described and the generation of a tabulated failure surface for a representative composite material is discussed.
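
    As a loose illustration of what "tabulated" failure input means in practice (and not the LS-DYNA material model described in the paper), the sketch below looks up an allowable equivalent strain from a table indexed by the direction of the current stress state in a two-dimensional stress plane; the table values and the single-angle parameterization are invented for illustration.

```python
import numpy as np

# Tabulated failure surface: allowable equivalent strain as a function of
# the direction of the current stress state in the (sigma_11, sigma_22)
# plane.  The table values below are made up for illustration only.
theta_table = np.radians([0, 45, 90, 135, 180, 225, 270, 315, 360])
eps_fail_table = np.array([0.020, 0.012, 0.018, 0.010,
                           0.035, 0.015, 0.030, 0.014, 0.020])

def failure_initiated(sigma11, sigma22, eps_eq):
    """Return (failed, allowable): whether the equivalent strain exceeds the
    tabulated allowable value for the current stress direction."""
    theta = np.arctan2(sigma22, sigma11) % (2 * np.pi)
    eps_allow = np.interp(theta, theta_table, eps_fail_table)
    return eps_eq >= eps_allow, eps_allow

for state in [(100.0, 10.0, 0.015), (-50.0, -50.0, 0.015), (80.0, -80.0, 0.012)]:
    failed, allow = failure_initiated(*state)
    print(state, "-> allowable", round(float(allow), 4), "failed:", failed)
```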

  9. Monaural and binaural contributions to interaural-level-difference sensitivity in human auditory cortex.

    PubMed

    Stecker, G Christopher; McLaughlin, Susan A; Higgins, Nathan C

    2015-10-15

    Whole-brain functional magnetic resonance imaging was used to measure blood-oxygenation-level-dependent (BOLD) responses in human auditory cortex (AC) to sounds with intensity varying independently in the left and right ears. Echoplanar images were acquired at 3 Tesla with sparse image acquisition once per 12-second block of sound stimulation. Combinations of binaural intensity and stimulus presentation rate were varied between blocks, and selected to allow measurement of response-intensity functions in three configurations: monaural 55-85 dB SPL, binaural 55-85 dB SPL with intensity equal in both ears, and binaural with average binaural level of 70 dB SPL and interaural level differences (ILD) ranging ±30 dB (i.e., favoring the left or right ear). Comparison of response functions equated for contralateral intensity revealed that BOLD-response magnitudes (1) generally increased with contralateral intensity, consistent with positive drive of the BOLD response by the contralateral ear, (2) were larger for contralateral monaural stimulation than for binaural stimulation, consistent with negative effects (e.g., inhibition) of ipsilateral input, which were strongest in the left hemisphere, and (3) also increased with ipsilateral intensity when contralateral input was weak, consistent with additional, positive, effects of ipsilateral stimulation. Hemispheric asymmetries in the spatial extent and overall magnitude of BOLD responses were generally consistent with previous studies demonstrating greater bilaterality of responses in the right hemisphere and stricter contralaterality in the left hemisphere. Finally, comparison of responses to fast (40/s) and slow (5/s) stimulus presentation rates revealed significant rate-dependent adaptation of the BOLD response that varied across ILD values. Copyright © 2015. Published by Elsevier Inc.

  10. Neuro-fuzzy and neural network techniques for forecasting sea level in Darwin Harbor, Australia

    NASA Astrophysics Data System (ADS)

    Karimi, Sepideh; Kisi, Ozgur; Shiri, Jalal; Makarynskyy, Oleg

    2013-03-01

    Accurate predictions of sea level with different forecast horizons are important for coastal and ocean engineering applications, as well as in land drainage and reclamation studies. The methodology of tidal harmonic analysis, which is generally used for obtaining a mathematical description of the tides, is data demanding, requiring processing of tidal observations collected over several years. In the present study, hourly sea levels for Darwin Harbor, Australia were predicted using two different data-driven techniques, adaptive neuro-fuzzy inference system (ANFIS) and artificial neural network (ANN). Multiple linear regression (MLR) was used for selecting the optimal input combinations (lag times) of hourly sea level. The optimal input combination was found to comprise the current sea level together with the five previous hourly values. For the ANFIS models, five different membership functions, namely triangular, trapezoidal, generalized bell, Gaussian and two-sided Gaussian, were tested and employed for predicting sea level for the next 1 h, 24 h, 48 h and 72 h. The ANN models were trained using three different algorithms, namely, Levenberg-Marquardt, conjugate gradient and gradient descent. Predictions of the optimal ANFIS and ANN models were compared with those of the optimal auto-regressive moving average (ARMA) models. The coefficient of determination, root mean square error and variance account statistics were used as comparison criteria. The obtained results indicated that the triangular membership function was optimal for predictions with the ANFIS models, while the adaptive learning rate and Levenberg-Marquardt algorithms were most suitable for training the ANN models. Consequently, ANFIS and ANN models gave similar forecasts and performed better than the ARMA models developed for the same purpose, for all the prediction intervals.
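
    The MLR-based lag-selection step can be illustrated with a small numpy sketch: build a design matrix from the current and five previous hourly levels and fit a linear model by least squares. The synthetic tidal series and the lag count are assumptions; the ANFIS and ANN models themselves are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 2000
t = np.arange(n)
# Synthetic hourly "sea level": two tidal constituents plus noise
level = (1.2 * np.sin(2 * np.pi * t / 12.42) +
         0.4 * np.sin(2 * np.pi * t / 24.0) +
         0.05 * rng.standard_normal(n))

def lagged_design(series, n_lags, horizon):
    """Rows: [x(t), x(t-1), ..., x(t-n_lags+1)]; target: x(t+horizon)."""
    rows, targets = [], []
    for i in range(n_lags - 1, len(series) - horizon):
        rows.append(series[i - n_lags + 1:i + 1][::-1])
        targets.append(series[i + horizon])
    return np.asarray(rows), np.asarray(targets)

X, y = lagged_design(level, n_lags=6, horizon=1)   # current + 5 previous values
X1 = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(X1, y, rcond=None)

pred = X1 @ coef
rmse = np.sqrt(np.mean((pred - y) ** 2))
print("MLR coefficients (intercept, lag0..lag5):", coef.round(3))
print("1-hour-ahead RMSE:", rmse.round(4))
```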

  11. Solar electric propulsion thrust subsystem development

    NASA Technical Reports Server (NTRS)

    Masek, T. D.

    1973-01-01

    The Solar Electric Propulsion System developed under this program was designed to demonstrate all the thrust subsystem functions needed on an unmanned planetary vehicle. The demonstration included operation of the basic elements, power matching input and output voltage regulation, three-axis thrust vector control, subsystem automatic control including failure detection and correction capability (using a PDP-11 computer), operation of critical elements in thermal-vacuum-, zero-gravity-type propellant storage, and data outputs from all subsystem elements. The subsystem elements, functions, unique features, and test setup are described. General features and capabilities of the test-support data system are also presented. The test program culminated in a 1500-h computer-controlled, system-functional demonstration. This included simultaneous operation of two thruster/power conditioner sets. The results of this testing phase satisfied all the program goals.

  12. Least dissipation cost as a design principle for robustness and function of cellular networks

    NASA Astrophysics Data System (ADS)

    Han, Bo; Wang, Jin

    2008-03-01

    From a study of the budding yeast cell cycle, we found that the cellular network evolves to have the least cost for realizing its biological function. We quantify the cost in terms of the dissipation or heat loss characterized through the steady-state properties: the underlying landscape and the associated flux. We found that the dissipation cost is intimately related to the stability and robustness of the network. With the least dissipation cost, the network becomes most stable and robust under mutations and perturbations on the sharpness of the response from input to output as well as self-degradations. The least dissipation cost may provide a general design principle for the cellular network to survive from the evolution and realize the biological function.

  13. Utilizing Hierarchical Clustering to improve Efficiency of Self-Organizing Feature Map to Identify Hydrological Homogeneous Regions

    NASA Astrophysics Data System (ADS)

    Farsadnia, Farhad; Ghahreman, Bijan

    2016-04-01

    Identification of hydrologically homogeneous groups is considered both fundamental and applied research in hydrology. Clustering methods are among the conventional approaches for delineating hydrologically homogeneous regions. Recently, the Self-Organizing feature Map (SOM) method has been applied in some studies. However, the main problem of this method is the interpretation of its output map. Therefore, SOM is used as input to other clustering algorithms. The aim of this study is to apply a two-level Self-Organizing feature map and the Ward hierarchical clustering method to determine the hydrologically homogeneous regions in the North and Razavi Khorasan provinces. First, principal component analysis was used to reduce the dimension of the SOM input matrix; the SOM was then used to form a two-dimensional feature map. To determine homogeneous regions for flood frequency analysis, the SOM output nodes were used as input into the Ward method. Generally, the regions identified by clustering algorithms are not statistically homogeneous. Consequently, they have to be adjusted to improve their homogeneity. After adjusting the regions for homogeneity using L-moment tests, five hydrologically homogeneous regions were identified. Finally, adjusted regions were created by the two-level SOM, and the best regional distribution function and associated parameters were selected by the L-moment approach. The results showed that the combination of self-organizing maps and Ward hierarchical clustering with principal components as input is more effective in achieving hydrologically homogeneous regions than the hierarchical method alone with either principal components or standardized inputs.
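
    A minimal sketch of the two-level idea, assuming a tiny hand-rolled SOM and SciPy's Ward linkage (and omitting the L-moment homogeneity adjustment), is given below; the synthetic catchment attributes and the grid size are illustrative only.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(4)
# Synthetic catchment attributes (e.g., PCA scores of physiographic variables)
sites = np.vstack([rng.normal(loc, 0.3, size=(20, 3))
                   for loc in ([0, 0, 0], [2, 2, 0], [0, 2, 2])])

# --- first level: a small self-organizing map --------------------------
grid_xy = np.array([(i, j) for i in range(4) for j in range(4)], dtype=float)
weights = rng.random((len(grid_xy), sites.shape[1]))

for epoch in range(2000):
    lr = 0.5 * (1 - epoch / 2000)                 # decaying learning rate
    sigma = 2.0 * (1 - epoch / 2000) + 0.5        # decaying neighbourhood
    x = sites[rng.integers(len(sites))]
    bmu = np.argmin(np.sum((weights - x) ** 2, axis=1))      # best match
    h = np.exp(-np.sum((grid_xy - grid_xy[bmu]) ** 2, axis=1) / (2 * sigma**2))
    weights += lr * h[:, None] * (x - weights)

# --- second level: Ward clustering of the SOM nodes --------------------
Z = linkage(weights, method="ward")
node_labels = fcluster(Z, t=3, criterion="maxclust")

# Map each site to the cluster of its best-matching SOM node
site_bmus = np.argmin(((sites[:, None, :] - weights[None]) ** 2).sum(-1), axis=1)
site_regions = node_labels[site_bmus]
print("sites per candidate homogeneous region:", np.bincount(site_regions)[1:])
```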

  14. AOIPS 3 user's guide. Volume 2: Program descriptions

    NASA Technical Reports Server (NTRS)

    Schotz, Steve S.; Piper, Thomas S.; Negri, Andrew J.

    1990-01-01

    The Atmospheric and Oceanographic Information Processing System (AOIPS) 3 is the version of the AOIPS software as of April 1989. The AOIPS software was developed jointly by the Goddard Space Flight Center and General Sciences Corporation. A detailed description of every AOIPS program is presented. It is intended to serve as a reference for such items as program functionality, program operational instructions, and input/output variable descriptions. Program descriptions are derived from the on-line help information. Each program description is divided into two sections. The functional description section describes the purpose of the program and contains any pertinent operational information. The program description section lists the program variables as they appear on-line, and describes them in detail.

  15. Radial Basis Function Neural Network Application to Power System Restoration Studies

    PubMed Central

    Sadeghkhani, Iman; Ketabi, Abbas; Feuillet, Rene

    2012-01-01

    One of the most important issues in power system restoration is overvoltages caused by transformer switching. These overvoltages might damage some equipment and delay power system restoration. This paper presents a radial basis function neural network (RBFNN) to study transformer switching overvoltages. To achieve good generalization capability for the developed RBFNN, equivalent parameters of the network are added to the RBFNN inputs. The developed RBFNN is trained with the worst-case scenario of switching angle and remanent flux and tested for typical cases. The simulated results for a portion of the 39-bus New England test system show that the proposed technique can estimate the peak values and duration of switching overvoltages with good accuracy. PMID:22792093
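
    A minimal sketch of a radial basis function network with fixed centers and a least-squares output layer is shown below; the synthetic inputs standing in for switching angle, remanent flux and network equivalent parameters, and the chosen width and number of centers, are assumptions rather than the paper's trained model.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic training data: inputs could stand for switching angle, remanent
# flux and equivalent network parameters; target mimics a peak overvoltage.
def target(X):
    return 1.0 + 0.8 * np.sin(2 * np.pi * X[:, 0]) * X[:, 1] + 0.3 * X[:, 2] ** 2

X = rng.uniform(0, 1, size=(300, 4))
y = target(X)

def rbf_design(X, centers, width):
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width ** 2))

# Fixed centers drawn from the training inputs; output weights by least squares
centers = X[rng.choice(len(X), size=40, replace=False)]
width = 0.3
Phi = np.column_stack([np.ones(len(X)), rbf_design(X, centers, width)])
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)

# Evaluate on held-out points
X_test = rng.uniform(0, 1, size=(50, 4))
Phi_t = np.column_stack([np.ones(len(X_test)), rbf_design(X_test, centers, width)])
print("test RMSE:", np.sqrt(np.mean((Phi_t @ w - target(X_test)) ** 2)).round(4))
```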

  16. The role of domain-general cognitive control in language comprehension

    PubMed Central

    Fedorenko, Evelina

    2014-01-01

    What role does domain-general cognitive control play in understanding linguistic input? Although much evidence has suggested that domain-general cognitive control and working memory resources are sometimes recruited during language comprehension, many aspects of this relationship remain elusive. For example, how frequently do cognitive control mechanisms get engaged when we understand language? And is this engagement necessary for successful comprehension? I here (a) review recent brain imaging evidence for the neural separability of the brain regions that support high-level linguistic processing vs. those that support domain-general cognitive control abilities; (b) define the space of possibilities for the relationship between these sets of brain regions; and (c) review the available evidence that constrains these possibilities to some extent. I argue that we should stop asking whether domain-general cognitive control mechanisms play a role in language comprehension, and instead focus on characterizing the division of labor between the cognitive control brain regions and the more functionally specialized language regions. PMID:24803909

  17. Improving ECG Classification Accuracy Using an Ensemble of Neural Network Modules

    PubMed Central

    Javadi, Mehrdad; Ebrahimpour, Reza; Sajedin, Atena; Faridi, Soheil; Zakernejad, Shokoufeh

    2011-01-01

    This paper illustrates the use of a combined neural network model based on the Stacked Generalization method for classification of electrocardiogram (ECG) beats. In the conventional Stacked Generalization method, the combiner learns to map the base classifiers' outputs to the target data. We claim that adding the input pattern to the base classifiers' outputs helps the combiner obtain knowledge about the input space and, as a result, perform better on the same task. Experimental results support our claim that this additional knowledge of the input space improves the performance of the proposed method, which is called Modified Stacked Generalization. In particular, for classification of 14966 ECG beats that were not previously seen during the training phase, the Modified Stacked Generalization method reduced the error rate by 12.41% in comparison with the best of ten popular classifier fusion methods including Max, Min, Average, Product, Majority Voting, Borda Count, Decision Templates, Weighted Averaging based on Particle Swarm Optimization and Stacked Generalization. PMID:22046232
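
    The central idea, giving the combiner the original input pattern in addition to the base classifiers' outputs, can be sketched with scikit-learn on synthetic data (the ECG beats and the ten fusion methods of the paper are not reproduced); the choice of base classifiers here is arbitrary.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_predict, train_test_split
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=2000, n_features=20, n_informative=8,
                           n_classes=3, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

bases = [DecisionTreeClassifier(max_depth=5, random_state=0),
         KNeighborsClassifier(5),
         LogisticRegression(max_iter=1000)]

# Out-of-fold base-classifier outputs for training the combiner
meta_tr = np.hstack([cross_val_predict(b, X_tr, y_tr, cv=5, method="predict_proba")
                     for b in bases])
meta_te = np.hstack([b.fit(X_tr, y_tr).predict_proba(X_te) for b in bases])

# Conventional stacking: combiner sees only the base outputs
plain = LogisticRegression(max_iter=1000).fit(meta_tr, y_tr)
# "Modified" stacking: combiner also sees the original input pattern
mod = LogisticRegression(max_iter=1000).fit(np.hstack([meta_tr, X_tr]), y_tr)

print("stacked generalization:         ",
      round(accuracy_score(y_te, plain.predict(meta_te)), 3))
print("modified stacked generalization:",
      round(accuracy_score(y_te, mod.predict(np.hstack([meta_te, X_te]))), 3))
```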

  18. Proposed minimum reporting standards for chemical analysis Chemical Analysis Working Group (CAWG) Metabolomics Standards Initiative (MSI)

    PubMed Central

    Amberg, Alexander; Barrett, Dave; Beale, Michael H.; Beger, Richard; Daykin, Clare A.; Fan, Teresa W.-M.; Fiehn, Oliver; Goodacre, Royston; Griffin, Julian L.; Hankemeier, Thomas; Hardy, Nigel; Harnly, James; Higashi, Richard; Kopka, Joachim; Lane, Andrew N.; Lindon, John C.; Marriott, Philip; Nicholls, Andrew W.; Reily, Michael D.; Thaden, John J.; Viant, Mark R.

    2013-01-01

    There is a general consensus that supports the need for standardized reporting of metadata or information describing large-scale metabolomics and other functional genomics data sets. Reporting of standard metadata provides a biological and empirical context for the data, facilitates experimental replication, and enables the re-interrogation and comparison of data by others. Accordingly, the Metabolomics Standards Initiative is building a general consensus concerning the minimum reporting standards for metabolomics experiments; the Chemical Analysis Working Group (CAWG) is a member of this community effort. This article proposes the minimum reporting standards related to the chemical analysis aspects of metabolomics experiments including: sample preparation, experimental analysis, quality control, metabolite identification, and data pre-processing. These minimum standards currently focus mostly upon mass spectrometry and nuclear magnetic resonance spectroscopy due to the popularity of these techniques in metabolomics. However, additional input concerning other techniques is welcomed and can be provided via the CAWG on-line discussion forum at http://msi-workgroups.sourceforge.net/ or by e-mail to Msi-workgroups-feedback@lists.sourceforge.net. Further, community input related to this document can also be provided via this electronic forum. PMID:24039616

  19. Reconstruction of Twist Torque in Main Parachute Risers

    NASA Technical Reports Server (NTRS)

    Day, Joshua D.

    2015-01-01

    The reconstruction of twist torque in the Main Parachute Risers of the Capsule Parachute Assembly System (CPAS) has been successfully used to validate the conservative twist torque equations of the CPAS Model Memo. Reconstruction of basic, one-degree-of-freedom drop tests was used to create a functional process for the evaluation of more complex, rigid-body simulations. The roll, pitch, and yaw of the body, the fly-out angles of the parachutes, and the location of the parachutes relative to the body are inputs to the torque simulation. The data collected by the Inertial Measurement Unit (IMU) were used to calculate the true torque. The simulation then used photogrammetric and IMU data as inputs into the Model Memo equations. The results were then compared to the true torque results to validate the Model Memo equations. The Model Memo parameters were based on steel risers, and the parameters will need to be re-evaluated for different materials. Photogrammetric data were found to be more accurate than the inertial data in accounting for the relative rotation between payload and cluster. The Model Memo equations were generally a good match and, when they did not match, were generally conservative.

  20. Learner Involvement and Comprehensible Input.

    ERIC Educational Resources Information Center

    Tsui, Amy B. M.

    1991-01-01

    Studies on comprehensible input generally emphasize how input is made comprehensible to the nonnative speaker by examining native speaker speech or teacher talk in the classroom. This paper uses Hong Kong secondary school data to show that only when modification devices involve learner participation do they serve as indicators of comprehensible…

  1. New Uses for Sensitivity Analysis: How Different Movement Tasks Effect Limb Model Parameter Sensitivity

    NASA Technical Reports Server (NTRS)

    Winters, J. M.; Stark, L.

    1984-01-01

    Original results for a newly developed eight-order nonlinear limb antagonistic muscle model of elbow flexion and extension are presented. A wider variety of sensitivity analysis techniques are used and a systematic protocol is established that shows how the different methods can be used efficiently to complement one another for maximum insight into model sensitivity. It is explicitly shown how the sensitivity of output behaviors to model parameters is a function of the controller input sequence, i.e., of the movement task. When the task is changed (for instance, from an input sequence that results in the usual fast movement task to a slower movement that may also involve external loading, etc.) the set of parameters with high sensitivity will in general also change. Such task-specific use of sensitivity analysis techniques identifies the set of parameters most important for a given task, and even suggests task-specific model reduction possibilities.

  2. Effect of the Potential Shape on the Stochastic Resonance Processes

    NASA Astrophysics Data System (ADS)

    Kenmoé, G. Djuidjé; Ngouongo, Y. J. Wadop; Kofané, T. C.

    2015-10-01

    The stochastic resonance (SR) induced by a periodic signal and white noise in a periodic nonsinusoidal potential is investigated. The phenomenon is studied as a function of the friction coefficient as well as the shape of the potential. This is done through an investigation of the hysteresis loop area, which is equivalent to the input energy lost by the system to the environment per period of the external force. SR is evident in some range of the shape parameter of the potential, but cannot be observed outside that range. In particular, variation of the potential shape significantly and nontrivially affects the height of the potential barrier in the Kramers rate as well as the occurrence of SR. The results show a crucial dependence of the temperature at which SR occurs on the shape of the potential. It is noted that the maximum of the input energy generally decreases when the friction coefficient is increased.

  3. ELSI: A unified software interface for Kohn–Sham electronic structure solvers

    DOE PAGES

    Yu, Victor Wen-zhe; Corsetti, Fabiano; Garcia, Alberto; ...

    2017-09-15

    Solving the electronic structure from a generalized or standard eigenproblem is often the bottleneck in large scale calculations based on Kohn-Sham density-functional theory. This problem must be addressed by essentially all current electronic structure codes, based on similar matrix expressions, and by high-performance computation. We here present a unified software interface, ELSI, to access different strategies that address the Kohn-Sham eigenvalue problem. Currently supported algorithms include the dense generalized eigensolver library ELPA, the orbital minimization method implemented in libOMM, and the pole expansion and selected inversion (PEXSI) approach with lower computational complexity for semilocal density functionals. The ELSI interface aims to simplify the implementation and optimal use of the different strategies, by offering (a) a unified software framework designed for the electronic structure solvers in Kohn-Sham density-functional theory; (b) reasonable default parameters for a chosen solver; (c) automatic conversion between input and internal working matrix formats, and in the future (d) recommendation of the optimal solver depending on the specific problem. As a result, comparative benchmarks are shown for system sizes up to 11,520 atoms (172,800 basis functions) on distributed memory supercomputing architectures.

  4. ELSI: A unified software interface for Kohn-Sham electronic structure solvers

    NASA Astrophysics Data System (ADS)

    Yu, Victor Wen-zhe; Corsetti, Fabiano; García, Alberto; Huhn, William P.; Jacquelin, Mathias; Jia, Weile; Lange, Björn; Lin, Lin; Lu, Jianfeng; Mi, Wenhui; Seifitokaldani, Ali; Vázquez-Mayagoitia, Álvaro; Yang, Chao; Yang, Haizhao; Blum, Volker

    2018-01-01

    Solving the electronic structure from a generalized or standard eigenproblem is often the bottleneck in large scale calculations based on Kohn-Sham density-functional theory. This problem must be addressed by essentially all current electronic structure codes, based on similar matrix expressions, and by high-performance computation. We here present a unified software interface, ELSI, to access different strategies that address the Kohn-Sham eigenvalue problem. Currently supported algorithms include the dense generalized eigensolver library ELPA, the orbital minimization method implemented in libOMM, and the pole expansion and selected inversion (PEXSI) approach with lower computational complexity for semilocal density functionals. The ELSI interface aims to simplify the implementation and optimal use of the different strategies, by offering (a) a unified software framework designed for the electronic structure solvers in Kohn-Sham density-functional theory; (b) reasonable default parameters for a chosen solver; (c) automatic conversion between input and internal working matrix formats, and in the future (d) recommendation of the optimal solver depending on the specific problem. Comparative benchmarks are shown for system sizes up to 11,520 atoms (172,800 basis functions) on distributed memory supercomputing architectures.
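
    ELSI itself is a library interface to large-scale solvers such as ELPA, but the underlying generalized Kohn-Sham eigenproblem H c = ε S c can be illustrated at toy scale with SciPy; the random symmetric H and positive-definite S below are stand-ins, not matrices from any electronic structure code.

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(6)
n = 400  # number of basis functions (tiny compared with production runs)

# Stand-ins for the Hamiltonian H (symmetric) and the overlap matrix S
# (symmetric positive definite) in a non-orthogonal basis.
A = rng.standard_normal((n, n))
H = 0.5 * (A + A.T)
B = rng.standard_normal((n, n))
S = B @ B.T + n * np.eye(n)

# Generalized eigenproblem  H c = eps S c  -- the step that dense solvers
# such as ELPA handle at scale.
eps, C = eigh(H, S)

# Check that the first eigenpair satisfies the generalized eigenvalue equation
residual = np.linalg.norm(H @ C[:, 0] - eps[0] * S @ C[:, 0])
print("lowest eigenvalues:", eps[:5].round(4))
print("residual of first eigenpair:", residual)
```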

  5. ELSI: A unified software interface for Kohn–Sham electronic structure solvers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu, Victor Wen-zhe; Corsetti, Fabiano; Garcia, Alberto

    Solving the electronic structure from a generalized or standard eigenproblem is often the bottleneck in large scale calculations based on Kohn-Sham density-functional theory. This problem must be addressed by essentially all current electronic structure codes, based on similar matrix expressions, and by high-performance computation. We here present a unified software interface, ELSI, to access different strategies that address the Kohn-Sham eigenvalue problem. Currently supported algorithms include the dense generalized eigensolver library ELPA, the orbital minimization method implemented in libOMM, and the pole expansion and selected inversion (PEXSI) approach with lower computational complexity for semilocal density functionals. The ELSI interface aims to simplify the implementation and optimal use of the different strategies, by offering (a) a unified software framework designed for the electronic structure solvers in Kohn-Sham density-functional theory; (b) reasonable default parameters for a chosen solver; (c) automatic conversion between input and internal working matrix formats, and in the future (d) recommendation of the optimal solver depending on the specific problem. As a result, comparative benchmarks are shown for system sizes up to 11,520 atoms (172,800 basis functions) on distributed memory supercomputing architectures.

  6. Ex vivo dissection of optogenetically activated mPFC and hippocampal inputs to neurons in the basolateral amygdala: implications for fear and emotional memory

    PubMed Central

    Hübner, Cora; Bosch, Daniel; Gall, Andrea; Lüthi, Andreas; Ehrlich, Ingrid

    2014-01-01

    Many lines of evidence suggest that a reciprocally interconnected network comprising the amygdala, ventral hippocampus (vHC), and medial prefrontal cortex (mPFC) participates in different aspects of the acquisition and extinction of conditioned fear responses and fear behavior. This could at least in part be mediated by direct connections from mPFC or vHC to amygdala to control amygdala activity and output. However, currently the interactions between mPFC and vHC afferents and their specific targets in the amygdala are still poorly understood. Here, we use an ex-vivo optogenetic approach to dissect synaptic properties of inputs from mPFC and vHC to defined neuronal populations in the basal amygdala (BA), the area that we identify as a major target of these projections. We find that BA principal neurons (PNs) and local BA interneurons (INs) receive monosynaptic excitatory inputs from mPFC and vHC. In addition, both these inputs also recruit GABAergic feedforward inhibition in a substantial fraction of PNs, in some neurons this also comprises a slow GABAB-component. Amongst the innervated PNs we identify neurons that project back to subregions of the mPFC, indicating a loop between neurons in mPFC and BA, and a pathway from vHC to mPFC via BA. Interestingly, mPFC inputs also recruit feedforward inhibition in a fraction of INs, suggesting that these inputs can activate dis-inhibitory circuits in the BA. A general feature of both mPFC and vHC inputs to local INs is that excitatory inputs display faster rise and decay kinetics than in PNs, which would enable temporally precise signaling. However, mPFC and vHC inputs to both PNs and INs differ in their presynaptic release properties, in that vHC inputs are more depressing. In summary, our data describe novel wiring, and features of synaptic connections from mPFC and vHC to amygdala that could help to interpret functions of these interconnected brain areas at the network level. PMID:24634648

  7. Aircraft signal definition for flight safety system monitoring system

    NASA Technical Reports Server (NTRS)

    Gibbs, Michael (Inventor); Omen, Debi Van (Inventor)

    2003-01-01

    A system and method compares combinations of vehicle variable values against known combinations of potentially dangerous vehicle input signal values. Alarms and error messages are selectively generated based on such comparisons. An aircraft signal definition is provided to enable definition and monitoring of sets of aircraft input signals to customize such signals for different aircraft. The input signals are compared against known combinations of potentially dangerous values by operational software and hardware of a monitoring function. The aircraft signal definition is created using a text editor or custom application. A compiler receives the aircraft signal definition to generate a binary file that comprises the definition of all the input signals used by the monitoring function. The binary file also contains logic that specifies how the inputs are to be interpreted. The file is then loaded into the monitor function, where it is validated and used to continuously monitor the condition of the aircraft.

  8. Genetic inhibition of neurotransmission reveals role of glutamatergic input to dopamine neurons in high-effort behavior.

    PubMed

    Hutchison, M A; Gu, X; Adrover, M F; Lee, M R; Hnasko, T S; Alvarez, V A; Lu, W

    2018-05-01

    Midbrain dopamine neurons are crucial for many behavioral and cognitive functions. As the major excitatory input, glutamatergic afferents are important for control of the activity and plasticity of dopamine neurons. However, the role of glutamatergic input as a whole onto dopamine neurons remains unclear. Here we developed a mouse line in which glutamatergic inputs onto dopamine neurons are specifically impaired, and utilized this genetic model to directly test the role of glutamatergic inputs in dopamine-related functions. We found that while motor coordination and reward learning were largely unchanged, these animals showed prominent deficits in effort-related behavioral tasks. These results provide genetic evidence that glutamatergic transmission onto dopaminergic neurons underlies incentive motivation, a willingness to exert high levels of effort to obtain reinforcers, and have important implications for understanding the normal function of the midbrain dopamine system.

  9. Estimation of arterial input by a noninvasive image derived method in brain H2 15O PET study: confirmation of arterial location using MR angiography

    NASA Astrophysics Data System (ADS)

    Muinul Islam, Muhammad; Tsujikawa, Tetsuya; Mori, Tetsuya; Kiyono, Yasushi; Okazawa, Hidehiko

    2017-06-01

    A noninvasive method to estimate the input function directly from H2 15O brain PET data for measurement of cerebral blood flow (CBF) was proposed in this study. The image-derived input function (IDIF) method extracted the time-activity curves (TAC) of the major cerebral arteries at the skull base from the dynamic PET data. The extracted primordial IDIF showed almost the same radioactivity as the arterial input function (AIF) from sampled blood at the plateau part in the later phase, but significantly lower radioactivity in the initial arterial phase compared with that of the AIF-TAC. To correct the initial part of the IDIF, a dispersion function was applied and two constants for the correction were determined by fitting with the individual AIF in 15 patients with unilateral arterial steno-occlusive lesions. The areas under the curves (AUC) from the two input functions showed good agreement, with a mean AUC(IDIF)/AUC(AIF) ratio of 0.92 ± 0.09. The final products of CBF and arterial-to-capillary vascular volume (V0) obtained from the IDIF and AIF showed no difference and had high correlation coefficients.
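
    The paper's specific dispersion function and fitting procedure are not given in the abstract; the sketch below uses a common textbook form of dispersion correction, g(t) ≈ g_meas(t) + τ·dg_meas/dt, and fits the single constant τ against a synthetic sampled AIF purely for illustration.

```python
import numpy as np

# Synthetic "true" arterial input function (gamma-variate-like bolus)
t = np.linspace(0, 180, 361)                       # seconds
dt = t[1] - t[0]
aif = (t / 20.0) ** 2 * np.exp(-t / 12.0)

# Simulated image-derived input function: the AIF dispersed by an exponential
# kernel, so the initial peak is attenuated and delayed, as described above.
tau_true = 8.0
kernel = np.exp(-t / tau_true) / tau_true
idif = np.convolve(aif, kernel)[:t.size] * dt

# Dispersion correction of the common form  g(t) ~ g_meas(t) + tau * dg/dt,
# with tau chosen to minimize the misfit to the sampled AIF.
def corrected(curve, tau):
    return curve + tau * np.gradient(curve, t)

taus = np.linspace(0, 20, 201)
errors = [np.sum((corrected(idif, tau) - aif) ** 2) for tau in taus]
tau_fit = taus[int(np.argmin(errors))]

auc_ratio = corrected(idif, tau_fit).sum() / aif.sum()   # dt cancels
print("fitted dispersion constant tau:", round(float(tau_fit), 2), "s")
print("AUC(corrected IDIF) / AUC(AIF):", round(float(auc_ratio), 3))
```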

  10. A general equilibrium model of ecosystem services in a river basin

    Treesearch

    Travis Warziniack

    2014-01-01

    This study builds a general equilibrium model of ecosystem services, with sectors of the economy competing for use of the environment. The model recognizes that production processes in the real world require a combination of natural and human inputs, and understanding the value of these inputs and their competing uses is necessary when considering policies of resource...

  11. Method for ultrafast optical deflection enabling optical recording via serrated or graded light illumination

    DOEpatents

    Heebner, John E [Livermore, CA

    2009-09-08

    In one general embodiment, a method for deflecting an optical signal input into a waveguide is provided. In operation, an optical input signal is propagated through a waveguide. Additionally, an optical control signal is applied to a mask positioned relative to the waveguide such that the application of the optical control signal to the mask is used to influence the optical input signal propagating in the waveguide. Furthermore, the deflected optical input signal output from the waveguide is detected in parallel on an array of detectors. In another general embodiment, a beam deflecting structure is provided for deflecting an optical signal input into a waveguide, the structure comprising at least one wave guiding layer for guiding an optical input signal and at least one masking layer including a pattern configured to influence characteristics of a material of the guiding layer when an optical control signal is passed through the masking layer in a direction of the guiding layer. In another general embodiment, a system is provided including a waveguide, an attenuating mask positioned on the waveguide, and an optical control source positioned to propagate pulsed laser light towards the attenuating mask and the waveguide such that a pattern of the attenuating mask is applied to the waveguide and material properties of at least a portion of the waveguide are influenced.

  12. Chemical sensors are hybrid-input memristors

    NASA Astrophysics Data System (ADS)

    Sysoev, V. I.; Arkhipov, V. E.; Okotrub, A. V.; Pershin, Y. V.

    2018-04-01

    Memristors are two-terminal electronic devices whose resistance depends on the history of input signal (voltage or current). Here we demonstrate that the chemical gas sensors can be considered as memristors with a generalized (hybrid) input, namely, with the input consisting of the voltage, analyte concentrations and applied temperature. The concept of hybrid-input memristors is demonstrated experimentally using a single-walled carbon nanotubes chemical sensor. It is shown that with respect to the hybrid input, the sensor exhibits some features common with memristors such as the hysteretic input-output characteristics. This different perspective on chemical gas sensors may open new possibilities for smart sensor applications.

  13. Self-Tuning of Design Variables for Generalized Predictive Control

    NASA Technical Reports Server (NTRS)

    Lin, Chaung; Juang, Jer-Nan

    2000-01-01

    Three techniques are introduced to determine the order and control weighting for the design of a generalized predictive controller. These techniques are based on the application of fuzzy logic, genetic algorithms, and simulated annealing to conduct an optimal search on specific performance indexes or objective functions. Fuzzy logic is found to be feasible for real-time and on-line implementation due to its smooth and quick convergence. On the other hand, genetic algorithms and simulated annealing are applicable for initial estimation of the model order and control weighting, and final fine-tuning within a small region of the solution space. Several numerical simulations for a multiple-input and multiple-output system are given to illustrate the techniques developed in this paper.

  14. Parametric study of power absorption from electromagnetic waves by small ferrite spheres

    NASA Technical Reports Server (NTRS)

    Englert, Gerald W.

    1989-01-01

    Algebraic expressions in terms of elementary mathematical functions are derived for power absorption and dissipation by eddy currents and magnetic hysteresis in ferrite spheres. Skin depth is determined by using a variable inner radius in descriptive integral equations. Numerical results are presented for sphere diameters less than one wavelength. A generalized power absorption parameter for both eddy currents and hysteresis is expressed in terms of the independent parameters involving wave frequency, sphere radius, resistivity, and complex permeability. In general, the hysteresis phenomenon has a greater sensitivity to these independent parameters than do eddy currents over the ranges of independent parameters studied herein. Working curves are presented for obtaining power losses from input to the independent parameters.

  15. A grid spacing control technique for algebraic grid generation methods

    NASA Technical Reports Server (NTRS)

    Smith, R. E.; Kudlinski, R. A.; Everton, E. L.

    1982-01-01

    A technique which controls the spacing of grid points in algebraically defined coordinate transformations is described. The technique is based on the generation of control functions which map a uniformly distributed computational grid onto parametric variables defining the physical grid. The control functions are smoothed cubic splines. Sets of control points are input for each coordinate direction to outline the control functions. Smoothed cubic spline functions are then generated to approximate the input data. The technique works best in an interactive graphics environment where control inputs and grid displays are nearly instantaneous. The technique is illustrated with the two-boundary grid generation algorithm.
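
    A minimal sketch of the control-function idea, assuming SciPy's smoothing spline as the "smoothed cubic spline" and a handful of invented control points, is shown below: a uniformly spaced computational coordinate is mapped onto a stretched physical parametric variable.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

# Control points: uniform computational coordinate xi in [0, 1] mapped to a
# physical parametric variable s in [0, 1].  Clustering grid points near
# s = 0 (e.g., toward a wall) is expressed by a control function with a
# small initial slope.  The control-point values below are illustrative only.
xi_ctrl = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
s_ctrl = np.array([0.0, 0.05, 0.20, 0.55, 1.0])

# Smoothed cubic spline approximating the control points
control = UnivariateSpline(xi_ctrl, s_ctrl, k=3, s=1e-4)

# Uniformly distributed computational grid -> stretched physical distribution
xi = np.linspace(0.0, 1.0, 21)
s = np.clip(control(xi), 0.0, 1.0)

print("computational grid:", xi[:6].round(3), "...")
print("physical parameter:", s[:6].round(3), "...")
print("first/last spacing ratio:", round(float((s[1] - s[0]) / (s[-1] - s[-2])), 3))
```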

  16. Incorporating spatial context into statistical classification of multidimensional image data

    NASA Technical Reports Server (NTRS)

    Bauer, M. E. (Principal Investigator); Tilton, J. C.; Swain, P. H.

    1981-01-01

    Compound decision theory is employed to develop a general statistical model for classifying image data using spatial context. The classification algorithm developed from this model exploits the tendency of certain ground-cover classes to occur more frequently in some spatial contexts than in others. A key input to this contextual classifier is a quantitative characterization of this tendency: the context function. Several methods for estimating the context function are explored, and two complementary methods are recommended. The contextual classifier is shown to produce substantial improvements in classification accuracy compared to the accuracy produced by a non-contextual uniform-priors maximum likelihood classifier when these methods of estimating the context function are used. An approximate algorithm, which cuts computational requirements by over one-half, is presented. The search for an optimal implementation is furthered by an exploration of the relative merits of using spectral classes or information classes for classification and/or context function estimation.

  17. Efficient Screening of Climate Model Sensitivity to a Large Number of Perturbed Input Parameters [plus supporting information]

    DOE PAGES

    Covey, Curt; Lucas, Donald D.; Tannahill, John; ...

    2013-07-01

    Modern climate models contain numerous input parameters, each with a range of possible values. Since the volume of parameter space increases exponentially with the number of parameters N, it is generally impossible to directly evaluate a model throughout this space even if just 2-3 values are chosen for each parameter. Sensitivity screening algorithms, however, can identify input parameters having relatively little effect on a variety of output fields, either individually or in nonlinear combination. This can aid both model development and the uncertainty quantification (UQ) process. Here we report results from a parameter sensitivity screening algorithm hitherto untested in climate modeling, the Morris one-at-a-time (MOAT) method. This algorithm drastically reduces the computational cost of estimating sensitivities in a high dimensional parameter space because the sample size grows linearly rather than exponentially with N. It nevertheless samples over much of the N-dimensional volume and allows assessment of parameter interactions, unlike traditional elementary one-at-a-time (EOAT) parameter variation. We applied both EOAT and MOAT to the Community Atmosphere Model (CAM), assessing CAM’s behavior as a function of 27 uncertain input parameters related to the boundary layer, clouds, and other subgrid scale processes. For radiation balance at the top of the atmosphere, EOAT and MOAT rank most input parameters similarly, but MOAT identifies a sensitivity that EOAT underplays for two convection parameters that operate nonlinearly in the model. MOAT’s ranking of input parameters is robust to modest algorithmic variations, and it is qualitatively consistent with model development experience. Supporting information is also provided at the end of the full text of the article.
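
    A simplified version of the Morris elementary-effects screening (one +Δ perturbation per parameter per trajectory, rather than the full multi-level grid design) can be sketched in a few lines; the toy model and all settings below are assumptions, not the CAM experiment.

```python
import numpy as np

rng = np.random.default_rng(7)

def model(x):
    """Toy model with parameters of very different importance and one
    nonlinear interaction (a stand-in for an expensive climate-model run)."""
    return 4 * x[0] + 2 * x[1] + 0.1 * x[2] + 5 * x[3] * x[4]

def morris_screening(f, n_params, n_traj=30, delta=0.5):
    """Simplified Morris screening: each trajectory perturbs the parameters
    one at a time by +delta, reusing the previous point, so the cost grows
    linearly with the number of parameters."""
    effects = np.zeros((n_traj, n_params))
    for r in range(n_traj):
        x = rng.uniform(0, 1 - delta, size=n_params)
        y = f(x)
        for i in rng.permutation(n_params):      # one-at-a-time perturbations
            x_new = x.copy()
            x_new[i] += delta
            y_new = f(x_new)
            effects[r, i] = (y_new - y) / delta
            x, y = x_new, y_new
    mu_star = np.abs(effects).mean(axis=0)   # importance ranking
    sigma = effects.std(axis=0)              # flags nonlinearity/interactions
    return mu_star, sigma

mu_star, sigma = morris_screening(model, n_params=5)
for i, (m, s) in enumerate(zip(mu_star, sigma)):
    print(f"param {i}: mu* = {m:5.2f}, sigma = {s:5.2f}")
```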

  18. Climate and the equilibrium state of land surface hydrology parameterizations

    NASA Technical Reports Server (NTRS)

    Entekhabi, Dara; Eagleson, Peter S.

    1991-01-01

    For given climatic rates of precipitation and potential evaporation, the land surface hydrology parameterizations of atmospheric general circulation models will maintain soil-water storage conditions that balance the moisture input and output. The surface relative soil saturation for such climatic conditions serves as a measure of the land surface parameterization state under a given forcing. The equilibrium value of this variable for alternate parameterizations of land surface hydrology are determined as a function of climate and the sensitivity of the surface to shifts and changes in climatic forcing are estimated.

  19. Single-temperature quantum engine without feedback control.

    PubMed

    Yi, Juyeon; Talkner, Peter; Kim, Yong Woon

    2017-08-01

    A cyclically working quantum-mechanical engine that operates at a single temperature is proposed. Its energy input is delivered by a quantum measurement. The functioning of the engine does not require any feedback control. We analyze work, heat, and the efficiency of the engine for the case of a working substance that is governed by the laws of quantum mechanics and that can be adiabatically compressed and expanded. The obtained general expressions are exemplified for a spin in an adiabatically changing magnetic field and a particle moving in a potential with slowly changing shape.

  20. Spectral analysis for nonstationary and nonlinear systems: a discrete-time-model-based approach.

    PubMed

    He, Fei; Billings, Stephen A; Wei, Hua-Liang; Sarrigiannis, Ptolemaios G; Zhao, Yifan

    2013-08-01

    A new frequency-domain analysis framework for nonlinear time-varying systems is introduced based on parametric time-varying nonlinear autoregressive with exogenous input models. It is shown how the time-varying effects can be mapped to the generalized frequency response functions (FRFs) to track nonlinear features in frequency, such as intermodulation and energy transfer effects. A new mapping to the nonlinear output FRF is also introduced. A simulated example and the application to intracranial electroencephalogram data are used to illustrate the theoretical results.

  1. Joint statistics of strongly correlated neurons via dimensionality reduction

    NASA Astrophysics Data System (ADS)

    Deniz, Taşkın; Rotter, Stefan

    2017-06-01

    The relative timing of action potentials in neurons recorded from local cortical networks often shows a non-trivial dependence, which is then quantified by cross-correlation functions. Theoretical models emphasize that such spike train correlations are an inevitable consequence of two neurons being part of the same network and sharing some synaptic input. For non-linear neuron models, however, explicit correlation functions are difficult to compute analytically, and perturbative methods work only for weak shared input. In order to treat strong correlations, we suggest here an alternative non-perturbative method. Specifically, we study the case of two leaky integrate-and-fire neurons with strong shared input. Correlation functions derived from simulated spike trains fit our theoretical predictions very accurately. Using our method, we computed the non-linear correlation transfer as well as correlation functions that are asymmetric due to inhomogeneous intrinsic parameters or unequal input.

  2. The fraction of AGNs in major merger galaxies and its luminosity dependence

    NASA Astrophysics Data System (ADS)

    Weigel, Anna K.; Schawinski, Kevin; Treister, Ezequiel; Trakhtenbrot, Benny; Sanders, David B.

    2018-05-01

    We use a phenomenological model which connects the galaxy and active galactic nucleus (AGN) populations to investigate the process of AGN triggering through major galaxy mergers at z ˜ 0. The model uses stellar mass functions as input and allows the prediction of AGN luminosity functions based on assumed Eddington ratio distribution functions (ERDFs). We show that the number of AGNs hosted by merger galaxies relative to the total number of AGNs increases as a function of AGN luminosity. This is due to more massive galaxies being more likely to undergo a merger and does not require the assumption that mergers lead to higher Eddington ratios than secular processes. Our qualitative analysis also shows that to match the observations, the probability of a merger galaxy hosting an AGN and accreting at a given Eddington value has to be increased by a factor ˜10 relative to the general AGN population. An additional significant increase of the fraction of high Eddington ratio AGNs among merger host galaxies leads to inconsistency with the observed X-ray luminosity function. Physically our results imply that, compared to the general galaxy population, the AGN fraction among merger galaxies is ˜10 times higher. On average, merger triggering does however not lead to significantly higher Eddington ratios.

  3. Functional model of biological neural networks.

    PubMed

    Lo, James Ting-Ho

    2010-12-01

    A functional model of biological neural networks, called temporal hierarchical probabilistic associative memory (THPAM), is proposed in this paper. THPAM comprises functional models of dendritic trees for encoding inputs to neurons, a first type of neuron for generating spike trains, a second type of neuron for generating graded signals to modulate neurons of the first type, supervised and unsupervised Hebbian learning mechanisms for easy learning and retrieval, an arrangement of dendritic trees for maximizing generalization, hardwiring for rotation-translation-scaling invariance, and feedback connections with different delay durations for neurons to make full use of present and past information generated by neurons in the same and higher layers. These functional models and their processing operations have many functions of biological neural networks that have not been achieved by other models in the open literature and provide logically coherent answers to many long-standing neuroscientific questions. However, biological justifications of these functional models and their processing operations are required for THPAM to qualify as a macroscopic model (or low-order approximation) of biological neural networks.

  4. Comparing fixed and variable-width Gaussian networks.

    PubMed

    Kůrková, Věra; Kainen, Paul C

    2014-09-01

    The role of width of Gaussians in two types of computational models is investigated: Gaussian radial-basis-functions (RBFs) where both widths and centers vary and Gaussian kernel networks which have fixed widths but varying centers. The effect of width on functional equivalence, universal approximation property, and form of norms in reproducing kernel Hilbert spaces (RKHS) is explored. It is proven that if two Gaussian RBF networks have the same input-output functions, then they must have the same numbers of units with the same centers and widths. Further, it is shown that while sets of input-output functions of Gaussian kernel networks with two different widths are disjoint, each such set is large enough to be a universal approximator. Embedding of RKHSs induced by "flatter" Gaussians into RKHSs induced by "sharper" Gaussians is described and growth of the ratios of norms on these spaces with increasing input dimension is estimated. Finally, large sets of argminima of error functionals in sets of input-output functions of Gaussian RBFs are described. Copyright © 2014 Elsevier Ltd. All rights reserved.

  5. Dynamics of networks of excitatory and inhibitory neurons in response to time-dependent inputs.

    PubMed

    Ledoux, Erwan; Brunel, Nicolas

    2011-01-01

    We investigate the dynamics of recurrent networks of excitatory (E) and inhibitory (I) neurons in the presence of time-dependent inputs. The dynamics is characterized by the network dynamical transfer function, i.e., how the population firing rate is modulated by sinusoidal inputs at arbitrary frequencies. Two types of networks are studied and compared: (i) a Wilson-Cowan type firing rate model; and (ii) a fully connected network of leaky integrate-and-fire (LIF) neurons, in a strong noise regime. We first characterize the region of stability of the "asynchronous state" (a state in which population activity is constant in time when external inputs are constant) in the space of parameters characterizing the connectivity of the network. We then systematically characterize the qualitative behaviors of the dynamical transfer function, as a function of the connectivity. We find that the transfer function can be either low-pass, or with a single or double resonance, depending on the connection strengths and synaptic time constants. Resonances appear when the system is close to Hopf bifurcations, that can be induced by two separate mechanisms: the I-I connectivity and the E-I connectivity. Double resonances can appear when excitatory delays are larger than inhibitory delays, due to the fact that two distinct instabilities exist with a finite gap between the corresponding frequencies. In networks of LIF neurons, changes in external inputs and external noise are shown to be able to change qualitatively the network transfer function. Firing rate models are shown to exhibit the same diversity of transfer functions as the LIF network, provided delays are present. They can also exhibit input-dependent changes of the transfer function, provided a suitable static non-linearity is incorporated.

  6. Root controls on soil microbial community structure in forest soils.

    PubMed

    Brant, Justin B; Myrold, David D; Sulzman, Elizabeth W

    2006-07-01

    We assessed microbial community composition as a function of altered above- and belowground inputs to soil in forest ecosystems of Oregon, Pennsylvania, and Hungary as part of a larger Detritus Input and Removal Treatment (DIRT) experiment. DIRT plots, which include root trenching, aboveground litter exclusion, and doubling of litter inputs, have been established in forested ecosystems in the US and Europe that vary with respect to dominant tree species, soil C content, N deposition rate, and soil type. This study used phospholipid fatty-acid (PLFA) analysis to examine changes in the soil microbial community size and composition in the mineral soil (0-10 cm) as a result of the DIRT treatments. At all sites, the PLFA profiles from the plots without roots were significantly different from all other treatments. PLFA analysis showed that the rootless plots generally contained larger quantities of actinomycete biomarkers and lower amounts of fungal biomarkers. At one of the sites in an old-growth coniferous forest, seasonal changes in PLFA profiles were also examined. Seasonal differences in soil microbial community composition were greater than treatment differences. Throughout the year, treatments without roots continued to have a different microbial community composition than the treatments with roots, although the specific PLFA biomarkers responsible for these differences varied by season. These data provide direct evidence that root C inputs exert a large control on microbial community composition in the three forested ecosystems studied.

  7. Reveal, A General Reverse Engineering Algorithm for Inference of Genetic Network Architectures

    NASA Technical Reports Server (NTRS)

    Liang, Shoudan; Fuhrman, Stefanie; Somogyi, Roland

    1998-01-01

    Given the immanent gene expression mapping covering whole genomes during development, health and disease, we seek computational methods to maximize functional inference from such large data sets. Is it possible, in principle, to completely infer a complex regulatory network architecture from input/output patterns of its variables? We investigated this possibility using binary models of genetic networks. Trajectories, or state transition tables of Boolean nets, resemble time series of gene expression. By systematically analyzing the mutual information between input states and output states, one is able to infer the sets of input elements controlling each element or gene in the network. This process is unequivocal and exact for complete state transition tables. We implemented this REVerse Engineering ALgorithm (REVEAL) in a C program, and found the problem to be tractable within the conditions tested so far. For n = 50 (elements) and k = 3 (inputs per element), the analysis of incomplete state transition tables (100 state transition pairs out of a possible 10(exp 15)) reliably produced the original rule and wiring sets. While this study is limited to synchronous Boolean networks, the algorithm is generalizable to include multi-state models, essentially allowing direct application to realistic biological data sets. The ability to adequately solve the inverse problem may enable in-depth analysis of complex dynamic systems in biology and other fields.
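
    The mutual-information test at the heart of REVEAL can be sketched directly: for a toy synchronous Boolean network, the smallest input set whose mutual information with an element's next state equals that state's entropy identifies the element's wiring. The five-node network below is invented for illustration.

```python
import numpy as np
from itertools import combinations, product

n = 5  # network elements

# A toy synchronous Boolean network: element 0 <- XOR of elements 1 and 3;
# the remaining elements get arbitrary Boolean functions of two inputs each.
def step(state):
    nxt = np.empty(n, dtype=int)
    nxt[0] = state[1] ^ state[3]
    nxt[1] = state[0] & state[2]
    nxt[2] = state[3] | state[4]
    nxt[3] = 1 - state[2]
    nxt[4] = state[0] ^ state[2]
    return nxt

# Complete state-transition table (inputs at t, outputs at t+1)
states = np.array(list(product([0, 1], repeat=n)))
nexts = np.array([step(s) for s in states])

def entropy(columns):
    """Shannon entropy (bits) of the joint distribution of the given columns."""
    _, counts = np.unique(columns, axis=0, return_counts=True)
    p = counts / counts.sum()
    return -(p * np.log2(p)).sum()

# REVEAL-style inference: the smallest input set whose mutual information
# with the output equals the output entropy determines the element's wiring.
target = 0
H_out = entropy(nexts[:, [target]])
found = None
for k in (1, 2, 3):
    for inputs in combinations(range(n), k):
        cols = list(inputs)
        joint = np.column_stack([states[:, cols], nexts[:, [target]]])
        mi = entropy(states[:, cols]) + H_out - entropy(joint)
        if np.isclose(mi, H_out):
            found = inputs
            break
    if found is not None:
        break

print(f"element {target} is fully determined by inputs {found}")
```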

  8. Functional recovery of odor representations in regenerated sensory inputs to the olfactory bulb

    PubMed Central

    Cheung, Man C.; Jang, Woochan; Schwob, James E.; Wachowiak, Matt

    2014-01-01

    The olfactory system has a unique capacity for recovery from peripheral damage. After injury to the olfactory epithelium (OE), olfactory sensory neurons (OSNs) regenerate and re-converge on target glomeruli of the olfactory bulb (OB). Thus far, this process has been described anatomically for only a few defined populations of OSNs. Here we characterize this regeneration at a functional level by assessing how odor representations carried by OSN inputs to the OB recover after massive loss and regeneration of the sensory neuron population. We used chronic imaging of mice expressing synaptopHluorin in OSNs to monitor odor representations in the dorsal OB before lesion by the olfactotoxin methyl bromide and after a 12 week recovery period. Methyl bromide eliminated functional inputs to the OB, and these inputs recovered to near-normal levels of response magnitude within 12 weeks. We also found that the functional topography of odor representations recovered after lesion, with odorants evoking OSN input to glomerular foci within the same functional domains as before lesion. At a finer spatial scale, however, we found evidence for mistargeting of regenerated OSN axons onto OB targets, with odorants evoking synaptopHluorin signals in small foci that did not conform to a typical glomerular structure but whose distribution was nonetheless odorant-specific. These results indicate that OSNs have a robust ability to reestablish functional inputs to the OB and that the mechanisms underlying the topography of bulbar reinnervation during development persist in the adult and allow primary sensory representations to be largely restored after massive sensory neuron loss. PMID:24431990

  9. Application of SIGGS to Project PRIME: A General Systems Approach to Evaluation of Mainstreaming.

    ERIC Educational Resources Information Center

    Frick, Ted

    The use of the systems approach in educational inquiry is not new, and the models of input/output, input/process/product, and cybernetic systems have been widely used. The general systems model is an extension of all these, adding the dimension of environmental influence on the system as well as system influence on the environment. However, if the…

  10. Generalized Flip-Flop Input Equations Based on a Four-Valued Boolean Algebra

    NASA Technical Reports Server (NTRS)

    Tucker, Jerry H.; Tapia, Moiez A.

    1996-01-01

    A procedure is developed for obtaining generalized flip-flop input equations, and a concise method is presented for representing these equations. The procedure is based on solving a four-valued characteristic equation of the flip-flop, and can encompass flip-flops that are too complex to approach intuitively. The technique is presented using Karnaugh maps, but could easily be implemented in software.
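
    The four-valued procedure itself is not reproduced here; for orientation, the sketch below shows the conventional two-valued excitation-table derivation of J-K input equations that such procedures generalize, applied to an invented toggle-type next-state function.

      from itertools import product

      # Conventional J-K excitation table: (Q, Q+) -> (J, K), 'X' = don't care.
      EXCITATION = {(0, 0): ("0", "X"), (0, 1): ("1", "X"),
                    (1, 0): ("X", "1"), (1, 1): ("X", "0")}

      def next_state(q, t):            # example behaviour: Q+ = Q xor T
          return q ^ t

      print(" Q  T | Q+ |  J  K")
      for q, t in product((0, 1), repeat=2):
          qp = next_state(q, t)
          j, k = EXCITATION[(q, qp)]
          print(f" {q}  {t} |  {qp} |  {j}  {k}")
      # Reading the J and K columns as Karnaugh maps over (Q, T) gives the
      # familiar input equations J = T and K = T for a T-type flip-flop.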

  11. Imaginary-frequency polarizability and van der Waals force constants of two-electron atoms, with rigorous bounds

    NASA Technical Reports Server (NTRS)

    Glover, R. M.; Weinhold, F.

    1977-01-01

    Variational functionals of Braunn and Rebane (1972) for the imaginary-frequency polarizability (IFP) have been generalized by the method of Gramian inequalities to give rigorous upper and lower bounds, valid even when the true (but unknown) unperturbed wavefunction must be represented by a variational approximation. Using these formulas in conjunction with flexible variational trial functions, tight error bounds are computed for the IFP and the associated two- and three-body van der Waals interaction constants of the ground 1(1S) and metastable 2(1,3S) states of He and Li(+). These bounds generally establish the ground-state properties to within a fraction of a per cent and metastable properties to within a few per cent, permitting a comparative assessment of competing theoretical methods at this level of accuracy. Unlike previous 'error bounds' for these properties, the present results have a completely a priori theoretical character, with no empirical input data.

  12. On compensatory strategies and computational models: the case of pure alexia.

    PubMed

    Shallice, Tim

    2014-01-01

    The article is concerned with inferences from the behaviour of neurological patients to models of normal function. It takes the letter-by-letter reading strategy common in pure alexic patients as an example of the methodological problems involved in making such inferences that compensatory strategies produce. The evidence is discussed on the possible use of three ways the letter-by-letter reading process might operate: "reversed spelling"; the use of the phonological input buffer as a temporary holding store during word building; and the use of serial input to the visual word-form system entirely within the visual-orthographic domain such as in the model of Plaut [1999. A connectionist approach to word reading and acquired dyslexia: Extension to sequential processing. Cognitive Science, 23, 543-568]. The compensatory strategy used by, at least, one pure alexic patient does not fit with the third of these possibilities. On the more general question, it is argued that even if compensatory strategies are being used, the behaviour of neurological patients can be useful for the development and assessment of first-generation information-processing models of normal function, but they are not likely to be useful for the development and assessment of second-generation computational models.

  13. Transient times in linear metabolic pathways under constant affinity constraints.

    PubMed

    Lloréns, M; Nuño, J C; Montero, F

    1997-10-15

    In the early seventies, Easterby began the analytical study of transition times for linear reaction schemes [Easterby (1973) Biochim. Biophys. Acta 293, 552-558]. In this pioneering work and in subsequent papers, a state function (the transient time) was used to measure the period before the stationary state is reached, for systems constrained to work under either constant or variable input flux. Despite the undoubted usefulness of this quantity for describing the time-dependent features of these kinds of systems, its application to the study of chemical reactions under other constraints is questionable. In the present work, a generalization of these magnitudes to linear metabolic pathways functioning under a constant-affinity constraint is carried out. It is proved that classical definitions of transient times do not reflect the actual properties of the transition to the steady state in systems evolving under this restriction. Alternatively, a more adequate framework for interpretation of the transient times for systems with both constant and variable input flux is suggested. Within this context, new definitions that reflect more accurately the transient characteristics of constant-affinity systems are stated. Finally, the meaning of these transient times is discussed.
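
    For orientation, Easterby's classical transient time for a pathway operating under a constant input flux is commonly stated as follows (this is the quantity whose restriction to such constraints motivates the generalization above):

      \[
        P(t) \;\xrightarrow[t \to \infty]{}\; J_{ss}\,(t - \tau),
        \qquad
        \tau \;=\; \frac{\sum_i C_i^{ss}}{J_{ss}},
      \]

    where P(t) is the accumulated product, J_ss the steady-state flux, and C_i^ss the steady-state concentrations of the pathway intermediates.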

  14. Gain Modulation in the Central Nervous System: Where Behavior, Neurophysiology, and Computation Meet

    PubMed Central

    SALINAS, EMILIO; SEJNOWSKI, TERRENCE J.

    2010-01-01

    Gain modulation is a nonlinear way in which neurons combine information from two (or more) sources, which may be of sensory, motor, or cognitive origin. Gain modulation is revealed when one input, the modulatory one, affects the gain or the sensitivity of the neuron to the other input, without modifying its selectivity or receptive field properties. This type of modulatory interaction is important for two reasons. First, it is an extremely widespread integration mechanism; it is found in a plethora of cortical areas and in some subcortical structures as well, and as a consequence it seems to play an important role in a striking variety of functions, including eye and limb movements, navigation, spatial perception, attentional processing, and object recognition. Second, there is a theoretical foundation indicating that gain-modulated neurons may serve as a basis for a general class of computations, namely, coordinate transformations and the generation of invariant responses, which indeed may underlie all the brain functions just mentioned. This article describes the relationships between computational models, the physiological properties of a variety of gain-modulated neurons, and some of the behavioral consequences of damage to gain-modulated neural representations. PMID:11597102

  15. 7 CFR 3430.607 - Stakeholder input.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... ASSISTANCE PROGRAMS-GENERAL AWARD ADMINISTRATIVE PROVISIONS Beginning Farmer and Rancher Development Program... (e.g., public meetings, request for input and/or via Web site), as well as through a notice in the...

  16. A connectionist model of category learning by individuals with high-functioning autism spectrum disorder.

    PubMed

    Dovgopoly, Alexander; Mercado, Eduardo

    2013-06-01

    Individuals with autism spectrum disorder (ASD) show atypical patterns of learning and generalization. We explored the possible impacts of autism-related neural abnormalities on perceptual category learning using a neural network model of visual cortical processing. When applied to experiments in which children or adults were trained to classify complex two-dimensional images, the model can account for atypical patterns of perceptual generalization. This is only possible, however, when individual differences in learning are taken into account. In particular, analyses performed with a self-organizing map suggested that individuals with high-functioning ASD show two distinct generalization patterns: one that is comparable to typical patterns, and a second in which there is almost no generalization. The model leads to novel predictions about how individuals will generalize when trained with simplified input sets and can explain why some researchers have failed to detect learning or generalization deficits in prior studies of category learning by individuals with autism. On the basis of these simulations, we propose that deficits in basic neural plasticity mechanisms may be sufficient to account for the atypical patterns of perceptual category learning and generalization associated with autism, but they do not account for why only a subset of individuals with autism would show such deficits. If variations in performance across subgroups reflect heterogeneous neural abnormalities, then future behavioral and neuroimaging studies of individuals with ASD will need to account for such disparities.

  17. Analysis of fMRI data using noise-diffusion network models: a new covariance-coding perspective.

    PubMed

    Gilson, Matthieu

    2018-04-01

    Since the middle of the 1990s, studies of resting-state fMRI/BOLD data have explored the correlation patterns of activity across the whole brain, which is referred to as functional connectivity (FC). Among the many methods that have been developed to interpret FC, a recently proposed model-based approach describes the propagation of fluctuating BOLD activity within the recurrently connected brain network by inferring the effective connectivity (EC). In this model, EC quantifies the strengths of directional interactions between brain regions, viewed from the proxy of BOLD activity. In addition, the tuning procedure for the model provides estimates for the local variability (input variances) to explain how the observed FC is generated. Generalizing, the network dynamics can be studied in the context of an input-output mapping-determined by EC-for the second-order statistics of fluctuating nodal activities. The present paper focuses on the following detection paradigm: observing output covariances, how discriminative is the (estimated) network model with respect to various input covariance patterns? An application with the model fitted to experimental fMRI data-movie viewing versus resting state-illustrates that changes in local variability and changes in brain coordination go hand in hand.
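
    A minimal sketch of the underlying noise-diffusion picture: for a multivariate Ornstein-Uhlenbeck model, the covariance of nodal activity (the model counterpart of FC) follows from the effective connectivity and the local input variances via a Lyapunov equation. The connectivity and variances below are random placeholders, not values estimated from fMRI data.

      import numpy as np
      from scipy.linalg import solve_continuous_lyapunov

      rng = np.random.default_rng(0)
      n, tau = 5, 1.0                                  # nodes, leakage time constant

      C = 0.1 * rng.random((n, n)) * (1 - np.eye(n))   # directed EC, no self-loops
      Sigma = np.diag(rng.uniform(0.5, 1.5, n))        # local input variances

      # Noise-diffusion dynamics dx = J x dt + noise, with J = -I/tau + C.
      J = -np.eye(n) / tau + C

      # Zero-lag covariance Q0 solves the Lyapunov equation J Q0 + Q0 J^T + Sigma = 0.
      Q0 = solve_continuous_lyapunov(J, -Sigma)

      FC = Q0 / np.sqrt(np.outer(np.diag(Q0), np.diag(Q0)))   # model correlation matrix
      print(np.round(FC, 2))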

  18. Mathematical prediction of core body temperature from environment, activity, and clothing: The heat strain decision aid (HSDA).

    PubMed

    Potter, Adam W; Blanchard, Laurie A; Friedl, Karl E; Cadarette, Bruce S; Hoyt, Reed W

    2017-02-01

    Physiological models provide useful summaries of complex interrelated regulatory functions. These can often be reduced to simple input requirements and simple predictions for pragmatic applications. This paper demonstrates this modeling efficiency by tracing the development of one such simple model, the Heat Strain Decision Aid (HSDA), originally developed to address Army needs. The HSDA, which derives from the Givoni-Goldman equilibrium body core temperature prediction model, uses 16 inputs from four elements: individual characteristics, physical activity, clothing biophysics, and environmental conditions. These inputs are used to mathematically predict core temperature (T c ) rise over time and can estimate water turnover from sweat loss. Based on a history of military applications such as derivation of training and mission planning tools, we conclude that the HSDA model is a robust integration of physiological rules that can guide a variety of useful predictions. The HSDA model is limited to generalized predictions of thermal strain and does not provide individualized predictions that could be obtained from physiological sensor data-driven predictive models. This fully transparent physiological model should be improved and extended with new findings and new challenging scenarios. Published by Elsevier Ltd.

  19. Training a molecular automaton to play a game

    NASA Astrophysics Data System (ADS)

    Pei, Renjun; Matamoros, Elizabeth; Liu, Manhong; Stefanovic, Darko; Stojanovic, Milan N.

    2010-11-01

    Research at the interface between chemistry and cybernetics has led to reports of `programmable molecules', but what does it mean to say `we programmed a set of solution-phase molecules to do X'? A survey of recently implemented solution-phase circuitry indicates that this statement could be replaced with `we pre-mixed a set of molecules to do X and functional subsets of X'. These hard-wired mixtures are then exposed to a set of molecular inputs, which can be interpreted as being keyed to human moves in a game, or as assertions of logical propositions. In nucleic acids-based systems, stemming from DNA computation, these inputs can be seen as generic oligonucleotides. Here, we report using reconfigurable nucleic acid catalyst-based units to build a multipurpose reprogrammable molecular automaton that goes beyond single-purpose `hard-wired' molecular automata. The automaton covers all possible responses to two consecutive sets of four inputs (such as four first and four second moves for a generic set of trivial two-player two-move games). This is a model system for more general molecular field programmable gate array (FPGA)-like devices that can be programmed by example, which means that the operator need not have any knowledge of molecular computing methods.

  20. Dynamics of the mental health workforce: investigating the composition of physicians and other health providers.

    PubMed

    Stefos, Theodore; Burgess, James F; Cohen, Jeffrey P; Lehner, Laura; Moran, Eileen

    2012-12-01

    We evaluate how changes to mental health workforce levels, composition, and degree of labor substitution, may impact typical practice output. Using a generalized Leontief production function and data from 134 U.S. Department of Veterans Affairs (VA) mental health practices, we estimate the q-complementarity/q-substitutability of mental health workers. We look at the entire spectrum of mental health services rather than just outpatient or physician office services. We also examine more labor types, including residents, than previous studies. The marginal patient care output contribution is estimated for each labor type as well as the degree to which physicians and other mental health workers may be substitutes or complements. Results indicate that numerous channels exist through which input substitution can improve productivity. Seven of eight labor and capital inputs have positive estimated marginal products. Most factor inputs exhibit diminishing marginal productivity. Of 28 unique labor-capital pairs, 17 are q-complements and 11 are q-substitutes. Complementarity among several labor types provides evidence of a team approach to mental health service provision. Our approach may serve to better inform healthcare providers regarding more productive mental health workforce composition both in and outside of VA.
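
    To make the production-function machinery concrete, the sketch below evaluates a generalized Leontief (Diewert) form with invented coefficients and input levels, computes marginal products, and classifies input pairs as q-complements or q-substitutes from the sign of the cross-partials; none of the numbers are estimates from the VA data.

      import numpy as np

      # Generalized Leontief production function: Y(x) = sum_ij b_ij sqrt(x_i x_j),
      # with a symmetric coefficient matrix b (illustrative values only).
      labels = ["psychiatrist", "psychologist", "nurse", "capital"]
      b = np.array([[0.9,  0.3, 0.2, -0.1],
                    [0.3,  0.7, 0.1,  0.2],
                    [0.2,  0.1, 0.8,  0.1],
                    [-0.1, 0.2, 0.1,  0.5]])
      x = np.array([10.0, 15.0, 25.0, 40.0])           # input quantities

      s = np.sqrt(x)
      Y = s @ b @ s                                    # practice output
      marginal = b @ s / s                             # dY/dx_i = sum_j b_ij sqrt(x_j/x_i)

      print("output:", round(float(Y), 2))
      for name, mp in zip(labels, marginal):
          print(f"marginal product of {name}: {mp:.3f}")

      # Cross-partials d2Y/(dx_i dx_j) = b_ij / (2 sqrt(x_i x_j)) for i != j, so the
      # sign of b_ij decides q-complementarity (+) versus q-substitutability (-).
      for i in range(len(x)):
          for j in range(i + 1, len(x)):
              kind = "q-complements" if b[i, j] > 0 else "q-substitutes"
              print(labels[i], "&", labels[j], "->", kind)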

  1. Training a molecular automaton to play a game.

    PubMed

    Pei, Renjun; Matamoros, Elizabeth; Liu, Manhong; Stefanovic, Darko; Stojanovic, Milan N

    2010-11-01

    Research at the interface between chemistry and cybernetics has led to reports of 'programmable molecules', but what does it mean to say 'we programmed a set of solution-phase molecules to do X'? A survey of recently implemented solution-phase circuitry indicates that this statement could be replaced with 'we pre-mixed a set of molecules to do X and functional subsets of X'. These hard-wired mixtures are then exposed to a set of molecular inputs, which can be interpreted as being keyed to human moves in a game, or as assertions of logical propositions. In nucleic acids-based systems, stemming from DNA computation, these inputs can be seen as generic oligonucleotides. Here, we report using reconfigurable nucleic acid catalyst-based units to build a multipurpose reprogrammable molecular automaton that goes beyond single-purpose 'hard-wired' molecular automata. The automaton covers all possible responses to two consecutive sets of four inputs (such as four first and four second moves for a generic set of trivial two-player two-move games). This is a model system for more general molecular field programmable gate array (FPGA)-like devices that can be programmed by example, which means that the operator need not have any knowledge of molecular computing methods.

  2. Detailed requirements document for the Interactive Financial Management System (IFMS), volume 1

    NASA Technical Reports Server (NTRS)

    Dodson, D. B.

    1975-01-01

    The detailed requirements for phase 1 (online fund control, subauthorization accounting, and accounts receivable functional capabilities) of the Interactive Financial Management System (IFMS) are described. This includes information on the following: systems requirements, performance requirements, test requirements, and production implementation. Most of the work is centered on systems requirements, and includes discussions on the following processes: resources authority, allotment, primary work authorization, reimbursable order acceptance, purchase request, obligation, cost accrual, cost distribution, disbursement, subauthorization performance, travel, accounts receivable, payroll, property, edit table maintenance, end-of-year, backup input. Other subjects covered include: external systems interfaces, general inquiries, general report requirements, communication requirements, and miscellaneous. Subjects covered under performance requirements include: response time, processing volumes, system reliability, and accuracy. Under test requirements come test data sources, general test approach, and acceptance criteria. Under production implementation come data base establishment, operational stages, and operational requirements.

  3. On the asymptotic evolution of finite energy Airy wave functions.

    PubMed

    Chamorro-Posada, P; Sánchez-Curto, J; Aceves, A B; McDonald, G S

    2015-06-15

    In general, there is an inverse relation between the degree of localization of a wave function of a certain class and its transform representation dictated by the scaling property of the Fourier transform. We report that in the case of finite energy Airy wave packets a simultaneous increase in their localization in the direct and transform domains can be obtained as the apodization parameter is varied. One consequence of this is that the far-field diffraction rate of a finite energy Airy beam decreases as the beam localization at the launch plane increases. We analyze the asymptotic properties of finite energy Airy wave functions using the stationary phase method. We obtain one dominant contribution to the long-term evolution that admits a Gaussian-like approximation, which displays the expected reduction of its broadening rate as the input localization is increased.

  4. Influence of ventral tegmental area input on cortico-subcortical networks underlying action control and decision making.

    PubMed

    Richter, Anja; Gruber, Oliver

    2018-02-01

    It is argued that the mesolimbic system has a more general function in processing all salient events, including and extending beyond rewards. Saliency was defined as an event that is unexpected due to its frequency of occurrence and elicits an attentional-behavioral switch. Using functional magnetic resonance imaging (fMRI), signals were measured in response to the modulation of salience of rewarding and nonrewarding events during a reward-based decision making task, the so called desire-reason dilemma paradigm (DRD). Replicating previous findings, both frequent and infrequent, and therefore salient, reward stimuli elicited reliable activation of the ventral tegmental area (VTA) and ventral striatum (vStr). When immediate reward desiring contradicted the superordinate task-goal, we found an increased activation of the VTA and vStr when the salient reward stimuli were presented compared to the nonsalient reward stimuli, indicating a boosting of activation in these brain regions. Furthermore, we found a significantly increased functional connectivity between the VTA and vStr, confirming the boosting of vStr activation via VTA input. Moreover, saliency per se without a reward association led to an increased activation of brain regions in the mesolimbic reward system as well as the orbitofrontal cortex (OFC), inferior frontal gyrus (IFG), and anterior cingulate cortex (ACC). Finally, findings uncovered multiple increased functional interactions between cortical saliency-processing brain areas and the VTA and vStr underlying detection and processing of salient events and adaptive decision making. © 2017 Wiley Periodicals, Inc.

  5. QVAST: a new Quantum GIS plugin for estimating volcanic susceptibility

    NASA Astrophysics Data System (ADS)

    Bartolini, S.; Cappello, A.; Martí, J.; Del Negro, C.

    2013-08-01

    One of the most important tasks of modern volcanology is the construction of hazard maps simulating different eruptive scenarios that can be used in risk-based decision-making in land-use planning and emergency management. The first step in the quantitative assessment of volcanic hazards is the development of susceptibility maps, i.e. the spatial probability of a future vent opening given the past eruptive activity of a volcano. This challenging issue is generally tackled using probabilistic methods that calculate a kernel function at each data location to estimate probability density functions (PDFs). The smoothness and the modeling ability of the kernel function are controlled by the smoothing parameter, also known as the bandwidth. Here we present a new tool, QVAST, part of the open-source Geographic Information System Quantum GIS, that is designed to create user-friendly quantitative assessments of volcanic susceptibility. QVAST allows the user to select an appropriate method for evaluating the bandwidth of the kernel function on the basis of the input parameters and the shapefile geometry, and can also evaluate the PDF with the Gaussian kernel. When different input datasets are available for the area, the total susceptibility map is obtained by assigning different weights to each of the PDFs, which are then combined via a weighted summation and modeled in a non-homogeneous Poisson process. The potential of QVAST, developed in a free and user-friendly environment, is shown here through its application in the volcanic fields of Lanzarote (Canary Islands) and La Garrotxa (NE Spain).
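
    The kernel-density step can be sketched in a few lines: each dataset of past vents yields a Gaussian-kernel PDF on a grid, and the PDFs are combined by a weighted summation into a susceptibility map. The data, bandwidths, and weights below are invented; in QVAST the bandwidth would come from the selector chosen by the user.

      import numpy as np

      def gaussian_kde_2d(points, grid_xy, bandwidth):
          """Probability density of future vent opening on a grid, from past
          vent locations, using an isotropic Gaussian kernel."""
          d2 = ((grid_xy[:, None, :] - points[None, :, :]) ** 2).sum(-1)
          k = np.exp(-0.5 * d2 / bandwidth**2) / (2 * np.pi * bandwidth**2)
          return k.mean(axis=1)

      rng = np.random.default_rng(1)
      vents = rng.normal([0, 0], 2.0, size=(30, 2))       # hypothetical datasets,
      fissures = rng.normal([3, 1], 1.0, size=(12, 2))    # in map units (e.g. km)

      xs, ys = np.meshgrid(np.linspace(-6, 8, 80), np.linspace(-6, 8, 80))
      grid = np.column_stack([xs.ravel(), ys.ravel()])

      pdfs = [gaussian_kde_2d(vents, grid, 1.5), gaussian_kde_2d(fissures, grid, 0.8)]
      weights = [0.7, 0.3]                                # relevance of each dataset
      susceptibility = sum(w * p for w, p in zip(weights, pdfs))
      susceptibility /= susceptibility.sum()              # relative probability per cell
      print("highest cell probability:", float(susceptibility.max()))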

  6. PAREMD: A parallel program for the evaluation of momentum space properties of atoms and molecules

    NASA Astrophysics Data System (ADS)

    Meena, Deep Raj; Gadre, Shridhar R.; Balanarayan, P.

    2018-03-01

    The present work describes a code for evaluating the electron momentum density (EMD), its moments and the associated Shannon information entropy for a multi-electron molecular system. The code works specifically for electronic wave functions obtained from traditional electronic structure packages such as GAMESS and GAUSSIAN. For the momentum space orbitals, the general expression for Gaussian basis sets in position space is analytically Fourier transformed to momentum space Gaussian basis functions. The molecular orbital coefficients of the wave function are taken as an input from the output file of the electronic structure calculation. The analytic expressions of EMD are evaluated over a fine grid and the accuracy of the code is verified by a normalization check and a numerical kinetic energy evaluation which is compared with the analytic kinetic energy given by the electronic structure package. Apart from electron momentum density, electron density in position space has also been integrated into this package. The program is written in C++ and is executed through a Shell script. It is also tuned for multicore machines with shared memory through OpenMP. The program has been tested for a variety of molecules and correlated methods such as CISD, Møller-Plesset second order (MP2) theory and density functional methods. For correlated methods, the PAREMD program uses natural spin orbitals as an input. The program has been benchmarked for a variety of Gaussian basis sets for different molecules showing a linear speedup on a parallel architecture.

  7. High dimensional model representation method for fuzzy structural dynamics

    NASA Astrophysics Data System (ADS)

    Adhikari, S.; Chowdhury, R.; Friswell, M. I.

    2011-03-01

    Uncertainty propagation in multi-parameter complex structures poses significant computational challenges. This paper investigates the possibility of using the High Dimensional Model Representation (HDMR) approach when uncertain system parameters are modeled using fuzzy variables. In particular, the application of HDMR is proposed for fuzzy finite element analysis of linear dynamical systems. The HDMR expansion is an efficient formulation for high-dimensional mapping in complex systems if the higher order variable correlations are weak, thereby permitting the input-output relationship behavior to be captured by low-order terms. The computational effort to determine the expansion functions using the α-cut method scales polynomially with the number of variables rather than exponentially. This logic is based on the fundamental assumption underlying the HDMR representation that only low-order correlations among the input variables are likely to have significant impacts upon the outputs for most high-dimensional complex systems. The proposed method is first illustrated for multi-parameter nonlinear mathematical test functions with fuzzy variables. The method is then integrated with a commercial finite element software (ADINA). Modal analysis of a simplified aircraft wing with fuzzy parameters has been used to illustrate the generality of the proposed approach. In the numerical examples, triangular membership functions have been used and the results have been validated against direct Monte Carlo simulations. It is shown that using the proposed HDMR approach, the number of finite element function calls can be reduced without significantly compromising the accuracy.
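
    The first-order (cut-)HDMR surrogate that makes this scaling possible is easy to sketch: the response is approximated by the value at a reference (cut) point plus one-dimensional corrections along each input, so the number of function evaluations grows linearly with the number of variables. The test function and cut point below are invented; in the fuzzy setting the surrogate would then be evaluated at the α-cut bounds of each variable.

      import numpy as np

      def f(x):
          """Multi-parameter nonlinear test function (illustrative only)."""
          return np.sin(x[0]) + 0.5 * x[1] ** 2 + 0.2 * x[0] * x[2] + x[3]

      def cut_hdmr_first_order(f, c):
          """First-order cut-HDMR surrogate anchored at the cut point c:
          f(x) ~ f(c) + sum_i [ f(c with component i set to x_i) - f(c) ]."""
          f0 = f(c)
          def surrogate(x):
              total = f0
              for i, xi in enumerate(x):
                  y = c.copy()
                  y[i] = xi
                  total += f(y) - f0
              return total
          return surrogate

      c = np.array([0.5, 1.0, 0.8, 0.3])        # cut point (e.g. fuzzy modal values)
      approx = cut_hdmr_first_order(f, c)
      x = np.array([0.7, 1.3, 0.5, 0.1])        # e.g. a corner of an alpha-cut box
      print("true value      :", float(f(x)))
      print("first-order HDMR:", float(approx(x)))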

  8. Leveraging Environmental Correlations: The Thermodynamics of Requisite Variety

    NASA Astrophysics Data System (ADS)

    Boyd, Alexander B.; Mandal, Dibyendu; Crutchfield, James P.

    2017-06-01

    Key to biological success, the requisite variety that confronts an adaptive organism is the set of detectable, accessible, and controllable states in its environment. We analyze its role in the thermodynamic functioning of information ratchets—a form of autonomous Maxwellian Demon capable of exploiting fluctuations in an external information reservoir to harvest useful work from a thermal bath. This establishes a quantitative paradigm for understanding how adaptive agents leverage structured thermal environments for their own thermodynamic benefit. General ratchets behave as memoryful communication channels, interacting with their environment sequentially and storing results to an output. The bulk of thermal ratchets analyzed to date, however, assume memoryless environments that generate input signals without temporal correlations. Employing computational mechanics and a new information-processing Second Law of Thermodynamics (IPSL) we remove these restrictions, analyzing general finite-state ratchets interacting with structured environments that generate correlated input signals. On the one hand, we demonstrate that a ratchet need not have memory to exploit an uncorrelated environment. On the other, and more appropriate to biological adaptation, we show that a ratchet must have memory to most effectively leverage structure and correlation in its environment. The lesson is that to optimally harvest work a ratchet's memory must reflect the input generator's memory. Finally, we investigate achieving the IPSL bounds on the amount of work a ratchet can extract from its environment, discovering that finite-state, optimal ratchets are unable to reach these bounds. In contrast, we show that infinite-state ratchets can go well beyond these bounds by utilizing their own infinite "negentropy". We conclude with an outline of the collective thermodynamics of information-ratchet swarms.

  9. Influence of Visual Prism Adaptation on Auditory Space Representation.

    PubMed

    Pochopien, Klaudia; Fahle, Manfred

    2017-01-01

    Prisms shifting the visual input sideways produce a mismatch between the visual versus felt position of one's hand. Prism adaptation eliminates this mismatch, realigning hand proprioception with visual input. Whether this realignment concerns exclusively the visuo-(hand)motor system or it generalizes to acoustic inputs is controversial. We here show that there is indeed a slight influence of visual adaptation on the perceived direction of acoustic sources. However, this shift in perceived auditory direction can be fully explained by a subconscious head rotation during prism exposure and by changes in arm proprioception. Hence, prism adaptation does only indirectly generalize to auditory space perception.

  10. Jump resonant frequency islands in nonlinear feedback control systems

    NASA Technical Reports Server (NTRS)

    Koenigsberg, W. D.; Dunn, J. C.

    1975-01-01

    A new type of jump resonance is predicted and observed in certain nonlinear feedback control systems. The new jump resonance characteristic is described as a 'frequency island' due to the fact that a portion of the input-output transfer characteristic is disjoint from the main body. The presence of such frequency islands was predicted by using a sinusoidal describing function characterization of the dynamics of an inertial gyro employing nonlinear ternary rebalance logic. While the general conditions under which such islands are possible have not been examined, a numerical approach is presented which can aid in establishing their presence. The existence of the frequency islands predicted for the ternary rebalanced gyro was confirmed by simulating the nonlinear system and measuring the transfer function.

  11. Robust Algorithms for on Minor-Free Graphs Based on the Sherali-Adams Hierarchy

    NASA Astrophysics Data System (ADS)

    Magen, Avner; Moharrami, Mohammad

    This work provides a Linear Programming-based Polynomial Time Approximation Scheme (PTAS) for two classical NP-hard problems on graphs when the input graph is guaranteed to be planar or, more generally, minor-free. The algorithm applies a sufficiently large number of rounds of the so-called Sherali-Adams Lift-and-Project system; the number of rounds needed to obtain the desired approximation is a function of the required accuracy and of the graph that must be avoided as a minor. The two problems discussed are both well studied, and a curious fact exposed here is that, in the world of minor-free graphs, one of them is harder in some sense than the other.

  12. Neurophysiological Basis of Sleep’s Function on Memory and Cognition

    PubMed Central

    Spencer, Rebecca M. C.

    2013-01-01

    A wealth of recent studies support a function of sleep on memory and cognitive processing. At a physiological level, sleep supports memory in a number of ways including neural replay and enhanced plasticity in the context of reduced ongoing input. This paper presents behavioral evidence for sleep’s role in selective remembering and forgetting of declarative memories, in generalization of these memories, and in motor skill consolidation. Recent physiological data reviewed suggests how these behavioral changes might be supported by sleep. Importantly, in reviewing these findings, an integrated view of how distinct sleep stages uniquely contribute to memory processing emerges. This model will be useful in developing future behavioral and physiological studies to test predictions that emerge. PMID:24600607

  13. Reference manual for the POISSON/SUPERFISH Group of Codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1987-01-01

    The POISSON/SUPERFISH Group codes were set up to solve two separate problems: the design of magnets and the design of rf cavities in a two-dimensional geometry. The first stage of either problem is to describe the layout of the magnet or cavity in a way that can be used as input to solve the generalized Poisson equation for magnets or the Helmholtz equations for cavities. The computer codes require that the problems be discretized by replacing the differentials (dx, dy) by finite differences (ΔX, ΔY). Instead of defining the function everywhere in a plane, the function is defined only at a finite number of points on a mesh in the plane.
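
    As a generic illustration of the discretization idea (not the actual POISSON/SUPERFISH input format or solver), the sketch below replaces the differentials by mesh spacings, defines the unknown only at mesh points, and solves the resulting five-point stencil for the Poisson equation by simple iteration.

      import numpy as np

      # Five-point finite-difference form of -(u_xx + u_yy) = f on the unit
      # square with u = 0 on the boundary.
      n = 20                                   # interior mesh points per side
      h = 1.0 / (n + 1)                        # mesh spacing (replaces dx, dy)
      f = np.ones((n, n))                      # source term at the mesh points
      u = np.zeros((n + 2, n + 2))             # unknown, including boundary points

      for _ in range(5000):                    # plain Jacobi iteration
          u[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                                  u[1:-1, :-2] + u[1:-1, 2:] + h * h * f)

      print("peak of the discretized solution:", round(float(u.max()), 4))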

  14. Impact Response Characteristics of Polymeric Materials

    DTIC Science & Technology

    1981-11-01

    amplitude-frequency domain. In the language of signal communications an input signal given by some time dependence F(t) is introduced into a "channel" ... fixed and not altered by the signal. The channel can be characterized by its own function H(t), called the transfer function. This concept can be ... represented schematically as follows: Input Signal F(t) -> [ Channel H(t) ] -> Output Signal G(t). In our case the input signal is the impact event, the output
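
    In this linear-systems picture the input, channel, and output are related by the usual convolution, written here for reference (h denotes the channel impulse response, whose Fourier transform is the transfer function); this is a standard statement of the framework rather than a quotation from the report.

      \[
        G(t) \;=\; (h * F)(t) \;=\; \int_{-\infty}^{\infty} h(\tau)\, F(t-\tau)\, d\tau,
        \qquad
        \hat{G}(\omega) \;=\; \hat{H}(\omega)\,\hat{F}(\omega).
      \]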

  15. The human motor neuron pools receive a dominant slow‐varying common synaptic input

    PubMed Central

    Negro, Francesco; Yavuz, Utku Şükrü

    2016-01-01

    Key points: Motor neurons in a pool receive both common and independent synaptic inputs, although the proportion and role of their common synaptic input is debated. Classic correlation techniques between motor unit spike trains do not measure the absolute proportion of common input and have limitations as a result of the non‐linearity of motor neurons. We propose a method that for the first time allows an accurate quantification of the absolute proportion of low frequency common synaptic input (<5 Hz) to motor neurons in humans. We applied the proposed method to three human muscles and determined experimentally that they receive a similar large amount (>60%) of common input, irrespective of their different functional and control properties. These results increase our knowledge about the role of common and independent input to motor neurons in force control. Abstract: Motor neurons receive both common and independent synaptic inputs. This observation is classically based on the presence of a significant correlation between pairs of motor unit spike trains. The functional significance of different relative proportions of common input across muscles, individuals and conditions is still debated. One of the limitations in our understanding of correlated input to motor neurons is that it has not been possible so far to quantify the absolute proportion of common input with respect to the total synaptic input received by the motor neurons. Indeed, correlation measures of pairs of output spike trains only allow for relative comparisons. In the present study, we report for the first time an approach for measuring the proportion of common input in the low frequency bandwidth (<5 Hz) to a motor neuron pool in humans. This estimate is based on a phenomenological model and the theoretical fitting of the experimental values of coherence between the permutations of groups of motor unit spike trains. We demonstrate the validity of this theoretical estimate with several simulations. Moreover, we applied this method to three human muscles: the abductor digiti minimi, tibialis anterior and vastus medialis. Despite these muscles having different functional roles and control properties, as confirmed by the results of the present study, we estimate that their motor pools receive a similar and large (>60%) proportion of common low frequency oscillations with respect to their total synaptic input. These results suggest that the central nervous system provides a large amount of common input to motor neuron pools, in a similar way to that for muscles with different functional and control properties. PMID:27151459

  16. Arterial input function derived from pairwise correlations between PET-image voxels.

    PubMed

    Schain, Martin; Benjaminsson, Simon; Varnäs, Katarina; Forsberg, Anton; Halldin, Christer; Lansner, Anders; Farde, Lars; Varrone, Andrea

    2013-07-01

    A metabolite corrected arterial input function is a prerequisite for quantification of positron emission tomography (PET) data by compartmental analysis. This quantitative approach is also necessary for radioligands without suitable reference regions in brain. The measurement is laborious and requires cannulation of a peripheral artery, a procedure that can be associated with patient discomfort and potential adverse events. A non invasive procedure for obtaining the arterial input function is thus preferable. In this study, we present a novel method to obtain image-derived input functions (IDIFs). The method is based on calculation of the Pearson correlation coefficient between the time-activity curves of voxel pairs in the PET image to localize voxels displaying blood-like behavior. The method was evaluated using data obtained in human studies with the radioligands [(11)C]flumazenil and [(11)C]AZ10419369, and its performance was compared with three previously published methods. The distribution volumes (VT) obtained using IDIFs were compared with those obtained using traditional arterial measurements. Overall, the agreement in VT was good (∼3% difference) for input functions obtained using the pairwise correlation approach. This approach performed similarly or even better than the other methods, and could be considered in applied clinical studies. Applications to other radioligands are needed for further verification.
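
    A toy version of the central idea (blood-like voxels form a block of mutually correlated, early-peaking time-activity curves) is sketched below on purely synthetic curves; it is not the published algorithm, and the noise level, correlation threshold, and cluster-size cut-off are arbitrary.

      import numpy as np

      rng = np.random.default_rng(2)
      t = np.linspace(0, 60, 40)                            # minutes

      # Synthetic TACs: a sharply peaked arterial-like curve and a slowly
      # rising tissue-like curve, plus independent voxel noise.
      blood, tissue = t * np.exp(-t / 1.5), 1 - np.exp(-t / 15.0)
      n_blood, n_tissue = 30, 470
      tacs = np.vstack([blood + 0.02 * rng.standard_normal((n_blood, t.size)),
                        tissue + 0.02 * rng.standard_normal((n_tissue, t.size))])

      corr = np.corrcoef(tacs)                              # pairwise Pearson correlations
      np.fill_diagonal(corr, 0.0)

      # Greedily group voxels that are mutually correlated above a threshold.
      unassigned, groups = set(range(len(tacs))), []
      while unassigned:
          seed = unassigned.pop()
          members = [seed] + [j for j in unassigned if corr[seed, j] > 0.9]
          unassigned -= set(members)
          groups.append(members)

      # Keep sizeable groups and pick the one whose mean TAC peaks earliest.
      groups = [g for g in groups if len(g) >= 10]
      blood_group = min(groups, key=lambda g: np.argmax(tacs[g].mean(axis=0)))
      idif = tacs[blood_group].mean(axis=0)                 # image-derived input function
      print(f"selected {len(blood_group)} voxels; time-to-peak = "
            f"{t[np.argmax(idif)]:.1f} min")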

  17. Using soft computing techniques to predict corrected air permeability using Thomeer parameters, air porosity and grain density

    NASA Astrophysics Data System (ADS)

    Nooruddin, Hasan A.; Anifowose, Fatai; Abdulraheem, Abdulazeez

    2014-03-01

    Soft computing techniques are recently becoming very popular in the oil industry. A number of computational intelligence-based predictive methods have been widely applied in the industry with high prediction capabilities. Some of the popular methods include feed-forward neural networks, radial basis function network, generalized regression neural network, functional networks, support vector regression and adaptive network fuzzy inference system. A comparative study among the most popular soft computing techniques is presented using a large dataset published in literature describing multimodal pore systems in the Arab D formation. The inputs to the models are air porosity, grain density, and Thomeer parameters obtained using mercury injection capillary pressure profiles. Corrected air permeability is the target variable. Applying developed permeability models in recent reservoir characterization workflow ensures consistency between micro and macro scale information represented mainly by Thomeer parameters and absolute permeability. The dataset was divided into two parts with 80% of data used for training and 20% for testing. The target permeability variable was transformed to the logarithmic scale as a pre-processing step and to show better correlations with the input variables. Statistical and graphical analysis of the results including permeability cross-plots and detailed error measures were created. In general, the comparative study showed very close results among the developed models. The feed-forward neural network permeability model showed the lowest average relative error, average absolute relative error, standard deviation of error and root mean square error, making it the best model for such problems. Adaptive network fuzzy inference system also showed very good results.

  18. An exact general remeshing scheme applied to physically conservative voxelization

    DOE PAGES

    Powell, Devon; Abel, Tom

    2015-05-21

    We present an exact general remeshing scheme to compute analytic integrals of polynomial functions over the intersections between convex polyhedral cells of old and new meshes. In physics applications this allows one to ensure global mass, momentum, and energy conservation while applying higher-order polynomial interpolation. We elaborate on applications of our algorithm arising in the analysis of cosmological N-body data, computer graphics, and continuum mechanics problems. We focus on the particular case of remeshing tetrahedral cells onto a Cartesian grid such that the volume integral of the polynomial density function given on the input mesh is guaranteed to equal the corresponding integral over the output mesh. We refer to this as "physically conservative voxelization." At the core of our method is an algorithm for intersecting two convex polyhedra by successively clipping one against the faces of the other. This algorithm is an implementation of the ideas presented abstractly by Sugihara [48], who suggests using the planar graph representations of convex polyhedra to ensure topological consistency of the output. This makes our implementation robust to geometric degeneracy in the input. We employ a simplicial decomposition to calculate moment integrals up to quadratic order over the resulting intersection domain. We also address practical issues arising in a software implementation, including numerical stability in geometric calculations, management of cancellation errors, and extension to two dimensions. In a comparison to recent work, we show substantial performance gains. We provide a C implementation intended to be a fast, accurate, and robust tool for geometric calculations on polyhedral mesh elements.

  19. PLEXOS Input Data Generator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    The PLEXOS Input Data Generator (PIDG) is a tool that enables PLEXOS users to better version their data, automate data processing, collaborate in developing inputs, and transfer data between different production cost modeling and other power systems analysis software. PIDG can process data that is in a generalized format from multiple input sources, including CSV files, PostgreSQL databases, and PSS/E .raw files and write it to an Excel file that can be imported into PLEXOS with only limited manual intervention.

  20. Enhanced Sensitivity to Rapid Input Fluctuations by Nonlinear Threshold Dynamics in Neocortical Pyramidal Neurons.

    PubMed

    Mensi, Skander; Hagens, Olivier; Gerstner, Wulfram; Pozzorini, Christian

    2016-02-01

    The way in which single neurons transform input into output spike trains has fundamental consequences for network coding. Theories and modeling studies based on standard Integrate-and-Fire models implicitly assume that, in response to increasingly strong inputs, neurons modify their coding strategy by progressively reducing their selective sensitivity to rapid input fluctuations. Combining mathematical modeling with in vitro experiments, we demonstrate that, in L5 pyramidal neurons, the firing threshold dynamics adaptively adjust the effective timescale of somatic integration in order to preserve sensitivity to rapid signals over a broad range of input statistics. For that, a new Generalized Integrate-and-Fire model featuring nonlinear firing threshold dynamics and conductance-based adaptation is introduced that outperforms state-of-the-art neuron models in predicting the spiking activity of neurons responding to a variety of in vivo-like fluctuating currents. Our model allows for efficient parameter extraction and can be analytically mapped to a Generalized Linear Model in which both the input filter--describing somatic integration--and the spike-history filter--accounting for spike-frequency adaptation--dynamically adapt to the input statistics, as experimentally observed. Overall, our results provide new insights on the computational role of different biophysical processes known to underlie adaptive coding in single neurons and support previous theoretical findings indicating that the nonlinear dynamics of the firing threshold due to Na+-channel inactivation regulate the sensitivity to rapid input fluctuations.
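
    A toy leaky integrate-and-fire neuron with a threshold that tracks depolarization (a crude stand-in for the Na+-inactivation-driven threshold dynamics discussed above) is sketched below; all parameter values are invented, and it is not the fitted Generalized Integrate-and-Fire model of the paper.

      import numpy as np

      rng = np.random.default_rng(3)
      dt, T = 0.1e-3, 2.0                        # time step (s), duration (s)
      n = int(T / dt)

      tau_m, E_L, R = 20e-3, -70e-3, 100e6       # membrane time constant, rest, resistance
      tau_th, theta0, alpha = 5e-3, -50e-3, 0.6  # threshold time constant, baseline, coupling

      I = 0.25e-9 + 0.08e-9 * rng.standard_normal(n)   # mean drive + fast fluctuations (A)

      V, theta, spikes = E_L, theta0, []
      for i in range(n):
          V += dt / tau_m * (E_L - V + R * I[i])
          # the threshold relaxes toward a value that rises with depolarization
          theta_inf = theta0 + alpha * max(V - theta0, 0.0)
          theta += dt / tau_th * (theta_inf - theta)
          if V >= theta:                         # moving-threshold spike condition
              spikes.append(i * dt)
              V = E_L                            # reset the membrane after the spike
      print(f"{len(spikes)} spikes, mean rate {len(spikes) / T:.1f} Hz")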

  1. Population-based input function and image-derived input function for [¹¹C](R)-rolipram PET imaging: methodology, validation and application to the study of major depressive disorder.

    PubMed

    Zanotti-Fregonara, Paolo; Hines, Christina S; Zoghbi, Sami S; Liow, Jeih-San; Zhang, Yi; Pike, Victor W; Drevets, Wayne C; Mallinger, Alan G; Zarate, Carlos A; Fujita, Masahiro; Innis, Robert B

    2012-11-15

    Quantitative PET studies of neuroreceptor tracers typically require that arterial input function be measured. The aim of this study was to explore the use of a population-based input function (PBIF) and an image-derived input function (IDIF) for [(11)C](R)-rolipram kinetic analysis, with the goal of reducing - and possibly eliminating - the number of arterial blood samples needed to measure parent radioligand concentrations. A PBIF was first generated using [(11)C](R)-rolipram parent time-activity curves from 12 healthy volunteers (Group 1). Both invasive (blood samples) and non-invasive (body weight, body surface area, and lean body mass) scaling methods for PBIF were tested. The scaling method that gave the best estimate of the Logan-V(T) values was then used to determine the test-retest variability of PBIF in Group 1 and then prospectively applied to another population of 25 healthy subjects (Group 2), as well as to a population of 26 patients with major depressive disorder (Group 3). Results were also compared to those obtained with an image-derived input function (IDIF) from the internal carotid artery. In some subjects, we measured arteriovenous differences in [(11)C](R)-rolipram concentration to see whether venous samples could be used instead of arterial samples. Finally, we assessed the ability of IDIF and PBIF to discriminate depressed patients (MDD) and healthy subjects. Arterial blood-scaled PBIF gave better results than any non-invasive scaling technique. Excellent results were obtained when the blood-scaled PBIF was prospectively applied to the subjects in Group 2 (V(T) ratio 1.02±0.05; mean±SD) and Group 3 (V(T) ratio 1.03±0.04). Equally accurate results were obtained for two subpopulations of subjects drawn from Groups 2 and 3 who had very differently shaped (i.e. "flatter" or "steeper") input functions compared to PBIF (V(T) ratio 1.07±0.04 and 0.99±0.04, respectively). Results obtained via PBIF were equivalent to those obtained via IDIF (V(T) ratio 0.99±0.05 and 1.00±0.04 for healthy subjects and MDD patients, respectively). Retest variability of PBIF was equivalent to that obtained with full input function and IDIF (14.5%, 15.2%, and 14.1%, respectively). Due to [(11)C](R)-rolipram arteriovenous differences, venous samples could not be substituted for arterial samples. With both IDIF and PBIF, depressed patients had a 20% reduction in [(11)C](R)-rolipram binding as compared to control (two-way ANOVA: p=0.008 and 0.005, respectively). These results were almost equivalent to those obtained using 23 arterial samples. Although some arterial samples are still necessary, both PBIF and IDIF are accurate and precise alternatives to full arterial input function for [(11)C](R)-rolipram PET studies. Both techniques give accurate results with low variability, even for clinically different groups of subjects and those with very differently shaped input functions. Published by Elsevier Inc.

  2. BIREFRINGENT FILTER MODEL

    NASA Technical Reports Server (NTRS)

    Cross, P. L.

    1994-01-01

    Birefringent filters are often used as line-narrowing components in solid state lasers. The Birefringent Filter Model program generates a stand-alone model of a birefringent filter for use in designing and analyzing a birefringent filter. It was originally developed to aid in the design of solid state lasers to be used on aircraft or spacecraft to perform remote sensing of the atmosphere. The model is general enough to allow the user to address problems such as temperature stability requirements, manufacturing tolerances, and alignment tolerances. The input parameters for the program are divided into 7 groups: 1) general parameters which refer to all elements of the filter; 2) wavelength related parameters; 3) filter, coating and orientation parameters; 4) input ray parameters; 5) output device specifications; 6) component related parameters; and 7) transmission profile parameters. The program can analyze a birefringent filter with up to 12 different components, and can calculate the transmission and summary parameters for multiple passes as well as a single pass through the filter. The Jones matrix, which is calculated from the input parameters of Groups 1 through 4, is used to calculate the transmission. Output files containing the calculated transmission or the calculated Jones' matrix as a function of wavelength can be created. These output files can then be used as inputs for user written programs. For example, to plot the transmission or to calculate the eigen-transmittances and the corresponding eigen-polarizations for the Jones' matrix, write the appropriate data to a file. The Birefringent Filter Model is written in Microsoft FORTRAN 2.0. The program format is interactive. It was developed on an IBM PC XT equipped with an 8087 math coprocessor, and has a central memory requirement of approximately 154K. Since Microsoft FORTRAN 2.0 does not support complex arithmetic, matrix routines for addition, subtraction, and multiplication of complex, double precision variables are included. The Birefringent Filter Model was written in 1987.
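
    An illustrative Jones-matrix calculation for a simple Lyot-type stack (polarizer, birefringent plate at 45 degrees, polarizer, repeated per plate) is sketched below; the birefringence, plate thicknesses, and wavelength range are invented, and this is not the program described above, which also handles arbitrary orientations, coatings, and multiple passes.

      import numpy as np

      def rot(theta):
          c, s = np.cos(theta), np.sin(theta)
          return np.array([[c, -s], [s, c]])

      def waveplate(delta, theta):
          """Jones matrix of a birefringent plate with retardance delta (rad)
          and fast axis at angle theta."""
          J = np.diag([np.exp(-1j * delta / 2), np.exp(+1j * delta / 2)])
          return rot(theta) @ J @ rot(-theta)

      POL_X = np.array([[1, 0], [0, 0]])                  # horizontal polarizer

      def lyot_transmission(wavelength, thicknesses, delta_n):
          E = np.array([1.0, 0.0])                        # horizontally polarized input
          for d in thicknesses:
              delta = 2 * np.pi * delta_n * d / wavelength
              E = POL_X @ waveplate(delta, np.pi / 4) @ E
          return float(np.abs(E) @ np.abs(E))

      delta_n, plates = 0.009, [5e-3, 10e-3, 20e-3]       # quartz-like, thicknesses in m
      wl = np.linspace(1040e-9, 1080e-9, 4001)
      T = np.array([lyot_transmission(w, plates, delta_n) for w in wl])
      print("peak transmission %.2f at %.2f nm" % (T.max(), wl[T.argmax()] * 1e9))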

  3. On non-primitively divergent vertices of Yang-Mills theory

    NASA Astrophysics Data System (ADS)

    Huber, Markus Q.

    2017-11-01

    Two correlation functions of Yang-Mills beyond the primitively divergent ones, the two-ghost-two-gluon and the four-ghost vertices, are calculated and their influence on lower vertices is examined. Their full (transverse) tensor structure is taken into account. As input, a solution of the full two-point equations - including two-loop terms - is used that respects the resummed perturbative ultraviolet behavior. A clear hierarchy is found with regard to the color structure that reduces the number of relevant dressing functions. The impact of the two-ghost-two-gluon vertex on the three-gluon vertex is negligible, which is explained by the fact that all non-small dressing functions drop out due to their color factors. Only in the ghost-gluon vertex a small net effect below 2% is seen. The four-ghost vertex is found to be extremely small in general. Since these two four-point functions do not enter into the propagator equations, these findings establish their small overall effect on lower correlation functions.

  4. Using model order tests to determine sensory inputs in a motion study

    NASA Technical Reports Server (NTRS)

    Repperger, D. W.; Junker, A. M.

    1977-01-01

    In the study of motion effects on tracking performance, a problem of interest is the determination of what sensory inputs a human uses in controlling his tracking task. In the approach presented here a simple canonical model (PID, or a proportional, integral, derivative structure) is used to model the human's input-output time series. A study of significant changes in reduction of the output error loss functional is conducted as different permutations of parameters are considered. Since this canonical model includes parameters which are related to inputs to the human (such as the error signal, its derivatives and integration), the study of model order is equivalent to the study of which sensory inputs are being used by the tracker. The parameters are obtained which have the greatest effect on reducing the loss function significantly. In this manner the identification procedure converts the problem of testing for model order into the problem of determining sensory inputs.

  5. Uncertainty importance analysis using parametric moment ratio functions.

    PubMed

    Wei, Pengfei; Lu, Zhenzhou; Song, Jingwen

    2014-02-01

    This article presents a new importance analysis framework, called parametric moment ratio function, for measuring the reduction of model output uncertainty when the distribution parameters of inputs are changed, and the emphasis is put on the mean and variance ratio functions with respect to the variances of model inputs. The proposed concepts efficiently guide the analyst to achieve a targeted reduction on the model output mean and variance by operating on the variances of model inputs. The unbiased and progressive unbiased Monte Carlo estimators are also derived for the parametric mean and variance ratio functions, respectively. Only a set of samples is needed for implementing the proposed importance analysis by the proposed estimators, thus the computational cost is free of input dimensionality. An analytical test example with highly nonlinear behavior is introduced for illustrating the engineering significance of the proposed importance analysis technique and verifying the efficiency and convergence of the derived Monte Carlo estimators. Finally, the moment ratio function is applied to a planar 10-bar structure for achieving a targeted 50% reduction of the model output variance. © 2013 Society for Risk Analysis.
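
    A brute-force illustration of the variance ratio quantity (not the authors' unbiased single-loop estimators) is sketched below: one base set of standard-normal samples is reused, the standard deviation of one input at a time is rescaled, and the ratio of the resulting output variance to the reference output variance is reported. The model and distributions are invented.

      import numpy as np

      def model(x1, x2, x3):
          """Nonlinear test model (illustrative only)."""
          return x1 ** 2 + np.sin(2 * x2) + 0.5 * x1 * x3

      rng = np.random.default_rng(4)
      n = 200_000
      mu = np.array([1.0, 0.5, 0.0])
      sigma = np.array([0.5, 1.0, 0.8])
      z = rng.standard_normal((n, 3))              # one reused set of samples

      def output_variance(scale, which):
          """Var(Y) when the std of input `which` is multiplied by `scale`."""
          s = sigma.copy()
          s[which] *= scale
          x = mu + z * s
          return model(x[:, 0], x[:, 1], x[:, 2]).var()

      v_ref = output_variance(1.0, 0)              # unmodified input variances
      for which in range(3):
          v_half = output_variance(0.5, which)     # halve the std of one input
          print(f"input {which + 1}: variance ratio = {v_half / v_ref:.3f}")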

  6. Evaluating the Evidence Surrounding Pontine Cholinergic Involvement in REM Sleep Generation

    PubMed Central

    Grace, Kevin P.; Horner, Richard L.

    2015-01-01

    Rapid eye movement (REM) sleep – characterized by vivid dreaming, motor paralysis, and heightened neural activity – is one of the fundamental states of the mammalian central nervous system. Initial theories of REM sleep generation posited that induction of the state required activation of the “pontine REM sleep generator” by cholinergic inputs. Here, we review and evaluate the evidence surrounding cholinergic involvement in REM sleep generation. We submit that: (i) the capacity of pontine cholinergic neurotransmission to generate REM sleep has been firmly established by gain-of-function experiments, (ii) the function of endogenous cholinergic input to REM sleep generating sites cannot be determined by gain-of-function experiments; rather, loss-of-function studies are required, (iii) loss-of-function studies show that endogenous cholinergic input to the PTF is not required for REM sleep generation, and (iv) cholinergic input to the pontine REM sleep generating sites serve an accessory role in REM sleep generation: reinforcing non-REM-to-REM sleep transitions making them quicker and less likely to fail. PMID:26388832

  7. Observations of the directional distribution of the wind energy input function over swell waves

    NASA Astrophysics Data System (ADS)

    Shabani, Behnam; Babanin, Alex V.; Baldock, Tom E.

    2016-02-01

    Field measurements of wind stress over shallow water swell traveling in different directions relative to the wind are presented. The directional distribution of the measured stresses is used to confirm the previously proposed but unverified directional distribution of the wind energy input function. The observed wind energy input function is found to follow a much narrower distribution (β ∝ cos^3.6(θ)) than the Plant (1982) cosine distribution. The observation of negative stress angles at large wind-wave angles, however, indicates that the onset of negative wind shearing occurs at about θ ≈ 50°, and supports the use of the Snyder et al. (1981) directional distribution. Taking into account the reverse momentum transfer from swell to the wind, Snyder's proposed parameterization is found to perform exceptionally well in explaining the observed narrow directional distribution of the wind energy input function, and predicting the wind drag coefficients. The empirical coefficient (ɛ) in Snyder's parameterization is hypothesised to be a function of the wave shape parameter, with the ɛ value increasing as the wave shape changes between sinusoidal, sawtooth, and sharp-crested shoaling waves.

  8. Anesthesia management of surgery for sigmoid perforation and acute peritonitis patient following heart transplantation: case report

    PubMed Central

    Yang, Xu-Li; Dai, Shu-Hong; Zhang, Juan; Zhang, Jing; Liu, Yan-Jun; Yang, Yan; Sun, Yu-E; Ma, Zheng-Liang; Gu, Xiao-Ping

    2015-01-01

    Here we describe a case in which a patient with a history of heart transplantation underwent emergency laparotomy for acute peritonitis and sigmoid perforation under general anesthesia. A good knowledge of the physiology of the transplanted heart is critical for effective and safe general anesthesia. We chose etomidate, which has a weaker impact on cardiovascular function, plus propofol for induction, and propofol plus cisatracurium for maintenance, with intermittent analgesics and vasoactive drugs to facilitate the anesthesia. In addition, fluid input, electrolyte balance, and acid-base balance were carefully adjusted during the whole procedure. The patient was in good condition after the surgery. In this case report we aim to provide some guidance for patients scheduled for non-cardiac surgery after heart transplantation. PMID:26379997

  9. Exact Performance of General Second-Order Processors for Gaussian Inputs

    DTIC Science & Technology

    1983-10-15

    general than the characteristic function considered in [3, eq. 5], which itself required a very lengthy analytic treatment to get the probability...

  10. Controllers, observers, and applications thereof

    NASA Technical Reports Server (NTRS)

    Gao, Zhiqiang (Inventor); Zhou, Wankun (Inventor); Miklosovic, Robert (Inventor); Radke, Aaron (Inventor); Zheng, Qing (Inventor)

    2011-01-01

    Controller scaling and parameterization are described. Techniques that can be improved by employing the scaling and parameterization include, but are not limited to, controller design, tuning and optimization. The scaling and parameterization methods described here apply to transfer function based controllers, including PID controllers. The parameterization methods also apply to state feedback and state observer based controllers, as well as linear active disturbance rejection (ADRC) controllers. Parameterization simplifies the use of ADRC. A discrete extended state observer (DESO) and a generalized extended state observer (GESO) are described. They improve the performance of the ESO and therefore ADRC. A tracking control algorithm is also described that improves the performance of the ADRC controller. A general algorithm is described for applying ADRC to multi-input multi-output systems. Several specific applications of the control systems and processes are disclosed.
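
    For orientation, a textbook-style linear extended state observer (ESO) for a second-order plant can be sketched as follows (Python). This is only a generic illustration of the ESO idea with a bandwidth parameterization of the gains; the plant, gains, and Euler discretization are assumptions and not the patented DESO/GESO formulations.

      import numpy as np

      # Assumed plant for illustration: y'' = f(t, y, y') + b0*u, with f the unknown "total disturbance".
      b0 = 1.0
      dt = 0.001
      wo = 50.0                              # observer bandwidth (tuning parameter)
      l1, l2, l3 = 3*wo, 3*wo**2, wo**3      # bandwidth parameterization of the observer gains

      def true_plant(x, u, t):
          # "Unknown" dynamics, used only to generate the measurement.
          f = -2.0*x[1] - 5.0*x[0] + np.sin(2*t)
          return np.array([x[1], f + b0*u])

      x = np.array([1.0, 0.0])               # true state [y, y']
      z = np.zeros(3)                        # observer state [y_hat, ydot_hat, f_hat]
      u = 0.0                                # no control applied in this sketch
      for k in range(5000):
          t = k*dt
          y = x[0]                           # measurement
          e = y - z[0]
          # Euler-discretized linear ESO update
          z = z + dt*np.array([z[1] + l1*e,
                               z[2] + b0*u + l2*e,
                               l3*e])
          x = x + dt*true_plant(x, u, t)     # propagate the true plant

      print("estimated total disturbance f_hat at t = 5 s:", z[2])

    The third observer state approximates the total disturbance acting on the plant, which an ADRC-style control law would then cancel.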

  11. Observer-Based Adaptive NN Control for a Class of Uncertain Nonlinear Systems With Nonsymmetric Input Saturation.

    PubMed

    Yong-Feng Gao; Xi-Ming Sun; Changyun Wen; Wei Wang

    2017-07-01

    This paper is concerned with the problem of adaptive tracking control for a class of uncertain nonlinear systems with nonsymmetric input saturation and immeasurable states. A radial basis function neural network (NN) is employed to approximate the unknown functions, and an NN state observer is designed to estimate the immeasurable states. To analyze the effect of input saturation, an auxiliary system is employed. With the aid of the adaptive backstepping technique, an adaptive tracking control approach is developed. Under the proposed adaptive tracking controller, the boundedness of all the signals in the closed-loop system is achieved. Moreover, distinct from most of the existing references, the tracking error can be bounded by an explicit function of the design parameters and the saturation input error. Finally, an example is given to show the effectiveness of the proposed method.

  12. How the type of input function affects the dynamic response of conducting polymer actuators

    NASA Astrophysics Data System (ADS)

    Xiang, Xingcan; Alici, Gursel; Mutlu, Rahim; Li, Weihua

    2014-10-01

    There has been a growing interest in smart actuators typified by conducting polymer actuators, especially in their (i) fabrication, modeling and control with minimum external data and (ii) applications in bio-inspired devices, robotics and mechatronics. Their control is a challenging research problem due to the complex and nonlinear properties of these actuators, which cannot be predicted accurately. Based on an input-shaping technique, we propose a new method to improve the conducting polymer actuators’ command-following ability, while minimizing their electric power consumption. We applied four input functions with smooth characteristics to a trilayer conducting polymer actuator to experimentally evaluate its command-following ability under an open-loop control strategy and a simulated feedback control strategy, and, more importantly, to quantify how the type of input function affects the dynamic response of this class of actuators. We have found that the four smooth inputs consume less electrical power than sharp inputs such as a step input with discontinuous higher-order derivatives. We also obtained an improved transient response performance from the smooth inputs, especially under the simulated feedback control strategy, which we have proposed previously [X Xiang, R Mutlu, G Alici, and W Li, 2014, “Control of conducting polymer actuators without physical feedback: simulated feedback control approach with particle swarm optimization”, Journal of Smart Materials and Structure, 23]. The idea of using a smooth input command, which results in lower power consumption and better control performance, can be extended to other smart actuators. Consuming less electrical energy or power will have a direct effect on enhancing the operational life of these actuators.

  13. Automated manual transmission clutch controller

    DOEpatents

    Lawrie, Robert E.; Reed, Jr., Richard G.; Rausen, David J.

    1999-11-30

    A powertrain system for a hybrid vehicle. The hybrid vehicle includes a heat engine, such as a diesel engine, and an electric machine, which operates as both an electric motor and an alternator, to power the vehicle. The hybrid vehicle also includes a manual-style transmission configured to operate as an automatic transmission from the perspective of the driver. The engine and the electric machine drive an input shaft which in turn drives an output shaft of the transmission. In addition to driving the transmission, the electric machine regulates the speed of the input shaft in order to synchronize the input shaft during either an upshift or downshift of the transmission by either decreasing or increasing the speed of the input shaft. When decreasing the speed of the input shaft, the electric motor functions as an alternator to produce electrical energy which may be stored by a storage device. Operation of the transmission is controlled by a transmission controller which receives input signals and generates output signals to control shift and clutch motors to effect smooth launches, upshifts, and downshifts of the transmission, so that the transmission functions substantially as an automatic transmission from the perspective of the driver, while internally substantially functioning as a manual transmission.

  14. Automated manual transmission shift sequence controller

    DOEpatents

    Lawrie, Robert E.; Reed, Richard G.; Rausen, David J.

    2000-02-01

    A powertrain system for a hybrid vehicle. The hybrid vehicle includes a heat engine, such as a diesel engine, and an electric machine, which operates as both an electric motor and an alternator, to power the vehicle. The hybrid vehicle also includes a manual-style transmission configured to operate as an automatic transmission from the perspective of the driver. The engine and the electric machine drive an input shaft which in turn drives an output shaft of the transmission. In addition to driving the transmission, the electric machine regulates the speed of the input shaft in order to synchronize the input shaft during either an upshift or downshift of the transmission by either decreasing or increasing the speed of the input shaft. When decreasing the speed of the input shaft, the electric motor functions as an alternator to produce electrical energy which may be stored by a storage device. Operation of the transmission is controlled by a transmission controller which receives input signals and generates output signals to control shift and clutch motors to effect smooth launches, upshifts, and downshifts of the transmission, so that the transmission functions substantially as an automatic transmission from the perspective of the driver, while internally substantially functioning as a manual transmission.

  15. Automated manual transmission mode selection controller

    DOEpatents

    Lawrie, Robert E.

    1999-11-09

    A powertrain system for a hybrid vehicle. The hybrid vehicle includes a heat engine, such as a diesel engine, and an electric machine, which operates as both an electric motor and an alternator, to power the vehicle. The hybrid vehicle also includes a manual-style transmission configured to operate as an automatic transmission from the perspective of the driver. The engine and the electric machine drive an input shaft which in turn drives an output shaft of the transmission. In addition to driving the transmission, the electric machine regulates the speed of the input shaft in order to synchronize the input shaft during either an upshift or downshift of the transmission by either decreasing or increasing the speed of the input shaft. When decreasing the speed of the input shaft, the electric motor functions as an alternator to produce electrical energy which may be stored by a storage device. Operation of the transmission is controlled by a transmission controller which receives input signals and generates output signals to control shift and clutch motors to effect smooth launches, upshifts, and downshifts of the transmission, so that the transmission functions substantially as an automatic transmission from the perspective of the driver, while internally substantially functioning as a manual transmission.

  16. Automated manual transmission controller

    DOEpatents

    Lawrie, Robert E.; Reed, Jr., Richard G.; Bernier, David R.

    1999-12-28

    A powertrain system for a hybrid vehicle. The hybrid vehicle includes a heat engine, such as a diesel engine, and an electric machine, which operates as both an electric motor and an alternator, to power the vehicle. The hybrid vehicle also includes a manual-style transmission configured to operate as an automatic transmission from the perspective of the driver. The engine and the electric machine drive an input shaft which in turn drives an output shaft of the transmission. In addition to driving the transmission, the electric machine regulates the speed of the input shaft in order to synchronize the input shaft during either an upshift or downshift of the transmission by either decreasing or increasing the speed of the input shaft. When decreasing the speed of the input shaft, the electric motor functions as an alternator to produce electrical energy which may be stored by a storage device. Operation of the transmission is controlled by a transmission controller which receives input signals and generates output signals to control shift and clutch motors to effect smooth launches, upshifts, and downshifts of the transmission, so that the transmission functions substantially as an automatic transmission from the perspective of the driver, while internally substantially functioning as a manual transmission.

  17. Production Economics of Private Forestry: A Comparison of Industrial and Nonindustrial Forest Owners

    Treesearch

    David H. Newman; David N. Wear

    1993-01-01

    This paper compares the production behavior of industrial and nonindustrial private forestland owners in the southeastern U.S. using a restricted profit function. Profits are modeled as a function of two outputs (sawtimber and pulpwood), one variable input (regeneration effort), and two quasi-fixed inputs (land and growing stock). Although an identical profit function is...

  18. Multiple kernel learning using single stage function approximation for binary classification problems

    NASA Astrophysics Data System (ADS)

    Shiju, S.; Sumitra, S.

    2017-12-01

    In this paper, multiple kernel learning (MKL) is formulated as a supervised classification problem. We deal with binary classification data, and hence the data modelling problem involves the computation of two decision boundaries, one related to kernel learning and the other to the input data. In our approach, they are found with the aid of a single cost function by constructing a global reproducing kernel Hilbert space (RKHS) as the direct sum of the RKHSs corresponding to the decision boundaries of kernel learning and input data, and searching for that function in the global RKHS which can be represented as the direct sum of the decision boundaries under consideration. In our experimental analysis, the proposed model showed superior performance in comparison with the existing two-stage function approximation formulation of MKL, where the decision functions of kernel learning and input data are found separately using two different cost functions. This is due to the fact that the single-stage representation helps knowledge transfer between the computation procedures for finding the decision boundaries of kernel learning and input data, which in turn boosts the generalisation capacity of the model.

  19. The Effects of a Change in the Variability of Irrigation Water

    NASA Astrophysics Data System (ADS)

    Lyon, Kenneth S.

    1983-10-01

    This paper examines the short-run effects upon several variables of an increase in the variability of an input. The measure of an increase in the variability is the "mean preserving spread" suggested by Rothschild and Stiglitz (1970). The variables examined are real income (utility), expected profits, expected output, the quantity used of the controllable input, and the shadow price of the stochastic input. Four striking features of the results follow: (1) The concepts that have been useful in summarizing deterministic comparative static results are nearly absent when an input is stochastic. (2) Most of the signs of the partial derivatives depend upon more than concavity of the utility and production functions. (3) If the utility function is not "too" risk averse, then the risk-neutral results hold for the risk-aversion case. (4) If the production function is Cobb-Douglas, then definite results are achieved if the utility function is linear or if the "degree of risk-aversion" is "small."

  20. Sinusoidal input describing function for hysteresis followed by elementary backlash

    NASA Technical Reports Server (NTRS)

    Ringland, R. F.

    1976-01-01

    The author proposes a new sinusoidal input describing function which accounts for the serial combination of hysteresis followed by elementary backlash in a single nonlinear element. The output of the hysteresis element drives the elementary backlash element. Various analytical forms of the describing function are given, depending on the a/A ratio, where a is the half width of the hysteresis band or backlash gap, and A is the amplitude of the assumed input sinusoid, and on the value of the parameter representing the fraction of a attributed to the backlash characteristic. The negative inverse describing function is plotted on a gain-phase plot, and it is seen that a relatively small amount of backlash leads to domination of the backlash character in the describing function. The extent of the region of the gain-phase plane covered by the describing function is such as to guarantee some form of limit cycle behavior in most closed-loop systems.
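
    A describing function of this kind can also be evaluated numerically by driving the nonlinearity with a sinusoid and extracting the fundamental of the steady-state output. The sketch below (Python) does this for an elementary backlash element alone; it is a generic harmonic-balance illustration, not Ringland's analytical expressions, and the gap value and amplitudes are arbitrary.

      import numpy as np

      def backlash(x, gap):
          """Elementary backlash (play) with half-width `gap` and unit slope."""
          y = np.zeros_like(x)
          for k in range(1, len(x)):
              d = x[k] - y[k-1]
              y[k] = y[k-1] if abs(d) <= gap else x[k] - np.sign(d)*gap
          return y

      def describing_function(A, gap, n=4096, periods=8):
          """Complex describing function N(A) from the fundamental of the steady-state output."""
          t = np.linspace(0, 2*np.pi*periods, n*periods, endpoint=False)
          x = A*np.sin(t)
          y = backlash(x, gap)[-n:]          # keep the last period (steady state)
          ts = t[-n:]
          # Fourier coefficients of the output fundamental: y1 ≈ a1*cos(t) + b1*sin(t)
          b1 = 2.0/n * np.sum(y*np.sin(ts))
          a1 = 2.0/n * np.sum(y*np.cos(ts))
          return (b1 + 1j*a1)/A

      for A in (1.5, 2.0, 4.0, 10.0):
          N = describing_function(A, gap=1.0)
          print(f"A={A:5.1f}  |N|={abs(N):.3f}  phase={np.degrees(np.angle(N)):6.1f} deg")

    Plotting -1/N(A) over a range of amplitudes on a gain-phase plane is the usual next step when checking for limit cycles against the linear part of the loop.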

  1. Inferring topologies via driving-based generalized synchronization of two-layer networks

    NASA Astrophysics Data System (ADS)

    Wang, Yingfei; Wu, Xiaoqun; Feng, Hui; Lu, Jun-an; Xu, Yuhua

    2016-05-01

    The interaction topology among the constituents of a complex network plays a crucial role in the network’s evolutionary mechanisms and functional behaviors. However, some network topologies are usually unknown or uncertain. Meanwhile, coupling delays are ubiquitous in various man-made and natural networks. Hence, it is necessary to gain knowledge of the whole or partial topology of a complex dynamical network by taking into consideration communication delay. In this paper, topology identification of complex dynamical networks is investigated via generalized synchronization of a two-layer network. Particularly, based on the LaSalle-type invariance principle of stochastic differential delay equations, an adaptive control technique is proposed by constructing an auxiliary layer and designing proper control input and updating laws so that the unknown topology can be recovered upon successful generalized synchronization. Numerical simulations are provided to illustrate the effectiveness of the proposed method. The technique provides a certain theoretical basis for topology inference of complex networks. In particular, when the considered network is composed of systems with high-dimension or complicated dynamics, a simpler response layer can be constructed, which is conducive to circuit design. Moreover, it is practical to take into consideration perturbations caused by control input. Finally, the method is applicable to infer topology of a subnetwork embedded within a complex system and locate hidden sources. We hope the results can provide basic insight into further research endeavors on understanding practical and economical topology inference of networks.

  2. Probabilistic Density Function Method for Stochastic ODEs of Power Systems with Uncertain Power Input

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Peng; Barajas-Solano, David A.; Constantinescu, Emil

    Wind and solar power generators are commonly described by a system of stochastic ordinary differential equations (SODEs) in which random input parameters represent uncertainty in wind and solar energy. The existing methods for SODEs are mostly limited to delta-correlated random parameters (white noise). Here we use the probability density function (PDF) method to derive a closed-form deterministic partial differential equation (PDE) for the joint probability density function of the SODEs describing a power generator with time-correlated power input. The resulting PDE is solved numerically. Good agreement with Monte Carlo simulations demonstrates the accuracy of the PDF method.
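
    The reference solution that such a PDF method is typically validated against can be sketched with brute-force Monte Carlo (Python below): a swing-type generator equation driven by a time-correlated Ornstein-Uhlenbeck power input. The model form and all parameter values are illustrative assumptions, not the paper's system.

      import numpy as np

      rng = np.random.default_rng(1)

      # Illustrative single-machine swing-type model with uncertain mechanical power p(t):
      #   d delta = omega dt
      #   M d omega = (p - D*omega - Pmax*sin(delta)) dt
      #   d p      = -(p - p_mean)/tau dt + sigma*sqrt(2/tau) dW   (Ornstein-Uhlenbeck input,
      #                                                              stationary std = sigma)
      M, D, Pmax = 4.0, 1.0, 2.0
      p_mean, tau, sigma = 1.0, 0.5, 0.2
      dt, T, n_paths = 1e-3, 5.0, 20_000
      steps = int(T/dt)

      delta = np.full(n_paths, np.arcsin(p_mean/Pmax))   # start at the deterministic equilibrium
      omega = np.zeros(n_paths)
      p = np.full(n_paths, p_mean)

      for _ in range(steps):
          dW = rng.normal(0.0, np.sqrt(dt), n_paths)
          delta += omega*dt
          omega += (p - D*omega - Pmax*np.sin(delta))/M*dt
          p += -(p - p_mean)/tau*dt + sigma*np.sqrt(2.0/tau)*dW

      # The histogram approximates the marginal PDF that the PDF method derives in closed form.
      hist, edges = np.histogram(delta, bins=60, density=True)
      print("mean(delta) =", delta.mean(), " std(delta) =", delta.std())
      print("PDF peak near delta =", edges[np.argmax(hist)])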

  3. Flexible and re-configurable optical three-input XOR logic gate of phase-modulated signals with multicast functionality for potential application in optical physical-layer network coding.

    PubMed

    Lu, Guo-Wei; Qin, Jun; Wang, Hongxiang; Ji, XuYuefeng; Sharif, Gazi Mohammad; Yamaguchi, Shigeru

    2016-02-08

    Optical logic gates, especially the exclusive-or (XOR) gate, play an important role in accomplishing photonic computing and various network functionalities in future optical networks. On the other hand, optical multicast is another indispensable functionality for efficiently delivering information in optical networks. In this paper, for the first time, we propose and experimentally demonstrate a flexible optical three-input XOR gate scheme for multiple input phase-modulated signals with a 1-to-2 multicast functionality for each XOR operation, using the four-wave mixing (FWM) effect in a single piece of highly nonlinear fiber (HNLF). Through FWM in the HNLF, all of the possible XOR operations among the input signals can be realized simultaneously by sharing a single piece of HNLF. By selecting the obtained XOR components with a subsequent wavelength-selective component, the number of XOR gates and the light participating in the XOR operations can be flexibly configured. The re-configurability of the proposed XOR gate and the integration of the optical logic gate and multicast functions in a single device offer flexibility in network design and improve network efficiency. We experimentally demonstrate a flexible three-input XOR gate for four 10-Gbaud binary phase-shift keying signals with a multicast scale of 2. Error-free operation of the obtained XOR results is achieved. A potential application of the integrated XOR and multicast function in network coding is also discussed.

  4. Exploratory analysis regarding the domain definitions for computer based analytical models

    NASA Astrophysics Data System (ADS)

    Raicu, A.; Oanta, E.; Barhalescu, M.

    2017-08-01

    Our previous computer-based studies dedicated to structural problems using analytical methods defined the composite cross section of a beam as the result of Boolean operations with so-called ‘simple’ shapes. Through generalisation, the class of ‘simple’ shapes was extended to include areas bounded by curves approximated using spline functions and areas approximated as polygons. However, particular definitions lead to particular solutions. In order to move beyond these limitations, we conceived a general definition of the cross sections, which are now considered calculus domains consisting of several subdomains. The corresponding set of input data uses complex parameterizations. This new vision allows us to naturally assign an arbitrary number of attributes to the subdomains. In this way, new phenomena that use map-wise information, such as the equilibrium diagrams of metal alloys, may be modelled. The hierarchy of the input data text files, which use the comma-separated-value format, and their structure are also presented and discussed in the paper. This new approach allows us to reuse the concepts and part of the data processing software instruments already developed. The corresponding software, to be developed subsequently, will be modularised and generalised so that it can be used in upcoming projects that require rapid development of computer-based models.

  5. Varied applications of a new maximum-likelihood code with complete covariance capability. [FERRET, for data adjustment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schmittroth, F.

    1978-01-01

    Applications of a new data-adjustment code are given. The method is based on a maximum-likelihood extension of generalized least-squares methods that allow complete covariance descriptions for the input data and the final adjusted data evaluations. The maximum-likelihood approach is used with a generalized log-normal distribution that provides a way to treat problems with large uncertainties and that circumvents the problem of negative values that can occur for physically positive quantities. The computer code, FERRET, is written to enable the user to apply it to a large variety of problems by modifying only the input subroutine. The following applications are discussed: A 75-group a priori damage function is adjusted by as much as a factor of two by use of 14 integral measurements in different reactor spectra. Reactor spectra and dosimeter cross sections are simultaneously adjusted on the basis of both integral measurements and experimental proton-recoil spectra. Measured reaction rates, measured worths, microscopic measurements, and theoretical models are used simultaneously to evaluate dosimeter and fission-product cross sections. Applications in the data reduction of neutron cross section measurements and in the evaluation of reactor after-heat are also considered. 6 figures.

  6. 7 CFR 3430.907 - Stakeholder input.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... Agriculture Regulations of the Department of Agriculture (Continued) NATIONAL INSTITUTE OF FOOD AND AGRICULTURE COMPETITIVE AND NONCOMPETITIVE NON-FORMULA FEDERAL ASSISTANCE PROGRAMS-GENERAL AWARD ADMINISTRATIVE PROVISIONS New Era Rural Technology Competitive Grants Program § 3430.907 Stakeholder input...

  7. Artificial neural network and classical least-squares methods for neurotransmitter mixture analysis.

    PubMed

    Schulze, H G; Greek, L S; Gorzalka, B B; Bree, A V; Blades, M W; Turner, R F

    1995-02-01

    Identification of individual components in biological mixtures can be a difficult problem regardless of the analytical method employed. In this work, Raman spectroscopy was chosen as a prototype analytical method due to its inherent versatility and applicability to aqueous media, making it useful for the study of biological samples. Artificial neural networks (ANNs) and the classical least-squares (CLS) method were used to identify and quantify the Raman spectra of small-molecule neurotransmitters and mixtures of such molecules. The transfer functions used by a network, as well as the architecture of a network, played an important role in the ability of the network to identify the Raman spectra of individual neurotransmitters and of neurotransmitter mixtures. Specifically, networks using sigmoid and hyperbolic tangent transfer functions generalized better from the mixtures in the training data set to those in the testing data sets than networks using sine functions. Networks with connections that permit local processing of inputs generally performed better than other networks on all the testing data sets, and better than the CLS method of curve fitting on novel spectra of some neurotransmitters.
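
    The CLS step itself is a linear least-squares fit of the mixture spectrum to a matrix of pure-component spectra. A small self-contained sketch (Python) with synthetic Gaussian peaks standing in for reference Raman spectra is shown below; all component shapes and concentrations are made up for illustration.

      import numpy as np

      rng = np.random.default_rng(2)
      wavenumber = np.linspace(400, 1800, 700)

      def peak(center, width, height=1.0):
          return height*np.exp(-0.5*((wavenumber - center)/width)**2)

      # Synthetic "pure component" reference spectra (stand-ins for neurotransmitter spectra).
      K = np.column_stack([
          peak(750, 15) + peak(1250, 20, 0.6),   # component A
          peak(900, 18) + peak(1440, 25, 0.8),   # component B
          peak(1030, 12) + peak(1600, 22, 0.5),  # component C
      ])

      true_conc = np.array([0.5, 0.3, 0.2])
      mixture = K @ true_conc + rng.normal(0, 0.01, wavenumber.size)   # noisy mixture spectrum

      # CLS: solve mixture ≈ K c in the least-squares sense.
      conc, *_ = np.linalg.lstsq(K, mixture, rcond=None)
      print("estimated concentrations:", np.round(conc, 3))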

  8. Peak-Seeking Control Using Gradient and Hessian Estimates

    NASA Technical Reports Server (NTRS)

    Ryan, John J.; Speyer, Jason L.

    2010-01-01

    A peak-seeking control method is presented which utilizes a linear time-varying Kalman filter. Performance function coordinate and magnitude measurements are used by the Kalman filter to estimate the gradient and Hessian of the performance function. The gradient and Hessian are used to command the system toward a local extremum. The method is naturally applied to multiple-input multiple-output systems. Applications of this technique to a single-input single-output example and a two-input one-output example are presented.
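
    A stripped-down illustration of the idea (Python) uses finite-difference probes instead of the paper's time-varying Kalman filter to estimate the gradient and a diagonal Hessian, then takes a Newton-like step toward the extremum. The performance map, probe size, and step rule are assumptions for the sketch only.

      import numpy as np

      def performance(x):
          # Unknown performance map (illustrative two-input, one-output example).
          return -((x[0] - 1.0)**2 + 2.0*(x[1] + 0.5)**2) + 3.0

      x = np.array([3.0, 2.0])        # initial operating point
      h = 0.05                        # probe size for finite differences

      for it in range(20):
          # Estimate gradient and (diagonal) Hessian from local probes of the measured performance.
          g = np.zeros(2); H = np.zeros(2)
          for i in range(2):
              e = np.zeros(2); e[i] = h
              fp, fm, f0 = performance(x + e), performance(x - e), performance(x)
              g[i] = (fp - fm)/(2*h)
              H[i] = (fp - 2*f0 + fm)/h**2
          # Newton-like step toward the local maximum; fall back to gradient ascent if curvature is not negative.
          step = np.where(H < 0, -g/H, 0.2*g)
          x = x + step

      print("estimated extremum location:", np.round(x, 3))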

  9. ENHANCED RECOVERY METHODS FOR 85KR AGE-DATING GROUNDWATER: ROYAL WATERSHED, MAINE

    EPA Science Inventory

    Potential widespread use of 85Kr, having a constant input function in the northern hemisphere, for groundwater age-dating would advance watershed investigations. The current input function of tritium is not sufficient to estimate young modern recharge waters. While tri...

  10. Reconstruction of input functions from a dynamic PET image with sequential administration of 15O2 and C15O2 for noninvasive and ultra-rapid measurement of CBF, OEF, and CMRO2.

    PubMed

    Kudomi, Nobuyuki; Maeda, Yukito; Yamamoto, Hiroyuki; Yamamoto, Yuka; Hatakeyama, Tetsuhiro; Nishiyama, Yoshihiro

    2018-05-01

    CBF, OEF, and CMRO2 images can be quantitatively assessed using PET. Their calculation requires arterial input functions, which require an invasive procedure. The aim of the present study was to develop a non-invasive approach with image-derived input functions (IDIFs) using an image from an ultra-rapid 15O2 and C15O2 protocol. Our technique uses a formula that expresses the input function in terms of a tissue curve and rate constants. For multiple tissue curves, the rate constants were estimated so as to minimize the differences among the inputs derived from those curves. The estimated rates were then used to express the inputs, and the mean of the estimated inputs was used as the IDIF. The method was tested in human subjects (n = 24). The estimated IDIFs reproduced the measured input functions well. The differences between the CBF, OEF, and CMRO2 values calculated by the two methods were small (<10%) relative to the invasive method, and the values showed tight correlations (r = 0.97). Simulations showed that errors associated with the assumed parameters were less than ∼10%. Our results demonstrate that IDIFs can be reconstructed from tissue curves, suggesting the possibility of using a non-invasive technique to assess CBF, OEF, and CMRO2.

  11. The Study on Network Examinational Database based on ASP Technology

    NASA Astrophysics Data System (ADS)

    Zhang, Yanfu; Han, Yuexiao; Zhou, Yanshuang

    This article introduces the structure of a general test database system based on .NET technology, discussing the design of its function modules and their implementation methods. It focuses on the key technologies of the system: a web-based online editor control is used to solve the question input problem, regular expressions are used to process the HTML code, a genetic algorithm is used to optimize the generated test papers, and WORD automation tools are used to export the papers. The practical and effective design and implementation techniques can serve as a reference for the development of similar systems.

  12. An Image Processing Algorithm Based On FMAT

    NASA Technical Reports Server (NTRS)

    Wang, Lui; Pal, Sankar K.

    1995-01-01

    Information deleted in ways minimizing adverse effects on reconstructed images. New grey-scale generalization of medial axis transformation (MAT), called FMAT (short for Fuzzy MAT), proposed. Formulated by making natural extension to fuzzy-set theory of all definitions and conditions (e.g., characteristic function of disk, subset condition of disk, and redundancy checking) used in defining MAT of crisp set. Does not require image to have any kind of a priori segmentation, and allows medial axis (and skeleton) to be fuzzy subset of input image. Resulting FMAT (consisting of maximal fuzzy disks) capable of reconstructing original image exactly.

  13. DESIGN OF A PATTERN RECOGNITION DIGITAL COMPUTER WITH APPLICATION TO THE AUTOMATIC SCANNING OF BUBBLE CHAMBER NEGATIVES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McCormick, B.H.; Narasimhan, R.

    1963-01-01

    The overall computer system contains three main parts: an input device, a pattern recognition unit (PRU), and a control computer. The bubble chamber picture is divided into a grid of 1-mm squares on the film. It is then processed in parallel in a two-dimensional array of 1024 identical processing modules (stalactites) of the PRU. The array can function as a two-dimensional shift register in which the results of successive shifting operations can be accumulated. The pattern recognition process is generally controlled by a conventional arithmetic computer. (A.G.W.)

  14. Computer program for supersonic Kernel-function flutter analysis of thin lifting surfaces

    NASA Technical Reports Server (NTRS)

    Cunningham, H. J.

    1974-01-01

    This report describes a computer program (program D2180) that has been prepared to implement the analysis described in (N71-10866) for calculating the aerodynamic forces on a class of harmonically oscillating planar lifting surfaces in supersonic potential flow. The planforms treated are the delta and modified-delta (arrowhead) planforms with subsonic leading and supersonic trailing edges, and (essentially) pointed tips. The resulting aerodynamic forces are applied in a Galerkin modal flutter analysis. The required input data are the flow and planform parameters including deflection-mode data, modal frequencies, and generalized masses.

  15. Semiparametric Identification of Human Arm Dynamics for Flexible Control of a Functional Electrical Stimulation Neuroprosthesis

    PubMed Central

    Schearer, Eric M.; Liao, Yu-Wei; Perreault, Eric J.; Tresch, Matthew C.; Memberg, William D.; Kirsch, Robert F.; Lynch, Kevin M.

    2016-01-01

    We present a method to identify the dynamics of a human arm controlled by an implanted functional electrical stimulation neuroprosthesis. The method uses Gaussian process regression to predict shoulder and elbow torques given the shoulder and elbow joint positions and velocities and the electrical stimulation inputs to muscles. We compare the accuracy of torque predictions of nonparametric, semiparametric, and parametric model types. The most accurate of the three model types is a semiparametric Gaussian process model that combines the flexibility of a black box function approximator with the generalization power of a parameterized model. The semiparametric model predicted torques during stimulation of multiple muscles with errors less than 20% of the total muscle torque and passive torque needed to drive the arm. The identified model allows us to define an arbitrary reaching trajectory and approximately determine the muscle stimulations required to drive the arm along that trajectory. PMID:26955041
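
    The semiparametric structure, a parameterized mean model with a Gaussian process absorbing the residual, can be sketched generically in Python as below. The toy torque model, features, and kernel choices are assumptions for illustration and are not the identified neuroprosthesis model.

      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF, WhiteKernel
      from sklearn.linear_model import LinearRegression

      rng = np.random.default_rng(3)

      # Toy data: torque as a function of joint position, velocity, and one stimulation input.
      X = rng.uniform(-1, 1, (300, 3))                      # [position, velocity, stimulation]
      torque = 2.0*X[:, 0] - 0.8*X[:, 1] + np.sin(3*X[:, 2]) + rng.normal(0, 0.05, 300)

      # Parametric part: a simple linear model fit first.
      lin = LinearRegression().fit(X, torque)
      residual = torque - lin.predict(X)

      # Nonparametric part: a GP absorbs what the parametric model misses.
      gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5) + WhiteKernel(1e-2),
                                    normalize_y=True).fit(X, residual)

      X_test = rng.uniform(-1, 1, (5, 3))
      pred = lin.predict(X_test) + gp.predict(X_test)       # semiparametric prediction
      print(np.round(pred, 3))

    Fitting the parametric part first lets the GP concentrate on the unmodeled residual, which is the appeal of the semiparametric form: the parameterized model extrapolates while the GP adds flexibility where data exist.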

  16. Prediction of enzymatic pathways by integrative pathway mapping

    PubMed Central

    Wichelecki, Daniel J; San Francisco, Brian; Zhao, Suwen; Rodionov, Dmitry A; Vetting, Matthew W; Al-Obaidi, Nawar F; Lin, Henry; O'Meara, Matthew J; Scott, David A; Morris, John H; Russel, Daniel; Almo, Steven C; Osterman, Andrei L

    2018-01-01

    The functions of most proteins are yet to be determined. The function of an enzyme is often defined by its interacting partners, including its substrate and product, and its role in larger metabolic networks. Here, we describe a computational method that predicts the functions of orphan enzymes by organizing them into a linear metabolic pathway. Given candidate enzyme and metabolite pathway members, this aim is achieved by finding those pathways that satisfy structural and network restraints implied by varied input information, including that from virtual screening, chemoinformatics, genomic context analysis, and ligand-binding experiments. We demonstrate this integrative pathway mapping method by predicting the L-gulonate catabolic pathway in Haemophilus influenzae Rd KW20. The prediction was subsequently validated experimentally by enzymology, crystallography, and metabolomics. Integrative pathway mapping by satisfaction of structural and network restraints is extensible to molecular networks in general and thus formally bridges the gap between structural biology and systems biology. PMID:29377793

  17. System and method for ultrafast optical signal detecting via a synchronously coupled anamorphic light pulse encoded laterally

    DOEpatents

    Heebner, John E [Livermore, CA

    2010-08-03

    In one general embodiment, a method for ultrafast optical signal detecting is provided. In operation, a first optical input signal is propagated through a first wave guiding layer of a waveguide. Additionally, a second optical input signal is propagated through a second wave guiding layer of the waveguide. Furthermore, an optical control signal is applied to a top of the waveguide, the optical control signal being oriented diagonally relative to the top of the waveguide such that the application is used to influence at least a portion of the first optical input signal propagating through the first wave guiding layer of the waveguide. In addition, the first and the second optical input signals output from the waveguide are combined. Further, the combined optical signals output from the waveguide are detected. In another general embodiment, a system for ultrafast optical signal recording is provided comprising a waveguide including a plurality of wave guiding layers, an optical control source positioned to propagate an optical control signal towards the waveguide in a diagonal orientation relative to a top of the waveguide, at least one optical input source positioned to input an optical input signal into at least a first and a second wave guiding layer of the waveguide, and a detector for detecting at least one interference pattern output from the waveguide, where at least one of the interference patterns results from a combination of the optical input signals input into the first and the second wave guiding layer. Furthermore, propagation of the optical control signal is used to influence at least a portion of the optical input signal propagating through the first wave guiding layer of the waveguide.

  18. Short-Term Memory in Mathematics-Proficient and Mathematics-Disabled Students as a Function of Input-Modality/Output-Modality Pairings.

    ERIC Educational Resources Information Center

    Webster, Raymond E.

    1980-01-01

    A significant two-way input modality by output modality interaction suggested that short term memory capacity among the groups differed as a function of the modality used to present the items in combination with the output response required. (Author/CL)

  19. Functional Differences between Statistical Learning with and without Explicit Training

    ERIC Educational Resources Information Center

    Batterink, Laura J.; Reber, Paul J.; Paller, Ken A.

    2015-01-01

    Humans are capable of rapidly extracting regularities from environmental input, a process known as statistical learning. This type of learning typically occurs automatically, through passive exposure to environmental input. The presumed function of statistical learning is to optimize processing, allowing the brain to more accurately predict and…

  20. A fast and accurate online sequential learning algorithm for feedforward networks.

    PubMed

    Liang, Nan-Ying; Huang, Guang-Bin; Saratchandran, P; Sundararajan, N

    2006-11-01

    In this paper, we develop an online sequential learning algorithm for single hidden layer feedforward networks (SLFNs) with additive or radial basis function (RBF) hidden nodes in a unified framework. The algorithm is referred to as online sequential extreme learning machine (OS-ELM) and can learn data one-by-one or chunk-by-chunk (a block of data) with fixed or varying chunk size. The activation functions for additive nodes in OS-ELM can be any bounded nonconstant piecewise continuous functions and the activation functions for RBF nodes can be any integrable piecewise continuous functions. In OS-ELM, the parameters of hidden nodes (the input weights and biases of additive nodes or the centers and impact factors of RBF nodes) are randomly selected and the output weights are analytically determined based on the sequentially arriving data. The algorithm uses the ideas of ELM of Huang et al. developed for batch learning which has been shown to be extremely fast with generalization performance better than other batch training methods. Apart from selecting the number of hidden nodes, no other control parameters have to be manually chosen. Detailed performance comparison of OS-ELM is done with other popular sequential learning algorithms on benchmark problems drawn from the regression, classification and time series prediction areas. The results show that the OS-ELM is faster than the other sequential algorithms and produces better generalization performance.
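
    The chunk-by-chunk update at the core of OS-ELM is a recursive least-squares solve for the output weights while the randomly chosen hidden-node parameters stay fixed. A compact sketch (Python) for a toy regression problem follows; the hidden-layer size, chunk sizes, and the small ridge term added for numerical safety are assumptions of this sketch.

      import numpy as np

      rng = np.random.default_rng(4)

      def hidden(X, W, b):
          """Sigmoid additive hidden layer: H = g(X W + b)."""
          return 1.0/(1.0 + np.exp(-(X @ W + b)))

      def target(X):
          # Toy regression target
          return np.sin(3*X[:, :1]) + 0.5*X[:, 1:]**2

      d, L = 2, 40                       # input dimension, number of hidden nodes
      W = rng.uniform(-1, 1, (d, L))     # random input weights (never retrained)
      b = rng.uniform(-1, 1, L)          # random biases

      # --- Initialization phase on a first batch ---
      X0 = rng.uniform(-1, 1, (100, d)); T0 = target(X0)
      H0 = hidden(X0, W, b)
      P = np.linalg.inv(H0.T @ H0 + 1e-6*np.eye(L))   # small ridge for numerical safety
      beta = P @ H0.T @ T0

      # --- Sequential phase: chunk-by-chunk recursive least-squares update ---
      for _ in range(50):
          Xk = rng.uniform(-1, 1, (20, d)); Tk = target(Xk)
          Hk = hidden(Xk, W, b)
          K = P @ Hk.T @ np.linalg.inv(np.eye(len(Xk)) + Hk @ P @ Hk.T)
          P = P - K @ Hk @ P
          beta = beta + P @ Hk.T @ (Tk - Hk @ beta)

      Xtest = rng.uniform(-1, 1, (500, d))
      rmse = np.sqrt(np.mean((hidden(Xtest, W, b) @ beta - target(Xtest))**2))
      print("test RMSE:", rmse)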

  1. Enhanced Sensitivity to Rapid Input Fluctuations by Nonlinear Threshold Dynamics in Neocortical Pyramidal Neurons

    PubMed Central

    Mensi, Skander; Hagens, Olivier; Gerstner, Wulfram; Pozzorini, Christian

    2016-01-01

    The way in which single neurons transform input into output spike trains has fundamental consequences for network coding. Theories and modeling studies based on standard Integrate-and-Fire models implicitly assume that, in response to increasingly strong inputs, neurons modify their coding strategy by progressively reducing their selective sensitivity to rapid input fluctuations. Combining mathematical modeling with in vitro experiments, we demonstrate that, in L5 pyramidal neurons, the firing threshold dynamics adaptively adjust the effective timescale of somatic integration in order to preserve sensitivity to rapid signals over a broad range of input statistics. For that, a new Generalized Integrate-and-Fire model featuring nonlinear firing threshold dynamics and conductance-based adaptation is introduced that outperforms state-of-the-art neuron models in predicting the spiking activity of neurons responding to a variety of in vivo-like fluctuating currents. Our model allows for efficient parameter extraction and can be analytically mapped to a Generalized Linear Model in which both the input filter—describing somatic integration—and the spike-history filter—accounting for spike-frequency adaptation—dynamically adapt to the input statistics, as experimentally observed. Overall, our results provide new insights on the computational role of different biophysical processes known to underlie adaptive coding in single neurons and support previous theoretical findings indicating that the nonlinear dynamics of the firing threshold due to Na+-channel inactivation regulate the sensitivity to rapid input fluctuations. PMID:26907675
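
    The qualitative mechanism, a firing threshold that tracks slow depolarization so that sensitivity to fast fluctuations is preserved, can be illustrated with a toy integrate-and-fire simulation (Python). All parameter values are arbitrary and the model below is a caricature, not the fitted GIF model of the paper.

      import numpy as np

      rng = np.random.default_rng(5)

      dt, T = 0.1, 2000.0                    # time step and duration (ms)
      steps = int(T/dt)
      tau_m, tau_th = 20.0, 30.0             # membrane and threshold time constants (ms)
      V_rest, V_reset = -70.0, -65.0         # resting and reset potentials (mV)
      th0, coupling = -50.0, 0.3             # baseline threshold (mV) and threshold-voltage coupling
      sigma_V = 2.0                          # fast membrane noise amplitude (mV per sqrt(ms))

      def firing_rate(mean_input):
          V, theta, spikes = V_rest, th0, 0
          for _ in range(steps):
              V += dt/tau_m*(-(V - V_rest) + mean_input) + sigma_V*np.sqrt(dt)*rng.normal()
              # Dynamic threshold: it tracks slow depolarization, so sustained input raises the
              # threshold while fast fluctuations (the noise term) can still reach it.
              theta += dt/tau_th*(-(theta - th0) + coupling*(V - V_rest))
              if V >= theta:
                  spikes += 1
                  V = V_reset
          return spikes/(T/1000.0)           # spikes per second

      for I in (15.0, 25.0, 35.0):
          print(f"mean input {I:4.1f} mV:  rate = {firing_rate(I):5.1f} Hz")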

  2. 76 FR 80531 - National Emission Standards for Hazardous Air Pollutants for Area Sources: Industrial, Commercial...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-12-23

    ... boilers are small (less than 10 MMBtu/hr heat input) and are generally owned and operated by contractors... (> 5MMBtu/h) or five-year ( New boilers with heat input capacity greater than 10 million Btu per hour that... with heat input capacity greater than 10 million Btu per hour that are biomass-fired or oil-fired must...

  3. Persistent Physical Symptoms as Perceptual Dysregulation: A Neuropsychobehavioral Model and Its Clinical Implications.

    PubMed

    Henningsen, Peter; Gündel, Harald; Kop, Willem J; Löwe, Bernd; Martin, Alexandra; Rief, Winfried; Rosmalen, Judith G M; Schröder, Andreas; van der Feltz-Cornelis, Christina; Van den Bergh, Omer

    2018-06-01

    The mechanisms underlying the perception and experience of persistent physical symptoms are not well understood, and in the models, the specific relevance of peripheral input versus central processing, or of neurobiological versus psychosocial factors in general, is not clear. In this article, we proposed a model for this clinical phenomenon that is designed to be coherent with an underlying, relatively new model of the normal brain functions involved in the experience of bodily signals. Based on a review of recent literature, we describe central elements of this model and its clinical implications. In the model, the brain is seen as an active predictive processing or inferential device rather than one that is passively waiting for sensory input. A central aspect of the model is the attempt of the brain to minimize prediction errors that result from constant comparisons of predictions and sensory input. Two possibilities exist: adaptation of the generative model underlying the predictions or alteration of the sensory input via autonomic nervous activation (in the case of interoception). Following this model, persistent physical symptoms can be described as "failures of inference" and clinically well-known factors such as expectation are assigned a role, not only in the later amplification of bodily signals but also in the very basis of symptom perception. We discuss therapeutic implications of such a model including new interpretations for established treatments as well as new options such as virtual reality techniques combining exteroceptive and interoceptive information.

  4. A highly reliable, autonomous data communication subsystem for an advanced information processing system

    NASA Technical Reports Server (NTRS)

    Nagle, Gail; Masotto, Thomas; Alger, Linda

    1990-01-01

    The need to meet the stringent performance and reliability requirements of advanced avionics systems has frequently led to implementations which are tailored to a specific application and are therefore difficult to modify or extend. Furthermore, many integrated flight critical systems are input/output intensive. By using a design methodology which customizes the input/output mechanism for each new application, the cost of implementing new systems becomes prohibitively expensive. One solution to this dilemma is to design computer systems and input/output subsystems which are general purpose, but which can be easily configured to support the needs of a specific application. The Advanced Information Processing System (AIPS), currently under development has these characteristics. The design and implementation of the prototype I/O communication system for AIPS is described. AIPS addresses reliability issues related to data communications by the use of reconfigurable I/O networks. When a fault or damage event occurs, communication is restored to functioning parts of the network and the failed or damage components are isolated. Performance issues are addressed by using a parallelized computer architecture which decouples Input/Output (I/O) redundancy management and I/O processing from the computational stream of an application. The autonomous nature of the system derives from the highly automated and independent manner in which I/O transactions are conducted for the application as well as from the fact that the hardware redundancy management is entirely transparent to the application.

  5. Data-driven model reference control of MIMO vertical tank systems with model-free VRFT and Q-Learning.

    PubMed

    Radac, Mircea-Bogdan; Precup, Radu-Emil; Roman, Raul-Cristian

    2018-02-01

    This paper proposes a combined Virtual Reference Feedback Tuning-Q-learning model-free control approach, which tunes nonlinear static state feedback controllers to achieve output model reference tracking in an optimal control framework. The novel iterative Batch Fitted Q-learning strategy uses two neural networks to represent the value function (critic) and the controller (actor), and it is referred to as a mixed Virtual Reference Feedback Tuning-Batch Fitted Q-learning approach. Learning convergence of Q-learning schemes generally depends, among other settings, on the efficient exploration of the state-action space. Handcrafting test signals for efficient exploration is difficult even for input-output stable unknown processes. Virtual Reference Feedback Tuning can ensure that an initial stabilizing controller is learned from few input-output data, and this controller can then be used to collect substantially more input-state data in a controlled mode, in a constrained environment, by compensating the process dynamics. This data is used to learn significantly superior nonlinear state feedback neural network controllers for model reference tracking, using the proposed Batch Fitted Q-learning iterative tuning strategy, motivating the original combination of the two techniques. The mixed Virtual Reference Feedback Tuning-Batch Fitted Q-learning approach is experimentally validated for water level control of a multi-input multi-output nonlinear constrained coupled two-tank system. Discussions on the observed control behavior are offered. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.

  6. Estimation of Optimum Stimulus Amplitude for Balance Training using Electrical Stimulation of the Vestibular System

    NASA Technical Reports Server (NTRS)

    Goel, R.; Rosenberg, M. J.; De Dios, Y. E.; Cohen, H. S.; Bloomberg, J. J.; Mulavara, A. P.

    2016-01-01

    Sensorimotor changes such as posture and gait instabilities can affect the functional performance of astronauts after gravitational transitions. Sensorimotor Adaptability (SA) training can help alleviate decrements on exposure to novel sensorimotor environments based on the concept of 'learning to learn' by exposure to varying sensory challenges during posture and locomotion tasks (Bloomberg 2015). Supra-threshold Stochastic Vestibular Stimulation (SVS) can be used to provide one of many challenges by disrupting vestibular inputs. In this scenario, the central nervous system can be trained to utilize veridical information from other sensory inputs, such as vision and somatosensory inputs, for posture and locomotion control. The minimum amplitude of SVS to simulate the effect of deterioration in vestibular inputs for preflight training or for evaluating vestibular contribution in functional tests in general, however, has not yet been identified. Few studies (MacDougall 2006; Dilda 2014) have used arbitrary but fixed maximum current amplitudes from 3 to 5 mA in the medio-lateral (ML) direction to disrupt balance function in healthy adults. Giving this high level of current amplitude to all the individuals has a risk of invoking side effects such as nausea and discomfort. The goal of this study was to determine the minimum SVS level that yields an equivalently degraded balance performance. Thirteen subjects stood on a compliant foam surface with their eyes closed and were instructed to maintain a stable upright stance. Measures of stability of the head, trunk, and whole body were quantified in the ML direction. Duration of time they could stand on the foam surface was also measured. The minimum SVS dosage was defined to be that level which significantly degraded balance performance such that any further increase in stimulation level did not lead to further balance degradation. The minimum SVS level was determined by performing linear fits on the performance variable at different stimulation levels. Results from the balance task suggest that there are inter-individual differences and the minimum SVS amplitude was found to be in the range of 1 mA to 2.5 mA across subjects. SVS resulted in an average decrement of balance task performance in the range of 62%-73% across different measured variables at the minimum SVS amplitude in comparison to the control trial (no stimulus). Training using supra-threshold SVS stimulation is one of the sensory challenges used for preflight SA training designed to improve adaptability to novel gravitational environments. Inter-individual differences in response to SVS can help customize the SA training paradigms using minimal dosage required. Another application of using SVS is to simulate acute deterioration of vestibular sensory inputs in the evaluation of tests for assessing vestibular function.

  7. Smart mobility solution with multiple input Output interface.

    PubMed

    Sethi, Aartika; Deb, Sujay; Ranjan, Prabhat; Sardar, Arghya

    2017-07-01

    Smart wheelchairs are commonly used to provide a mobility solution for people with mobility impairment. However, their usage is limited, primarily due to the high cost of the sensors required for giving input, a lack of adaptability to different categories of input, and limited functionality. In this paper we propose a smart mobility solution that uses a smartphone with inbuilt sensors (accelerometer, camera, and speaker) as an input interface. An Emotiv EPOC+ is also used for motor-imagery-based input control synced with facial expressions in cases of extreme disability. Apart from traction, additional functions such as home security and automation are provided using the Internet of Things (IoT) and web interfaces. Although preliminary, our results suggest that this system can be used as an integrated and efficient solution for people suffering from mobility impairment. The results also indicate that a decent accuracy is obtained for the overall system.

  8. Real-Time Adaptive Control of Flow-Induced Cavity Tones

    NASA Technical Reports Server (NTRS)

    Kegerise, Michael A.; Cabell, Randolph H.; Cattafesta, Louis N.

    2004-01-01

    An adaptive generalized predictive control (GPC) algorithm was formulated and applied to the cavity flow-tone problem. The algorithm employs gradient descent to update the GPC coefficients at each time step. The adaptive control algorithm demonstrated multiple Rossiter mode suppression at fixed Mach numbers ranging from 0.275 to 0.38. The algorithm was also able to maintain suppression of multiple cavity tones as the freestream Mach number was varied over a modest range (0.275 to 0.29). Controller performance was evaluated with a measure of output disturbance rejection and an input sensitivity transfer function. The results suggest that disturbances entering the cavity flow are colocated with the control input at the cavity leading edge. In that case, only tonal components of the cavity wall-pressure fluctuations can be suppressed and arbitrary broadband pressure reduction is not possible. In the control-algorithm development, the cavity dynamics are treated as linear and time-invariant (LTI) for a fixed Mach number. The experimental results lend support to this treatment.

  9. Combining evidence using likelihood ratios in writer verification

    NASA Astrophysics Data System (ADS)

    Srihari, Sargur; Kovalenko, Dimitry; Tang, Yi; Ball, Gregory

    2013-01-01

    Forensic identification is the task of determining whether or not observed evidence arose from a known source. It involves determining a likelihood ratio (LR) - the ratio of the joint probability of the evidence and source under the identification hypothesis (that the evidence came from the source) and under the exclusion hypothesis (that the evidence did not arise from the source). In LR-based decision methods, particularly handwriting comparison, a variable number of pieces of input evidence is used. A decision based on many pieces of evidence can result in nearly the same LR as one based on few pieces of evidence. We consider methods for distinguishing between such situations. One of these is to provide confidence intervals together with the decisions, and another is to combine the inputs using weights. We propose a new method that generalizes the Bayesian approach and uses an explicitly defined discount function. Empirical evaluation with several data sets, including synthetically generated ones and handwriting comparisons, shows the greater flexibility of the proposed method.

  10. A general method for generating bathymetric data for hydrodynamic computer models

    USGS Publications Warehouse

    Burau, J.R.; Cheng, R.T.

    1989-01-01

    To generate water depth data from randomly distributed bathymetric data for numerical hydrodynamic models, raw input data from field surveys, water depth data digitized from nautical charts, or a combination of the two are sorted to give an ordered data set on which a search algorithm is used to isolate data for interpolation. Water depths at locations required by hydrodynamic models are interpolated from the bathymetric database using the linear or cubic shape functions used in the finite-element method. The bathymetric database organization and preprocessing, the search algorithm used to find the bounding points for interpolation, the mathematics of the interpolation formulae, and the features of the automatic generation of water depths at hydrodynamic model grid points are included in the analysis. This report includes documentation of two computer programs which are used to: (1) organize the input bathymetric data; and (2) interpolate depths for hydrodynamic models. An example of computer program operation is drawn from a realistic application to the San Francisco Bay estuarine system. (Author's abstract)
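
    The interpolation step with linear finite-element shape functions amounts to barycentric interpolation inside the triangle of bathymetric points that bounds each model grid point. A minimal sketch (Python) with made-up soundings:

      import numpy as np

      def linear_shape_interpolation(p, tri_xy, tri_z):
          """Interpolate depth at point p from a bounding triangle using linear FE shape functions."""
          (x1, y1), (x2, y2), (x3, y3) = tri_xy
          area2 = (x2 - x1)*(y3 - y1) - (x3 - x1)*(y2 - y1)       # twice the triangle area
          # Linear shape functions = barycentric coordinates of p in the triangle.
          n1 = ((x2 - p[0])*(y3 - p[1]) - (x3 - p[0])*(y2 - p[1]))/area2
          n2 = ((x3 - p[0])*(y1 - p[1]) - (x1 - p[0])*(y3 - p[1]))/area2
          n3 = 1.0 - n1 - n2
          return n1*tri_z[0] + n2*tri_z[1] + n3*tri_z[2]

      # Three nearby soundings (x, y in metres; depth in metres) bounding a model grid point.
      tri_xy = [(0.0, 0.0), (100.0, 0.0), (0.0, 80.0)]
      tri_z = [5.2, 6.8, 4.1]
      grid_point = (30.0, 20.0)
      print("interpolated depth:", round(linear_shape_interpolation(grid_point, tri_xy, tri_z), 3))

    The cubic shape functions mentioned in the report would follow the same bounding-and-weighting pattern with higher-order interpolants over additional nodes.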

  11. Dynamical Characteristics Common to Neuronal Competition Models

    PubMed Central

    Shpiro, Asya; Curtu, Rodica; Rinzel, John; Rubin, Nava

    2009-01-01

    Models implementing neuronal competition by reciprocally inhibitory populations are widely used to characterize bistable phenomena such as binocular rivalry. We find common dynamical behavior in several models of this general type, which differ in their architecture, in the form of their gain functions, and in how they implement the slow process that underlies alternating dominance. We focus on examining the effect of the input strength on the rate (and existence) of oscillations. In spite of their differences, all considered models possess similar qualitative features, some of which we report here for the first time. Experimentally, dominance durations have been reported to decrease monotonically with increasing stimulus strength (as in Levelt's “Proposition IV”). The models predict this behavior; however, they also predict that at a lower range of input strength dominance durations increase with increasing stimulus strength. The nonmonotonic dependency of duration on stimulus strength is common to both deterministic and stochastic models. We conclude that additional experimental tests of Levelt's Proposition IV are needed to reconcile models and perception. PMID:17065254
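
    A minimal sketch of the generic architecture discussed above (assumed illustrative equations and parameters, not any one of the reviewed models): two populations with sigmoidal gain, reciprocal inhibition, and a slow adaptation variable that produces alternating dominance.

      import numpy as np

      def simulate(I=0.8, beta=1.1, g=0.5, tau_a=100.0, dt=0.1, T=3000.0):
          f = lambda x: 1.0 / (1.0 + np.exp(-10.0 * (x - 0.2)))   # sigmoidal gain
          u1, u2, a1, a2 = 0.1, 0.0, 0.0, 0.0   # small asymmetry starts one side dominant
          trace = []
          for _ in range(int(T / dt)):
              du1 = -u1 + f(I - beta * u2 - g * a1)   # input I, cross-inhibition beta
              du2 = -u2 + f(I - beta * u1 - g * a2)
              a1 += dt * (u1 - a1) / tau_a            # slow adaptation variables
              a2 += dt * (u2 - a2) / tau_a
              u1 += dt * du1
              u2 += dt * du2
              trace.append(u1)
          return np.array(trace)

      r = simulate()
      print("fraction of time population 1 dominates:", round((r > 0.5).mean(), 2))

    Sweeping the input strength I in such a sketch is the kind of numerical experiment used to map how dominance durations depend on stimulus strength.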

  12. A Compilation of MATLAB Scripts and Functions for MACGMC Analyses

    NASA Technical Reports Server (NTRS)

    Murthy, Pappu L. N.; Bednarcyk, Brett A.; Mital, Subodh K.

    2017-01-01

    The primary aim of the current effort is to provide scripts that automate many of the repetitive pre- and post-processing tasks associated with composite materials analyses using the Micromechanics Analysis Code with the Generalized Method of Cells (MACGMC). This document consists of a compilation of hundreds of scripts that were developed in the MATLAB (The MathWorks, Inc., Natick, MA) programming language and consolidated into 16 MATLAB functions. MACGMC is a composite material and laminate analysis software code developed at NASA Glenn Research Center. The software package has been built around the generalized method of cells (GMC) family of micromechanics theories. The computer code is developed with a user-friendly framework, along with a library of local inelastic, damage, and failure models. Further, application of simulated thermo-mechanical loading, generation of output results, and selection of architectures to represent the composite material have been automated to increase the user friendliness, as well as to make it more robust in terms of input preparation and code execution. Finally, classical lamination theory has been implemented within the software, wherein GMC is used to model the composite material response of each ply. Thus, the full range of GMC composite material capabilities is available for analysis of arbitrary laminate configurations as well. The pre-processing tasks include generation of a multitude of different repeating unit cells (RUCs) for CMCs and PMCs, visualization of RUCs from MACGMC input and output files, and generation of the RUC section of a MACGMC input file. The post-processing tasks include visualization of the predicted composite response, such as local stress and strain contours, damage initiation and progression, stress-strain behavior, and fatigue response. In addition to the above, several miscellaneous scripts have been developed that can be used to perform repeated Monte-Carlo simulations to enable probabilistic simulations with minimal manual intervention. This document is formatted to provide MATLAB source files and descriptions of how to utilize them. It is assumed that the user has a basic understanding of how MATLAB scripts work and some MATLAB programming experience.

  13. Synaptology of physiologically identified ganglion cells in the cat retina: a comparison of retinal X- and Y-cells.

    PubMed

    Weber, A J; Stanford, L R

    1994-05-15

    It has long been known that a number of functionally different types of ganglion cells exist in the cat retina, and that each responds differently to visual stimulation. To determine whether the characteristic response properties of different retinal ganglion cell types might reflect differences in the number and distribution of their bipolar and amacrine cell inputs, we compared the percentages and distributions of the synaptic inputs from bipolar and amacrine cells to the entire dendritic arbors of physiologically characterized retinal X- and Y-cells. Sixty-two percent of the synaptic input to the Y-cell was from amacrine cell terminals, while the X-cells received approximately equal amounts of input from amacrine and bipolar cells. We found no significant difference in the distributions of bipolar or amacrine cell inputs to X- and Y-cells, or ON-center and OFF-center cells, either as a function of dendritic branch order or distance from the origin of the dendritic arbor. While, on the basis of these data, we cannot exclude the possibility that the difference in the proportion of bipolar and amacrine cell input contributes to the functional differences between X- and Y-cells, the magnitude of this difference, and the similarity in the distributions of the input from the two afferent cell types, suggest that mechanisms other than a simple predominance of input from amacrine or bipolar cells underlie the differences in their response properties. More likely, perhaps, is that the specific response features of X- and Y-cells originate in differences in the visual responses of the bipolar and amacrine cells that provide their input, or in the complex synaptic arrangements found among amacrine and bipolar cell terminals and the dendrites of specific types of retinal ganglion cells.

  14. Paradox of pattern separation and adult neurogenesis: A dual role for new neurons balancing memory resolution and robustness.

    PubMed

    Johnston, Stephen T; Shtrahman, Matthew; Parylak, Sarah; Gonçalves, J Tiago; Gage, Fred H

    2016-03-01

    Hippocampal adult neurogenesis is thought to subserve pattern separation, the process by which similar patterns of neuronal inputs are transformed into distinct neuronal representations, permitting the discrimination of highly similar stimuli in hippocampus-dependent tasks. However, the mechanism by which immature adult-born dentate granule cells (abDGCs) perform this function remains unknown. Two theories of abDGC function, one by which abDGCs modulate and sparsify activity in the dentate gyrus and one by which abDGCs act as autonomous coding units, are generally suggested to be mutually exclusive. This review suggests that these two mechanisms work in tandem to dynamically regulate memory resolution while avoiding memory interference and maintaining memory robustness. Copyright © 2015 Elsevier Inc. All rights reserved.

  15. Altered functional connectivity of the amygdaloid input nuclei in adolescents and young adults with autism spectrum disorder: a resting state fMRI study.

    PubMed

    Rausch, Annika; Zhang, Wei; Haak, Koen V; Mennes, Maarten; Hermans, Erno J; van Oort, Erik; van Wingen, Guido; Beckmann, Christian F; Buitelaar, Jan K; Groen, Wouter B

    2016-01-01

    Amygdala dysfunction is hypothesized to underlie the social deficits observed in autism spectrum disorders (ASD). However, the neurobiological basis of this hypothesis is underspecified because it is unknown whether ASD relates to abnormalities of the amygdaloid input or output nuclei. Here, we investigated the functional connectivity of the amygdaloid social-perceptual input nuclei and emotion-regulation output nuclei in ASD versus controls. We collected resting state functional magnetic resonance imaging (fMRI) data, tailored to provide optimal sensitivity in the amygdala as well as the neocortex, in 20 adolescents and young adults with ASD and 25 matched controls. We performed a regular correlation analysis between the entire amygdala (EA) and the whole brain and used a partial correlation analysis to investigate whole-brain functional connectivity uniquely related to each of the amygdaloid subregions. Between-group comparison of regular EA correlations showed significantly reduced connectivity in visuospatial and superior parietal areas in ASD compared to controls. Partial correlation analysis revealed that this effect was driven by the left superficial and right laterobasal input subregions, but not the centromedial output nuclei. These results indicate reduced connectivity of specifically the amygdaloid sensory input channels in ASD, suggesting that abnormal amygdalo-cortical connectivity can be traced down to the socio-perceptual pathways.

  16. Layer- and cell-type-specific subthreshold and suprathreshold effects of long-term monocular deprivation in rat visual cortex.

    PubMed

    Medini, Paolo

    2011-11-23

    Connectivity and dendritic properties are determinants of plasticity that are layer and cell-type specific in the neocortex. However, the impact of experience-dependent plasticity at the level of synaptic inputs and spike outputs remains unclear along vertical cortical microcircuits. Here I compared subthreshold and suprathreshold sensitivity to prolonged monocular deprivation (MD) in rat binocular visual cortex in layer 4 and layer 2/3 pyramids (4Ps and 2/3Ps) and in thick-tufted and nontufted layer 5 pyramids (5TPs and 5NPs), which innervate different extracortical targets. In normal rats, 5TPs and 2/3Ps are the most binocular in terms of synaptic inputs, and 5NPs are the least. Spike responses of all 5TPs were highly binocular, whereas those of 2/3Ps were dominated by either the contralateral or ipsilateral eye. MD dramatically shifted the ocular preference of 2/3Ps and 4Ps, mostly by depressing deprived-eye inputs. Plasticity was profoundly different in layer 5. The subthreshold ocular preference shift was sevenfold smaller in 5TPs because of smaller depression of deprived inputs combined with a generalized loss of responsiveness, and was undetectable in 5NPs. Despite their modest ocular dominance change, spike responses of 5TPs consistently lost their typically high binocularity during MD. The comparison of MD effects on 2/3Ps and 5TPs, the main affected output cells of vertical microcircuits, indicated that subthreshold plasticity is not uniquely determined by the initial degree of input binocularity. The data raise the question of whether 5TPs are driven solely by 2/3Ps during MD. The different suprathreshold plasticity of the two cell populations could underlie distinct functional deficits in amblyopia.

  17. Glacial reduction of AMOC strength and long-term transition in weathering inputs into the Southern Ocean since the mid-Miocene: Evidence from radiogenic Nd and Hf isotopes

    NASA Astrophysics Data System (ADS)

    Dausmann, Veit; Frank, Martin; Gutjahr, Marcus; Rickli, Jörg

    2017-03-01

    Combined seawater radiogenic hafnium (Hf) and neodymium (Nd) isotope compositions were extracted from bulk sediment leachates and foraminifera of Site 1088, Ocean Drilling Program Leg 177, 2082 m water depth on the Agulhas Ridge. The new data provide a continuous reconstruction of long- and short-term changes in ocean circulation and continental weathering inputs since the mid-Miocene. Due to its intermediate water depth, the sediments of this core sensitively recorded changes in admixture of North Atlantic Deep Water to the Antarctic Circumpolar Current as a function of the strength of the Atlantic Meridional Overturning Circulation (AMOC). Nd isotope compositions (ɛNd) range from -7 to -11 with glacial values generally 1 to 3 units more radiogenic than during the interglacials of the Quaternary. The data reveal episodes of significantly increased AMOC strength during late Miocene and Pliocene warm periods, whereas peak radiogenic ɛNd values mark a strongly diminished AMOC during the major intensification of Northern Hemisphere Glaciation near 2.8 Ma and in the Pleistocene after 1.5 Ma. In contrast, the Hf isotope compositions (ɛHf) show an essentially continuous evolution from highly radiogenic values of up to +11 during the Miocene to less radiogenic present-day values (+2 to +4) during the late Quaternary. The data document a long-term transition in dominant weathering inputs, where inputs from South America are replaced by those from Southern Africa. Moreover, radiogenic peaks provide evidence for the supply of radiogenic Hf originating from Patagonian rocks to the Atlantic sector of the Southern Ocean via dust inputs.

  18. SNP ID-info: SNP ID searching and visualization platform.

    PubMed

    Yang, Cheng-Hong; Chuang, Li-Yeh; Cheng, Yu-Huei; Wen, Cheng-Hao; Chang, Phei-Lang; Chang, Hsueh-Wei

    2008-09-01

    Many association studies report relationships between single nucleotide polymorphisms (SNPs), diseases, and cancers without, however, giving a SNP ID. Here, we developed the SNP ID-info freeware to provide SNP IDs from input genetic and physical genome information. The program provides an "SNP-ePCR" function to generate the full sequence from primer and template inputs. In "SNPosition," the sequence from SNP-ePCR or direct input is matched against SNP IDs from the SNP FASTA sequence. The "SNP search" and "SNP fasta" functions accept information on SNPs within a cytogenetic band, a contig position, and keyword inputs. Finally, the SNP ID neighboring environment for the inputs is fully visualized in order of contig position and marked with SNP and flanking hits. The SNP identification problems inherent in NCBI SNP BLAST are also avoided. In conclusion, SNP ID-info provides a visualized SNP ID environment for multiple inputs and assists systematic SNP association studies. The server and user manual are available at http://bio.kuas.edu.tw/snpid-info.

  19. Orientation selectivity and the functional clustering of synaptic inputs in primary visual cortex

    PubMed Central

    Wilson, Daniel E.; Whitney, David E.; Scholl, Benjamin; Fitzpatrick, David

    2016-01-01

    The majority of neurons in primary visual cortex are tuned for stimulus orientation, but the factors that account for the range of orientation selectivities exhibited by cortical neurons remain unclear. To address this issue, we used in vivo 2-photon calcium imaging to characterize the orientation tuning and spatial arrangement of synaptic inputs to the dendritic spines of individual pyramidal neurons in layer 2/3 of ferret visual cortex. The summed synaptic input to individual neurons reliably predicted the neuron’s orientation preference, but did not account for differences in orientation selectivity among neurons. These differences reflected a robust input-output nonlinearity that could not be explained by spike threshold alone, and was strongly correlated with the spatial clustering of co-tuned synaptic inputs within the dendritic field. Dendritic branches with more co-tuned synaptic clusters exhibited greater rates of local dendritic calcium events supporting a prominent role for functional clustering of synaptic inputs in dendritic nonlinearities that shape orientation selectivity. PMID:27294510

  20. User's manual for master: Modeling of aerodynamic surfaces by 3-dimensional explicit representation. [input to three dimensional computational fluid dynamics]

    NASA Technical Reports Server (NTRS)

    Gibson, S. G.

    1983-01-01

    A system of computer programs was developed to model general three dimensional surfaces. Surfaces are modeled as sets of parametric bicubic patches. There are also capabilities to transform coordinates, to compute mesh/surface intersection normals, and to format input data for a transonic potential flow analysis. A graphical display of surface models and intersection normals is available. There are additional capabilities to regulate point spacing on input curves and to compute surface/surface intersection curves. Input and output data formats are described; detailed suggestions are given for user input. Instructions for execution are given, and examples are shown.

  1. Analysis and Characterization of Damage and Failure Utilizing a Generalized Composite Material Model Suitable for Use in Impact Problems

    NASA Technical Reports Server (NTRS)

    Goldberg, Robert K.; Carney, Kelly S.; DuBois, Paul; Khaled, Bilal; Hoffarth, Canio; Rajan, Subramaniam; Blankenhorn, Gunther

    2016-01-01

    A material model which incorporates several key capabilities which have been identified by the aerospace community as lacking in state-of-the-art composite impact models is under development. In particular, a next generation composite impact material model, jointly developed by the FAA and NASA, is being implemented into the commercial transient dynamic finite element code LS-DYNA. The material model, which incorporates plasticity, damage, and failure, utilizes experimentally based tabulated input to define the evolution of plasticity and damage and the initiation of failure as opposed to specifying discrete input parameters (such as modulus and strength). The plasticity portion of the orthotropic, three-dimensional, macroscopic composite constitutive model is based on an extension of the Tsai-Wu composite failure model into a generalized yield function with a non-associative flow rule. For the damage model, a strain equivalent formulation is utilized to allow for the uncoupling of the deformation and damage analyses. In the damage model, a semi-coupled approach is employed where the overall damage in a particular coordinate direction is assumed to be a multiplicative combination of the damage in that direction resulting from the applied loads in the various coordinate directions. Due to the fact that the plasticity and damage models are uncoupled, test procedures and methods to both characterize the damage model and to convert the material stress-strain curves from the true (damaged) stress space to the effective (undamaged) stress space have been developed. A methodology has been developed to input the experimentally determined composite failure surface in a tabulated manner. An analytical approach is then utilized to track how close the current stress state is to the failure surface.

  2. Soft-Input Soft-Output Modules for the Construction and Distributed Iterative Decoding of Code Networks

    NASA Technical Reports Server (NTRS)

    Benedetto, S.; Divsalar, D.; Montorsi, G.; Pollara, F.

    1998-01-01

    Soft-input soft-output building blocks (modules) are presented to construct and iteratively decode in a distributed fashion code networks, a new concept that includes, and generalizes, various forms of concatenated coding schemes.

  3. Universal Approximation by Using the Correntropy Objective Function.

    PubMed

    Nayyeri, Mojtaba; Sadoghi Yazdi, Hadi; Maskooki, Alaleh; Rouhani, Modjtaba

    2017-10-16

    Several objective functions have been proposed in the literature to adjust the input parameters of a node in constructive networks. Furthermore, many researchers have focused on the universal approximation capability of the network based on the existing objective functions. In this brief, we use a correntropy measure based on the sigmoid kernel in the objective function to adjust the input parameters of a newly added node in a cascade network. The proposed network is shown to be capable of approximating any continuous nonlinear mapping with probability one in a compact input sample space. Thus, the convergence is guaranteed. The performance of our method was compared with that of eight different objective functions, as well as with an existing one hidden layer feedforward network on several real regression data sets with and without impulsive noise. The experimental results indicate the benefits of using a correntropy measure in reducing the root mean square error and increasing the robustness to noise.
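
    A hedged sketch of a correntropy-style objective with a sigmoid kernel is given below (illustrative kernel parameters; the paper's exact formulation and constructive training procedure are not reproduced here). The bounded per-sample contribution is what makes such objectives less sensitive to impulsive noise than squared error.

      import numpy as np

      def sigmoid_kernel(x, y, a=0.1, c=0.0):
          # Sigmoid (tanh) kernel; each value lies in (-1, 1).
          return np.tanh(a * x * y + c)

      def correntropy(targets, outputs, a=0.1, c=0.0):
          # Empirical correntropy: sample mean of the kernel over paired samples.
          # Each sample's contribution is bounded, unlike squared error.
          return np.mean(sigmoid_kernel(targets, outputs, a, c))

      rng = np.random.default_rng(0)
      t = rng.normal(size=200)
      clean = t + 0.05 * rng.normal(size=200)
      noisy = clean.copy()
      noisy[:5] += 50.0                                          # impulsive-noise samples
      print(correntropy(t, clean), correntropy(t, noisy))        # changes only slightly
      print(np.mean((t - clean) ** 2), np.mean((t - noisy) ** 2))  # squared error blows up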

  4. Three-input majority function as the unique optimal function for the bias amplification using nonlocal boxes

    NASA Astrophysics Data System (ADS)

    Mori, Ryuhei

    2016-11-01

    Brassard et al. [Phys. Rev. Lett. 96, 250401 (2006), 10.1103/PhysRevLett.96.250401] showed that shared nonlocal boxes with a CHSH (Clauser, Horne, Shimony, and Holt) probability greater than (3 + √6)/6 yield trivial communication complexity. There still exists a gap with the maximum CHSH probability (2 + √2)/4 achievable by quantum mechanics. It is an interesting open question to determine the exact threshold for the trivial communication complexity. Brassard et al.'s idea is based on recursive bias amplification by the three-input majority function. It was not obvious if another choice of function exhibits stronger bias amplification. We show that the three-input majority function is the unique optimal function, so that one cannot improve the threshold (3 + √6)/6 by Brassard et al.'s bias amplification. In this work, protocols for computing the function used for the bias amplification are restricted to be nonadaptive protocols or a particular adaptive protocol inspired by Pawłowski et al.'s protocol for information causality [Nature (London) 461, 1101 (2009), 10.1038/nature08400]. We first show an adaptive protocol inspired by Pawłowski et al.'s protocol, and then show that the adaptive protocol improves upon nonadaptive protocols. Finally, we show that the three-input majority function is the unique optimal function for the bias amplification if we apply the adaptive protocol to each step of the bias amplification.
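
    The amplification step itself is easy to see with a small numerical sketch (an idealized illustration of the shape of the recursion; in the actual protocols the per-step success probability also depends on the CHSH value of the boxes): if each of three independent inputs is correct with probability p, their majority is correct with probability p^3 + 3p^2(1 - p), which iterates toward 1 whenever p > 1/2.

      def maj3_success(p: float) -> float:
          # Probability that the majority of three independent bits, each correct
          # with probability p, is itself correct.
          return p**3 + 3 * p**2 * (1 - p)

      p = 0.85
      for step in range(5):
          print(step, round(p, 6))
          p = maj3_success(p)   # fixed points at 0, 1/2 and 1; the bias grows toward 1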

  5. QVAST: a new Quantum GIS plugin for estimating volcanic susceptibility

    NASA Astrophysics Data System (ADS)

    Bartolini, S.; Cappello, A.; Martí, J.; Del Negro, C.

    2013-11-01

    One of the most important tasks of modern volcanology is the construction of hazard maps simulating different eruptive scenarios that can be used in risk-based decision making in land-use planning and emergency management. The first step in the quantitative assessment of volcanic hazards is the development of susceptibility maps (i.e., the spatial probability of a future vent opening given the past eruptive activity of a volcano). This challenging issue is generally tackled using probabilistic methods that evaluate a kernel function at each data location to estimate probability density functions (PDFs). The smoothness and the modeling ability of the kernel function are controlled by the smoothing parameter, also known as the bandwidth. Here we present a new tool, QVAST, part of the open-source geographic information system Quantum GIS, which is designed to create user-friendly quantitative assessments of volcanic susceptibility. QVAST allows the selection of an appropriate method for evaluating the bandwidth for the kernel function on the basis of the input parameters and the shapefile geometry, and can also evaluate the PDF with the Gaussian kernel. When different input data sets are available for the area, the total susceptibility map is obtained by assigning different weights to each of the PDFs, which are then combined via a weighted summation and modeled in a non-homogeneous Poisson process. The potential of QVAST, developed in a free and user-friendly environment, is here shown through its application in the volcanic fields of Lanzarote (Canary Islands) and La Garrotxa (NE Spain).
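
    The kernel density estimate at the heart of such a susceptibility map can be sketched as follows (a simplified 2-D Gaussian kernel with a fixed, assumed bandwidth; QVAST itself offers several bandwidth estimators and works on shapefile inputs):

      import numpy as np

      def susceptibility(grid_xy, vents_xy, h=1000.0):
          """Gaussian-kernel spatial PDF of future vent opening (coordinates in metres)."""
          diff = grid_xy[:, None, :] - vents_xy[None, :, :]      # (n_grid, n_vents, 2)
          sq_dist = np.sum(diff**2, axis=-1)
          dens = np.exp(-sq_dist / (2 * h**2)).sum(axis=1)
          return dens / (2 * np.pi * h**2 * len(vents_xy))       # integrates to ~1 over the plane

      vents = np.array([[0.0, 0.0], [500.0, 200.0], [1500.0, -300.0]])
      grid = np.array([[0.0, 0.0], [1000.0, 0.0], [5000.0, 5000.0]])
      print(susceptibility(grid, vents))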

  6. A new analytical method for estimating lumped parameter constants of linear viscoelastic models from strain rate tests

    NASA Astrophysics Data System (ADS)

    Mattei, G.; Ahluwalia, A.

    2018-04-01

    We introduce a new function, the apparent elastic modulus strain-rate spectrum E_app(ε̇), for the derivation of lumped parameter constants for Generalized Maxwell (GM) linear viscoelastic models from stress-strain data obtained at various compressive strain rates ε̇. The E_app(ε̇) function was derived using the tangent modulus function obtained from the GM model stress-strain response to a constant-ε̇ input. Material viscoelastic parameters can be rapidly derived by fitting experimental E_app data obtained at different strain rates to the E_app(ε̇) function. This single-curve fitting returns viscoelastic constants similar to those of the original epsilon-dot method, which is based on a multi-curve global fitting procedure with shared parameters. Its low computational cost permits quick and robust identification of viscoelastic constants even when a large number of strain rates or replicates per strain rate is considered. This method is particularly suited to the analysis of bulk compression and nano-indentation data of soft (bio)materials.

  7. Automated FES for Upper Limb Rehabilitation Following Stroke and Spinal Cord Injury.

    PubMed

    Hodkin, Edmund F; Lei, Yuming; Humby, Jonathan; Glover, Isabel S; Choudhury, Supriyo; Kumar, Hrishikesh; Perez, Monica A; Rodgers, Helen; Jackson, Andrew

    2018-05-01

    Neurorehabilitation aims to induce beneficial neural plasticity in order to restore function following injury to the nervous system. There is increasing evidence that appropriately timed functional electrical stimulation (FES) can promote associative plasticity, but the dosage is critical for lasting functional benefits. Here, we present a novel approach to closed-loop control of muscle stimulation for the rehabilitation of reach-to-grasp movements following stroke and spinal cord injury (SCI). We developed a simple, low-cost device to deliver assistive stimulation contingent on users' self-initiated movements. The device allows repeated practice with minimal input by a therapist, and is potentially suitable for home use. Pilot data demonstrate usability by people with upper limb weakness following SCI and stroke, and participant feedback was positive. Moreover, repeated training with the device over 1-2 weeks led to functional benefits on a general object manipulation assessment. Thus, automated FES delivered by this novel device may provide a promising and readily translatable therapy for upper limb rehabilitation for people with stroke and SCI.

  8. Barrier island forest ecosystem: role of meteorologic nutrient inputs.

    PubMed

    Art, H W; Bormann, F H; Voigt, G K; Woodwell, G M

    1974-04-05

    The Sunken Forest, located on Fire Island, a barrier island in the Atlantic Ocean off Long Island, New York, is an ecosystem in which most of the basic cation input is in the form of salt spray. This meteorologic input is sufficient to compensate for the lack of certain nutrients in the highly weathered sandy soils. In other ecosystems these nutrients are generally supplied by weathering of soil particles. The compensatory effect of meteorologic input allows for primary production rates in the Sunken Forest similar to those of inland temperate forests.

  9. Step-control of electromechanical systems

    DOEpatents

    Lewis, Robert N.

    1979-01-01

    The response of an automatic control system to a general input signal is improved by applying a test input signal, observing the response to the test input signal and determining correctional constants necessary to provide a modified input signal to be added to the input to the system. A method is disclosed for determining correctional constants. The modified input signal, when applied in conjunction with an operating signal, provides a total system output exhibiting an improved response. This method is applicable to open-loop or closed-loop control systems. The method is also applicable to unstable systems, thus allowing controlled shut-down before dangerous or destructive response is achieved and to systems whose characteristics vary with time, thus resulting in improved adaptive systems.

  10. Large signal-to-noise ratio quantification in MLE for ARARMAX models

    NASA Astrophysics Data System (ADS)

    Zou, Yiqun; Tang, Xiafei

    2014-06-01

    It has been shown that closed-loop linear system identification by the indirect method can generally be transferred to open-loop ARARMAX (AutoRegressive AutoRegressive Moving Average with eXogenous input) estimation. For such models, gradient-related optimisation with a large enough signal-to-noise ratio (SNR) can avoid the potential local convergence in maximum likelihood estimation. To ease the application of this condition, the threshold SNR needs to be quantified. In this paper, we construct the amplitude coefficient, which is equivalent to the SNR, and prove the finiteness of the threshold amplitude coefficient within the stability region. The quantification of the threshold is achieved by the minimisation of an elaborately designed multi-variable cost function which unifies all the restrictions on the amplitude coefficient. The corresponding algorithm, based on two sets of physically realisable system input-output data, details the minimisation and also points out how to use the gradient-related method to estimate ARARMAX parameters when a local minimum is present because the SNR is small. The algorithm is then tested on a theoretical AutoRegressive Moving Average with eXogenous input model for the derivation of the threshold and on a real gas turbine engine system for model identification. Finally, the graphical validation of the threshold on a two-dimensional plot is discussed.

  11. Investigation of Multi-Input Multi-Output Robust Control Methods to Handle Parametric Uncertainties in Autopilot Design.

    PubMed

    Kasnakoğlu, Coşku

    2016-01-01

    Some level of uncertainty is unavoidable in acquiring the mass, geometry parameters and stability derivatives of an aerial vehicle. In certain instances tiny perturbations of these could potentially cause considerable variations in flight characteristics. This research considers the impact of varying these parameters altogether. This is a generalization of examining the effects of particular parameters on selected modes present in existing literature. Conventional autopilot designs commonly assume that each flight channel is independent and develop single-input single-output (SISO) controllers for every one, that are utilized in parallel for actual flight. It is demonstrated that an attitude controller built like this can function flawlessly on separate nominal cases, but can become unstable with a perturbation no more than 2%. Two robust multi-input multi-output (MIMO) design strategies, specifically loop-shaping and μ-synthesis are outlined as potential substitutes and are observed to handle large parametric changes of 30% while preserving decent performance. Duplicating the loop-shaping procedure for the outer loop, a complete flight control system is formed. It is confirmed through software-in-the-loop (SIL) verifications utilizing blade element theory (BET) that the autopilot is capable of navigation and landing exposed to high parametric variations and powerful winds.

  12. Investigation of Multi-Input Multi-Output Robust Control Methods to Handle Parametric Uncertainties in Autopilot Design

    PubMed Central

    Kasnakoğlu, Coşku

    2016-01-01

    Some level of uncertainty is unavoidable in acquiring the mass, geometry parameters and stability derivatives of an aerial vehicle. In certain instances tiny perturbations of these could potentially cause considerable variations in flight characteristics. This research considers the impact of varying these parameters altogether. This is a generalization of examining the effects of particular parameters on selected modes present in existing literature. Conventional autopilot designs commonly assume that each flight channel is independent and develop single-input single-output (SISO) controllers for every one, that are utilized in parallel for actual flight. It is demonstrated that an attitude controller built like this can function flawlessly on separate nominal cases, but can become unstable with a perturbation no more than 2%. Two robust multi-input multi-output (MIMO) design strategies, specifically loop-shaping and μ-synthesis are outlined as potential substitutes and are observed to handle large parametric changes of 30% while preserving decent performance. Duplicating the loop-shaping procedure for the outer loop, a complete flight control system is formed. It is confirmed through software-in-the-loop (SIL) verifications utilizing blade element theory (BET) that the autopilot is capable of navigation and landing exposed to high parametric variations and powerful winds. PMID:27783706

  13. Motion sickness is linked to nystagmus-related trigeminal brain stem input: a new hypothesis.

    PubMed

    Gupta, Vinod Kumar

    2005-01-01

    Motion sickness is a common and distressing but poorly understood syndrome associated with nausea/vomiting and autonomic nervous system accompaniments that develops in the air or space as well as on sea or land. A bidirectional aetiologic link prevails between migraine and motion sickness. Motion sickness provokes jerk nystagmus induced by both optokinetic and vestibular stimulation. Fixation of gaze or closure of eyes generally prevents motion sickness, while vestibular otolithic function is eliminated in the microgravity of space, indicating a predominant pathogenetic role for visuo-sensory input. Scopolamine, dimenhydrinate, and promethazine reduce motion-related nystagmus. Contraction of extraocular muscles generates proprioceptive neural traffic and can provoke an ocular hypertensive response. It is proposed that repetitive contractions of the extraocular muscles during motion-related jerk nystagmus rapidly augment brain stem afferent input by increasing proprioceptive neural traffic through connections of the oculomotor nerves with the ophthalmic nerve in the lateral wall of the cavernous sinus, as well as by raising the intraocular pressure, thereby stimulating anterior segment ocular trigeminal nerve fibers. This verifiable hypothesis defines the pathophysiological basis of individual susceptibility to motion sickness, elucidates the preventive mechanism of gaze fixation or ocular closure, advances the aetiologic link between motion sickness and migraine, rationalizes the mechanism of known preventive drugs, and explores new therapeutic possibilities.

  14. Muscle synergies in neuroscience and robotics: from input-space to task-space perspectives.

    PubMed

    Alessandro, Cristiano; Delis, Ioannis; Nori, Francesco; Panzeri, Stefano; Berret, Bastien

    2013-01-01

    In this paper we review the works related to muscle synergies that have been carried-out in neuroscience and control engineering. In particular, we refer to the hypothesis that the central nervous system (CNS) generates desired muscle contractions by combining a small number of predefined modules, called muscle synergies. We provide an overview of the methods that have been employed to test the validity of this scheme, and we show how the concept of muscle synergy has been generalized for the control of artificial agents. The comparison between these two lines of research, in particular their different goals and approaches, is instrumental to explain the computational implications of the hypothesized modular organization. Moreover, it clarifies the importance of assessing the functional role of muscle synergies: although these basic modules are defined at the level of muscle activations (input-space), they should result in the effective accomplishment of the desired task. This requirement is not always explicitly considered in experimental neuroscience, as muscle synergies are often estimated solely by analyzing recorded muscle activities. We suggest that synergy extraction methods should explicitly take into account task execution variables, thus moving from a perspective purely based on input-space to one grounded on task-space as well.
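
    As a concrete example of the input-space practice mentioned above, muscle synergies are commonly extracted from recorded muscle activity with non-negative matrix factorization; the sketch below uses synthetic EMG envelopes and scikit-learn's NMF (an illustrative choice of algorithm, not one prescribed by the review).

      import numpy as np
      from sklearn.decomposition import NMF

      rng = np.random.default_rng(0)
      true_W = rng.random((8, 3))            # 8 muscles, 3 synergies
      true_H = rng.random((3, 500))          # synergy activations over 500 samples
      emg = true_W @ true_H + 0.01 * rng.random((8, 500))   # synthetic EMG envelopes

      model = NMF(n_components=3, init="nndsvda", max_iter=500, random_state=0)
      W = model.fit_transform(emg)           # muscle weightings of each synergy
      H = model.components_                  # time-varying activation coefficients
      print(W.shape, H.shape, round(model.reconstruction_err_, 3))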

  15. Presentation planning using an integrated knowledge base

    NASA Technical Reports Server (NTRS)

    Arens, Yigal; Miller, Lawrence; Sondheimer, Norman

    1988-01-01

    A description is given of user interface research aimed at bringing together multiple input and output modes in a way that handles mixed mode input (commands, menus, forms, natural language), interacts with a diverse collection of underlying software utilities in a uniform way, and presents the results through a combination of output modes including natural language text, maps, charts and graphs. The system, Integrated Interfaces, derives much of its ability to interact uniformly with the user and the underlying services and to build its presentations, from the information present in a central knowledge base. This knowledge base integrates models of the application domain (Navy ships in the Pacific region, in the current demonstration version); the structure of visual displays and their graphical features; the underlying services (data bases and expert systems); and interface functions. The emphasis is on a presentation planner that uses the knowledge base to produce multi-modal output. There has been a flurry of recent work in user interface management systems. (Several recent examples are listed in the references). Existing work is characterized by an attempt to relieve the software designer of the burden of handcrafting an interface for each application. The work has generally focused on intelligently handling input. This paper deals with the other end of the pipeline - presentations.

  16. Fractional cable model for signal conduction in spiny neuronal dendrites

    NASA Astrophysics Data System (ADS)

    Vitali, Silvia; Mainardi, Francesco

    2017-06-01

    The cable model is widely used in several fields of science to describe the propagation of signals. A relevant medical and biological example is the anomalous subdiffusion in spiny neuronal dendrites observed in several studies of the last decade. Anomalous subdiffusion can be modelled in several ways by introducing some fractional component into the classical cable model. The Cauchy problem associated with these kinds of models has been investigated by many authors, but to our knowledge an explicit solution for the signalling problem has not yet been published. Here we propose how this solution can be derived by applying the generalized convolution theorem (known as the Efros theorem) for Laplace transforms. The fractional cable model considered in this paper is defined by replacing the first order time derivative with a fractional derivative of order α ∈ (0, 1) of Caputo type. The signalling problem is solved for any input function applied to the accessible end of a semi-infinite cable which satisfies the requirements of the Efros theorem. The solutions corresponding to the simple cases of impulsive and step inputs are explicitly calculated in integral form containing Wright functions. Thanks to the variability of the parameter α, the corresponding solutions are expected to adapt to the qualitative behaviour of the membrane potential observed in experiments better than in the standard case α = 1.
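
    For reference, the Caputo fractional derivative of order α ∈ (0, 1) invoked above has the standard textbook definition (not quoted from the paper):

      % Caputo fractional derivative of order 0 < alpha < 1
      {}^{C}D_t^{\alpha} u(t) = \frac{1}{\Gamma(1-\alpha)}
          \int_0^{t} \frac{u'(\tau)}{(t-\tau)^{\alpha}} \, d\tau ,
      \qquad 0 < \alpha < 1 .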

  17. Integrative cortical dysfunction and pervasive motion perception deficit in fragile X syndrome.

    PubMed

    Kogan, C S; Bertone, A; Cornish, K; Boutet, I; Der Kaloustian, V M; Andermann, E; Faubert, J; Chaudhuri, A

    2004-11-09

    Fragile X syndrome (FXS) is associated with neurologic deficits recently attributed to the magnocellular pathway of the lateral geniculate nucleus. To test the hypotheses that FXS individuals (1) have a pervasive visual motion perception impairment affecting neocortical circuits in the parietal lobe and (2) have deficits in integrative neocortical mechanisms necessary for perception of complex stimuli, psychophysical tests of visual motion and form perception defined by either first-order (luminance) or second-order (texture) attributes were used to probe early and later occipito-temporal and occipito-parietal functioning. When compared to developmental- and age-matched controls, FXS individuals displayed severe impairments in first- and second-order motion perception. This deficit was accompanied by near normal perception for first-order form stimuli but not second-order form stimuli. Impaired visual motion processing for first- and second-order stimuli suggests that both early- and later-level neurologic function of the parietal lobe are affected in FXS. Furthermore, this deficit likely stems from abnormal input from the magnocellular compartment of the lateral geniculate nucleus. Impaired visual form and motion processing for complex visual stimuli with normal processing for simple (i.e., first-order) form stimuli suggests that FXS individuals have normal early form processing accompanied by a generalized impairment in neurologic mechanisms necessary for integrating all early visual input.

  18. Factors leading to different viability predictions for a grizzly bear data set

    USGS Publications Warehouse

    Mills, L.S.; Hayes, S.G.; Wisdom, M.J.; Citta, J.; Mattson, D.J.; Murphy, K.

    1996-01-01

    Population viability analysis programs are being used increasingly in research and management applications, but there has not been a systematic study of the congruence of different program predictions based on a single data set. We performed such an analysis using four population viability analysis computer programs: GAPPS, INMAT, RAMAS/AGE, and VORTEX. The standardized demographic rates used in all programs were generalized from hypothetical increasing and decreasing grizzly bear (Ursus arctos horribilis) populations. Idiosyncrasies of input format for each program led to minor differences in intrinsic growth rates that translated into striking differences in estimates of extinction rates and expected population size. In contrast, the addition of demographic stochasticity, environmental stochasticity, and inbreeding costs caused only a small divergence in viability predictions. But, the addition of density dependence caused large deviations between the programs despite our best attempts to use the same density-dependent functions. Population viability programs differ in how density dependence is incorporated, and the necessary functions are difficult to parameterize accurately. Thus, we recommend that unless data clearly suggest a particular density-dependent model, predictions based on population viability analysis should include at least one scenario without density dependence. Further, we describe output metrics that may differ between programs; development of future software could benefit from standardized input and output formats across different programs.

  19. Thalamic control of sensory selection in divided attention.

    PubMed

    Wimmer, Ralf D; Schmitt, L Ian; Davidson, Thomas J; Nakajima, Miho; Deisseroth, Karl; Halassa, Michael M

    2015-10-29

    How the brain selects appropriate sensory inputs and suppresses distractors is unknown. Given the well-established role of the prefrontal cortex (PFC) in executive function, its interactions with sensory cortical areas during attention have been hypothesized to control sensory selection. To test this idea and, more generally, dissect the circuits underlying sensory selection, we developed a cross-modal divided-attention task in mice that allowed genetic access to this cognitive process. By optogenetically perturbing PFC function in a temporally precise window, the ability of mice to select appropriately between conflicting visual and auditory stimuli was diminished. Equivalent sensory thalamocortical manipulations showed that behaviour was causally dependent on PFC interactions with the sensory thalamus, not sensory cortex. Consistent with this notion, we found neurons of the visual thalamic reticular nucleus (visTRN) to exhibit PFC-dependent changes in firing rate predictive of the modality selected. visTRN activity was causal to performance as confirmed by bidirectional optogenetic manipulations of this subnetwork. Using a combination of electrophysiology and intracellular chloride photometry, we demonstrated that visTRN dynamically controls visual thalamic gain through feedforward inhibition. Our experiments introduce a new subcortical model of sensory selection, in which the PFC biases thalamic reticular subnetworks to control thalamic sensory gain, selecting appropriate inputs for further processing.

  20. Linking age, survival, and transit time distributions

    NASA Astrophysics Data System (ADS)

    Calabrese, Salvatore; Porporato, Amilcare

    2015-10-01

    Although the concepts of age, survival, and transit time have been widely used in many fields, including population dynamics, chemical engineering, and hydrology, a comprehensive mathematical framework is still missing. Here we discuss several relationships among these quantities by starting from the evolution equation for the joint distribution of age and survival, from which the equations for age and survival time readily follow. It also becomes apparent how the statistical dependence between age and survival is directly related to either the age dependence of the loss function or the survival-time dependence of the input function. The solution of the joint distribution equation also allows us to obtain the relationships between the age at exit (or death) and the survival time at input (or birth), as well as to stress the symmetries of the various distributions under time reversal. The transit time is then obtained as a sum of the age and survival time, and its properties are discussed along with the general relationships between their mean values. The special case of steady state is analyzed in detail. Some examples, inspired by hydrologic applications, are presented to illustrate the theory with specific results.
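
    In notation assumed here (not taken from the paper), the relation described above can be written compactly: with A the age since input, S the remaining survival time, and p(a, s) their joint density, the transit time T and its distribution follow as

      % Transit time as the sum of age and survival time
      T = A + S , \qquad
      p_T(t) = \int_0^{t} p(a,\, t - a) \, da .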

  1. Model predictive controller design for boost DC-DC converter using T-S fuzzy cost function

    NASA Astrophysics Data System (ADS)

    Seo, Sang-Wha; Kim, Yong; Choi, Han Ho

    2017-11-01

    This paper proposes a Takagi-Sugeno (T-S) fuzzy method to select the cost function weights of finite control set model predictive DC-DC converter control algorithms. The proposed method updates the cost function weights at every sample time by using T-S type fuzzy rules derived from the common optimal control engineering knowledge that a state or input variable with an excessively large magnitude can be penalised by increasing the weight corresponding to that variable. The best control input is determined via the online optimisation of the T-S fuzzy cost function over all the possible control input sequences. This paper implements the proposed model predictive control algorithm in real time on a Texas Instruments TMS320F28335 floating-point Digital Signal Processor (DSP). Some experimental results are given to illustrate the practicality and effectiveness of the proposed control system under several operating conditions. The results verify that our method can yield not only good transient and steady-state responses (fast recovery time, small overshoot, zero steady-state error, etc.) but also insensitivity to abrupt load or input-voltage variations.
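
    The weight-dependent selection step of such a finite-control-set predictive controller can be sketched as follows (a toy one-step predictor and fixed weights are assumed; in the proposed method the weights, here called w_v and w_i, would instead be updated at every sample time by the T-S fuzzy rules):

      def choose_switch_state(predict, states, v_ref, i_ref, w_v, w_i):
          """predict(s) -> (v_next, i_next) for candidate switch state s."""
          best, best_cost = None, float("inf")
          for s in states:
              v_next, i_next = predict(s)
              # Weighted quadratic cost on voltage and current tracking errors.
              cost = w_v * (v_ref - v_next) ** 2 + w_i * (i_ref - i_next) ** 2
              if cost < best_cost:
                  best, best_cost = s, cost
          return best

      # Toy one-step predictor for the two switch states of a boost converter
      # (all numbers are placeholders).
      predict = lambda s: (11.6, 2.4) if s == 0 else (11.2, 3.1)
      print(choose_switch_state(predict, [0, 1], v_ref=12.0, i_ref=2.5, w_v=1.0, w_i=0.1))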

  2. Rail-to-rail differential input amplification stage with main and surrogate differential pairs

    DOEpatents

    Britton, Jr., Charles Lanier; Smith, Stephen Fulton

    2007-03-06

    An operational amplifier input stage provides a symmetrical rail-to-rail input common-mode voltage without turning off either pair of complementary differential input transistors. Secondary, or surrogate, transistor pairs assume the function of the complementary differential transistors. The circuit also maintains essentially constant transconductance, constant slew rate, and constant signal-path supply current as it provides rail-to-rail operation.

  3. Cloud Intrusion Detection and Repair (CIDAR)

    DTIC Science & Technology

    2016-02-01

    form for VLC, Swftools-png2swf, Swftools-jpeg2swf, Dillo and GIMP. The superscript indicates the bit width of each expression atom. “sext(v, w... challenges in input rectification is the need to deal with nested fields. In general, input formats are in tree structures containing arbitrarily...length indicator constraints is challenging, because of the presence of nested fields in hierarchical input format. For example, an integer field may

  4. A general prediction model for the detection of ADHD and Autism using structural and functional MRI.

    PubMed

    Sen, Bhaskar; Borle, Neil C; Greiner, Russell; Brown, Matthew R G

    2018-01-01

    This work presents a novel method for learning a model that can diagnose Attention Deficit Hyperactivity Disorder (ADHD), as well as Autism, using structural texture and functional connectivity features obtained from 3-dimensional structural magnetic resonance imaging (MRI) and 4-dimensional resting-state functional magnetic resonance imaging (fMRI) scans of subjects. We explore a series of three learners: (1) The LeFMS learner first extracts features from the structural MRI images using the texture-based filters produced by a sparse autoencoder. These filters are then convolved with the original MRI image using an unsupervised convolutional network. The resulting features are used as input to a linear support vector machine (SVM) classifier. (2) The LeFMF learner produces a diagnostic model by first computing spatial non-stationary independent components of the fMRI scans, which it uses to decompose each subject's fMRI scan into the time courses of these common spatial components. These features can then be used with a learner by themselves or in combination with other features to produce the model. Regardless of which approach is used, the final set of features is input to a linear SVM classifier. (3) Finally, the overall LeFMSF learner uses the combined features obtained from the two feature extraction processes in (1) and (2) above as input to an SVM classifier, achieving an accuracy of 0.673 on the ADHD-200 holdout data and 0.643 on the ABIDE holdout data. Both of these results, obtained with the same LeFMSF framework, are the best known hold-out accuracies on these datasets when using only imaging data, exceeding previously published results by 0.012 for ADHD and 0.042 for Autism. Our results show that combining multi-modal features can yield good classification accuracy for diagnosis of ADHD and Autism, which is an important step towards computer-aided diagnosis of these psychiatric diseases and perhaps others as well.
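
    The final classification stage common to all three learners can be sketched as below (placeholder random features and labels stand in for the autoencoder- and ICA-derived features; only the linear-SVM-on-combined-features step mirrors the description above):

      import numpy as np
      from sklearn.svm import LinearSVC
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(0)
      n_subjects = 120
      struct_feats = rng.normal(size=(n_subjects, 50))    # stand-in structural texture features
      func_feats = rng.normal(size=(n_subjects, 30))      # stand-in functional component features
      X = np.hstack([struct_feats, func_feats])           # combined multi-modal feature vectors
      y = rng.integers(0, 2, size=n_subjects)             # placeholder diagnosis labels

      clf = LinearSVC(C=1.0, max_iter=5000)
      print(cross_val_score(clf, X, y, cv=5).mean())      # chance-level here, since data are random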

  5. Capacity of the Generalized Pulse-Position Modulation Channel

    NASA Technical Reports Server (NTRS)

    Hamkins, J.; Klimesh, M.; McEliece, R.; Moision, B.

    2005-01-01

    We show that the capacity of a generalized pulse-position modulation (PPM) channel, where the input vectors may be any set that allows a transitive group of coordinate permutations, is achieved by a uniform input distribution. We derive a simple expression in terms of the Kullback-Leibler distance for the binary case, as well as the asymptote in the PPM order. We prove a sub-additivity result for the PPM channel and use it to show that PPM capacity is monotonic in the order.
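
    The symmetry argument can be checked numerically on a toy model (a simple M-ary symmetric channel standing in for a PPM channel; this is an illustration, not the paper's proof): for a transition matrix invariant under input permutations, the uniform input distribution gives at least as much mutual information as a skewed one.

      import numpy as np

      def mutual_information(p_x, P):          # P[i, j] = Pr(Y = j | X = i)
          p_xy = p_x[:, None] * P
          p_y = p_xy.sum(axis=0)
          with np.errstate(divide="ignore", invalid="ignore"):
              terms = p_xy * np.log2(p_xy / (p_x[:, None] * p_y))
          return np.nansum(terms)              # bits per channel use

      M, eps = 4, 0.1                          # correct slot detected with probability 1 - eps
      P = np.full((M, M), eps / (M - 1)) + (1 - eps - eps / (M - 1)) * np.eye(M)
      uniform = np.full(M, 1 / M)
      skewed = np.array([0.55, 0.15, 0.15, 0.15])
      print(mutual_information(uniform, P), mutual_information(skewed, P))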

  6. Thermal APU/hydraulics analysis program. User's guide and programmer's manual

    NASA Technical Reports Server (NTRS)

    Deluna, T. A.

    1976-01-01

    The User's Guide information plus program description necessary to run and have a general understanding of the Thermal APU/Hydraulics Analysis Program (TAHAP) is described. This information consists of general descriptions of the APU/hydraulic system and the TAHAP model, input and output data descriptions, and specific subroutine requirements. Deck setups and input data formats are included and other necessary and/or helpful information for using TAHAP is given. The math model descriptions for the driver program and each of its supporting subroutines are outlined.

  7. Solving the two-dimensional Fokker-Planck equation for strongly correlated neurons

    NASA Astrophysics Data System (ADS)

    Deniz, Taşkın; Rotter, Stefan

    2017-01-01

    Pairs of neurons in brain networks often share much of the input they receive from other neurons. Due to essential nonlinearities of the neuronal dynamics, the consequences for the correlation of the output spike trains are generally not well understood. Here we analyze the case of two leaky integrate-and-fire neurons using an approach which is nonperturbative with respect to the degree of input correlation. Our treatment covers both weakly and strongly correlated dynamics, generalizing previous results based on linear response theory.
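
    A toy numerical counterpart of this setup (assumed parameters and simple Euler integration, not the paper's nonperturbative treatment) simulates two leaky integrate-and-fire neurons whose noisy inputs share a common component with correlation coefficient c, and measures the resulting correlation of their output spike counts:

      import numpy as np

      def lif_pair(c=0.5, T=200_000, dt=0.1, tau=20.0, v_th=1.0, mu=0.02, sigma=0.15):
          rng = np.random.default_rng(1)
          v = np.zeros(2)
          counts = np.zeros((2, T // 1000))           # spike counts in 100 ms bins
          for t in range(T):
              shared = rng.normal()
              private = rng.normal(size=2)
              noise = np.sqrt(c) * shared + np.sqrt(1 - c) * private   # shared input fraction c
              v += dt * (-v / tau) + mu * dt + sigma * np.sqrt(dt) * noise
              spiked = v >= v_th
              counts[spiked, t // 1000] += 1
              v[spiked] = 0.0                         # reset after a spike
          return np.corrcoef(counts)[0, 1]

      print("output spike-count correlation:", round(lif_pair(c=0.5), 3))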

  8. Degree of coupling and efficiency of energy converters far-from-equilibrium

    NASA Astrophysics Data System (ADS)

    Vroylandt, Hadrien; Lacoste, David; Verley, Gatien

    2018-02-01

    In this paper, we introduce a real symmetric and positive semi-definite matrix, which we call the non-equilibrium conductance matrix, and which generalizes the Onsager response matrix for a system in a non-equilibrium stationary state. We then express the thermodynamic efficiency in terms of the coefficients of this matrix using a parametrization similar to the one used near equilibrium. This framework, valid arbitrarily far from equilibrium, allows one to bound the thermodynamic efficiency by a universal function depending only on the degree of coupling between the input and output currents. It also leads to new general power-efficiency trade-offs valid for macroscopic machines, which are compared to trade-offs previously obtained from uncertainty relations. We illustrate our results on a unicycle heat-to-heat converter and on a discrete model of a molecular motor.

  9. Global identifiability of linear compartmental models--a computer algebra algorithm.

    PubMed

    Audoly, S; D'Angiò, L; Saccomani, M P; Cobelli, C

    1998-01-01

    A priori global identifiability deals with the uniqueness of the solution for the unknown parameters of a model and is, thus, a prerequisite for parameter estimation of biological dynamic models. Global identifiability is, however, difficult to test, since it requires solving a system of algebraic nonlinear equations which increases both in nonlinearity degree and in the number of terms and unknowns with increasing model order. In this paper, a computer algebra tool, GLOBI (GLOBal Identifiability), is presented, which combines the topological transfer function method with the Buchberger algorithm to test global identifiability of linear compartmental models. GLOBI allows for the automatic testing of a priori global identifiability of general-structure compartmental models from general multi-input multi-output experiments. Examples of usage of GLOBI to analyze a priori global identifiability of some complex biological compartmental models are provided.

  10. Study of the similarity function in Indexing-First-One hashing

    NASA Astrophysics Data System (ADS)

    Lai, Y.-L.; Jin, Z.; Goi, B.-M.; Chai, T.-Y.

    2017-06-01

    The recently proposed Indexing-First-One (IFO) hashing is a technique particularly adopted for eye iris template protection, i.e. IrisCode. However, the Jaccard Similarity (JS) measure that IFO employs, which originates from Min-hashing, has not yet been adequately discussed. In this paper, we explore the nature of JS in the binary domain and further propose a mathematical formulation to generalize the usage of JS, which is subsequently verified using the CASIA v3-Interval iris database. Our study reveals that the JS applied in IFO hashing is a generalized version for measuring two input objects, reducing to that of Min-hashing when the coefficient of JS is equal to one. With this understanding, IFO hashing can propagate the useful properties of Min-hashing, i.e. similarity preservation, which is favorable for similarity searching or recognition in binary space.
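
    For reference, the underlying quantities can be computed directly (a generic sketch of Jaccard similarity and its Min-hashing estimate for binary vectors; the IFO-specific generalization is not reproduced here):

      import numpy as np

      def jaccard(a, b):
          # Exact Jaccard similarity of two binary vectors: |A ∩ B| / |A ∪ B|.
          a, b = a.astype(bool), b.astype(bool)
          union = np.logical_or(a, b).sum()
          return np.logical_and(a, b).sum() / union if union else 1.0

      def minhash_estimate(a, b, n_perm=500, seed=0):
          # Fraction of random permutations whose first set element coincides for a and b;
          # this converges to the Jaccard similarity as n_perm grows.
          rng = np.random.default_rng(seed)
          matches = 0
          for _ in range(n_perm):
              order = rng.permutation(len(a))
              ia = order[a[order].astype(bool)][0]   # index of first 1 of a under the permutation
              ib = order[b[order].astype(bool)][0]
              matches += ia == ib
          return matches / n_perm

      a = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 1])
      b = np.array([1, 0, 0, 1, 0, 1, 1, 0, 1, 0])
      print(jaccard(a, b), minhash_estimate(a, b))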

  11. Unconditional security from noisy quantum storage

    NASA Astrophysics Data System (ADS)

    Wehner, Stephanie

    2010-03-01

    We consider the implementation of two-party cryptographic primitives based on the sole physical assumption that no large-scale reliable quantum storage is available to the cheating party. An important example of such a task is secure identification. Here, Alice wants to identify herself to Bob (possibly an ATM) without revealing her password. More generally, Alice and Bob wish to solve problems where Alice holds an input x (e.g. her password), and Bob holds an input y (e.g. the password an honest Alice should possess), and they want to obtain the value of some function f(x,y) (e.g. the equality function). Security means that the legitimate users should not learn anything beyond this specification. That is, Alice should not learn anything about y and Bob should not learn anything about x, other than what they may be able to infer from the value of f(x,y). We show that any such problem can be solved securely in the noisy-storage model by constructing protocols for bit commitment and oblivious transfer, where we prove security against the most general attack. Our protocols can be implemented with present-day hardware used for quantum key distribution. In particular, no quantum storage is required for the honest parties. Our work raises a large number of immediate theoretical as well as experimental questions related to many aspects of quantum information science, such as for example understanding the information carrying properties of quantum channels and memories, randomness extraction, min-entropy sampling, as well as constructing small handheld devices which are suitable for the task of secure identification. Full version available at arXiv:0906.1030 (theoretical) and arXiv:0911.2302 (practically oriented).

  12. Influence of speckle image reconstruction on photometric precision for large solar telescopes

    NASA Astrophysics Data System (ADS)

    Peck, C. L.; Wöger, F.; Marino, J.

    2017-11-01

    Context. High-resolution observations from large solar telescopes require adaptive optics (AO) systems to overcome image degradation caused by Earth's turbulent atmosphere. AO corrections are, however, only partial. Achieving near-diffraction limited resolution over a large field of view typically requires post-facto image reconstruction techniques to reconstruct the source image. Aims: This study aims to examine the expected photometric precision of amplitude reconstructed solar images calibrated using models for the on-axis speckle transfer functions and input parameters derived from AO control data. We perform a sensitivity analysis of the photometric precision under variations in the model input parameters for high-resolution solar images consistent with four-meter class solar telescopes. Methods: Using simulations of both atmospheric turbulence and partial compensation by an AO system, we computed the speckle transfer function under variations in the input parameters. We then convolved high-resolution numerical simulations of the solar photosphere with the simulated atmospheric transfer function, and subsequently deconvolved them with the model speckle transfer function to obtain a reconstructed image. To compute the resulting photometric precision, we compared the intensity of the original image with the reconstructed image. Results: The analysis demonstrates that high photometric precision can be obtained for speckle amplitude reconstruction using speckle transfer function models combined with AO-derived input parameters. Additionally, it shows that the reconstruction is most sensitive to the input parameter that characterizes the atmospheric distortion, and sub-2% photometric precision is readily obtained when it is well estimated.
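    The deconvolution step at the heart of that pipeline can be sketched in a few lines; the Gaussian stand-in below is not a real speckle transfer function, and the smooth test scene and regularization floor are arbitrary choices, but the structure (divide Fourier amplitudes by a model transfer function, guard against near-zero values) is the same.

        import numpy as np

        def reconstruct(degraded, model_stf, eps=1e-3):
            # divide the image spectrum by the model speckle transfer function
            img_ft = np.fft.fft2(degraded)
            stf = np.maximum(model_stf, eps)       # avoid amplifying noise where the STF ~ 0
            return np.real(np.fft.ifft2(img_ft / stf))

        n = 256
        fy, fx = np.meshgrid(np.fft.fftfreq(n), np.fft.fftfreq(n), indexing="ij")
        stf_model = np.exp(-(fx**2 + fy**2) / (2 * 0.05**2))    # stand-in transfer function

        rng = np.random.default_rng(1)
        scene_lp = np.exp(-(fx**2 + fy**2) / (2 * 0.02**2))     # low-pass to make a smooth "scene"
        truth = np.real(np.fft.ifft2(np.fft.fft2(rng.random((n, n))) * scene_lp))
        degraded = np.real(np.fft.ifft2(np.fft.fft2(truth) * stf_model))

        recon = reconstruct(degraded, stf_model)
        print("relative rms photometric error:", np.std(recon - truth) / truth.mean())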

  13. Combining neural networks and genetic algorithms for hydrological flow forecasting

    NASA Astrophysics Data System (ADS)

    Neruda, Roman; Srejber, Jan; Neruda, Martin; Pascenko, Petr

    2010-05-01

    We present a neural network approach to rainfall-runoff modeling for small river basins based on several time series of hourly measured data. Different neural networks are considered for short-term runoff predictions (from one to six hours lead time) based on runoff and rainfall data observed in previous time steps. Correlation analysis shows that runoff data, short-term rainfall history, and aggregated API values are the most significant data for the prediction. Neural models of multilayer perceptron and radial basis function networks with different numbers of units are used and compared with more traditional linear time series predictors. Out of a possible 48 hours of relevant history of all the input variables, the most important ones are selected by means of input filters created by a genetic algorithm. The genetic algorithm works with a population of binary-encoded vectors defining input selection patterns. Standard genetic operators of two-point crossover, random bit-flipping mutation, and tournament selection were used. The evaluation of the objective function of each individual consists of several rounds of building and testing a particular neural network model. The whole procedure is rather computationally demanding (taking hours to days on a desktop PC), so a high-performance mainframe computer has been used for our experiments. Results based on two years' worth of data from the Ploucnice river in Northern Bohemia suggest that the main problems connected with this approach to modeling are overtraining, which can lead to poor generalization, and the relatively small number of extreme events, which makes it difficult for a model to predict the amplitude of an event. Thus, experiments with both absolute and relative runoff predictions were carried out. In general, it can be concluded that the neural models show about 5 per cent improvement in terms of efficiency coefficient over linear models. Multilayer perceptrons with one hidden layer, trained by the backpropagation algorithm and predicting relative runoff, show the best behavior so far. Utilizing the genetically evolved input filter improves the performance by a further 5 per cent. In the future we would like to continue with experiments in on-line prediction using real-time data from the Smeda River with a 6-hour lead-time forecast. Following operational reality, we will focus on classification of the runoffs into flood alert levels, and reformulation of the time series prediction task as a classification problem. The main goal of all this work is to improve the flood warning system operated by the Czech Hydrometeorological Institute.
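    A minimal sketch of the input-selection loop described above, with a toy fitness function standing in for the expensive build-and-test neural network evaluation: a binary genetic algorithm with two-point crossover, bit-flip mutation and tournament selection searches over 48-bit input masks. All names and numbers are illustrative.

        import random
        random.seed(0)

        N_LAGS = 48             # candidate lagged inputs
        USEFUL = {0, 1, 2, 5}   # pretend only these lags carry predictive signal

        def fitness(mask):
            # reward selecting useful lags, penalize mask size (proxy for a trained model's skill)
            return sum(1 for i in USEFUL if mask[i]) - 0.05 * sum(mask)

        def tournament(pop, k=3):
            return max(random.sample(pop, k), key=fitness)

        def two_point_crossover(a, b):
            i, j = sorted(random.sample(range(N_LAGS), 2))
            return a[:i] + b[i:j] + a[j:]

        def mutate(mask, p=0.02):
            return [bit ^ (random.random() < p) for bit in mask]

        pop = [[random.randint(0, 1) for _ in range(N_LAGS)] for _ in range(40)]
        for _ in range(60):
            pop = [mutate(two_point_crossover(tournament(pop), tournament(pop)))
                   for _ in range(len(pop))]

        best = max(pop, key=fitness)
        print("selected lags:", [i for i, bit in enumerate(best) if bit])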

  14. Influence Diagram Use With Respect to Technology Planning and Investment

    NASA Technical Reports Server (NTRS)

    Levack, Daniel J. H.; DeHoff, Bryan; Rhodes, Russel E.

    2009-01-01

    Influence diagrams are relatively simple, but powerful, tools for assessing the impact of choices or resource allocations on goals or requirements. They are very general and can be used on a wide range of problems. They can be used for any problem that has defined goals, a set of factors that influence the goals or the other factors, and a set of inputs. Influence diagrams show the relationships among a set of results, the attributes that influence them, and the inputs that influence the attributes. If the results are goals or requirements of a program, then the influence diagram can be used to examine how the requirements are affected by changes to technology investment. This paper uses an example to show how to construct and interpret influence diagrams, how to assign weights to the inputs and attributes, how to assign weights to the transfer functions (influences), and how to calculate the resulting influences of the inputs on the results. A study is also presented as an example of how using influence diagrams can help in technology planning and investment. The Space Propulsion Synergy Team (SPST) used this technique to examine the impact of R&D spending on the Life Cycle Cost (LCC) of a space transportation system. The question addressed was how the recurring and non-recurring portions of the LCC are affected by the proportion of R&D resources spent to impact technology objectives versus the proportion spent to impact operational dependability objectives. The goals, attributes, and inputs were established. All of the linkages (influences) were determined. The weighting of each of the attributes and each of the linkages was determined. Finally, the inputs were varied and the resulting impacts on the LCC were determined; these results are presented. The paper discusses how each of these steps was accomplished, both for credibility and as an example for future studies using influence diagrams for technology planning and investment planning.
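    The weighted-influence arithmetic described above amounts to propagating a resource split through two layers of weights; the three-layer structure, names and numbers below are invented purely to illustrate the calculation.

        # resource split across the two inputs (sums to 1)
        inputs = {"tech_RnD": 0.7, "ops_RnD": 0.3}

        # influence (transfer-function) weights: attribute <- input
        attr_weights = {
            "dependability": {"tech_RnD": 0.2, "ops_RnD": 0.8},
            "performance":   {"tech_RnD": 0.9, "ops_RnD": 0.1},
        }
        # result <- attribute
        result_weights = {
            "recurring_LCC":     {"dependability": 0.7, "performance": 0.3},
            "non_recurring_LCC": {"dependability": 0.2, "performance": 0.8},
        }

        attributes = {a: sum(w * inputs[i] for i, w in ws.items())
                      for a, ws in attr_weights.items()}
        results = {r: sum(w * attributes[a] for a, w in ws.items())
                   for r, ws in result_weights.items()}
        print(results)   # vary the input split to see its influence on each result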

  15. ERGONOMICS ABSTRACTS 48347-48982.

    ERIC Educational Resources Information Center

    Ministry of Technology, London (England). Warren Spring Lab.

    IN THIS COLLECTION OF ERGONOMICS ABSTRACTS AND ANNOTATIONS THE FOLLOWING AREAS OF CONCERN ARE REPRESENTED--GENERAL REFERENCES, METHODS, FACILITIES, AND EQUIPMENT RELATING TO ERGONOMICS, SYSTEMS OF MAN AND MACHINES, VISUAL, AUDITORY, AND OTHER SENSORY INPUTS AND PROCESSES (INCLUDING SPEECH AND INTELLIGIBILITY), INPUT CHANNELS, BODY MEASUREMENTS,…

  16. Classification of Normal and Apoptotic Cells from Fluorescence Microscopy Images Using Generalized Polynomial Chaos and Level Set Function.

    PubMed

    Du, Yuncheng; Budman, Hector M; Duever, Thomas A

    2016-06-01

    Accurate automated quantitative analysis of living cells based on fluorescence microscopy images can be very useful for fast evaluation of experimental outcomes and cell culture protocols. In this work, an algorithm is developed for fast differentiation of normal and apoptotic viable Chinese hamster ovary (CHO) cells. For effective segmentation of cell images, a stochastic segmentation algorithm is developed by combining a generalized polynomial chaos expansion with a level set function-based segmentation algorithm. This approach provides a probabilistic description of the segmented cellular regions along the boundary, from which it is possible to calculate morphological changes related to apoptosis, i.e., the curvature and length of a cell's boundary. These features are then used as inputs to a support vector machine (SVM) classifier that is trained to distinguish between normal and apoptotic viable states of CHO cell images. The use of morphological features obtained from the stochastic level set segmentation of cell images in combination with the trained SVM classifier is more efficient in terms of differentiation accuracy as compared with the original deterministic level set method.
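    Only the final classification stage is sketched below: an SVM trained on two boundary features (mean curvature and boundary length). The synthetic feature distributions are invented stand-ins for the statistics extracted from the stochastic level-set segmentation.

        import numpy as np
        from sklearn.model_selection import cross_val_score
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)
        n = 200
        # pretend apoptotic cells have more curved and shorter boundaries on average
        normal    = np.column_stack([rng.normal(1.0, 0.15, n), rng.normal(120, 15, n)])
        apoptotic = np.column_stack([rng.normal(1.6, 0.25, n), rng.normal(80, 15, n)])

        X = np.vstack([normal, apoptotic])          # columns: [mean curvature, boundary length]
        y = np.array([0] * n + [1] * n)             # 0 = normal, 1 = apoptotic

        clf = SVC(kernel="rbf", C=1.0, gamma="scale")
        print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())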

  17. The relative degree enhancement problem for MIMO nonlinear systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schoenwald, D.A.; Oezguener, Ue.

    1995-07-01

    The authors present a result for linearizing a nonlinear MIMO system by employing partial feedback - feedback at all but one input-output channel such that the SISO feedback linearization problem is solvable at the remaining input-output channel. The partial feedback effectively enhances the relative degree at the open input-output channel provided the feedback functions are chosen to satisfy relative degree requirements. The method is useful for nonlinear systems that are not feedback linearizable in a MIMO sense. Several examples are presented to show how these feedback functions can be computed. This strategy can be combined with decentralized observers for a completely decentralized feedback linearization result for at least one input-output channel.

  18. High input impedance amplifier

    NASA Technical Reports Server (NTRS)

    Kleinberg, Leonard L.

    1995-01-01

    High input impedance amplifiers are provided which reduce the input impedance solely to a capacitive reactance, or, in a somewhat more complex design, provide an extremely high, essentially infinite, capacitive reactance. In one embodiment, where the input impedance is reduced, in essence, to solely a capacitive reactance, an operational amplifier in a follower configuration is driven at its non-inverting input and a resistor with a predetermined magnitude is connected between the inverting and non-inverting inputs. A second embodiment eliminates the capacitance from the input by adding a second stage to the first embodiment. The second stage is a second operational amplifier in a non-inverting gain-stage configuration where the output of the first follower stage drives the non-inverting input of the second stage and the output of the second stage is fed back to the non-inverting input of the first stage through a capacitor of a predetermined magnitude. These amplifiers, while generally useful, are particularly useful as sensor buffer amplifiers that may eliminate significant sources of error.

  19. Speed control system for an access gate

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bzorgi, Fariborz M

    2012-03-20

    An access control apparatus for an access gate. The access gate typically has a rotator that is configured to rotate around a rotator axis at a first variable speed in a forward direction. The access control apparatus may include a transmission that typically has an input element that is operatively connected to the rotator. The input element is generally configured to rotate at an input speed that is proportional to the first variable speed. The transmission typically also has an output element that has an output speed that is higher than the input speed. The input element and the output element may rotate around a common transmission axis. A retardation mechanism may be employed. The retardation mechanism is typically configured to rotate around a retardation mechanism axis. Generally the retardation mechanism is operatively connected to the output element of the transmission and is configured to retard motion of the access gate in the forward direction when the first variable speed is above a control-limit speed. In many embodiments the transmission axis and the retardation mechanism axis are substantially co-axial. Some embodiments include a freewheel/catch mechanism that has an input connection that is operatively connected to the rotator. The input connection may be configured to engage an output connection when the rotator is rotated at the first variable speed in a forward direction and configured for substantially unrestricted rotation when the rotator is rotated in a reverse direction opposite the forward direction. The input element of the transmission is typically operatively connected to the output connection of the freewheel/catch mechanism.

  20. Programmer's manual for MMLE3, a general FORTRAN program for maximum likelihood parameter estimation

    NASA Technical Reports Server (NTRS)

    Maine, R. E.

    1981-01-01

    The MMLE3 is a maximum likelihood parameter estimation program capable of handling general bilinear dynamic equations of arbitrary order with measurement noise and/or state noise (process noise). The basic MMLE3 program is quite general and, therefore, applicable to a wide variety of problems. The basic program can interact with a set of user written problem specific routines to simplify the use of the program on specific systems. A set of user routines for the aircraft stability and control derivative estimation problem is provided with the program. The implementation of the program on specific computer systems is discussed. The structure of the program is diagrammed, and the function and operation of individual routines is described. Complete listings and reference maps of the routines are included on microfiche as a supplement. Four test cases are discussed; listings of the input cards and program output for the test cases are included on microfiche as a supplement.

  1. Ranking Hearing Aid Input-Output Functions for Understanding Low-, Conversational-, and High-Level Speech in Multitalker Babble

    ERIC Educational Resources Information Center

    Chung, King; Killion, Mead C.; Christensen, Laurel A.

    2007-01-01

    Purpose: To determine the rankings of 6 input-output functions for understanding low-level, conversational, and high-level speech in multitalker babble without manipulating volume control for listeners with normal hearing, flat sensorineural hearing loss, and mildly sloping sensorineural hearing loss. Method: Peak clipping, compression limiting,…

  2. Econometric analysis of fire suppression production functions for large wildland fires

    Treesearch

    Thomas P. Holmes; David E. Calkin

    2013-01-01

    In this paper, we use operational data collected for large wildland fires to estimate the parameters of economic production functions that relate the rate of fireline construction with the level of fire suppression inputs (handcrews, dozers, engines and helicopters). These parameter estimates are then used to evaluate whether the productivity of fire suppression inputs...

  3. Models for forecasting energy use in the US farm sector

    NASA Astrophysics Data System (ADS)

    Christensen, L. R.

    1981-07-01

    Econometric models were developed and estimated for the purpose of forecasting electricity and petroleum demand in US agriculture. A structural approach is pursued which takes account of the fact that the quantity demanded of any one input is a decision made in conjunction with other input decisions. Three different functional forms of varying degrees of complexity are specified for the structural cost function, which describes the cost of production as a function of the level of output and factor prices. Demand for materials (all purchased inputs) is derived from these models. A separate model, which breaks this demand up into demand for the four components of materials, is used to produce forecasts of electricity and petroleum in a stepwise manner.

  4. Applications of information theory, genetic algorithms, and neural models to predict oil flow

    NASA Astrophysics Data System (ADS)

    Ludwig, Oswaldo; Nunes, Urbano; Araújo, Rui; Schnitman, Leizer; Lepikson, Herman Augusto

    2009-07-01

    This work introduces a new information-theoretic methodology for choosing variables and their time lags in a prediction setting, particularly when neural networks are used in non-linear modeling. The first contribution of this work is the Cross Entropy Function (XEF) proposed to select input variables and their lags in order to compose the input vector of black-box prediction models. The proposed XEF method is more appropriate than the usually applied Cross Correlation Function (XCF) when the relationship between the input and output signals comes from a non-linear dynamic system. The second contribution is a method that minimizes the Joint Conditional Entropy (JCE) between the input and output variables by means of a Genetic Algorithm (GA). The aim is to take into account the dependence among the input variables when selecting the most appropriate set of inputs for a prediction problem. In short, these methods can be used to assist the selection of input training data that have the necessary information to predict the target data. The proposed methods are applied to a petroleum engineering problem: predicting oil production. Experimental results obtained with a real-world dataset are presented, demonstrating the feasibility and effectiveness of the method.
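    The sketch below is a generic illustration of information-based lag selection rather than the authors' XEF/JCE formulation: each candidate lag of an input series is scored by its estimated mutual information with the target, which, unlike a linear cross-correlation, also detects the nonlinear (squared) dependence planted at lag 7.

        import numpy as np
        from sklearn.feature_selection import mutual_info_regression

        rng = np.random.default_rng(4)
        n, max_lag = 3000, 12
        x = rng.normal(size=n + max_lag)
        # target depends linearly on lag 3 and nonlinearly on lag 7
        y = 0.8 * x[max_lag - 3:-3] + 0.4 * x[max_lag - 7:-7] ** 2 + 0.1 * rng.normal(size=n)

        lags = np.column_stack([x[max_lag - k: n + max_lag - k] for k in range(1, max_lag + 1)])
        scores = mutual_info_regression(lags, y, random_state=0)
        ranking = np.argsort(scores)[::-1] + 1      # +1 so entries read as lag numbers
        print("lags ranked by mutual information:", ranking[:5])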

  5. Versatile tunable current-mode universal biquadratic filter using MO-DVCCs and MOSFET-based electronic resistors.

    PubMed

    Chen, Hua-Pin

    2014-01-01

    This paper presents a versatile tunable current-mode universal biquadratic filter with four inputs and three outputs employing only two multioutput differential voltage current conveyors (MO-DVCCs), two grounded capacitors, and a well-known method for replacement of three grounded resistors by MOSFET-based electronic resistors. The proposed configuration exhibits high output impedance, which is important for easy cascading in current-mode operations. The proposed circuit can be used as either a two-input three-output circuit or a three-input single-output circuit. In the operation of the two-input three-output circuit, the bandpass, highpass, and bandreject filtering responses can be realized simultaneously, while the allpass filtering response can be easily obtained by connecting the appropriate output currents directly, without using additional stages. In the operation of the three-input single-output circuit, all five generic filtering functions can be easily realized by selecting different three-input current signals. The filter permits orthogonal controllability of the quality factor and resonance angular frequency, and no inverting-type input current signals are imposed. All the passive and active sensitivities are low. Postlayout simulations were carried out to verify the functionality of the design.

  6. Versatile Tunable Current-Mode Universal Biquadratic Filter Using MO-DVCCs and MOSFET-Based Electronic Resistors

    PubMed Central

    2014-01-01

    This paper presents a versatile tunable current-mode universal biquadratic filter with four inputs and three outputs employing only two multioutput differential voltage current conveyors (MO-DVCCs), two grounded capacitors, and a well-known method for replacement of three grounded resistors by MOSFET-based electronic resistors. The proposed configuration exhibits high output impedance, which is important for easy cascading in current-mode operations. The proposed circuit can be used as either a two-input three-output circuit or a three-input single-output circuit. In the operation of the two-input three-output circuit, the bandpass, highpass, and bandreject filtering responses can be realized simultaneously, while the allpass filtering response can be easily obtained by connecting the appropriate output currents directly, without using additional stages. In the operation of the three-input single-output circuit, all five generic filtering functions can be easily realized by selecting different three-input current signals. The filter permits orthogonal controllability of the quality factor and resonance angular frequency, and no inverting-type input current signals are imposed. All the passive and active sensitivities are low. Postlayout simulations were carried out to verify the functionality of the design. PMID:24982963

  7. A methodology for formulating a minimal uncertainty model for robust control system design and analysis

    NASA Technical Reports Server (NTRS)

    Belcastro, Christine M.; Chang, B.-C.; Fischl, Robert

    1989-01-01

    In the design and analysis of robust control systems for uncertain plants, the technique of formulating what is termed an M-delta model has become widely accepted and applied in the robust control literature. The M represents the transfer function matrix M(s) of the nominal system, and delta represents an uncertainty matrix acting on M(s). The uncertainty can arise from various sources, such as structured uncertainty from parameter variations or multiple unstructured uncertainties from unmodeled dynamics and other neglected phenomena. In general, delta is a block diagonal matrix, and for real parameter variations the diagonal elements are real. As stated in the literature, this structure can always be formed for any linear interconnection of inputs, outputs, transfer functions, parameter variations, and perturbations. However, very little of the literature addresses methods for obtaining this structure, and none of it addresses a general methodology for obtaining a minimal M-delta model for a wide class of uncertainty. Since having a delta matrix of minimum order would improve the efficiency of structured singular value (or multivariable stability margin) computations, a method of obtaining a minimal M-delta model would be useful. A generalized method of obtaining a minimal M-delta structure for systems with real parameter variations is given.
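    For reference, the structure described above is commonly written (in generic textbook form, not necessarily the paper's specific notation) as an upper linear fractional transformation of the nominal transfer matrix M(s) partitioned into blocks M_{11}, M_{12}, M_{21}, M_{22}:

        \Delta = \operatorname{diag}\left(\delta_1 I_{r_1},\ \delta_2 I_{r_2},\ \dots,\ \delta_n I_{r_n}\right), \qquad \delta_i \in \mathbb{R},

        F_u(M,\Delta) = M_{22} + M_{21}\,\Delta\,\left(I - M_{11}\Delta\right)^{-1} M_{12},

    and a minimal M-delta model is then one in which the repetition orders r_i (and hence the dimension of \Delta) are as small as possible.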

  8. Modelling uncertainty in incompressible flow simulation using Galerkin based generalized ANOVA

    NASA Astrophysics Data System (ADS)

    Chakraborty, Souvik; Chowdhury, Rajib

    2016-11-01

    This paper presents a new algorithm, referred to here as Galerkin-based generalized analysis of variance decomposition (GG-ANOVA), for modelling input uncertainties and their propagation in incompressible fluid flow. The proposed approach utilizes ANOVA to represent the unknown stochastic response. Further, the unknown component functions of ANOVA are represented using the generalized polynomial chaos expansion (PCE). The resulting functional form obtained by coupling the ANOVA and PCE is substituted into the stochastic Navier-Stokes equation (NSE) and Galerkin projection is employed to decompose it into a set of coupled deterministic 'Navier-Stokes-like' equations. Temporal discretization of the set of coupled deterministic equations is performed by employing the Adams-Bashforth scheme for the convective term and the Crank-Nicolson scheme for the diffusion term. Spatial discretization is performed by employing a finite difference scheme. Implementation of the proposed approach is illustrated by two examples. In the first example, a stochastic ordinary differential equation is considered; this example illustrates the performance of the proposed approach as the nature of the random variable changes. Furthermore, the convergence characteristics of GG-ANOVA are also demonstrated. The second example investigates flow through a micro channel. Two case studies, namely the stochastic Kelvin-Helmholtz instability and the stochastic vortex dipole, have been investigated. For all the problems, results obtained using GG-ANOVA are in excellent agreement with benchmark solutions.
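    The deterministic time-stepping skeleton named above can be illustrated on a scalar surrogate problem. The sketch below (grid, velocity and viscosity values are arbitrary, and the stochastic Galerkin coupling is not reproduced) advances a periodic 1-D advection-diffusion equation with second-order Adams-Bashforth for the convective term and Crank-Nicolson for the diffusive term, using central finite differences in space.

        import numpy as np

        n, L, c, nu = 128, 2 * np.pi, 1.0, 0.05
        dx, dt, steps = L / n, 1e-3, 2000
        x = np.linspace(0, L, n, endpoint=False)

        # periodic central-difference operators (matrix form so Crank-Nicolson is a linear solve)
        I = np.eye(n)
        shift_p = np.roll(I, -1, axis=0)            # maps u[i] -> u[i+1]
        shift_m = np.roll(I, 1, axis=0)             # maps u[i] -> u[i-1]
        D1 = (shift_p - shift_m) / (2 * dx)
        D2 = (shift_p - 2 * I + shift_m) / dx**2

        A = I - 0.5 * dt * nu * D2                  # Crank-Nicolson left-hand side
        conv = lambda u: -c * (D1 @ u)              # explicit convective term

        u_prev = np.sin(x)
        u = u_prev + dt * (conv(u_prev) + nu * (D2 @ u_prev))   # explicit Euler start-up step
        for _ in range(steps):
            rhs = u + dt * (1.5 * conv(u) - 0.5 * conv(u_prev)) + 0.5 * dt * nu * (D2 @ u)
            u_prev, u = u, np.linalg.solve(A, rhs)

        print("max |u| after integration:", np.abs(u).max())    # roughly exp(-nu * steps * dt)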

  9. Filtering data from the collaborative initial glaucoma treatment study for improved identification of glaucoma progression.

    PubMed

    Schell, Greggory J; Lavieri, Mariel S; Stein, Joshua D; Musch, David C

    2013-12-21

    Open-angle glaucoma (OAG) is a prevalent, degenerative ocular disease which can lead to blindness without proper clinical management. The tests used to assess disease progression are susceptible to process and measurement noise. The aim of this study was to develop a methodology which accounts for the inherent noise in the data and improves identification of significant disease progression. Longitudinal observations from the Collaborative Initial Glaucoma Treatment Study (CIGTS) were used to parameterize and validate a Kalman filter model and logistic regression function. The Kalman filter estimates the true value of biomarkers associated with OAG and forecasts future values of these variables. We develop two logistic regression models via generalized estimating equations (GEE) for calculating the probability of experiencing significant OAG progression: one model based on the raw measurements from CIGTS and another model based on the Kalman filter estimates of the CIGTS data. Receiver operating characteristic (ROC) curves and associated area under the ROC curve (AUC) estimates are calculated using cross-fold validation. The logistic regression model developed using Kalman filter estimates as data input achieves higher sensitivity and specificity than the model developed using raw measurements. The mean AUC for the Kalman filter-based model is 0.961 while the mean AUC for the raw measurements model is 0.889. Hence, using the probability function generated via Kalman filter estimates and GEE for logistic regression, we are able to more accurately classify patients and instances as experiencing significant OAG progression. A Kalman filter approach for estimating the true value of OAG biomarkers resulted in data input which improved the accuracy of a logistic regression classification model compared to a model using raw measurements as input. This methodology accounts for process and measurement noise to enable improved discrimination between progression and nonprogression in chronic diseases.
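    A minimal sketch of the filtering idea, assuming a scalar random-walk state model and invented noise levels rather than the study's parameterization: the Kalman estimate tracks the drifting biomarker more closely than the raw measurements, and such filtered estimates (rather than raw values) are what would then be fed to the logistic regression classifier.

        import numpy as np

        rng = np.random.default_rng(2)
        T = 40
        truth = 10.0 + np.cumsum(rng.normal(0, 0.3, T))   # slowly drifting "true" biomarker
        meas = truth + rng.normal(0, 1.5, T)              # noisy clinical measurements

        q, r = 0.3**2, 1.5**2      # process / measurement noise variances
        x, p = meas[0], 1.0        # state estimate and its variance
        estimates = []
        for z in meas:
            p = p + q              # predict (random-walk state transition)
            k = p / (p + r)        # Kalman gain
            x = x + k * (z - x)    # update with the new measurement
            p = (1 - k) * p
            estimates.append(x)

        err = lambda series: np.sqrt(np.mean((np.asarray(series) - truth) ** 2))
        print("raw RMSE:", err(meas), " filtered RMSE:", err(estimates))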

  10. Trigeminal, Visceral and Vestibular Inputs May Improve Cognitive Functions by Acting through the Locus Coeruleus and the Ascending Reticular Activating System: A New Hypothesis

    PubMed Central

    De Cicco, Vincenzo; Tramonti Fantozzi, Maria P.; Cataldo, Enrico; Barresi, Massimo; Bruschini, Luca; Faraguna, Ugo; Manzoni, Diego

    2018-01-01

    It is known that sensory signals sustain the background discharge of the ascending reticular activating system (ARAS), which includes the noradrenergic locus coeruleus (LC) neurons and controls the level of attention and alertness. Moreover, LC neurons influence brain metabolic activity, gene expression and brain inflammatory processes. As a consequence of the sensory control of ARAS/LC, stimulation of a sensory channel may potentially influence neuronal activity and trophic state all over the brain, supporting cognitive functions and exerting a neuroprotective action. On the other hand, an imbalance of the same input on the two sides may lead to asymmetric hemispheric excitability, resulting in an impairment of cognitive functions. Among the inputs that may drive LC neurons and ARAS, those arising from the trigeminal region, from visceral organs and, possibly, from the vestibular system seem to be particularly relevant in regulating their activity. The trigeminal, visceral and vestibular control of ARAS/LC activity may explain why these input signals: (1) affect sensorimotor and cognitive functions which are not directly related to their specific informational content; and (2) are effective in relieving the symptoms of some brain pathologies, thus prompting peripheral activation of these input systems as a complementary approach for the treatment of cognitive impairments and neurodegenerative disorders. PMID:29358907

  11. Consideration of plant behaviour in optimal servo-compensator design

    NASA Astrophysics Data System (ADS)

    Moase, W. H.; Manzie, C.

    2016-07-01

    Where the most prevalent optimal servo-compensator formulations penalise the behaviour of an error system, this paper considers the problem of additionally penalising the actual states and inputs of the plant. Doing so has the advantage of enabling the penalty function to better resemble an economic cost. This is especially true of problems where control effort needs to be sensibly allocated across weakly redundant inputs or where one wishes to use penalties to soft-constrain certain states or inputs. It is shown that, although the resulting cost function grows unbounded as its horizon approaches infinity, it is possible to formulate an equivalent optimisation problem with a bounded cost. The resulting optimisation problem is similar to those in earlier studies but has an additional 'correction term' in the cost function, and a set of equality constraints that arise when there are redundant inputs. A numerical approach to solve the resulting optimisation problem is presented, followed by simulations on a micro-macro positioner that illustrate the benefits of the proposed servo-compensator design approach.

  12. Performances estimation of a rotary traveling wave ultrasonic motor based on two-dimension analytical model.

    PubMed

    Ming, Y; Peiwen, Q

    2001-03-01

    Understanding ultrasonic motor performance as a function of input parameters, such as the voltage amplitude, driving frequency and the preload on the rotor, is key to many applications and to the control of ultrasonic motors. This paper presents performance estimation of the piezoelectric rotary traveling wave ultrasonic motor as a function of input voltage amplitude, driving frequency and preload. The Love equation is used to derive the traveling wave amplitude on the stator surface. With the contact model of the distributed spring-rigid body between the stator and rotor, a two-dimensional analytical model of the rotary traveling wave ultrasonic motor is constructed. Then the performance measures of steady rotation speed and stall torque are deduced. Using MATLAB and an iterative algorithm, we estimate rotation speed and stall torque versus the input parameters. The same experiments are completed with the optoelectronic tachometer and stand weight. Both estimation and experimental results reveal the pattern of performance variation as a function of the input parameters.

  13. Cell type-specific long-range connections of basal forebrain circuit.

    PubMed

    Do, Johnny Phong; Xu, Min; Lee, Seung-Hee; Chang, Wei-Cheng; Zhang, Siyu; Chung, Shinjae; Yung, Tyler J; Fan, Jiang Lan; Miyamichi, Kazunari; Luo, Liqun; Dan, Yang

    2016-09-19

    The basal forebrain (BF) plays key roles in multiple brain functions, including sleep-wake regulation, attention, and learning/memory, but the long-range connections mediating these functions remain poorly characterized. Here we performed whole-brain mapping of both inputs and outputs of four BF cell types - cholinergic, glutamatergic, and parvalbumin-positive (PV+) and somatostatin-positive (SOM+) GABAergic neurons - in the mouse brain. Using rabies virus-mediated monosynaptic retrograde tracing to label the inputs and adeno-associated virus to trace axonal projections, we identified numerous brain areas connected to the BF. The inputs to different cell types were qualitatively similar, but the output projections showed marked differences. The connections to glutamatergic and SOM+ neurons were strongly reciprocal, while those to cholinergic and PV+ neurons were more unidirectional. These results reveal the long-range wiring diagram of the BF circuit with highly convergent inputs and divergent outputs and point to both functional commonality and specialization of different BF cell types.

  14. Analysis of nystagmus response to a pseudorandom velocity input

    NASA Technical Reports Server (NTRS)

    Lessard, C. S.

    1986-01-01

    Space motion sickness was not reported during the first Apollo missions; however, from Apollo 8 through the current Shuttle and Skylab missions, approximately 50% of the crewmembers have experienced instances of space motion sickness. Space motion sickness, renamed space adaptation syndrome, occurs primarily during the initial period of a mission until habituation takes place. One of NASA's efforts to resolve the space adaptation syndrome is to model the individual's vestibular response, both for basic knowledge and as a possible predictor of an individual's susceptibility to the disorder. This report describes a method to analyze the vestibular system when subjected to a pseudorandom angular velocity input. A sum-of-sinusoids (pseudorandom) input lends itself to analysis by linear frequency methods. Resultant horizontal ocular movements were digitized, filtered and transformed into the frequency domain. Programs were developed and evaluated to obtain (1) the auto spectra of the input stimulus and the resultant ocular response, (2) the cross spectra, (3) the estimated vestibular-ocular system transfer function gain and phase, and (4) the coherence function between the stimulus and response signals.
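    A present-day equivalent of that processing chain can be sketched with SciPy; the first-order low-pass filter below is only a stand-in for the vestibulo-ocular response, and the stimulus frequencies and noise level are invented.

        import numpy as np
        from scipy import signal

        fs, T = 100.0, 120.0
        rng = np.random.default_rng(3)
        t = np.arange(0, T, 1 / fs)
        stim_freqs = [0.05, 0.11, 0.23, 0.47, 0.95]        # sum-of-sinusoids (pseudorandom) stimulus
        stimulus = sum(np.sin(2 * np.pi * f * t + rng.uniform(0, 2 * np.pi)) for f in stim_freqs)

        b, a = signal.butter(1, 0.5, fs=fs)                 # stand-in "vestibular-ocular system"
        response = signal.lfilter(b, a, stimulus) + 0.1 * rng.normal(size=t.size)

        f, Pxx = signal.welch(stimulus, fs, nperseg=4096)               # (1) auto spectrum of stimulus
        f, Pyy = signal.welch(response, fs, nperseg=4096)               #     auto spectrum of response
        f, Pxy = signal.csd(stimulus, response, fs, nperseg=4096)       # (2) cross spectrum
        H = Pxy / Pxx                                                   # (3) transfer-function estimate
        f, Cxy = signal.coherence(stimulus, response, fs, nperseg=4096) # (4) coherence

        for fi in stim_freqs:
            i = np.argmin(np.abs(f - fi))
            print(f"{fi:4.2f} Hz  gain={abs(H[i]):.2f}  "
                  f"phase={np.degrees(np.angle(H[i])):6.1f} deg  coherence={Cxy[i]:.2f}")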

  15. Improved prescribed performance control for air-breathing hypersonic vehicles with unknown deadzone input nonlinearity.

    PubMed

    Wang, Yingyang; Hu, Jianbo

    2018-05-19

    An improved prescribed performance controller is proposed for the longitudinal model of an air-breathing hypersonic vehicle (AHV) subject to uncertain dynamics and input nonlinearity. Different from the traditional non-affine model, which requires the non-affine functions to be differentiable, this paper utilizes a semi-decomposed non-affine model with non-affine functions that are locally semi-bounded and possibly non-differentiable. A new error transformation combined with novel prescribed performance functions is proposed to bypass complex deductions caused by conventional error constraint approaches and circumvent high-frequency chattering in control inputs. On the basis of the backstepping technique, the improved prescribed performance controller with low structural and computational complexity is designed. The methodology guarantees that the altitude and velocity tracking errors remain within transient and steady-state performance envelopes and presents excellent robustness against uncertain dynamics and deadzone input nonlinearity. Simulation results demonstrate the efficacy of the proposed method. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.

  16. Design of High Quality Chemical XOR Gates with Noise Reduction.

    PubMed

    Wood, Mackenna L; Domanskyi, Sergii; Privman, Vladimir

    2017-07-05

    We describe a chemical XOR gate design that realizes gate-response function with filtering properties. Such gate-response function is flat (has small gradients) at and in the vicinity of all the four binary-input logic points, resulting in analog noise suppression. The gate functioning involves cross-reaction of the inputs represented by pairs of chemicals to produce a practically zero output when both are present and nearly equal. This cross-reaction processing step is also designed to result in filtering at low output intensities by canceling out the inputs if one of the latter has low intensity compared with the other. The remaining inputs, which were not reacted away, are processed to produce the output XOR signal by chemical steps that result in filtering at large output signal intensities. We analyze the tradeoff resulting from filtering, which involves loss of signal intensity. We also discuss practical aspects of realizations of such XOR gates. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  17. Functional materials discovery using energy-structure-function maps

    NASA Astrophysics Data System (ADS)

    Pulido, Angeles; Chen, Linjiang; Kaczorowski, Tomasz; Holden, Daniel; Little, Marc A.; Chong, Samantha Y.; Slater, Benjamin J.; McMahon, David P.; Bonillo, Baltasar; Stackhouse, Chloe J.; Stephenson, Andrew; Kane, Christopher M.; Clowes, Rob; Hasell, Tom; Cooper, Andrew I.; Day, Graeme M.

    2017-03-01

    Molecular crystals cannot be designed in the same manner as macroscopic objects, because they do not assemble according to simple, intuitive rules. Their structures result from the balance of many weak interactions, rather than from the strong and predictable bonding patterns found in metal-organic frameworks and covalent organic frameworks. Hence, design strategies that assume a topology or other structural blueprint will often fail. Here we combine computational crystal structure prediction and property prediction to build energy-structure-function maps that describe the possible structures and properties that are available to a candidate molecule. Using these maps, we identify a highly porous solid, which has the lowest density reported for a molecular crystal so far. Both the structure of the crystal and its physical properties, such as methane storage capacity and guest-molecule selectivity, are predicted using the molecular structure as the only input. More generally, energy-structure-function maps could be used to guide the experimental discovery of materials with any target function that can be calculated from predicted crystal structures, such as electronic structure or mechanical properties.

  18. Functional materials discovery using energy-structure-function maps.

    PubMed

    Pulido, Angeles; Chen, Linjiang; Kaczorowski, Tomasz; Holden, Daniel; Little, Marc A; Chong, Samantha Y; Slater, Benjamin J; McMahon, David P; Bonillo, Baltasar; Stackhouse, Chloe J; Stephenson, Andrew; Kane, Christopher M; Clowes, Rob; Hasell, Tom; Cooper, Andrew I; Day, Graeme M

    2017-03-30

    Molecular crystals cannot be designed in the same manner as macroscopic objects, because they do not assemble according to simple, intuitive rules. Their structures result from the balance of many weak interactions, rather than from the strong and predictable bonding patterns found in metal-organic frameworks and covalent organic frameworks. Hence, design strategies that assume a topology or other structural blueprint will often fail. Here we combine computational crystal structure prediction and property prediction to build energy-structure-function maps that describe the possible structures and properties that are available to a candidate molecule. Using these maps, we identify a highly porous solid, which has the lowest density reported for a molecular crystal so far. Both the structure of the crystal and its physical properties, such as methane storage capacity and guest-molecule selectivity, are predicted using the molecular structure as the only input. More generally, energy-structure-function maps could be used to guide the experimental discovery of materials with any target function that can be calculated from predicted crystal structures, such as electronic structure or mechanical properties.

  19. Functional transformations of odor inputs in the mouse olfactory bulb.

    PubMed

    Adam, Yoav; Livneh, Yoav; Miyamichi, Kazunari; Groysman, Maya; Luo, Liqun; Mizrahi, Adi

    2014-01-01

    Sensory inputs from the nasal epithelium to the olfactory bulb (OB) are organized as a discrete map in the glomerular layer (GL). This map is then modulated by distinct types of local neurons and transmitted to higher brain areas via mitral and tufted cells. Little is known about the functional organization of the circuits downstream of glomeruli. We used in vivo two-photon calcium imaging for large scale functional mapping of distinct neuronal populations in the mouse OB, at single cell resolution. Specifically, we imaged odor responses of mitral cells (MCs), tufted cells (TCs) and glomerular interneurons (GL-INs). Mitral cells population activity was heterogeneous and only mildly correlated with the olfactory receptor neuron (ORN) inputs, supporting the view that discrete input maps undergo significant transformations at the output level of the OB. In contrast, population activity profiles of TCs were dense, and highly correlated with the odor inputs in both space and time. Glomerular interneurons were also highly correlated with the ORN inputs, but showed higher activation thresholds suggesting that these neurons are driven by strongly activated glomeruli. Temporally, upon persistent odor exposure, TCs quickly adapted. In contrast, both MCs and GL-INs showed diverse temporal response patterns, suggesting that GL-INs could contribute to the transformations MCs undergo at slow time scales. Our data suggest that sensory odor maps are transformed by TCs and MCs in different ways forming two distinct and parallel information streams.

  20. On the effects of multimodal information integration in multitasking.

    PubMed

    Stock, Ann-Kathrin; Gohil, Krutika; Huster, René J; Beste, Christian

    2017-07-07

    There have recently been considerable advances in our understanding of the neuronal mechanisms underlying multitasking, but the role of multimodal integration for this faculty has remained rather unclear. We examined this issue by comparing different modality combinations in a multitasking (stop-change) paradigm. In-depth neurophysiological analyses of event-related potentials (ERPs) were conducted to complement the obtained behavioral data. Specifically, we applied signal decomposition using second order blind identification (SOBI) to the multi-subject ERP data and source localization. We found that both general multimodal information integration and modality-specific aspects (potentially related to task difficulty) modulate behavioral performance and associated neurophysiological correlates. Simultaneous multimodal input generally increased early attentional processing of visual stimuli (i.e. P1 and N1 amplitudes) as well as measures of cognitive effort and conflict (i.e. central P3 amplitudes). Yet, tactile-visual input caused larger impairments in multitasking than audio-visual input. General aspects of multimodal information integration modulated the activity in the premotor cortex (BA 6) as well as different visual association areas concerned with the integration of visual information with input from other modalities (BA 19, BA 21, BA 37). On top of this, differences in the specific combination of modalities also affected performance and measures of conflict/effort originating in prefrontal regions (BA 6).

  1. Reduced basis ANOVA methods for partial differential equations with high-dimensional random inputs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liao, Qifeng, E-mail: liaoqf@shanghaitech.edu.cn; Lin, Guang, E-mail: guanglin@purdue.edu

    2016-07-15

    In this paper we present a reduced basis ANOVA approach for partial differential equations (PDEs) with random inputs. The ANOVA method combined with stochastic collocation methods provides model reduction in high-dimensional parameter space through decomposing high-dimensional inputs into unions of low-dimensional inputs. In this work, to further reduce the computational cost, we investigate spatial low-rank structures in the ANOVA-collocation method, and develop efficient spatial model reduction techniques using hierarchically generated reduced bases. We present a general mathematical framework of the methodology, validate its accuracy and demonstrate its efficiency with numerical experiments.

  2. Neural pulse frequency modulation of an exponentially correlated Gaussian process

    NASA Technical Reports Server (NTRS)

    Hutchinson, C. E.; Chon, Y.-T.

    1976-01-01

    The effect of NPFM (Neural Pulse Frequency Modulation) on a stationary Gaussian input, namely an exponentially correlated Gaussian input, is investigated with special emphasis on the determination of the average number of pulses in unit time, known also as the average frequency of pulse occurrence. For some classes of stationary input processes where the formulation of the appropriate multidimensional Markov diffusion model of the input-plus-NPFM system is possible, the average impulse frequency may be obtained by a generalization of the approach adopted. The results are approximate and numerical, but are in close agreement with Monte Carlo computer simulation results.

  3. Design Sensitivity Method for Sampling-Based RBDO with Fixed COV

    DTIC Science & Technology

    2015-04-29

    Contours of the input model at the initial design d0 and the RBDO optimum design dopt are shown. As the limit state functions are not linear and some input...

  4. Analytical modeling of operating characteristics of premixing-prevaporizing fuel-air mixing passages. Volume 2: User's manual

    NASA Technical Reports Server (NTRS)

    Anderson, O. L.; Chiappetta, L. M.; Edwards, D. E.; Mcvey, J. B.

    1982-01-01

    A user's manual describing the operation of three computer codes (ADD code, PTRAK code, and VAPDIF code) is presented. The general features of the computer codes, the input/output formats, run streams, and sample input cases are described.

  5. Input and Intake in Language Acquisition

    ERIC Educational Resources Information Center

    Gagliardi, Ann C.

    2012-01-01

    This dissertation presents an approach for a productive way forward in the study of language acquisition, sealing the rift between claims of an innate linguistic hypothesis space and powerful domain general statistical inference. This approach breaks language acquisition into its component parts, distinguishing the input in the environment from…

  6. Performance Prediction Toolkit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chennupati, Gopinath; Santhi, Nanadakishore; Eidenbenz, Stephen

    The Performance Prediction Toolkit (PPT) is a scalable co-design tool that contains the hardware and middleware models, which accept proxy applications as input for runtime prediction. PPT relies on Simian, a parallel discrete event simulation engine in Python or Lua, that uses the process concept, where each computing unit (host, node, core) is a Simian entity. Processes perform their task through message exchanges to remain active, sleep, wake up, begin and end. The PPT hardware model of a compute core (such as a Haswell core) consists of a set of parameters, such as clock speed, memory hierarchy levels, their respective sizes, cache lines, access times for different cache levels, average cycle counts of ALU operations, etc. These parameters are ideally read off a spec sheet or are learned using regression models fitted to hardware counter (PAPI) data. The compute core model offers an API to the software model, a function called time_compute(), which takes as input a tasklist. A tasklist is an unordered set of ALU and other CPU-type operations (in particular virtual memory loads and stores). The PPT application model mimics the loop structure of the application and replaces the computational kernels with a call to the hardware model's time_compute() function, giving as input tasklists that model the compute kernel. A PPT application model thus consists of tasklists representing kernels and the higher-level loop structure that we like to think of as pseudo code. The key challenge for the hardware model's time_compute() function is to translate virtual memory accesses into actual cache hierarchy level hits and misses. PPT also contains another CPU-core-level hardware model, the Analytical Memory Model (AMM). The AMM solves this challenge soundly, whereas our previous alternatives explicitly include the L1, L2, L3 hit rates as inputs to the tasklists. Explicit hit rates inevitably only reflect the application modeler's best guess, perhaps informed by a few small test problems using hardware counters; also, hard-coded hit rates make the hardware model insensitive to changes in cache sizes. Alternatively, we use reuse distance distributions in the tasklists. In general, reuse profiles require the application modeler to run a very expensive trace analysis on the real code, which realistically can be done at best for small examples.
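    A toy stand-in (not PPT's actual API or parameter set) for the idea of a hardware model exposing a time_compute()-style call: an unordered tasklist of operation counts is turned into a predicted runtime for one core using per-operation cycle costs.

        CORE = {
            "clock_hz": 2.3e9,
            "cycles_per_alu_op": 1.0,
            "cycles_per_load": {"L1": 4, "L2": 12, "L3": 36, "DRAM": 200},
        }

        def time_compute(core, tasklist):
            """tasklist: operation counts, e.g. ALU ops and loads resolved per memory level."""
            cycles = tasklist.get("alu_ops", 0) * core["cycles_per_alu_op"]
            for level, count in tasklist.get("loads", {}).items():
                cycles += count * core["cycles_per_load"][level]
            return cycles / core["clock_hz"]

        # a kernel model: in PPT-like usage the counts would come from the application model's loops
        kernel = {"alu_ops": 5e8, "loads": {"L1": 2e8, "L2": 2e7, "L3": 5e6, "DRAM": 1e6}}
        print(f"predicted kernel time: {time_compute(CORE, kernel):.3f} s")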

  7. General purpose bioamplifier study

    NASA Technical Reports Server (NTRS)

    1972-01-01

    Based on known inputs and outputs, a set of specifications were developed for the major characteristics of a general purpose amplifier for use in the Integrated Medical, Behaviorial, and Laboratory Measurement System.

  8. Proposal of digital interface for the system of the air conditioner's remote control: analysis of the system of feedback.

    PubMed

    da Silva de Queiroz Pierre, Raisa; Kawada, Tarô Arthur Tavares; Fontes, André Guimarães

    2012-01-01

    The aim is to develop a proposal for a digital interface for the remote-control system, one that acts as a support system during operation of an air conditioner, suited to users in general and based on ergonomic parameters, with the objective of reducing the problems faced by the user and improving the process. Twenty people were assessed with a questionnaire covering both qualitative and quantitative aspects. The linear method consists of a sequence of steps in which the input of each step depends on the output of the previous one, although the steps are otherwise independent. The feedback process, when necessary, must occur within each step separately.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Delcamp, E.; Lagarde, B.; Polack, F.

    Though optimization software is commonly used in visible optical design, none seems to exist for soft X-ray optics. It is shown here that optimization techniques can be applied with some advantages to X-UV monochromator design. A merit function suitable for minimizing the aberrations is proposed, and the general method of computation is described. Samples of the software inputs and outputs are presented and compared to reference data. As an example of application to soft X-ray monochromator design, the optimization of the soft X-ray monochromator of the ESRF microscopy beamline is presented. Good agreement between the predicted resolution of a modified PGM monochromator and experimental measurements is reported.

  10. Scientific approaches to science policy.

    PubMed

    Berg, Jeremy M

    2013-11-01

    The development of robust science policy depends on use of the best available data, rigorous analysis, and inclusion of a wide range of input. While director of the National Institute of General Medical Sciences (NIGMS), I took advantage of available data and emerging tools to analyze training time distribution by new NIGMS grantees, the distribution of the number of publications as a function of total annual National Institutes of Health support per investigator, and the predictive value of peer-review scores on subsequent scientific productivity. Rigorous data analysis should be used to develop new reforms and initiatives that will help build a more sustainable American biomedical research enterprise.

  11. Sequence invariant state machines

    NASA Technical Reports Server (NTRS)

    Whitaker, S.; Manjunath, S.

    1990-01-01

    A synthesis method and new VLSI architecture are introduced to realize sequential circuits that have the ability to implement any state machine having N states and m inputs, regardless of the actual sequence specified in the flow table. A design method is proposed that utilizes BTS logic to implement regular and dense circuits. A given state sequence can be programmed with power supply connections or dynamically reallocated if stored in a register. Arbitrary flow table sequences can be modified or programmed to dynamically alter the function of the machine. This allows VLSI controllers to be designed with the programmability of a general purpose processor but with the compact size and performance of dedicated logic.

  12. Communication practices and preferences between orthodontists and general dentists.

    PubMed

    Bibona, Kevin; Shroff, Bhavna; Best, Al M; Lindauer, Steven J

    2015-11-01

    To evaluate similarities and differences in orthodontists' and general dentists' perceptions regarding their interdisciplinary communication. Orthodontists (N  =  137) and general dentists (N  =  144) throughout the United States responded to an invitation to participate in a Web-based and mailed survey, respectively. The results indicated that orthodontists communicated with general dentists using the type of media general dentists preferred to use. As treatment complexity increased, orthodontists shifted from one-way forms of communication (letters) to two-way forms of communication (phone calls; P < .05). Both orthodontists and general dentists reported that orthodontists' communication regarding white spot lesions was inadequate. When treating patients with missing or malformed teeth, orthodontists reported that they sought input from the general dentists at a higher rate than the general dentists reported (P < .005). Orthodontists' and general dentists' perceptions of how often specific types of media were used for interdisciplinary communication were generally similar. They differed, however, with regard to how adequately orthodontists communicated with general dentists and how often orthodontists sought input from general dentists. The methods and extent of communication between orthodontists and general dentists need to be determined on a patient-by-patient basis.

  13. Inverse optimal design of input-to-state stabilisation for affine nonlinear systems with input delays

    NASA Astrophysics Data System (ADS)

    Cai, Xiushan; Meng, Lingxin; Zhang, Wei; Liu, Leipo

    2018-03-01

    We establish robustness of the predictor feedback control law to perturbations appearing at the system input for affine nonlinear systems with time-varying input delay and additive disturbances. Furthermore, it is shown that it is inverse optimal with respect to a differential game problem. All of the stability and inverse optimality proofs are based on the infinite-dimensional backstepping transformation and an appropriate Lyapunov functional. A single-link manipulator subject to input delays and disturbances is given to illustrate the validity of the proposed method.

  14. An open source digital servo for atomic, molecular, and optical physics experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leibrandt, D. R., E-mail: david.leibrandt@nist.gov; Heidecker, J.

    2015-12-15

    We describe a general purpose digital servo optimized for feedback control of lasers in atomic, molecular, and optical physics experiments. The servo is capable of feedback bandwidths up to roughly 1 MHz (limited by the 320 ns total latency); loop filter shapes up to fifth order; multiple-input, multiple-output control; and automatic lock acquisition. The configuration of the servo is controlled via a graphical user interface, which also provides a rudimentary software oscilloscope and tools for measurement of system transfer functions. We illustrate the functionality of the digital servo by describing its use in two example scenarios: frequency control of the laser used to probe the narrow clock transition of 27Al+ in an optical atomic clock, and length control of a cavity used for resonant frequency doubling of a laser.

  15. An open source digital servo for atomic, molecular, and optical physics experiments.

    PubMed

    Leibrandt, D R; Heidecker, J

    2015-12-01

    We describe a general purpose digital servo optimized for feedback control of lasers in atomic, molecular, and optical physics experiments. The servo is capable of feedback bandwidths up to roughly 1 MHz (limited by the 320 ns total latency); loop filter shapes up to fifth order; multiple-input, multiple-output control; and automatic lock acquisition. The configuration of the servo is controlled via a graphical user interface, which also provides a rudimentary software oscilloscope and tools for measurement of system transfer functions. We illustrate the functionality of the digital servo by describing its use in two example scenarios: frequency control of the laser used to probe the narrow clock transition of (27)Al(+) in an optical atomic clock, and length control of a cavity used for resonant frequency doubling of a laser.

  16. Around Marshall

    NASA Image and Video Library

    1993-09-15

    Virtual Reality (VR) can provide cost-effective methods to design and evaluate components and systems for maintenance and refurbishment operations. Marshall Space Flight Center (MSFC) is beginning to utilize VR for design analysis in the X-34 experimental reusable space vehicle. Analysts at MSFC's Computer Applications and Virtual Environments (CAVE) used Head Mounted Displays (HMD) (pictured), spatial trackers, and gesture inputs as a means to animate or inhabit a properly sized virtual human model. These models are used in a VR scenario as a way to determine functionality of space and maintenance requirements for the virtual X-34. The primary functions of the virtual X-34 mockup are to support operations development and design analysis for engine removal, the engine compartment, and the aft fuselage. This capability provides general visualization support to engineers and designers at MSFC and to the System Design Freeze Review at Orbital Sciences Corporation (OSC).

  17. Around Marshall

    NASA Image and Video Library

    1993-12-15

    Virtual Reality (VR) can provide cost-effective methods to design and evaluate components and systems for maintenance and refurbishment operations. Marshall Space Flight Center (MSFC) is beginning to utilize VR for design analysis in the X-34 experimental reusable space vehicle. Analysts at MSFC's Computer Applications and Virtual Environments (CAVE) used Head Mounted Displays (HMD) (pictured), spatial trackers, and gesture inputs as a means to animate or inhabit a properly sized virtual human model. These models are used in a VR scenario as a way to determine functionality of space and maintenance requirements for the virtual X-34. The primary functions of the virtual X-34 mockup are to support operations development and design analysis for engine removal, the engine compartment, and the aft fuselage. This capability provides general visualization support to engineers and designers at MSFC and to the System Design Freeze Review at Orbital Sciences Corporation (OSC).

  18. Entropy generation across Earth's collisionless bow shock.

    PubMed

    Parks, G K; Lee, E; McCarthy, M; Goldstein, M; Fu, S Y; Cao, J B; Canu, P; Lin, N; Wilber, M; Dandouras, I; Réme, H; Fazakerley, A

    2012-02-10

    Earth's bow shock is a collisionless shock wave, but entropy has never been directly measured across it. The plasma experiments on Cluster and Double Star measure 3D plasma distributions upstream and downstream of the bow shock, allowing calculation of Boltzmann's entropy function H and evaluation of his famous H theorem, dH/dt ≤ 0. The collisionless Boltzmann (Vlasov) equation predicts that the total entropy does not change if the distribution function across the shock becomes nonthermal, but it allows changes in the entropy density. Here, we present the first direct measurements of entropy density changes across Earth's bow shock and show that the results generally support the model of the Vlasov analysis. These observations are a starting point for a more sophisticated analysis that includes 3D computer modeling of collisionless shocks with input from observed particles, waves, and turbulence.

  19. An open source digital servo for atomic, molecular, and optical physics experiments

    NASA Astrophysics Data System (ADS)

    Leibrandt, D. R.; Heidecker, J.

    2015-12-01

    We describe a general purpose digital servo optimized for feedback control of lasers in atomic, molecular, and optical physics experiments. The servo is capable of feedback bandwidths up to roughly 1 MHz (limited by the 320 ns total latency); loop filter shapes up to fifth order; multiple-input, multiple-output control; and automatic lock acquisition. The configuration of the servo is controlled via a graphical user interface, which also provides a rudimentary software oscilloscope and tools for measurement of system transfer functions. We illustrate the functionality of the digital servo by describing its use in two example scenarios: frequency control of the laser used to probe the narrow clock transition of 27Al+ in an optical atomic clock, and length control of a cavity used for resonant frequency doubling of a laser.

  20. An open source digital servo for atomic, molecular, and optical physics experiments

    PubMed Central

    Leibrandt, D. R.; Heidecker, J.

    2016-01-01

    We describe a general purpose digital servo optimized for feedback control of lasers in atomic, molecular, and optical physics experiments. The servo is capable of feedback bandwidths up to roughly 1 MHz (limited by the 320 ns total latency); loop filter shapes up to fifth order; multiple-input, multiple-output control; and automatic lock acquisition. The configuration of the servo is controlled via a graphical user interface, which also provides a rudimentary software oscilloscope and tools for measurement of system transfer functions. We illustrate the functionality of the digital servo by describing its use in two example scenarios: frequency control of the laser used to probe the narrow clock transition of 27Al+ in an optical atomic clock, and length control of a cavity used for resonant frequency doubling of a laser. PMID:26724014

  1. Designable DNA-binding domains enable construction of logic circuits in mammalian cells.

    PubMed

    Gaber, Rok; Lebar, Tina; Majerle, Andreja; Šter, Branko; Dobnikar, Andrej; Benčina, Mojca; Jerala, Roman

    2014-03-01

    Electronic computer circuits consisting of a large number of connected logic gates of the same type, such as NOR, can be easily fabricated and can implement any logic function. In contrast, designed genetic circuits must employ orthogonal information mediators owing to free diffusion within the cell. Combinatorial diversity and orthogonality can be provided by designable DNA-binding domains. Here, we employed transcription activator-like repressors to optimize the construction of orthogonal, functionally complete NOR gates to construct logic circuits. We used transient transfection to implement all 16 two-input logic functions from combinations of the same type of NOR gates within mammalian cells. Additionally, we present a genetic logic circuit where one input is used to select between an AND and an OR function to process the data input using the same circuit. This demonstrates the potential of designable modular transcription factors for the construction of complex biological information-processing devices.
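
    As a plain-software illustration of the functional completeness exploited here, the sketch below (ordinary Python, not the genetic implementation) composes several of the 16 two-input logic functions from a single NOR primitive.

        def NOR(a: int, b: int) -> int:
            """The single primitive gate: output is 1 only when both inputs are 0."""
            return int(not (a or b))

        # A few of the 16 two-input functions built solely from NOR, using the
        # standard functional-completeness constructions (layer counts illustrative).
        def NOT(a):      return NOR(a, a)
        def OR(a, b):    return NOT(NOR(a, b))
        def AND(a, b):   return NOR(NOT(a), NOT(b))
        def XOR(a, b):   return AND(OR(a, b), NOT(AND(a, b)))

        if __name__ == "__main__":
            for a in (0, 1):
                for b in (0, 1):
                    print(a, b, "AND:", AND(a, b), "OR:", OR(a, b), "XOR:", XOR(a, b))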

  2. Selective Attention, Working Memory, and Executive Function as Potential Independent Sources of Cognitive Dysfunction in Schizophrenia.

    PubMed

    Gold, James M; Robinson, Benjamin; Leonard, Carly J; Hahn, Britta; Chen, Shuo; McMahon, Robert P; Luck, Steven J

    2017-11-11

    People with schizophrenia demonstrate impairments in selective attention, working memory, and executive function. Given the overlap in these constructs, it is unclear if these represent distinct impairments or different manifestations of one higher-order impairment. To examine this question, we administered tasks from the basic cognitive neuroscience literature to measure visual selective attention, working memory capacity, and executive function in 126 people with schizophrenia and 122 healthy volunteers. Patients demonstrated deficits on all tasks with the exception of selective attention guided by strong bottom-up inputs. Although the measures of top-down control of selective attention, working memory, and executive function were all intercorrelated, several sources of evidence indicate that working memory and executive function are separate sources of variance. Specifically, both working memory and executive function independently contributed to the discrimination of group status and independently accounted for variance in overall general cognitive ability as assessed by the MATRICS battery. These two cognitive functions appear to be separable features of the cognitive impairments observed in schizophrenia.

  3. Taxonomic and functional patterns of macrobenthic communities on a high-Arctic shelf: A case study from the Laptev Sea

    NASA Astrophysics Data System (ADS)

    Kokarev, V. N.; Vedenin, A. A.; Basin, A. B.; Azovsky, A. I.

    2017-11-01

    Studies of the functional structure of high-Arctic ecosystems are scarce. We used data on benthic macrofauna from a 500-km latitudinal transect in the eastern Laptev Sea, from the Lena delta to the continental shelf break, to describe spatial patterns in species composition and in taxonomic and functional structure in relation to environmental factors. Both the taxonomy-based approach and Biological Trait Analysis (BTA) yielded similar results and showed a general depth-related gradient in benthic diversity and composition. This congruence between the taxonomic and functional dimensions of community organization suggests that the same environmental factors (primarily riverine input and the regime of sedimentation) have a similar effect on both community structure and functioning. BTA also revealed a distinct functional structure at stations situated in the eastern Lena valley, with a dominance of motile, burrowing subsurface deposit-feeders and an absence of sedentary tube-dwelling forms. The overall spatial distribution of benthic assemblages corresponds well to that described in preceding decades, evidencing the long-term stability of the bottom ecosystem. A strong linear relationship between species and trait diversity, however, indicates low functional redundancy, which potentially makes the ecosystem susceptible to species loss or structural shifts.

  4. User's manual for a parameter identification technique. [with options for model simulation for fixed input forcing functions and identification from wind tunnel and flight measurements

    NASA Technical Reports Server (NTRS)

    Kanning, G.

    1975-01-01

    A digital computer program written in FORTRAN is presented that implements the system identification theory for deterministic systems using input-output measurements. The user supplies programs simulating the mathematical model of the physical plant whose parameters are to be identified. The user may choose any one of three options. The first option allows for a complete model simulation for fixed input forcing functions. The second option identifies up to 36 parameters of the model from wind tunnel or flight measurements. The third option performs a sensitivity analysis for up to 36 parameters. The use of each option is illustrated with an example using input-output measurements for a helicopter rotor tested in a wind tunnel.
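
    The identification option fits model parameters to input-output measurements. As a minimal sketch of the general idea (not the program's own algorithm), the fragment below assumes a model that is linear in its parameters so that ordinary least squares applies; the input, parameters, and noise level are all synthetic.

        import numpy as np

        # Synthetic "measurements": y = theta1*u + theta2*u_prev + noise
        rng = np.random.default_rng(0)
        u = rng.normal(size=200)                      # input forcing function
        theta_true = np.array([1.8, -0.7])
        y = theta_true[0]*u[1:] + theta_true[1]*u[:-1] + 0.05*rng.normal(size=199)

        # Regressor matrix built from input-output data, then a least-squares fit.
        Phi = np.column_stack([u[1:], u[:-1]])
        theta_hat, *_ = np.linalg.lstsq(Phi, y, rcond=None)
        print("identified parameters:", theta_hat)    # close to theta_true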

  5. Input and output constraints-based stabilisation of switched nonlinear systems with unstable subsystems and its application

    NASA Astrophysics Data System (ADS)

    Chen, Chao; Liu, Qian; Zhao, Jun

    2018-01-01

    This paper studies the problem of stabilisation of switched nonlinear systems with output and input constraints. We propose a recursive approach to solve this issue. None of the subsystems are assumed to be stabilisable, while the switched system is stabilised by dual design of controllers for the subsystems and a switching law. When dealing only with bounded input, we provide nested switching controllers using an extended backstepping procedure. If both input and output constraints are taken into consideration, a barrier Lyapunov function is employed to construct multiple Lyapunov functions for the switched nonlinear system in the backstepping procedure. As a practical example, the control design for an equilibrium manifold expansion model of an aero-engine is given to demonstrate the effectiveness of the proposed design method.

  6. Disaggregated seismic hazard and the elastic input energy spectrum: An approach to design earthquake selection

    NASA Astrophysics Data System (ADS)

    Chapman, Martin Colby

    1998-12-01

    The design earthquake selection problem is fundamentally probabilistic. Disaggregation of a probabilistic model of the seismic hazard offers a rational and objective approach that can identify the most likely earthquake scenario(s) contributing to hazard. An ensemble of time series can be selected on the basis of the modal earthquakes derived from the disaggregation. This gives a useful time-domain realization of the seismic hazard, to the extent that a single motion parameter captures the important time-domain characteristics. A possible limitation to this approach arises because most currently available motion prediction models for peak ground motion or oscillator response are essentially independent of duration, and modal events derived using the peak motions for the analysis may not represent the optimal characterization of the hazard. The elastic input energy spectrum is an alternative to the elastic response spectrum for these types of analyses. The input energy combines the elements of amplitude and duration into a single parameter description of the ground motion that can be readily incorporated into standard probabilistic seismic hazard analysis methodology. This use of the elastic input energy spectrum is examined. Regression analysis is performed using strong motion data from Western North America and consistent data processing procedures for both the absolute input energy equivalent velocity, (V_ea), and the elastic pseudo-relative velocity response (PSV) in the frequency range 0.5 to 10 Hz. The results show that the two parameters can be successfully fit with identical functional forms. The dependence of V_ea and PSV upon (NEHRP) site classification is virtually identical. The variance of V_ea is uniformly less than that of PSV, indicating that V_ea can be predicted with slightly less uncertainty as a function of magnitude, distance and site classification. The effects of site class are important at frequencies less than a few Hertz. The regression modeling does not resolve significant effects due to site class at frequencies greater than approximately 5 Hz. Disaggregation of general seismic hazard models using V_ea indicates that the modal magnitudes for the higher frequency oscillators tend to be larger, and vary less with oscillator frequency, than those derived using PSV. Insofar as the elastic input energy may be a better parameter for quantifying the damage potential of ground motion, its use in probabilistic seismic hazard analysis could provide an improved means for selecting earthquake scenarios and establishing design earthquakes for many types of engineering analyses.
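
    A minimal sketch of how an absolute input energy equivalent velocity V_ea can be computed for a linear single-degree-of-freedom oscillator is shown below. It assumes the usual definition V_ea = sqrt(2 E_a / m), with E_a the work done by the ground motion on the oscillator mass, and uses a toy acceleration record rather than the strong-motion data of the study.

        import numpy as np

        def v_ea(ag, dt, freq, zeta=0.05):
            """Absolute input-energy equivalent velocity for a linear SDOF oscillator.

            ag   : ground acceleration time series [m/s^2]
            dt   : sample interval [s]
            freq : oscillator frequency [Hz]
            Assumed definition: E_a = integral of (total accel)*(ground velocity) dt
            for a unit mass, V_ea = sqrt(2*E_a).
            """
            w = 2*np.pi*freq
            u, v = 0.0, 0.0            # relative displacement and velocity
            vg = np.cumsum(ag)*dt      # ground velocity by simple integration
            E = 0.0
            for k in range(len(ag)):
                a_rel = -ag[k] - 2*zeta*w*v - w*w*u   # relative acceleration
                a_tot = a_rel + ag[k]                 # total (absolute) acceleration
                E += a_tot * vg[k] * dt               # work done on the unit mass
                v += a_rel*dt                         # semi-implicit Euler update
                u += v*dt
            return np.sqrt(max(2*E, 0.0))

        if __name__ == "__main__":
            dt = 0.01
            t = np.arange(0, 20, dt)
            ag = np.exp(-0.2*t)*np.sin(2*np.pi*1.5*t)   # toy "ground motion"
            for f in (0.5, 1.0, 2.0, 5.0):
                print(f"{f:4.1f} Hz  V_ea = {v_ea(ag, dt, f):.3f} m/s")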

  7. Nonlinear Dynamic Models in Advanced Life Support

    NASA Technical Reports Server (NTRS)

    Jones, Harry

    2002-01-01

    To facilitate analysis, ALS systems are often assumed to be linear and time invariant, but they usually have important nonlinear and dynamic aspects. Nonlinear dynamic behavior can be caused by time varying inputs, changes in system parameters, nonlinear system functions, closed loop feedback delays, and limits on buffer storage or processing rates. Dynamic models are usually cataloged according to the number of state variables. The simplest dynamic models are linear, using only integration, multiplication, addition, and subtraction of the state variables. A general linear model with only two state variables can produce all the possible dynamic behavior of linear systems with many state variables, including stability, oscillation, or exponential growth and decay. Linear systems can be described using mathematical analysis. Nonlinear dynamics can be fully explored only by computer simulations of models. Unexpected behavior is produced by simple models having only two or three state variables with simple mathematical relations between them. Closed loop feedback delays are a major source of system instability. Exceeding limits on buffer storage or processing rates forces systems to change operating mode. Different equilibrium points may be reached from different initial conditions. Instead of one stable equilibrium point, the system may have several equilibrium points, oscillate at different frequencies, or even behave chaotically, depending on the system inputs and initial conditions. The frequency spectrum of an output oscillation may contain harmonics and the sums and differences of input frequencies, but it may also contain a stable limit cycle oscillation not related to input frequencies. We must investigate the nonlinear dynamic aspects of advanced life support systems to understand and counter undesirable behavior.
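
    As an illustration of the claim that two-state nonlinear models can only be fully explored by simulation, the sketch below integrates the Van der Pol oscillator (chosen here as a generic example, not an ALS model) and shows that, from small initial conditions, it settles onto a stable limit-cycle oscillation.

        import numpy as np

        def simulate_van_der_pol(mu=2.0, x0=0.1, v0=0.0, dt=0.001, steps=40000):
            """Two-state nonlinear model: dx/dt = v, dv/dt = mu*(1 - x^2)*v - x."""
            x, v = x0, v0
            xs = np.empty(steps)
            for k in range(steps):
                dx = v
                dv = mu*(1.0 - x*x)*v - x
                x += dx*dt                 # forward-Euler integration
                v += dv*dt
                xs[k] = x
            return xs

        if __name__ == "__main__":
            xs = simulate_van_der_pol()
            # The late-time amplitude is set by the limit cycle, not the initial state.
            print("late-time oscillation amplitude ~", round(np.abs(xs[-10000:]).max(), 2))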

  8. Viewing brain processes as Critical State Transitions across levels of organization: Neural events in Cognition and Consciousness, and general principles.

    PubMed

    Werner, Gerhard

    2009-04-01

    In this theoretical and speculative essay, I propose that insights into certain aspects of neural system functions can be gained from viewing brain function in terms of the branch of Statistical Mechanics currently referred to as "Modern Critical Theory" [Stanley, H.E., 1987. Introduction to Phase Transitions and Critical Phenomena. Oxford University Press; Marro, J., Dickman, R., 1999. Nonequilibrium Phase Transitions in Lattice Models. Cambridge University Press, Cambridge, UK]. The application of this framework is here explored in two stages: in the first place, its principles are applied to state transitions in global brain dynamics, with benchmarks of Cognitive Neuroscience providing the relevant empirical reference points. The second stage generalizes to suggest in more detail how the same principles could also apply to the relation between other levels of the structural-functional hierarchy of the nervous system and between neural assemblies. In this view, state transitions resulting from the processing at one level are the input to the next, in the image of a 'bucket brigade', with the content of each bucket being passed on along the chain, after having undergone a state transition. The unique features of a process of this kind will be discussed and illustrated.

  9. Inhibitory dendrite dynamics as a general feature of the adult cortical microcircuit.

    PubMed

    Chen, Jerry L; Flanders, Genevieve H; Lee, Wei-Chung Allen; Lin, Walter C; Nedivi, Elly

    2011-08-31

    The mammalian neocortex is functionally subdivided into architectonically distinct regions that process various types of information based on their source of afferent input. Yet, the modularity of neocortical organization in terms of cell type and intrinsic circuitry allows afferent drive to continuously reassign cortical map space. New aspects of cortical map plasticity include dynamic turnover of dendritic spines on pyramidal neurons and remodeling of interneuron dendritic arbors. While spine remodeling occurs in multiple cortical regions, it is not yet known whether interneuron dendrite remodeling is common across primary sensory and higher-level cortices. It is also unknown whether, like pyramidal dendrites, inhibitory dendrites respect functional domain boundaries. Given the importance of the inhibitory circuitry to adult cortical plasticity and the reorganization of cortical maps, we sought to address these questions by using two-photon microscopy to monitor interneuron dendritic arbors of thy1-GFP-S transgenic mice expressing GFP in neurons sparsely distributed across the superficial layers of the neocortex. We find that interneuron dendritic branch tip remodeling is a general feature of the adult cortical microcircuit, and that remodeling rates are similar across primary sensory regions of different modalities, but may differ in magnitude between primary sensory versus higher cortical areas. We also show that branch tip remodeling occurs in bursts and respects functional domain boundaries.

  10. Multifactorial determinants of cognition — Thyroid function is not the only one

    PubMed Central

    Moncayo, Roy; Ortner, Karina

    2015-01-01

    Background: Since the 1960s, hypothyroidism together with iodine deficiency has been considered a principal determinant of cognitive development. Following iodine supplementation programs and improved treatment options for hypothyroidism, this relation might not be valid in 2015. On the other hand, the neurosciences have added different inputs also related to cognition. Scope of review: We examine the characteristics of the original and current publications on thyroid function and cognition and also add some general determinants of intelligence and cognition. One central issue for us is the relation of stress to cognition, knowing that both physical and psychological stress are frequent elements in subjects with thyroid dysfunction. We have considered a special type of stress called prenatal stress, which can influence cognitive functions. Fear and anxiety can be intermingled, requiring mechanisms of fear extinction. Major conclusions: Recent studies have failed to show an influence of thyroid medication during pregnancy on intellectual development. Neuroscience offers a better explanation of cognition than hypothyroidism and iodine deficiency. Additional factors relevant to cognition are nutrition, infection, prenatal stress, and early life stress. In turn, stress is related to low magnesium levels. Magnesium supplementation can correct both latent hypothyroidism and acquired mild cognitive deficits. General significance: Cognition is a complex process that depends on many determinants and not only on thyroid function. Magnesium deficiency appears to be a basic mechanism for changes in thyroid function as well as in cognition. PMID:26672993

  11. The Influence of Prosodic Input in the Second Language Classroom: Does It Stimulate Child Acquisition of Word Order and Function Words?

    ERIC Educational Resources Information Center

    Campfield, Dorota E.; Murphy, Victoria A.

    2017-01-01

    This paper reports on an intervention study with young Polish beginners (mean age: 8 years, 3 months) learning English at school. It seeks to identify whether exposure to rhythmic input improves knowledge of word order and function words. The "prosodic bootstrapping hypothesis", relevant in developmental psycholinguistics, provided the…

  12. Full wave modulator-demodulator amplifier apparatus. [for generating rectified output signal

    NASA Technical Reports Server (NTRS)

    Black, J. M. (Inventor)

    1974-01-01

    A full-wave modulator-demodulator apparatus is described, including an operational amplifier having a first input terminal coupled to a circuit input terminal and a second input terminal alternately coupled to the circuit input terminal and to circuit ground by a switching circuit responsive to a phase reference signal, so that the operational amplifier is alternately switched between a non-inverting mode and an inverting mode. The switching circuit includes three field-effect transistors operatively associated to provide the desired switching function in response to an alternating reference signal of the same frequency as an AC input signal applied to the circuit input terminal.

  13. General mechanism for the 1/f noise

    NASA Astrophysics Data System (ADS)

    Yadav, Avinash Chand; Ramaswamy, Ramakrishna; Dhar, Deepak

    2017-08-01

    We consider the response of a memoryless nonlinear device that acts instantaneously, converting an input signal ξ(t) into an output η(t) at the same time t. For input Gaussian noise with power spectrum 1/f^α, the nonlinearity can modify the spectral index of the output to give a spectrum that varies as 1/f^α′ with α′ ≠ α. We show that the value of α′ depends on the nonlinear transformation and can be tuned continuously. This provides a general mechanism for the ubiquitous 1/f noise found in nature.
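
    The mechanism can be illustrated numerically: generate Gaussian noise with a 1/f^α spectrum, pass it through a memoryless nonlinearity, and estimate the spectral index of the output. The sketch below does this with a squaring nonlinearity; the particular α, sample length, and slope estimator are illustrative choices, not those of the paper.

        import numpy as np

        def gaussian_power_law_noise(n, alpha, rng):
            """Gaussian noise whose power spectrum falls off roughly as 1/f^alpha."""
            f = np.fft.rfftfreq(n, d=1.0)
            amp = np.zeros_like(f)
            amp[1:] = f[1:]**(-alpha/2.0)          # amplitude ~ f^(-alpha/2)
            spec = amp*(rng.normal(size=f.size) + 1j*rng.normal(size=f.size))
            x = np.fft.irfft(spec, n)
            return x/np.std(x)

        def spectral_index(x):
            """Least-squares slope of log-power versus log-frequency."""
            f = np.fft.rfftfreq(x.size, d=1.0)[1:]
            p = np.abs(np.fft.rfft(x))[1:]**2
            slope, _ = np.polyfit(np.log(f), np.log(p), 1)
            return -slope

        if __name__ == "__main__":
            rng = np.random.default_rng(1)
            xi = gaussian_power_law_noise(2**16, alpha=0.8, rng=rng)
            eta = xi**2                            # memoryless nonlinear device
            print("input alpha  ~", round(spectral_index(xi), 2))
            print("output alpha ~", round(spectral_index(eta), 2))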

  14. Estimating net rainfall, evaporation and water storage of a bare soil from sequential L-band emissivities

    NASA Technical Reports Server (NTRS)

    Stroosnijder, L.; Lascano, R. J.; Newton, R. W.; Vanbavel, C. H. M.

    1984-01-01

    A general method to use a time series of L-band emissivities as an input to a hydrological model for continuously monitoring the net rainfall and evaporation as well as the water content over the entire soil profile is proposed. The model requires a sufficiently accurate and general relation between soil emissivity and surface moisture content. A model which requires the soil hydraulic properties as an additional input, but does not need any weather data was developed. The method is shown to be numerically consistent.

  15. Distributional Effects and Individual Differences in L2 Morphology Learning

    ERIC Educational Resources Information Center

    Brooks, Patricia J.; Kwoka, Nicole; Kempe, Vera

    2017-01-01

    Second language (L2) learning outcomes may depend on the structure of the input and learners' cognitive abilities. This study tested whether less predictable input might facilitate learning and generalization of L2 morphology while evaluating contributions of statistical learning ability, nonverbal intelligence, phonological short-term memory, and…

  16. Model Based Predictive Control of Multivariable Hammerstein Processes with Fuzzy Logic Hypercube Interpolated Models

    PubMed Central

    Coelho, Antonio Augusto Rodrigues

    2016-01-01

    This paper introduces the Fuzzy Logic Hypercube Interpolator (FLHI) and demonstrates applications in control of multiple-input single-output (MISO) and multiple-input multiple-output (MIMO) processes with Hammerstein nonlinearities. FLHI consists of a Takagi-Sugeno fuzzy inference system in which membership functions act as kernel functions of an interpolator. Conjunction of membership functions in a unitary hypercube space enables multivariable interpolation in N dimensions. Membership functions act as interpolation kernels, such that the choice of membership functions determines the interpolation characteristics, allowing FLHI to behave as a nearest-neighbor, linear, cubic, spline, or Lanczos interpolator, to name a few. The proposed interpolator is presented as a solution to the modeling problem of static nonlinearities, since it is capable of modeling both a function and its inverse function. Three study cases from the literature are presented: a single-input single-output (SISO) system, a MISO system, and a MIMO system. Good results are obtained regarding performance metrics such as set-point tracking, control variation, and robustness. The results demonstrate the applicability of the proposed method in modeling Hammerstein nonlinearities and their inverse functions for implementation of an output compensator with Model Based Predictive Control (MBPC), in particular Dynamic Matrix Control (DMC). PMID:27657723
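
    A one-dimensional sketch of the membership-functions-as-kernels idea is given below (illustrative only, not the FLHI code): on a uniform grid, triangular memberships used in a Takagi-Sugeno weighted average reproduce piecewise-linear interpolation, and swapping the kernel would change the interpolation law.

        import numpy as np

        def triangular_membership(x, centers):
            """Triangular membership degrees over a uniform 1-D grid."""
            h = centers[1] - centers[0]
            mu = np.clip(1.0 - np.abs(x - centers)/h, 0.0, None)
            return mu / mu.sum()

        def fuzzy_interpolate(x, centers, values):
            """Zero-order Takagi-Sugeno output: membership-weighted average of the
            grid values. With triangular kernels this equals linear interpolation."""
            return float(np.dot(triangular_membership(x, centers), values))

        if __name__ == "__main__":
            centers = np.linspace(0.0, 1.0, 6)        # rule centers on [0, 1]
            values = np.sin(2*np.pi*centers)          # consequents = sampled function
            for xq in (0.15, 0.5, 0.83):
                print(xq, fuzzy_interpolate(xq, centers, values),
                      np.interp(xq, centers, values))  # matches np.interp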

  17. Numerical Function Generators Using LUT Cascades

    DTIC Science & Technology

    2007-06-01

    either algebraically (for example, sin(x)) or as a table of input/output values. The user defines the numerical function by using the syntax of Scilab ... defined function in Scilab or specify it directly. Note that, by changing the parser of our system, any format can be used for the design entry. First ... "Methods for Multiple-Valued Input Address Generators," Proc. 36th IEEE Int'l Symp. Multiple-Valued Logic (ISMVL '06), May 2006. [29] Scilab 3.0, INRIA-ENPC

  18. Quantum theory of multiple-input-multiple-output Markovian feedback with diffusive measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chia, A.; Wiseman, H. M.

    2011-07-15

    Feedback control engineers have been interested in multiple-input-multiple-output (MIMO) extensions of single-input-single-output (SISO) results of various kinds due to its rich mathematical structure and practical applications. An outstanding problem in quantum feedback control is the extension of the SISO theory of Markovian feedback by Wiseman and Milburn [Phys. Rev. Lett. 70, 548 (1993)] to multiple inputs and multiple outputs. Here we generalize the SISO homodyne-mediated feedback theory to allow for multiple inputs, multiple outputs, and arbitrary diffusive quantum measurements. We thus obtain a MIMO framework which resembles the SISO theory and whose additional mathematical structure is highlighted by the extensive use of vector-operator algebra.

  19. Effect of Increased Intensity of Physiotherapy on Patient Outcomes After Stroke: An Economic Literature Review and Cost-Effectiveness Analysis

    PubMed Central

    Chan, B

    2015-01-01

    Background: Functional improvements have been seen in stroke patients who have received an increased intensity of physiotherapy. This requires additional costs in the form of increased physiotherapist time. Objectives: The objective of this economic analysis is to determine the cost-effectiveness of increasing the intensity of physiotherapy (duration and/or frequency) during inpatient rehabilitation after stroke, from the perspective of the Ontario Ministry of Health and Long-Term Care. Data Sources: The inputs for our economic evaluation were extracted from articles published in peer-reviewed journals and from reports from government sources or the Canadian Stroke Network. Where published data were not available, we sought expert opinion and used inputs based on the experts' estimates. Review Methods: The primary outcome we considered was cost per quality-adjusted life-year (QALY). We also evaluated functional strength training because of its similarities to physiotherapy. We used a 2-state Markov model to evaluate the cost-effectiveness of functional strength training and increased physiotherapy intensity for stroke inpatient rehabilitation. The model had a lifetime time frame with a 5% annual discount rate. We then used sensitivity analyses to evaluate uncertainty in the model inputs. Results: We found that functional strength training and higher-intensity physiotherapy resulted in lower costs and improved outcomes over a lifetime. However, our sensitivity analyses revealed high levels of uncertainty in the model inputs, and therefore in the results. Limitations: There is a high level of uncertainty in this analysis due to the uncertainty in model inputs, with some of the major inputs based on expert panel consensus or expert opinion. In addition, the utility outcomes were based on a clinical study conducted in the United Kingdom (i.e., 1 study only, and not in an Ontario or Canadian setting). Conclusions: Functional strength training and higher-intensity physiotherapy may result in lower costs and improved health outcomes. However, these results should be interpreted with caution. PMID:26366241
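
    The structure of a 2-state Markov cohort model with a lifetime horizon and 5% annual discounting can be sketched as below; all transition probabilities, costs, and utilities are placeholders of my own, not the inputs of the Ontario analysis, and serve only to show how an incremental cost per QALY falls out of such a model.

        # States: 0 = alive post-stroke, 1 = dead. Annual cycle, lifetime horizon.
        def run_cohort(p_death, annual_cost, utility, years=40, discount=0.05):
            alive, cost, qaly = 1.0, 0.0, 0.0
            for t in range(years):
                d = 1.0/(1.0 + discount)**t          # discount factor for year t
                cost += alive*annual_cost*d
                qaly += alive*utility*d
                alive *= (1.0 - p_death)             # transition to the dead state
            return cost, qaly

        if __name__ == "__main__":
            # Placeholder inputs only, not values from the report.
            std_cost, std_qaly = run_cohort(p_death=0.06, annual_cost=9000.0, utility=0.60)
            hi_cost,  hi_qaly  = run_cohort(p_death=0.06, annual_cost=9300.0, utility=0.63)
            icer = (hi_cost - std_cost)/(hi_qaly - std_qaly)
            print(f"incremental cost per QALY ~ {icer:,.0f}")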

  20. An optimal general type-2 fuzzy controller for Urban Traffic Network.

    PubMed

    Khooban, Mohammad Hassan; Vafamand, Navid; Liaghat, Alireza; Dragicevic, Tomislav

    2017-01-01

    The urban traffic network model is illustrated by state charts and an object diagram. However, these have limitations in showing the behavioral perspective of the traffic information flow. Consequently, a state space model is used to calculate the half-value waiting time of vehicles. In this study, a combination of general type-2 fuzzy logic sets and the Modified Backtracking Search Algorithm (MBSA) is used to control the traffic signal scheduling and phase succession so as to guarantee a smooth flow of traffic with the least wait times and average queue length. The parameters of the input and output membership functions are optimized simultaneously by the novel heuristic MBSA. A comparison is made between the achieved results and those of optimal and conventional type-1 fuzzy logic controllers.

  1. Blind Channel Equalization Using Constrained Generalized Pattern Search Optimization and Reinitialization Strategy

    NASA Astrophysics Data System (ADS)

    Zaouche, Abdelouahib; Dayoub, Iyad; Rouvaen, Jean Michel; Tatkeu, Charles

    2008-12-01

    We propose a globally convergent baud-spaced blind equalization method in this paper. This method is based on the application of both generalized pattern search optimization and channel surfing reinitialization. The unimodal cost function used relies on higher-order statistics, and its optimization is achieved using a pattern search algorithm. Since convergence to the global minimum is not unconditionally guaranteed, we make use of the channel surfing reinitialization (CSR) strategy to find the right global minimum. The proposed algorithm is analyzed, and simulation results using a severely frequency-selective propagation channel are given. Detailed comparisons with the constant modulus algorithm (CMA) are highlighted. The proposed algorithm's performance is evaluated in terms of intersymbol interference, normalized received signal constellations, and root mean square error vector magnitude. In the case of nonconstant modulus input signals, our algorithm significantly outperforms the CMA algorithm with a full channel surfing reinitialization strategy. However, comparable performance is obtained for constant modulus signals.
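
    For reference, the constant modulus algorithm used as the comparison baseline can be sketched as follows: a standard textbook CMA tap update run over a toy frequency-selective channel, not the authors' pattern-search equalizer.

        import numpy as np

        def cma_equalize(received, n_taps=11, mu=1e-3, r2=1.0):
            """CMA update: w <- w - mu * (|y|^2 - R2) * y * conj(x), with y = w.x."""
            w = np.zeros(n_taps, dtype=complex)
            w[n_taps//2] = 1.0                        # center-spike initialization
            out = []
            for k in range(n_taps, len(received)):
                x = received[k-n_taps:k][::-1]        # regressor, most recent first
                y = np.dot(w, x)
                e = (np.abs(y)**2 - r2) * y           # CMA error term
                w -= mu * e * np.conj(x)
                out.append(y)
            return np.array(out), w

        if __name__ == "__main__":
            rng = np.random.default_rng(3)
            sym = rng.choice([1+1j, 1-1j, -1+1j, -1-1j], size=20000)/np.sqrt(2)  # QPSK
            chan = np.array([1.0, 0.3+0.2j, -0.1j])   # toy frequency-selective channel
            rx = np.convolve(sym, chan, mode="full")[:sym.size]
            rx += 0.01*(rng.normal(size=rx.size) + 1j*rng.normal(size=rx.size))
            y, w = cma_equalize(rx)
            print("dispersion after adaptation:",
                  round(float(np.mean((np.abs(y[-2000:])**2 - 1.0)**2)), 4))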

  2. Hydrogeologic controls on summer stream temperatures in the McKenzie River basin, Oregon

    Treesearch

    Christina Tague; Michael Farrell; Gordon Grant; Sarah Lewis; Serge Rey

    2007-01-01

    Stream temperature is a complex function of energy inputs including solar radiation and latent and sensible heat transfer. In streams where groundwater inputs are significant, energy input through advection can also be an important control on stream temperature. For an individual stream reach, models of stream temperature can take advantage of direct measurement or...

  3. Life cycle efficiency of beef production: II. Relationship of cow efficiency ratios to traits of the dam and progeny weaned.

    PubMed

    Davis, M E; Rutledge, J J; Cundiff, L V; Hauser, E R

    1983-10-01

    Several measures of life cycle cow efficiency were calculated using weights and individual feed consumptions recorded on 160 dams of beef, dairy and beef X dairy breeding and their progeny. Ratios of output to input were used to estimate efficiency, where outputs included weaning weights of progeny plus salvage value of the dam and inputs included creep feed consumed by progeny plus feed consumed by the dam over her entire lifetime. In one approach to estimating efficiency, inputs and outputs were weighted by probabilities that were a function of the cow herd age distribution and percentage calf crop in a theoretical herd. The second approach to estimating cow efficiency involved dividing the sum of the weights by the sum of the feed consumption values, with all pieces of information being given equal weighting. Relationships among efficiency estimates and various traits of dams and progeny were examined. Weights, heights, and weight:height ratios of dams at 240 d of age were not correlated significantly with subsequent efficiency of calf production, indicating that indirect selection for lifetime cow efficiency at an early age based on these traits would be ineffective. However, females exhibiting more efficient weight gains from 240 d to first calving tended to become more efficient dams. Correlations of efficiency with weight of dam at calving and at weaning were negative and generally highly significant. Height at withers was negatively related to efficiency. Ratio of weight to height indicated that fatter dams generally were less efficient. The effect of milk production on efficiency depended upon the breed combinations involved. Dams calving for the first time at an early age and continuing to calve at short intervals were superior in efficiency. Weaning rate was closely related to life cycle efficiency. Large negative correlations between efficiency and feed consumption of dams were observed, while correlations of efficiency with progeny weights and feed consumptions in individual parities tended to be positive though nonsignificant. However, correlations of efficiency with accumulative progeny weights and feed consumptions generally were significant.

  4. Separation of input function for rapid measurement of quantitative CMRO2 and CBF in a single PET scan with a dual tracer administration method

    NASA Astrophysics Data System (ADS)

    Kudomi, Nobuyuki; Watabe, Hiroshi; Hayashi, Takuya; Iida, Hidehiro

    2007-04-01

    Cerebral metabolic rate of oxygen (CMRO2), oxygen extraction fraction (OEF) and cerebral blood flow (CBF) images can be quantified using positron emission tomography (PET) by administering (15)O-labelled water (H2(15)O) and oxygen ((15)O2). Conventionally, those images are measured with separate scans for three tracers, C(15)O for CBV, H2(15)O for CBF and (15)O2 for CMRO2, and there are additional waiting times between the scans in order to minimize the influence of the radioactivity from the previous tracers, which results in a relatively long study period. We have proposed a dual tracer autoradiographic (DARG) approach (Kudomi et al 2005), which enabled us to measure CBF, OEF and CMRO2 rapidly by sequentially administering H2(15)O and (15)O2 within a short time. Because quantitative CBF and CMRO2 values are sensitive to the arterial input function, it is necessary to obtain an accurate input function, and a drawback of this approach is that it requires separation of the measured arterial blood time-activity curve (TAC) into pure water and oxygen input functions in the presence of residual radioactivity from the first injected tracer. For this separation, frequent manual sampling was required. The present paper describes two calculation methods, namely a linear and a model-based method, to separate the measured arterial TAC into its water and oxygen components. In order to validate these methods, we first generated a blood TAC for the DARG approach by combining the water and oxygen input functions obtained in a series of PET studies on normal human subjects. The combined data were then separated into water and oxygen components by the present methods. CBF and CMRO2 were calculated using those separated input functions and the tissue TAC. The quantitative errors in the CBF and CMRO2 values obtained by the DARG approach did not exceed the acceptable range, i.e., errors in those values were within 5%, when the area under the curve of the input function of the second tracer was larger than half of that of the first. The bias and deviation in those values were also comparable to those of the conventional method when noise was imposed on the arterial TAC. We conclude that the present calculation-based methods could be of use for quantitatively calculating CBF and CMRO2 with the DARG approach.

  5. Linkage mechanisms in the vertebrate skull: Structure and function of three-dimensional, parallel transmission systems.

    PubMed

    Olsen, Aaron M; Westneat, Mark W

    2016-12-01

    Many musculoskeletal systems, including the skulls of birds, fishes, and some lizards consist of interconnected chains of mobile skeletal elements, analogous to linkage mechanisms used in engineering. Biomechanical studies have applied linkage models to a diversity of musculoskeletal systems, with previous applications primarily focusing on two-dimensional linkage geometries, bilaterally symmetrical pairs of planar linkages, or single four-bar linkages. Here, we present new, three-dimensional (3D), parallel linkage models of the skulls of birds and fishes and use these models (available as free kinematic simulation software), to investigate structure-function relationships in these systems. This new computational framework provides an accessible and integrated workflow for exploring the evolution of structure and function in complex musculoskeletal systems. Linkage simulations show that kinematic transmission, although a suitable functional metric for linkages with single rotating input and output links, can give misleading results when applied to linkages with substantial translational components or multiple output links. To take into account both linear and rotational displacement we define force mechanical advantage for a linkage (analogous to lever mechanical advantage) and apply this metric to measure transmission efficiency in the bird cranial mechanism. For linkages with multiple, expanding output points we propose a new functional metric, expansion advantage, to measure expansion amplification and apply this metric to the buccal expansion mechanism in fishes. Using the bird cranial linkage model, we quantify the inaccuracies that result from simplifying a 3D geometry into two dimensions. We also show that by combining single-chain linkages into parallel linkages, more links can be simulated while decreasing or maintaining the same number of input parameters. This generalized framework for linkage simulation and analysis can accommodate linkages of differing geometries and configurations, enabling novel interpretations of the mechanics of force transmission across a diversity of vertebrate feeding mechanisms and enhancing our understanding of musculoskeletal function and evolution. J. Morphol. 277:1570-1583, 2016.
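
    A planar four-bar sketch of the kinematic transmission idea is given below. The geometry is a toy crank-rocker of my own choosing, not one of the authors' 3D models, and the transmission coefficient is estimated as dtheta_out/dtheta_in by finite differences.

        import numpy as np

        def output_angle(theta, a=1.0, b=2.5, c=2.0, g=3.0):
            """Planar four-bar: ground pivots at (0,0) and (g,0), input crank length a
            at angle theta, coupler b, output rocker c. Returns one assembly branch of
            the output-rocker angle by intersecting two circles."""
            B = np.array([a*np.cos(theta), a*np.sin(theta)])
            D = np.array([g, 0.0])
            d = np.linalg.norm(B - D)
            proj = (c*c - b*b + d*d)/(2*d)     # distance from D toward B to the chord
            h2 = c*c - proj*proj
            if h2 < 0:
                raise ValueError("linkage cannot be assembled at this input angle")
            e = (B - D)/d                      # unit vector from D toward B
            nrm = np.array([-e[1], e[0]])      # perpendicular to e
            C = D + proj*e + np.sqrt(h2)*nrm   # pick the 'upper' assembly branch
            return np.arctan2(C[1] - D[1], C[0] - D[0])

        if __name__ == "__main__":
            dth = 1e-4
            for theta in np.radians([30, 60, 90, 120]):
                kt = (output_angle(theta + dth) - output_angle(theta - dth))/(2*dth)
                print(f"theta = {np.degrees(theta):5.1f} deg   KT = dphi/dtheta = {kt:+.3f}")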

  6. Advanced Actuation Systems Development. Volume 2

    DTIC Science & Technology

    1989-08-01

    and unloaded performance characteristics of a test specimen produced by General Dynamics Corporation as a feasibility model. The actuation system for ... changing the camber of the test specimen is unique and was evaluated with a series of input/output measurements. The testing verified the general ...

  7. Branch Input Resistance and Steady Attenuation for Input to One Branch of a Dendritic Neuron Model

    PubMed Central

    Rall, Wilfrid; Rinzel, John

    1973-01-01

    Mathematical solutions and numerical illustrations are presented for the steady-state distribution of membrane potential in an extensively branched neuron model, when steady electric current is injected into only one dendritic branch. Explicit expressions are obtained for input resistance at the branch input site and for voltage attenuation from the input site to the soma; expressions for AC steady-state input impedance and attenuation are also presented. The theoretical model assumes passive membrane properties and the equivalent cylinder constraint on branch diameters. Numerical examples illustrate how branch input resistance and steady attenuation depend upon the following: the number of dendritic trees, the orders of dendritic branching, the electrotonic length of the dendritic trees, the location of the dendritic input site, and the input resistance at the soma. The application to cat spinal motoneurons, and to other neuron types, is discussed. The effect of a large dendritic input resistance upon the amount of local membrane depolarization at the synaptic site, and upon the amount of depolarization reaching the soma, is illustrated and discussed; simple proportionality with input resistance does not hold, in general. Also, branch input resistance is shown to exceed the input resistance at the soma by an amount that is always less than the sum of core resistances along the path from the input site to the soma. PMID:4715583
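
    For the single-cylinder special case, the standard passive cable-theory expressions (uniform cylinder of electrotonic length L, sealed distal end) already convey the flavor of these results: R_in = R_inf*coth(L) at the injection site and a steady-state attenuation of 1/cosh(L) to the far end. The sketch below simply evaluates these textbook formulas; the branched-tree expressions in the paper are more general, and the numbers here are placeholders.

        import numpy as np

        def sealed_cable_input_resistance(R_inf, L):
            """Steady-state input resistance of a uniform cylinder of electrotonic
            length L with a sealed distal end (standard passive cable theory)."""
            return R_inf/np.tanh(L)

        def sealed_cable_attenuation(L):
            """Steady-state voltage ratio V(L)/V(0) for current injected at x = 0."""
            return 1.0/np.cosh(L)

        if __name__ == "__main__":
            R_inf = 100.0   # MOhm, semi-infinite-cable input resistance (placeholder)
            for L in (0.5, 1.0, 1.5, 2.0):
                print(f"L = {L:.1f}  R_in = {sealed_cable_input_resistance(R_inf, L):6.1f} MOhm"
                      f"  V(L)/V(0) = {sealed_cable_attenuation(L):.3f}")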

  8. Hierarchical resilience with lightweight threads.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wheeler, Kyle Bruce

    2011-10-01

    This paper proposes methodology for providing robustness and resilience for a highly threaded distributed- and shared-memory environment based on well-defined inputs and outputs to lightweight tasks. These inputs and outputs form a failure 'barrier', allowing tasks to be restarted or duplicated as necessary. These barriers must be expanded based on task behavior, such as communication between tasks, but do not prohibit any given behavior. One of the trends in high-performance computing codes seems to be a trend toward self-contained functions that mimic functional programming. Software designers are trending toward a model of software design where their core functions are specified in side-effect free or low-side-effect ways, wherein the inputs and outputs of the functions are well-defined. This provides the ability to copy the inputs to wherever they need to be - whether that's the other side of the PCI bus or the other side of the network - do work on that input using local memory, and then copy the outputs back (as needed). This design pattern is popular among new distributed threading environment designs. Such designs include the Barcelona STARS system, distributed OpenMP systems, the Habanero-C and Habanero-Java systems from Vivek Sarkar at Rice University, the HPX/ParalleX model from LSU, as well as our own Scalable Parallel Runtime effort (SPR) and the Trilinos stateless kernels. This design pattern is also shared by CUDA and several OpenMP extensions for GPU-type accelerators (e.g. the PGI OpenMP extensions).

  9. What Can Quantum Optics Say about Computational Complexity Theory?

    NASA Astrophysics Data System (ADS)

    Rahimi-Keshari, Saleh; Lund, Austin P.; Ralph, Timothy C.

    2015-02-01

    Considering the problem of sampling from the output photon-counting probability distribution of a linear-optical network for input Gaussian states, we obtain results that are of interest from both quantum theory and the computational complexity theory point of view. We derive a general formula for calculating the output probabilities, and by considering input thermal states, we show that the output probabilities are proportional to permanents of positive-semidefinite Hermitian matrices. It is believed that approximating permanents of complex matrices in general is a #P-hard problem. However, we show that these permanents can be approximated with an algorithm in the BPPNP complexity class, as there exists an efficient classical algorithm for sampling from the output probability distribution. We further consider input squeezed-vacuum states and discuss the complexity of sampling from the probability distribution at the output.
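
    For very small matrices the permanent can be evaluated exactly with Ryser's inclusion-exclusion formula, which makes the hardness discussion concrete; the sketch below is that exact O(2^n n) computation on a small positive-semidefinite Hermitian test matrix, not the approximation algorithm considered in the paper.

        import itertools
        import numpy as np

        def permanent_ryser(A):
            """Exact matrix permanent via Ryser's inclusion-exclusion formula."""
            A = np.asarray(A)
            n = A.shape[0]
            total = 0.0
            for r in range(1, n + 1):
                for cols in itertools.combinations(range(n), r):
                    row_sums = A[:, cols].sum(axis=1)
                    total += (-1)**r * np.prod(row_sums)
            return (-1)**n * total

        if __name__ == "__main__":
            # Small positive-semidefinite Hermitian test matrix: B @ B^H.
            rng = np.random.default_rng(0)
            B = rng.normal(size=(4, 4)) + 1j*rng.normal(size=(4, 4))
            A = B @ B.conj().T
            print("perm(A) =", permanent_ryser(A))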

  10. AC coupled three op-amp biopotential amplifier with active DC suppression.

    PubMed

    Spinelli, E M; Mayosky, M A

    2000-12-01

    A three op-amp instrumentation amplifier (IA) with active DC suppression is presented. DC suppression is achieved by means of a controlled floating source at the input stage, to compensate for electrode and op-amp offset voltages. This isolated floating source is built around an optically isolated device using a general-purpose optocoupler working as a photovoltaic generator. The proposed circuit has many interesting characteristics regarding simplicity and cost, while preserving the common mode rejection ratio (CMRR) and high input impedance characteristics of the classic three op-amp IA. As an example, a biopotential amplifier with a gain of 80 dB, a lower cutoff frequency of 0.1 Hz, and a DC input range of +/- 8 mV was built and tested. Using general-purpose op-amps, a CMRR of 105 was achieved without trimming.

  11. Further Evaluations of Collateral Damage

    DTIC Science & Technology

    1978-09-29

    delivered Rockeye weapons. Basic input data are taken from JMEM. The AIDA model is used for various numbers of Rockeyes to determine the number associated...TANDEM-C data base was further processed to provide population data in square cells 250m on a side. This data base can be directly input into the AIDA ... model and can be modified for input to M1JHM and RBM. Figure 1 gives the general area with town outlines, P-95 circles and population data and Figure 2

  12. Direct Current Amplifier. Report No. 92; AMPLIFICADOR DE CORRIENTE CONTINUA. Informe No. 92

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marazzi, C.

    1963-01-01

    A direct-current amplifier with low zero current and solid-state chopper for input is described. This amplifier can be used in control circuits and for general applications such as temperature measurement in thermocouples, amplifier for a photo-sensitive element, or zero amplifier in control systems. The input impedance is relatively low, serving principally as current amplifier. It is possible to obtain a symmetry characteristic for positive and negative values of the output voltage with respect to the input. (tr-auth)

  13. Enhanced Response Time of Electrowetting Lenses with Shaped Input Voltage Functions.

    PubMed

    Supekar, Omkar D; Zohrabi, Mo; Gopinath, Juliet T; Bright, Victor M

    2017-05-16

    Adaptive optical lenses based on the electrowetting principle are being rapidly implemented in many applications, such as microscopy, remote sensing, displays, and optical communication. To characterize the response of these electrowetting lenses, the dependence upon direct current (DC) driving voltage functions was investigated in a low-viscosity liquid system. Cylindrical lenses with inner diameters of 2.45 and 3.95 mm were used to characterize the dynamic behavior of the liquids under DC voltage electrowetting actuation. With the increase of the rise time of the input exponential driving voltage, the originally underdamped system response can be damped, enabling a smooth response from the lens. We experimentally determined the optimal rise times for the fastest response from the lenses. We have also performed numerical simulations of the lens actuation with input exponential driving voltage to understand the variation in the dynamics of the liquid-liquid interface with various input rise times. We further enhanced the response time of the devices by shaping the input voltage function with multiple exponential rise times. For the 3.95 mm inner diameter lens, we achieved a response time improvement of 29% when compared to the fastest response obtained using single-exponential driving voltage. The technique shows great promise for applications that require fast response times.
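
    The kind of shaped drive waveform described (exponential rises with different time constants concatenated into one step) can be sketched as below; the target voltage, time constants, and segment lengths are placeholders, not the values optimized in the study.

        import numpy as np

        def shaped_step(v_target, taus, segment_ms, dt_ms=0.01):
            """Concatenate exponential rise segments with different time constants,
            each segment continuing from the voltage reached by the previous one."""
            v, out = 0.0, []
            for tau in taus:
                t = np.arange(0.0, segment_ms, dt_ms)
                seg = v + (v_target - v)*(1.0 - np.exp(-t/tau))
                out.append(seg)
                v = seg[-1]
            return np.concatenate(out)

        if __name__ == "__main__":
            # Two exponential segments (time constants in ms), 40 V target (placeholders).
            wave = shaped_step(v_target=40.0, taus=(2.0, 0.5), segment_ms=5.0)
            print("final drive voltage ~", round(float(wave[-1]), 2), "V")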

  14. DefEX: Hands-On Cyber Defense Exercise for Undergraduate Students

    DTIC Science & Technology

    2011-07-01

    Injection, and 4) File Upload. Next, the students patched the associated flawed Perl and PHP Hypertext Preprocessor (PHP) code. Finally, students ... underlying script. The Zora XSS vulnerability existed in a PHP file that echoed unfiltered user input back to the screen. To eliminate the ... vulnerability, students filtered the input using the PHP htmlentities function and retested the code. The htmlentities function translates certain ambiguous

  15. Comparison of the Diagnostic Accuracy of DSC- and Dynamic Contrast-Enhanced MRI in the Preoperative Grading of Astrocytomas.

    PubMed

    Nguyen, T B; Cron, G O; Perdrizet, K; Bezzina, K; Torres, C H; Chakraborty, S; Woulfe, J; Jansen, G H; Sinclair, J; Thornhill, R E; Foottit, C; Zanette, B; Cameron, I G

    2015-11-01

    Dynamic contrast-enhanced MR imaging parameters can be biased by poor measurement of the vascular input function. We have compared the diagnostic accuracy of dynamic contrast-enhanced MR imaging by using a phase-derived vascular input function and "bookend" T1 measurements with DSC MR imaging for preoperative grading of astrocytomas. This prospective study included 48 patients with a new pathologic diagnosis of an astrocytoma. Preoperative MR imaging was performed at 3T, which included 2 injections of 5-mL gadobutrol for dynamic contrast-enhanced and DSC MR imaging. During dynamic contrast-enhanced MR imaging, both magnitude and phase images were acquired to estimate plasma volume obtained from phase-derived vascular input function (Vp_Φ) and volume transfer constant obtained from phase-derived vascular input function (K(trans)_Φ) as well as plasma volume obtained from magnitude-derived vascular input function (Vp_SI) and volume transfer constant obtained from magnitude-derived vascular input function (K(trans)_SI). From DSC MR imaging, corrected relative CBV was computed. Four ROIs were placed over the solid part of the tumor, and the highest value among the ROIs was recorded. A Mann-Whitney U test was used to test for difference between grades. Diagnostic accuracy was assessed by using receiver operating characteristic analysis. Vp_Φ and K(trans)_Φ values were lower for grade II compared with grade III astrocytomas (P < .05). Vp_SI and K(trans)_SI were not significantly different between grade II and grade III astrocytomas (P = .08-.15). Relative CBV and dynamic contrast-enhanced MR imaging parameters except for K(trans)_SI were lower for grade III compared with grade IV (P ≤ .05). In differentiating low- and high-grade astrocytomas, we found no statistically significant difference in diagnostic accuracy between relative CBV and dynamic contrast-enhanced MR imaging parameters. In the preoperative grading of astrocytomas, the diagnostic accuracy of dynamic contrast-enhanced MR imaging parameters is similar to that of relative CBV.

  16. Variable Delay Element For Jitter Control In High Speed Data Links

    DOEpatents

    Livolsi, Robert R.

    2002-06-11

    A circuit and method for decreasing the amount of jitter present at the receiver input of high speed data links, which uses a driver circuit for input from a high speed data link comprising a logic circuit having a first section (1) which provides data latches; a second section (2) which provides a circuit that generates a pre-distorted output to compensate for level-dependent jitter, having an OR function element and a NOR function element, each of which is coupled to two inputs and to a variable delay element as an input, which provides a bi-modal delay for pulse width pre-distortion; a third section (3) which provides a muxing circuit; and a fourth section (4) for clock distribution in the driver circuit. A fifth section is used for logic testing the driver circuit.

  17. Time delay between the SYMH and the solar wind energy input during intense storms determined by response function analysis

    NASA Astrophysics Data System (ADS)

    Cao, X.; Du, A.

    2014-12-01

    We statistically studied the response time of the SYMH index to the solar wind energy input ε using the response function analysis (RFA) approach. The average response time was 64 minutes. There was no clear trend among these events with respect to the minimum SYMH and storm type. It appears that the response time of the magnetosphere to the solar wind energy input is independent of the storm intensity and the solar wind conditions. The response function shows one peak even when the solar wind energy input and the SYMH have multiple peaks. The response time behaves as an intrinsic property of the magnetosphere, representing the typical formation time of the ring current. This may be controlled by the magnetospheric temperature, the average number density, the oxygen abundance, and other factors.
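
    A simplified stand-in for the response-function analysis is to locate the lag that maximizes the cross-correlation between the energy input and the index; the sketch below does this on synthetic data with a known 64-minute delay built in, purely to illustrate the procedure rather than the RFA method itself.

        import numpy as np

        def lag_of_max_crosscorr(x, y, max_lag):
            """Lag (in samples) at which y is most strongly (anti)correlated with x."""
            scores = [abs(np.corrcoef(x[:x.size - l], y[l:])[0, 1])
                      for l in range(max_lag + 1)]
            return int(np.argmax(scores))

        if __name__ == "__main__":
            rng = np.random.default_rng(2)
            n = 5000                                           # minutes of synthetic data
            eps = np.convolve(rng.normal(size=n), np.ones(30)/30, mode="same")  # energy input
            kernel = np.exp(-0.5*((np.arange(200) - 64)/15.0)**2)  # response peaking at 64 min
            sym = -np.convolve(eps, kernel, mode="full")[:n] + 0.1*rng.normal(size=n)
            print("recovered delay ~", lag_of_max_crosscorr(eps, sym, max_lag=150), "minutes")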

  18. Quantum design rules for single molecule logic gates.

    PubMed

    Renaud, N; Hliwa, M; Joachim, C

    2011-08-28

    Recent publications have demonstrated how to implement a NOR logic gate with a single molecule using its interaction with two surface atoms as logical inputs [W. Soe et al., ACS Nano, 2011, 5, 1436]. We demonstrate here how this NOR logic gate belongs to the general family of quantum logic gates, where the Boolean truth table results from full control of the quantum trajectory of the electron transfer process through the molecule by very local and classical inputs practiced on the molecule. A new molecule OR gate is proposed in which the logical inputs are also single metal atoms, one per logical input.

  19. A five-primary photostimulator suitable for studying intrinsically photosensitive retinal ganglion cell functions in humans

    PubMed Central

    Cao, Dingcai; Nicandro, Nathaniel; Barrionuevo, Pablo A.

    2015-01-01

    Intrinsically photosensitive retinal ganglion cells (ipRGCs) can respond to light directly through self-contained photopigment, melanopsin. IpRGCs also receive synaptic inputs from rods and cones. Thus, studying ipRGC functions requires a novel photostimulating method that can account for all of the photoreceptor inputs. Here, we introduced an inexpensive LED-based five-primary photostimulator that can control the excitations of rods, S-, M-, L-cones, and melanopsin-containing ipRGCs in humans at constant background photoreceptor excitation levels, a critical requirement for studying the adaptation behavior of ipRGCs with rod, cone, or melanopsin input. We described the theory and technical aspects (including optics, electronics, software, and calibration) of the five-primary photostimulator. Then we presented two preliminary studies using the photostimulator we have implemented to measure melanopsin-mediated pupil responses and temporal contrast sensitivity function (TCSF). The results showed that the S-cone input to pupil responses was antagonistic to the L-, M- or melanopsin inputs, consistent with an S-OFF and (L + M)-ON response property of primate ipRGCs (Dacey et al., 2005). In addition, the melanopsin-mediated TCSF had a distinctive pattern compared with L + M or S-cone mediated TCSF. Other than controlling individual photoreceptor excitation independently, the five-primary photostimulator has the flexibility in presenting stimuli modulating any combination of photoreceptor excitations, which allows researchers to study the mechanisms by which ipRGCs combine various photoreceptor inputs. PMID:25624466
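
    In matrix terms, driving one photoreceptor class while silencing the others amounts to solving a 5x5 linear system whose entries are each primary's effective excitation of each photoreceptor class. The sketch below uses a random stand-in for that calibration matrix; in practice the entries come from measured primary spectra integrated against photoreceptor sensitivities.

        import numpy as np

        # Rows: photoreceptor classes (S, M, L cones, rods, melanopsin);
        # columns: the five LED primaries. Random, well-conditioned placeholder
        # standing in for the instrument's calibration matrix.
        rng = np.random.default_rng(5)
        A = rng.uniform(0.1, 1.0, size=(5, 5))

        # Target: modulate melanopsin only (last row), keeping rods and cones silent.
        target = np.array([0.0, 0.0, 0.0, 0.0, 0.2])      # desired excitation changes
        primary_mod = np.linalg.solve(A, target)          # required primary changes

        print("primary modulations:", np.round(primary_mod, 3))
        print("achieved excitations:", np.round(A @ primary_mod, 3))  # ~ target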

  20. Your Lung Operation: After Your Operation

    MedlinePlus Videos and Cool Tools

