Sample records for "process generalized linear"

  1. A General Accelerated Degradation Model Based on the Wiener Process.

    PubMed

    Liu, Le; Li, Xiaoyang; Sun, Fuqiang; Wang, Ning

    2016-12-06

    Accelerated degradation testing (ADT) is an efficient tool for conducting material service reliability and safety evaluations by analyzing performance degradation data. Traditional stochastic process models are mainly intended for linear or linearizable degradation paths; those methods are not applicable in situations where the degradation process cannot be linearized. Hence, in this paper, a general ADT model based on the Wiener process is proposed for accelerated degradation data analysis. The general model accounts for both unit-to-unit and temporal variation of the degradation process, and is suitable for both linear and nonlinear ADT analyses with single or multiple acceleration variables. Statistical inference is given for estimating the unknown parameters in both constant-stress and step-stress ADT. A simulation example and two real applications demonstrate that the proposed method yields reliable lifetime evaluation results compared with the existing linear and time-scale-transformation Wiener processes in both linear and nonlinear ADT analyses.
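
    As a hedged illustration of the basic ingredient of such models (not the authors' code; parameter names and values are assumptions), the sketch below simulates a linear-drift Wiener degradation path and recovers the drift and diffusion parameters by maximum likelihood.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    mu, sigma, dt, n_steps = 0.5, 0.2, 0.1, 1000   # assumed true parameters

    # A Wiener degradation path has independent Gaussian increments:
    # dX ~ N(mu*dt, sigma^2*dt).
    increments = mu * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_steps)
    path = np.cumsum(increments)

    # Closed-form MLEs for equally spaced observations of a Wiener process.
    mu_hat = increments.mean() / dt
    sigma_hat = np.sqrt(increments.var() / dt)
    print(mu_hat, sigma_hat)   # close to 0.5 and 0.2
    ```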

  2. A General Accelerated Degradation Model Based on the Wiener Process

    PubMed Central

    Liu, Le; Li, Xiaoyang; Sun, Fuqiang; Wang, Ning

    2016-01-01

    Accelerated degradation testing (ADT) is an efficient tool for conducting material service reliability and safety evaluations by analyzing performance degradation data. Traditional stochastic process models are mainly intended for linear or linearizable degradation paths; those methods are not applicable in situations where the degradation process cannot be linearized. Hence, in this paper, a general ADT model based on the Wiener process is proposed for accelerated degradation data analysis. The general model accounts for both unit-to-unit and temporal variation of the degradation process, and is suitable for both linear and nonlinear ADT analyses with single or multiple acceleration variables. Statistical inference is given for estimating the unknown parameters in both constant-stress and step-stress ADT. A simulation example and two real applications demonstrate that the proposed method yields reliable lifetime evaluation results compared with the existing linear and time-scale-transformation Wiener processes in both linear and nonlinear ADT analyses. PMID:28774107

  3. Phylogenetic mixtures and linear invariants for equal input models.

    PubMed

    Casanellas, Marta; Steel, Mike

    2017-04-01

    The reconstruction of phylogenetic trees from molecular sequence data relies on modelling site substitutions by a Markov process, or a mixture of such processes. In general, allowing mixed processes can result in different tree topologies becoming indistinguishable from the data, even for infinitely long sequences. However, when the underlying Markov process supports linear phylogenetic invariants, then provided these are sufficiently informative, the identifiability of the tree topology can be restored. In this paper, we investigate a class of processes that support linear invariants once the stationary distribution is fixed, the 'equal input model'. This model generalizes the 'Felsenstein 1981' model (and thereby the Jukes-Cantor model) from four states to an arbitrary number of states (finite or infinite), and it can also be described by a 'random cluster' process. We describe the structure and dimension of the vector spaces of phylogenetic mixtures and of linear invariants for any fixed phylogenetic tree (and for all trees, the so-called 'model invariants'), on any number n of leaves. We also provide a precise description of the space of mixtures and linear invariants for the special case of [Formula: see text] leaves. By combining techniques from discrete random processes and (multi-)linear algebra, our results build on a classic result that was first established by James Lake (Mol Biol Evol 4:167-191, 1987).
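
    For concreteness, a minimal sketch (the example stationary distribution pi is an assumption, and SciPy is used for the matrix exponential) of the equal input model's rate matrix, whose off-diagonal rates into state j all equal pi_j, generalizing the Felsenstein 1981 model:

    ```python
    import numpy as np
    from scipy.linalg import expm

    pi = np.array([0.1, 0.2, 0.3, 0.4])   # assumed stationary distribution
    Q = np.tile(pi, (len(pi), 1))         # rate into state j is pi[j]
    np.fill_diagonal(Q, 0.0)
    np.fill_diagonal(Q, -Q.sum(axis=1))   # diagonal makes each row sum to zero

    P = expm(0.5 * Q)                     # substitution probabilities at t = 0.5
    print(P.sum(axis=1))                  # each row sums to 1
    ```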

  4. High growth rate homoepitaxial diamond film deposition at high temperatures by microwave plasma-assisted chemical vapor deposition

    NASA Technical Reports Server (NTRS)

    Vohra, Yogesh K. (Inventor); McCauley, Thomas S. (Inventor)

    1997-01-01

    The deposition of high-quality diamond films at high linear growth rates and substrate temperatures by microwave-plasma chemical vapor deposition is disclosed. The linear growth rate achieved for this process is generally greater than 50 µm/hr for high-quality films, as compared to rates of less than 5 µm/hr generally reported for MPCVD processes.

  5. Amplitudes for multiphoton quantum processes in linear optics

    NASA Astrophysics Data System (ADS)

    Urías, Jesús

    2011-07-01

    The prominent role that linear optical networks have acquired in the engineering of photon states calls for physically intuitive and automatic methods to compute the probability amplitudes for the multiphoton quantum processes occurring in linear optics. A general version of Wick's theorem for the expectation value, in any vector state, of products of linear operators is proved. We use it to extract the combinatorics of any multiphoton quantum process in linear optics. The result is presented as a concise rule for writing down directly explicit formulae for the probability amplitude of any multiphoton process in linear optics. The rule achieves a considerable simplification and provides intuitive physical insight into quantum multiphoton processes. The methodology is applied to the generation of high-photon-number entangled states by interferometrically mixing coherent light with spontaneously down-converted light.
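
    A related standard fact (not the paper's Wick-theorem rule): for photons entering a linear network described by a unitary U, the transition amplitude between single-occupancy mode configurations is the permanent of the corresponding submatrix of U. A naive O(n!) sketch follows; the row/column convention is an assumption.

    ```python
    import itertools
    import numpy as np

    def permanent(A):
        """Permanent by summing over permutations (fine for small n)."""
        n = A.shape[0]
        return sum(np.prod([A[i, p[i]] for i in range(n)])
                   for p in itertools.permutations(range(n)))

    rng = np.random.default_rng(7)
    U = np.linalg.qr(rng.standard_normal((4, 4))
                     + 1j * rng.standard_normal((4, 4)))[0]   # random unitary
    amp = permanent(U[np.ix_([0, 1], [2, 3])])   # photons in modes 0,1 -> 2,3
    print(abs(amp) ** 2)                          # transition probability
    ```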

  6. Noise limitations in optical linear algebra processors.

    PubMed

    Batsell, S G; Jong, T L; Walkup, J F; Krile, T F

    1990-05-10

    A general statistical noise model is presented for optical linear algebra processors. A statistical analysis which includes device noise, the multiplication process, and the addition operation is undertaken. We focus on those processes which are architecturally independent. Finally, experimental results which verify the analytical predictions are also presented.

  7. Doubly robust estimation of generalized partial linear models for longitudinal data with dropouts.

    PubMed

    Lin, Huiming; Fu, Bo; Qin, Guoyou; Zhu, Zhongyi

    2017-12-01

    We develop a doubly robust estimation of generalized partial linear models for longitudinal data with dropouts. Our method extends the highly efficient aggregate unbiased estimating function approach proposed in Qu et al. (2010) to a doubly robust one in the sense that under missing at random (MAR), our estimator is consistent when either the linear conditional mean condition is satisfied or a model for the dropout process is correctly specified. We begin with a generalized linear model for the marginal mean, and then move forward to a generalized partial linear model, allowing for nonparametric covariate effect by using the regression spline smoothing approximation. We establish the asymptotic theory for the proposed method and use simulation studies to compare its finite sample performance with that of Qu's method, the complete-case generalized estimating equation (GEE) and the inverse-probability weighted GEE. The proposed method is finally illustrated using data from a longitudinal cohort study. © 2017, The International Biometric Society.
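
    The dropout-model half of double robustness can be illustrated with a minimal inverse-probability-weighting sketch (not the authors' estimator; it uses oracle observation probabilities, whereas in practice the dropout model is fitted):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n = 5000
    x = rng.normal(size=n)
    y = 2.0 + 1.5 * x + rng.normal(size=n)       # outcome depends on x
    p_obs = 1.0 / (1.0 + np.exp(-(0.5 + x)))     # P(observed | x): MAR dropout
    r = rng.random(n) < p_obs                    # observation indicator

    naive = y[r].mean()                          # biased: dropout depends on x
    ipw = np.sum(y[r] / p_obs[r]) / np.sum(1.0 / p_obs[r])   # IPW (Hajek) mean
    print(naive, ipw, y.mean())                  # IPW is close to the full mean
    ```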

  8. Implementing Linear Algebra Related Algorithms on the TI-92+ Calculator.

    ERIC Educational Resources Information Center

    Alexopoulos, John; Abraham, Paul

    2001-01-01

    Demonstrates a less utilized feature of the TI-92+: its natural and powerful programming language. Shows how to implement several linear algebra related algorithms including the Gram-Schmidt process, Least Squares Approximations, Wronskians, Cholesky Decompositions, and Generalized Linear Least Square Approximations with QR Decompositions.…
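
    The first of these algorithms translates directly; a plain Python version of classical Gram-Schmidt (shown here instead of TI-BASIC):

    ```python
    import numpy as np

    def gram_schmidt(vectors):
        """Orthonormalize the rows of `vectors` by classical Gram-Schmidt."""
        basis = []
        for v in vectors:
            w = v - sum(np.dot(v, b) * b for b in basis)  # remove projections
            if np.linalg.norm(w) > 1e-12:                 # skip dependent rows
                basis.append(w / np.linalg.norm(w))
        return np.array(basis)

    Q = gram_schmidt(np.array([[1.0, 1.0, 0.0],
                               [1.0, 0.0, 1.0],
                               [0.0, 1.0, 1.0]]))
    print(np.round(Q @ Q.T, 10))   # identity matrix: rows are orthonormal
    ```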

  9. Linear and Nonlinear Thinking: A Multidimensional Model and Measure

    ERIC Educational Resources Information Center

    Groves, Kevin S.; Vance, Charles M.

    2015-01-01

    Building upon previously developed and more general dual-process models, this paper provides empirical support for a multidimensional thinking style construct comprised of linear thinking and multiple dimensions of nonlinear thinking. A self-report assessment instrument (Linear/Nonlinear Thinking Style Profile; LNTSP) is presented and…

  10. First-passage and escape problems in the Feller process

    NASA Astrophysics Data System (ADS)

    Masoliver, Jaume; Perelló, Josep

    2012-10-01

    The Feller process is a one-dimensional diffusion process with linear drift and a state-dependent diffusion coefficient that vanishes at the origin. The process is positive definite, and it is this property, along with its linear character, that has made the Feller process a convenient candidate for modeling a number of phenomena ranging from single-neuron firing to the volatility of financial assets. While general properties of the process have long been well known, less known are properties related to level crossing, such as the first-passage and escape problems. In this work we thoroughly address these questions.
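
    A hedged numerical companion (parameter names and values are assumptions, not the paper's): Euler-Maruyama simulation of the Feller process dX = -gamma*(X - theta)*dt + sqrt(2*k*X)*dW with a Monte Carlo estimate of the mean first-passage time to a level L.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    gamma_, theta, k = 1.0, 1.0, 0.5     # assumed drift and diffusion parameters
    x0, L, dt, t_max = 0.2, 1.5, 1e-3, 50.0

    def first_passage_time():
        x, t = x0, 0.0
        while t < t_max:
            x += -gamma_ * (x - theta) * dt \
                 + np.sqrt(2.0 * k * max(x, 0.0) * dt) * rng.standard_normal()
            x = max(x, 0.0)              # clip discretization error at the origin
            t += dt
            if x >= L:
                return t
        return np.nan                    # no crossing within t_max

    times = np.array([first_passage_time() for _ in range(100)])
    print(np.nanmean(times))             # Monte Carlo mean first-passage time
    ```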

  11. Mössbauer spectra linearity improvement by sine velocity waveform followed by linearization process

    NASA Astrophysics Data System (ADS)

    Kohout, Pavel; Frank, Tomas; Pechousek, Jiri; Kouril, Lukas

    2018-05-01

    This note reports the development of a new method for linearizing Mössbauer spectra recorded with a sine drive velocity signal. Mössbauer spectra linearity is a critical parameter in determining Mössbauer spectrometer accuracy. Measuring spectra with a sine velocity axis and applying consecutive linearization increases the linearity of spectra over a wider frequency range of the drive signal, since harmonic motion is natural for velocity transducers. The obtained data demonstrate that linearized sine spectra have lower nonlinearity and line-width parameters in comparison with those measured using a traditional triangular velocity signal.
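
    The linearization step can be pictured with a small sketch (the pure-interpolation re-binning and all values are illustrative assumptions, not the authors' procedure): counts recorded against a sinusoidal velocity sweep are re-binned onto an equidistant velocity axis.

    ```python
    import numpy as np

    n = 512
    phase = np.linspace(-np.pi / 2, np.pi / 2, n)   # half period of the sweep
    v_max = 10.0                                    # mm/s, assumed
    v_sine = v_max * np.sin(phase)                  # nonlinear velocity axis

    # Toy spectrum: flat baseline with one Lorentzian absorption dip.
    counts = 1e4 - 3e3 / (1.0 + ((v_sine - 1.0) / 0.3) ** 2)

    v_linear = np.linspace(-v_max, v_max, n)        # target equidistant axis
    counts_linear = np.interp(v_linear, v_sine, counts)
    ```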

  12. Linear response theory and transient fluctuation relations for diffusion processes: a backward point of view

    NASA Astrophysics Data System (ADS)

    Liu, Fei; Tong, Huan; Ma, Rui; Ou-Yang, Zhong-can

    2010-12-01

    A formal apparatus is developed to unify derivations of the linear response theory and a variety of transient fluctuation relations for continuous diffusion processes from a backward point of view. The basis is a perturbed Kolmogorov backward equation and the path integral representation of its solution. We find that these exact transient relations could be interpreted as a consequence of a generalized Chapman-Kolmogorov equation, which intrinsically arises from the Markovian characteristic of diffusion processes.

  13. FAST TRACK PAPER: Non-iterative multiple-attenuation methods: linear inverse solutions to non-linear inverse problems - II. BMG approximation

    NASA Astrophysics Data System (ADS)

    Ikelle, Luc T.; Osen, Are; Amundsen, Lasse; Shen, Yunqing

    2004-12-01

    The classical linear solutions to the problem of multiple attenuation, like predictive deconvolution, τ-p filtering, or F-K filtering, are generally fast, stable, and robust compared to non-linear solutions, which are generally either iterative or in the form of a series with an infinite number of terms. These qualities have made the linear solutions more attractive to seismic data-processing practitioners. However, most linear solutions, including predictive deconvolution or F-K filtering, contain severe assumptions about the model of the subsurface and the class of free-surface multiples they can attenuate. These assumptions limit their usefulness. In a recent paper, we described an exception to this assertion for OBS data. We showed in that paper that a linear and non-iterative solution to the problem of attenuating free-surface multiples which is as accurate as iterative non-linear solutions can be constructed for OBS data. We here present a similar linear and non-iterative solution for attenuating free-surface multiples in towed-streamer data. For most practical purposes, this linear solution is as accurate as the non-linear ones.

  14. Optimal linear and nonlinear feature extraction based on the minimization of the increased risk of misclassification. [Bayes theorem - statistical analysis/data processing

    NASA Technical Reports Server (NTRS)

    Defigueiredo, R. J. P.

    1974-01-01

    General classes of nonlinear and linear transformations were investigated for the reduction of the dimensionality of the classification (feature) space so that, for a prescribed dimension m of this space, the increase of the misclassification risk is minimized.

  15. Typical Werner states satisfying all linear Bell inequalities with dichotomic measurements

    NASA Astrophysics Data System (ADS)

    Luo, Ming-Xing

    2018-04-01

    Quantum entanglement as a special resource inspires various distinct applications in quantum information processing. Unfortunately, it is NP-hard to detect general quantum entanglement using Bell testing. Our goal is to investigate quantum entanglement with white noise, which appears frequently in experiments and quantum simulations. Surprisingly, for almost all multipartite generalized Greenberger-Horne-Zeilinger states there are entangled noisy states that satisfy all linear Bell inequalities consisting of full correlations with dichotomic inputs and outputs of each local observer. This result shows generic undetectability of mixed entangled states, in contrast to Gisin's theorem for pure bipartite entangled states in terms of Bell nonlocality. We further provide an accessible method to exhibit a nontrivial set of noisy entangled states with a small number of parties satisfying all general linear Bell inequalities. These results imply a typical incompleteness of special Bell theory in explaining entanglement.
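
    A worked special case of such noise robustness is standard (two qubits, not the paper's multipartite construction): the Werner state v|psi-><psi-| + (1 - v)I/4 attains a maximal CHSH value of 2*sqrt(2)*v, so for 1/3 < v <= 1/sqrt(2) it is entangled yet satisfies the CHSH inequality.

    ```python
    import numpy as np

    v = 0.6                             # mixing (visibility) parameter
    chsh_max = 2.0 * np.sqrt(2.0) * v   # maximal CHSH value of the Werner state
    entangled = v > 1.0 / 3.0           # entanglement threshold for Werner states
    print(chsh_max <= 2.0, entangled)   # True, True: entangled but CHSH-local
    ```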

  16. The Linear Bias in the Zeldovich Approximation and a Relation between the Number Density and the Linear Bias of Dark Halos

    NASA Astrophysics Data System (ADS)

    Fan, Zuhui

    2000-01-01

    The linear bias of the dark halos from a model under the Zeldovich approximation is derived and compared with the fitting formula of simulation results. While qualitatively similar to the Press-Schechter formula, this model gives a better description for the linear bias around the turnaround point. This advantage, however, may be compromised by the large uncertainty of the actual behavior of the linear bias near the turnaround point. For a broad class of structure formation models in the cold dark matter framework, a general relation exists between the number density and the linear bias of dark halos. This relation can be readily tested by numerical simulations. Thus, instead of laboriously checking these models one by one, numerical simulation studies can falsify a whole category of models. The general validity of this relation is important in identifying key physical processes responsible for the large-scale structure formation in the universe.

  17. A case of "order insensitivity"? Natural and artificial language processing in a man with primary progressive aphasia.

    PubMed

    Zimmerer, Vitor C; Varley, Rosemary A

    2015-08-01

    Processing of linear word order (linear configuration) is important for virtually all languages and essential to languages such as English which have little functional morphology. Damage to systems underpinning configurational processing may specifically affect word-order-reliant sentence structures. We explore order processing in WR, a man with primary progressive aphasia (PPA). In a previous report, we showed that WR had impaired processing of actives, which rely strongly on word order, but not of passives, where functional morphology signals thematic roles. Using the artificial grammar learning (AGL) paradigm, we examined WR's ability to process order in non-verbal, visual sequences and compared his profile to that of healthy controls and aphasic participants with and without severe syntactic disorder. Results suggested that WR, like some other patients with severe syntactic impairment, was unable to detect linear configurational structure. The data are consistent with the notion that disruption of possibly domain-general linearization systems differentially affects processing of active and passive sentence structures. Further research is needed to test this account, and we suggest hypotheses for future studies. Copyright © 2015 Elsevier Ltd. All rights reserved.

  18. Endoreversible quantum heat engines in the linear response regime.

    PubMed

    Wang, Honghui; He, Jizhou; Wang, Jianhui

    2017-07-01

    We analyze general models of quantum heat engines operating in a cycle of two adiabatic and two isothermal processes. We use the quantum master equation for a system to describe the heat transfer current during a thermodynamic process in contact with a heat reservoir, with no use of phenomenological thermal conduction. We apply the endoreversibility description to such engine models working in the linear response regime and derive expressions for the efficiency and the power. By analyzing the entropy production rate along a single cycle, we identify the thermodynamic flux and force connected by a linear relation. From maximizing the power output, we find that such heat engines satisfy the tight-coupling condition and that the efficiency at maximum power agrees with the Curzon-Ahlborn efficiency, known as the upper bound in the linear response regime.

  19. Derivation of the linear-logistic model and Cox's proportional hazard model from a canonical system description.

    PubMed

    Voit, E O; Knapp, R G

    1997-08-15

    The linear-logistic regression model and Cox's proportional hazard model are widely used in epidemiology. Their successful application leaves no doubt that they are accurate reflections of observed disease processes and their associated risks or incidence rates. In spite of their prominence, it is not a priori evident why these models work. This article presents a derivation of the two models from the framework of canonical modeling. It begins with a general description of the dynamics between risk sources and disease development, formulates this description in the canonical representation of an S-system, and shows how the linear-logistic model and Cox's proportional hazard model follow naturally from this representation. The article interprets the model parameters in terms of epidemiological concepts as well as in terms of general systems theory and explains the assumptions and limitations generally accepted in the application of these epidemiological models.

  20. Optimal estimation for discrete time jump processes

    NASA Technical Reports Server (NTRS)

    Vaca, M. V.; Tretter, S. A.

    1978-01-01

    Optimum estimates of nonobservable random variables or random processes which influence the rate functions of a discrete time jump process (DTJP) are derived. The approach is based on the a posteriori probability of a nonobservable event expressed in terms of the a priori probability of that event and of the sample function probability of the DTJP. A general representation is thus obtained for optimum estimates, and recursive equations are derived for minimum mean-squared error (MMSE) estimates. In general, MMSE estimates are nonlinear functions of the observations. The problem of estimating the rate of a DTJP is considered for the case where the rate is a random variable with a beta probability density function and the jump amplitudes are binomially distributed. It is shown that the MMSE estimates are linear. The class of beta density functions is rather rich, which explains why there are insignificant differences between optimum unconstrained and linear MMSE estimates in a variety of problems.
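
    The linearity result can be seen in a one-line worked example (standard beta-binomial conjugacy, with hypothetical numbers): with a Beta(a, b) prior on the rate and k jumps in n trials, the posterior mean, i.e. the MMSE estimate, is (a + k)/(a + b + n), an affine function of the observation k.

    ```python
    a, b = 2.0, 5.0            # assumed prior hyperparameters
    n, k = 10, 3               # trials and observed jumps
    mmse = (a + k) / (a + b + n)
    print(mmse)                # 5/17 ~ 0.294, linear in k
    ```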

  1. An algebraic equation solution process formulated in anticipation of banded linear equations.

    DOT National Transportation Integrated Search

    1971-01-01

    A general method for the solution of large, sparsely banded, positive-definite, coefficient matrices is presented. The goal in developing the method was to produce an efficient and reliable solution process and to provide the user-programmer with a p...

  2. High Fidelity Modeling of Field Reversed Configuration (FRC) Thrusters

    DTIC Science & Technology

    2017-04-22

    signatures which can be used for direct, non-invasive comparison with experimental diagnostics can be produced. This research will be directly... experimental campaign is critical to developing general design philosophies for low-power plasmoid formation, the complexity of non-linear plasma processes...advanced space propulsion. The work consists of numerical method development, physical model development, and systematic studies of the non-linear

  3. On detection of median filtering in digital images

    NASA Astrophysics Data System (ADS)

    Kirchner, Matthias; Fridrich, Jessica

    2010-01-01

    In digital image forensics, it is generally accepted that intentional manipulations of the image content are most critical and hence numerous forensic methods focus on the detection of such 'malicious' post-processing. However, it is also beneficial to know as much as possible about the general processing history of an image, including content-preserving operations, since they can affect the reliability of forensic methods in various ways. In this paper, we present a simple yet effective technique to detect median filtering in digital images, a widely used denoising and smoothing operator. As a great variety of forensic methods relies on some kind of a linearity assumption, a detection of non-linear median filtering is of particular interest. The effectiveness of our method is backed with experimental evidence on a large image database.

  4. Linear-time reconstruction of zero-recombinant Mendelian inheritance on pedigrees without mating loops.

    PubMed

    Liu, Lan; Jiang, Tao

    2007-01-01

    With the launch of the international HapMap project, the haplotype inference problem has recently attracted a great deal of attention in the computational biology community. In this paper, we study the question of how to efficiently infer haplotypes from genotypes of individuals related by a pedigree without mating loops, assuming that the hereditary process was free of mutations (i.e., the Mendelian law of inheritance) and recombinants. We model the haplotype inference problem as a system of linear equations as in [10] and present an (optimal) linear-time (i.e., O(mn) time) algorithm to generate a particular solution to the haplotype inference problem, where m is the number of loci (or markers) in a genotype and n is the number of individuals in the pedigree. (A particular solution of a linear system is an assignment of numerical values to the variables that satisfies the equations of the system.) Moreover, the algorithm also provides a general solution in O(mn^2) time, which is optimal because the size of a general solution can be as large as Θ(mn^2). (A general solution of a linear system is the span of a basis of the solution space of its associated homogeneous system, offset from the origin by any particular solution. A general solution for ZRHC is very useful in practice because it allows the end user to efficiently enumerate all solutions and perform tasks such as random sampling.) The key ingredients of our construction are (i) a fast consistency-checking procedure for the system of linear equations introduced in [10], based on a careful investigation of the relationships between the equations, and (ii) a novel linear-time method for solving linear equations without invoking Gaussian elimination. Although such a fast method is not known for general systems of linear equations, we take advantage of the underlying loop-free pedigree graph and some special properties of the linear equations.

  5. Multiple imputation of rainfall missing data in the Iberian Mediterranean context

    NASA Astrophysics Data System (ADS)

    Miró, Juan Javier; Caselles, Vicente; Estrela, María José

    2017-11-01

    Given the increasing need for complete rainfall data networks, diverse methods have been proposed in recent years for filling gaps in observed precipitation series, progressively more advanced than traditional approaches. The present study validates 10 methods (6 linear, 2 non-linear and 2 hybrid) that allow multiple imputation, i.e., filling at the same time the missing data of multiple incomplete series in a dense network of neighboring stations. These were applied to daily and monthly rainfall in two sectors of the Júcar River Basin Authority area (eastern Iberian Peninsula), which is characterized by high spatial irregularity and difficulty of rainfall estimation. A classification of precipitation according to its genetic origin was applied as pre-processing, and quantile-mapping adjustment as a post-processing technique. The results showed generally better performance for the non-linear and hybrid methods; notably, the non-linear PCA (NLPCA) method considerably outperforms the Self Organizing Maps (SOM) method among the non-linear approaches. Among the linear methods, the Regularized Expectation Maximization (RegEM) method was the best, though far behind NLPCA. Applying EOF filtering as post-processing of NLPCA (a hybrid approach) yielded the best results.

  6. Linear Covariance Analysis and Epoch State Estimators

    NASA Technical Reports Server (NTRS)

    Markley, F. Landis; Carpenter, J. Russell

    2014-01-01

    This paper extends in two directions the results of prior work on generalized linear covariance analysis of both batch least-squares and sequential estimators. The first is an improved treatment of process noise in the batch, or epoch state, estimator with an epoch time that may be later than some or all of the measurements in the batch. The second is to account for process noise in specifying the gains in the epoch state estimator. We establish the conditions under which the latter estimator is equivalent to the Kalman filter.

  7. Linear Covariance Analysis and Epoch State Estimators

    NASA Technical Reports Server (NTRS)

    Markley, F. Landis; Carpenter, J. Russell

    2012-01-01

    This paper extends in two directions the results of prior work on generalized linear covariance analysis of both batch least-squares and sequential estimators. The first is an improved treatment of process noise in the batch, or epoch state, estimator with an epoch time that may be later than some or all of the measurements in the batch. The second is to account for process noise in specifying the gains in the epoch state estimator. We establish the conditions under which the latter estimator is equivalent to the Kalman filter.

  8. Granger-causality maps of diffusion processes.

    PubMed

    Wahl, Benjamin; Feudel, Ulrike; Hlinka, Jaroslav; Wächter, Matthias; Peinke, Joachim; Freund, Jan A

    2016-02-01

    Granger causality is a statistical concept devised to reconstruct and quantify predictive information flow between stochastic processes. Although the general concept can be formulated model-free it is often considered in the framework of linear stochastic processes. Here we show how local linear model descriptions can be employed to extend Granger causality into the realm of nonlinear systems. This novel treatment results in maps that resolve Granger causality in regions of state space. Through examples we provide a proof of concept and illustrate the utility of these maps. Moreover, by integration we convert the local Granger causality into a global measure that yields a consistent picture for a global Ornstein-Uhlenbeck process. Finally, we recover invariance transformations known from the theory of autoregressive processes.
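
    For orientation, a minimal global linear-case sketch (the paper's contribution, local linear maps resolved over state space, is not reproduced here): Granger causality of x on y measured as the log ratio of residual variances of restricted versus full autoregressions on simulated data.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n = 10000
    x = rng.standard_normal(n)
    y = np.zeros(n)
    for t in range(1, n):                 # x drives y with one step of delay
        y[t] = 0.5 * y[t - 1] + 0.4 * x[t - 1] + 0.1 * rng.standard_normal()

    Y, ylag, xlag = y[1:], y[:-1], x[:-1]
    X_full = np.column_stack([ylag, xlag])
    res_full = Y - X_full @ np.linalg.lstsq(X_full, Y, rcond=None)[0]
    res_restr = Y - ylag * (ylag @ Y) / (ylag @ ylag)   # y-past only

    print(np.log(res_restr.var() / res_full.var()))     # > 0: x Granger-causes y
    ```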

  9. Quasi- and pseudo-maximum likelihood estimators for discretely observed continuous-time Markov branching processes

    PubMed Central

    Chen, Rui; Hyrien, Ollivier

    2011-01-01

    This article deals with quasi- and pseudo-likelihood estimation in a class of continuous-time multi-type Markov branching processes observed at discrete points in time. “Conventional” and conditional estimation are discussed for both approaches. We compare their properties and identify situations where they lead to asymptotically equivalent estimators. Both approaches possess robustness properties, and coincide with maximum likelihood estimation in some cases. Quasi-likelihood functions involving only linear combinations of the data may be unable to estimate all model parameters. Remedial measures exist, including resorting either to non-linear functions of the data or to conditioning the moments on appropriate sigma-algebras. The method of pseudo-likelihood may also resolve this issue. We investigate the properties of these approaches in three examples: the pure birth process, the linear birth-and-death process, and a two-type process that generalizes the previous two examples. Simulation studies are conducted to evaluate performance in finite samples. PMID:21552356

  10. Linear analysis of auto-organization in Hebbian neural networks.

    PubMed

    Carlos Letelier, J; Mpodozis, J

    1995-01-01

    The self-organization of neurotopies where neural connections follow Hebbian dynamics is framed in terms of linear operator theory. A general and exact equation describing the time evolution of the overall synaptic strength connecting two neural laminae is derived. This linear matricial equation, which is similar to the equations used to describe oscillating systems in physics, is modified by the introduction of non-linear terms, in order to capture self-organizing (or auto-organizing) processes. The behavior of a simple and small system, that contains a non-linearity that mimics a metabolic constraint, is analyzed by computer simulations. The emergence of a simple "order" (or degree of organization) in this low-dimensionality model system is discussed.

  11. Flexible Learning Itineraries Based on Conceptual Maps

    ERIC Educational Resources Information Center

    Agudelo, Olga Lucía; Salinas, Jesús

    2015-01-01

    The use of learning itineraries based on conceptual maps is studied in order to propose a more flexible instructional design that strengthens a student-centred learning process, generating non-linear processes, characterising their elements, setting up relationships between them, and shaping a general model with specifications for each…

  12. Evaluation: Boundary Identification in the Non-Linear Special Education System.

    ERIC Educational Resources Information Center

    Yacobacci, Patricia M.

    The evaluation process within special education, as in general education, most often becomes one of data collection consisting of formal and informal tests given by the school psychologist and the classroom instructor. Influences of the complex environment on the educational process are often ignored. Evaluation factors include mainstreaming,…

  13. Information processing in dendrites I. Input pattern generalisation.

    PubMed

    Gurney, K N

    2001-10-01

    In this paper and its companion, we address the question as to whether there are any general principles underlying information processing in the dendritic trees of biological neurons. In order to address this question, we make two assumptions. First, the key architectural feature of dendrites responsible for many of their information processing abilities is the existence of independent sub-units performing local non-linear processing. Second, any general functional principles operate at a level of abstraction in which neurons are modelled by Boolean functions. To accommodate these assumptions, we therefore define a Boolean model neuron, the multi-cube unit (MCU), which instantiates the notion of the discrete functional sub-unit. We then use this model unit to explore two aspects of neural functionality: generalisation (in this paper) and processing complexity (in its companion). Generalisation is dealt with from a geometric viewpoint and is quantified using a new metric: the set of order parameters. These parameters are computed for threshold logic units (TLUs), a class of random Boolean functions, and MCUs. Our interpretation of the order parameters is consistent with our knowledge of generalisation in TLUs and with the lack of generalisation in randomly chosen functions. Crucially, the order parameters for MCUs imply that these functions possess a range of generalisation behaviour. We argue that this supports the general thesis that dendrites facilitate input pattern generalisation despite any local non-linear processing within functionally isolated sub-units.

  14. The Elementary Operations of Human Vision Are Not Reducible to Template-Matching

    PubMed Central

    Neri, Peter

    2015-01-01

    It is generally acknowledged that biological vision presents nonlinear characteristics, yet linear filtering accounts of visual processing are ubiquitous. The template-matching operation implemented by the linear-nonlinear cascade (linear filter followed by static nonlinearity) is the most widely adopted computational tool in systems neuroscience. This simple model achieves remarkable explanatory power while retaining analytical tractability, potentially extending its reach to a wide range of systems and levels in sensory processing. The extent of its applicability to human behaviour, however, remains unclear. Because sensory stimuli possess multiple attributes (e.g. position, orientation, size), the issue of applicability may be asked by considering each attribute one at a time in relation to a family of linear-nonlinear models, or by considering all attributes collectively in relation to a specified implementation of the linear-nonlinear cascade. We demonstrate that human visual processing can operate under conditions that are indistinguishable from linear-nonlinear transduction with respect to substantially different stimulus attributes of a uniquely specified target signal with associated behavioural task. However, no specific implementation of a linear-nonlinear cascade is able to account for the entire collection of results across attributes; a satisfactory account at this level requires the introduction of a small gain-control circuit, resulting in a model that no longer belongs to the linear-nonlinear family. Our results inform and constrain efforts at obtaining and interpreting comprehensive characterizations of the human sensory process by demonstrating its inescapably nonlinear nature, even under conditions that have been painstakingly fine-tuned to facilitate template-matching behaviour and to produce results that, at some level of inspection, do conform to linear filtering predictions. They also suggest that compliance with linear transduction may be the targeted outcome of carefully crafted nonlinear circuits, rather than default behaviour exhibited by basic components. PMID:26556758
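
    For reference, the linear-nonlinear cascade under test is simple to state in code (filter shape and the logistic output stage are illustrative assumptions):

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    stimulus = rng.standard_normal(1000)
    kernel = np.exp(-np.arange(20) / 5.0)                # assumed linear filter
    drive = np.convolve(stimulus, kernel, mode="same")   # linear stage
    response = 1.0 / (1.0 + np.exp(-drive))              # static nonlinearity
    ```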

  15. Oscillatory Reduction in Option Pricing Formula Using Shifted Poisson and Linear Approximation

    NASA Astrophysics Data System (ADS)

    Nur Rachmawati, Ro'fah; Irene; Budiharto, Widodo

    2014-03-01

    Options are derivative instruments that can help investors improve their expected return and minimize risk. However, the Black-Scholes formula generally used to determine option prices does not involve a skewness factor, and it is difficult to apply in computation because it produces oscillation for skewness values close to zero. In this paper, we construct an option pricing formula that involves skewness by modifying the Black-Scholes formula using a shifted Poisson model and transforming it into the form of a linear approximation in the complete market to reduce the oscillation. The resulting linear approximation formula predicts the price of an option very accurately and successfully reduces the oscillations in the calculation process.
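
    For grounding, the unmodified Black-Scholes call price that the paper extends (the shifted-Poisson/skewness correction itself is not reproduced here):

    ```python
    from math import exp, log, sqrt
    from statistics import NormalDist

    def bs_call(S, K, r, sigma, T):
        """Standard Black-Scholes European call price."""
        d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
        d2 = d1 - sigma * sqrt(T)
        N = NormalDist().cdf
        return S * N(d1) - K * exp(-r * T) * N(d2)

    print(bs_call(S=100, K=100, r=0.05, sigma=0.2, T=1.0))   # ~10.45
    ```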

  16. ALPS: A Linear Program Solver

    NASA Technical Reports Server (NTRS)

    Ferencz, Donald C.; Viterna, Larry A.

    1991-01-01

    ALPS is a computer program which can be used to solve general linear program (optimization) problems. ALPS was designed for those who have minimal linear programming (LP) knowledge and features a menu-driven scheme to guide the user through the process of creating and solving LP formulations. Once created, the problems can be edited and stored in standard DOS ASCII files to provide portability to various word processors or even other linear programming packages. Unlike many math-oriented LP solvers, ALPS contains an LP parser that reads through the LP formulation and reports several types of errors to the user. ALPS provides a large amount of solution data which is often useful in problem solving. In addition to pure linear programs, ALPS can solve integer, mixed integer, and binary type problems. Pure linear programs are solved with the revised simplex method. Integer or mixed integer programs are solved initially with the revised simplex, and then completed using the branch-and-bound technique. Binary programs are solved with the method of implicit enumeration. This manual describes how to use ALPS to create, edit, and solve linear programming problems. Instructions for installing ALPS on a PC compatible computer are included in the appendices along with a general introduction to linear programming. A programmers guide is also included for assistance in modifying and maintaining the program.
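
    ALPS itself is a menu-driven DOS program; the same kind of problem in a modern setting might look like this SciPy sketch (a stand-in, not ALPS code):

    ```python
    from scipy.optimize import linprog

    # maximize 3x + 2y  subject to  x + y <= 4,  x + 3y <= 6,  x, y >= 0
    res = linprog(c=[-3, -2],                  # negate to maximize
                  A_ub=[[1, 1], [1, 3]],
                  b_ub=[4, 6],
                  bounds=[(0, None), (0, None)])
    print(res.x, -res.fun)                     # optimum x=4, y=0, objective 12
    ```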

  17. Digital receiver study and implementation

    NASA Technical Reports Server (NTRS)

    Fogle, D. A.; Lee, G. M.; Massey, J. C.

    1972-01-01

    Computer software was developed which makes it possible to use any general purpose computer with A/D conversion capability as a PSK receiver for low data rate telemetry processing. Carrier tracking, bit synchronization, and matched filter detection are all performed digitally. To aid in the implementation of optimum computer processors, a study of general digital processing techniques was performed which emphasized various techniques for digitizing general analog systems. In particular, the phase-locked loop was extensively analyzed as a typical non-linear communication element. Bayesian estimation techniques for PSK demodulation were studied. A hardware implementation of the digital Costas loop was developed.

  18. Real-time Adaptive Control Using Neural Generalized Predictive Control

    NASA Technical Reports Server (NTRS)

    Haley, Pam; Soloway, Don; Gold, Brian

    1999-01-01

    The objective of this paper is to demonstrate the feasibility of a Nonlinear Generalized Predictive Control algorithm by showing real-time adaptive control on a plant with relatively fast time-constants. Generalized Predictive Control has classically been used in process control, where linear control laws were formulated for plants with relatively slow time-constants. The plant of interest for this paper is a magnetic levitation device that is nonlinear and open-loop unstable. In this application, the reference model of the plant is a neural network that has an embedded nominal linear model in the network weights. The control based on the linear model provides initial stability at the beginning of network training. With a neural network, the control laws are nonlinear and online adaptation of the model is possible to capture unmodeled or time-varying dynamics. Newton-Raphson is the minimization algorithm. Newton-Raphson requires the calculation of the Hessian, but even with this computational expense the low iteration rate makes this a viable algorithm for real-time control.
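
    The Newton-Raphson step referred to is the usual one; a toy sketch on a stand-in scalar cost (not the GPC cost function) shows the update u <- u - g/H:

    ```python
    def grad(u):
        return 2.0 * u + 4.0 * u ** 3      # gradient of J(u) = u^2 + u^4

    def hess(u):
        return 2.0 + 12.0 * u ** 2         # second derivative (Hessian)

    u = 1.0
    for _ in range(6):
        u -= grad(u) / hess(u)             # Newton-Raphson update
    print(u)                               # converges toward the minimizer 0
    ```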

  19. Research in Stochastic Processes.

    DTIC Science & Technology

    1983-10-01

    increases. A more detailed investigation for the exceedances themselves (rather than just the cluster centers) was undertaken, together with J. Hüsler and...J. Hüsler and M.R. Leadbetter, Compound Poisson limit theorems for high level exceedances by stationary sequences, Center for Stochastic Processes...stability by a random linear operator. C.D. Hardin, General (asymmetric) stable variables and processes. T. Hsing, J. Hüsler and M.R. Leadbetter, Compound

  20. Structural Loads Analysis for Wave Energy Converters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    van Rij, Jennifer A; Yu, Yi-Hsiang; Guo, Yi

    2017-06-03

    This study explores and verifies the generalized body-modes method for evaluating the structural loads on a wave energy converter (WEC). Historically, WEC design methodologies have focused primarily on accurately evaluating hydrodynamic loads, while methodologies for evaluating structural loads have yet to be fully considered and incorporated into the WEC design process. As wave energy technologies continue to advance, however, it has become increasingly evident that an accurate evaluation of the structural loads will enable an optimized structural design, as well as the potential utilization of composites and flexible materials, and hence reduce WEC costs. Although there are many computational fluid dynamics, structural analyses and fluid-structure-interaction (FSI) codes available, the application of these codes is typically too computationally intensive to be practical in the early stages of the WEC design process. The generalized body-modes method, however, is a reduced order, linearized, frequency-domain FSI approach, performed in conjunction with the linear hydrodynamic analysis, with computation times that could realistically be incorporated into the WEC design process.

  1. Linear Transformation of Electromagnetic Wave Beams of the Electron-Cyclotron Range in Toroidal Magnetic Configurations

    NASA Astrophysics Data System (ADS)

    Khusainov, T. A.; Shalashov, A. G.; Gospodchikov, E. D.

    2018-05-01

    The field structure of quasi-optical wave beams tunneled through the evanescence region in the vicinity of the plasma cutoff in a nonuniform magnetoactive plasma is analyzed. This problem is traditionally associated with the process of linear transformation of ordinary and extraordinary waves. An approximate analytical solution is constructed for a rather general magnetic configuration applicable to spherical tokamaks, optimized stellarators, and other magnetic confinement systems with a constant plasma density on magnetic surfaces. A general technique for calculating the transformation coefficient of a finite-aperture wave beam is proposed, and the physical conditions required for the most efficient transformation are analyzed.

  2. Integrated circuits for accurate linear analogue electric signal processing

    NASA Astrophysics Data System (ADS)

    Huijsing, J. H.

    1981-11-01

    The main lines in the design of integrated circuits for accurate analog linear electric signal processing in a frequency range including DC are investigated. A categorization of universal active electronic devices is presented on the basis of the connections of one of the terminals of the input and output ports to the common ground potential. The means for quantifying the attributes of four types of universal active electronic devices are included. The design of integrated operational voltage amplifiers (OVA) is discussed. Several important applications in the field of general instrumentation are numerically evaluated, and the design of operational floating amplifiers is presented.

  3. A modelling approach to assessing the timescale uncertainties in proxy series with chronological errors

    NASA Astrophysics Data System (ADS)

    Divine, D. V.; Godtliebsen, F.; Rue, H.

    2012-01-01

    The paper proposes an approach to the assessment of timescale errors in proxy-based series with chronological uncertainties. The method relies on approximation of the physical process(es) forming a proxy archive by a random Gamma process. Parameters of the process are partly data-driven and partly determined from prior assumptions. For the particular case of a linear accumulation model and absolutely dated tie points, an analytical solution is found, yielding a Beta-distributed probability density for the age estimates along the length of a proxy archive. In the general situation of uncertainties in the ages of the tie points, the proposed method employs MCMC simulations of age-depth profiles, yielding empirical confidence intervals on the constructed piecewise-linear best-guess timescale. It is suggested that the approach can be further extended to the more general case of a time-varying expected accumulation between the tie points. The approach is illustrated using two ice and two lake/marine sediment cores representing typical examples of paleoproxy archives with age models based on tie points of mixed origin.
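
    A Monte Carlo version of the core idea can be sketched briefly (shape parameter and grid are assumptions; the paper's treatment of uncertain tie points is not reproduced): random Gamma-increment accumulation between two absolutely dated tie points yields an empirical age distribution at any interior depth.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    n_inc, n_sim = 100, 2000
    t0, t1 = 0.0, 1000.0                   # tie-point ages (years)

    # Normalized Gamma increments give random monotone age-depth profiles.
    inc = rng.gamma(shape=2.0, size=(n_sim, n_inc))
    ages = t0 + (t1 - t0) * np.cumsum(inc, axis=1) / inc.sum(axis=1, keepdims=True)

    mid_age = ages[:, n_inc // 2]          # simulated ages at mid depth
    print(np.percentile(mid_age, [2.5, 50.0, 97.5]))   # empirical interval
    ```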

  4. A mathematical study of a random process proposed as an atmospheric turbulence model

    NASA Technical Reports Server (NTRS)

    Sidwell, K.

    1977-01-01

    A random process is formed by the product of a local Gaussian process and a random amplitude process, and the sum of that product with an independent mean value process. The mathematical properties of the resulting process are developed, including the first and second order properties and the characteristic function of general order. An approximate method for the analysis of the response of linear dynamic systems to the process is developed. The transition properties of the process are also examined.

  5. Tangent linear super-parameterization: attributable, decomposable moist processes for tropical variability studies

    NASA Astrophysics Data System (ADS)

    Mapes, B. E.; Kelly, P.; Song, S.; Hu, I. K.; Kuang, Z.

    2015-12-01

    An economical 10-layer global primitive equation solver is driven by time-independent forcing terms, derived from a training process, to produce a realistic eddying basic state with a tracer q trained to act like water vapor mixing ratio. Within this basic state, linearized anomaly moist physics in the column are applied in the form of a 20x20 matrix. The control matrix was derived from the results of Kuang (2010, 2012), who fitted a linear response function from a cloud-resolving model in a state of deep convective equilibrium. By editing this matrix in physical space and eigenspace, scaling and clipping its action, and optionally adding terms for processes that do not conserve moist static energy (radiation, surface fluxes), we can decompose and explain the model's diverse moist process coupled variability. Rectified effects of this variability on the general circulation and climate, even in strictly zero-mean centered anomaly physics cases, are also sometimes surprising.

  6. Structural Dynamic Analyses And Test Predictions For Spacecraft Structures With Non-Linearities

    NASA Astrophysics Data System (ADS)

    Vergniaud, Jean-Baptiste; Soula, Laurent; Newerla, Alfred

    2012-07-01

    The overall objective of the mechanical development and verification process is to ensure that the spacecraft structure is able to sustain the mechanical environments encountered during launch. In general, spacecraft structures are a priori assumed to behave linearly, i.e., the responses to a static load or dynamic excitation will increase or decrease proportionally to the amplitude of the load or excitation induced. However, past experience has shown that various non-linearities may exist in spacecraft structures, and the consequences of their dynamic effects can significantly affect the development and verification process. Current processes are mainly adapted to linear spacecraft structural behaviour; no clear rules exist for dealing with major structural non-linearities. They are handled outside the process by individual analysis and margin policy, and by post-test analyses to justify the CLA coverage. Non-linearities primarily affect the current spacecraft development and verification process in two respects. Prediction of flight loads by launcher/satellite coupled loads analyses (CLA): only linear satellite models are delivered for performing CLA, and no well-established rules exist for how to properly linearize a model when non-linearities are present. The potential impact of the linearization on the results of the CLA has not yet been properly analyzed, so it is difficult to ensure that CLA results cover actual flight levels. Management of satellite verification tests: the CLA results generated with a linear satellite FEM are assumed to be flight representative. If internal non-linearities are present in the tested satellite, it may be difficult to determine which input level must be applied to cover satellite internal loads. The non-linear behaviour can also disturb the shaker control, putting the satellite at risk by potentially imposing excessive levels. This paper presents the results of a test campaign performed in the frame of an ESA TRP study [1]. A breadboard including typical non-linearities was designed, manufactured and tested through a typical spacecraft dynamic test campaign. The study demonstrated the capability to perform non-linear dynamic test predictions on a flight-representative spacecraft, the good correlation of test results with Finite Element Model (FEM) predictions, and the possibility of identifying modal behaviour and characterizing non-linearities from test results. As a synthesis of this study, overall guidelines have been derived for the mechanical verification process to improve the level of expertise on tests involving spacecraft with non-linearities.

  7. Structural Loads Analysis for Wave Energy Converters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    van Rij, Jennifer A; Yu, Yi-Hsiang; Guo, Yi

    This study explores and verifies the generalized body-modes method for evaluating the structural loads on a wave energy converter (WEC). Historically, WEC design methodologies have focused primarily on accurately evaluating hydrodynamic loads, while methodologies for evaluating structural loads have yet to be fully considered and incorporated into the WEC design process. As wave energy technologies continue to advance, however, it has become increasingly evident that an accurate evaluation of the structural loads will enable an optimized structural design, as well as the potential utilization of composites and flexible materials, and hence reduce WEC costs. Although there are many computational fluid dynamics, structural analyses and fluid-structure-interaction (FSI) codes available, the application of these codes is typically too computationally intensive to be practical in the early stages of the WEC design process. The generalized body-modes method, however, is a reduced order, linearized, frequency-domain FSI approach, performed in conjunction with the linear hydrodynamic analysis, with computation times that could realistically be incorporated into the WEC design process. The objective of this study is to verify the generalized body-modes approach in comparison to high-fidelity FSI simulations to accurately predict structural deflections and stress loads in a WEC. Two verification cases are considered, a free-floating barge and a fixed-bottom column. Details for both the generalized body-modes models and FSI models are first provided. Results for each of the models are then compared and discussed. Finally, based on the verification results obtained, future plans for incorporating the generalized body-modes method into the WEC simulation tool, WEC-Sim, and the overall WEC design process are discussed.

  8. A formulation of rotor-airframe coupling for design analysis of vibrations of helicopter airframes

    NASA Technical Reports Server (NTRS)

    Kvaternik, R. G.; Walton, W. C., Jr.

    1982-01-01

    A linear formulation of rotor airframe coupling intended for vibration analysis in airframe structural design is presented. The airframe is represented by a finite element analysis model; the rotor is represented by a general set of linear differential equations with periodic coefficients; and the connections between the rotor and airframe are specified through general linear equations of constraint. Coupling equations are applied to the rotor and airframe equations to produce one set of linear differential equations governing vibrations of the combined rotor airframe system. These equations are solved by the harmonic balance method for the system steady state vibrations. A feature of the solution process is the representation of the airframe in terms of forced responses calculated at the rotor harmonics of interest. A method based on matrix partitioning is worked out for quick recalculations of vibrations in design studies when only relatively few airframe members are varied. All relations are presented in forms suitable for direct computer implementation.

  9. Roles Played by Electrostatic Waves in Producing Radio Emissions

    NASA Technical Reports Server (NTRS)

    Cairns, Iver H.

    2000-01-01

    Processes in which electromagnetic radiation is produced directly or indirectly via intermediate waves are reviewed. It is shown that strict theoretical constraints exist for electrons to produce nonthermal levels of radiation directly by the Cerenkov or cyclotron resonances. In contrast, indirect emission processes in which intermediary plasma waves are converted into radiation are often favored on general and specific grounds. Four classes of mechanisms involving the conversion of electrostatic waves into radiation are linear mode conversion, hybrid linear/nonlinear mechanisms, nonlinear wave-wave and wave-particle processes, and radiation from localized wave packets. These processes are reviewed theoretically and observational evidence summarized for their occurrence. Strong evidence exists that specific nonlinear wave processes and mode conversion can explain quantitatively phenomena involving type III solar radio bursts and ionospheric emissions. On the other hand, no convincing evidence exists that magnetospheric continuum radiation is produced by mode conversion instead of nonlinear wave processes. Further research on these processes is needed.

  10. Adaptive convex combination approach for the identification of improper quaternion processes.

    PubMed

    Ujang, Bukhari Che; Jahanchahi, Cyrus; Took, Clive Cheong; Mandic, Danilo P

    2014-01-01

    Data-adaptive optimal modeling and identification of real-world vector sensor data is provided by combining the fractional tap-length (FT) approach with model order selection in the quaternion domain. To account rigorously for the generality of such processes, both second-order circular (proper) and noncircular (improper), the proposed approach in this paper combines the FT length optimization with both the strictly linear quaternion least mean square (QLMS) and widely linear QLMS (WL-QLMS). A collaborative approach based on QLMS and WL-QLMS is shown to both identify the type of processes (proper or improper) and to track their optimal parameters in real time. Analysis shows that monitoring the evolution of the convex mixing parameter within the collaborative approach allows us to track the improperness in real time. Further insight into the properties of those algorithms is provided by establishing a relationship between the steady-state error and optimal model order. The approach is supported by simulations on model order selection and identification of both strictly linear and widely linear quaternion-valued systems, such as those routinely used in renewable energy (wind) and human-centered computing (biomechanics).

  11. Analyzing linear spatial features in ecology.

    PubMed

    Buettel, Jessie C; Cole, Andrew; Dickey, John M; Brook, Barry W

    2018-06-01

    The spatial analysis of dimensionless points (e.g., tree locations on a plot map) is common in ecology, for instance using point-process statistics to detect and compare patterns. However, the treatment of one-dimensional linear features (fiber processes) is rarely attempted. Here we appropriate the methods of vector sums and dot products, used regularly in fields like astrophysics, to analyze a data set of mapped linear features (logs) measured in 12 × 1-ha forest plots. For this demonstrative case study, we ask two deceptively simple questions: do trees tend to fall downhill, and if so, does slope gradient matter? Despite noisy data and many potential confounders, we show clearly that topography (slope direction and steepness) of forest plots does matter to treefall. More generally, these results underscore the value of mathematical methods of physics to problems in the spatial analysis of linear features, and the opportunities that interdisciplinary collaboration provides. This work provides scope for a variety of future ecological analyses of fiber processes in space. © 2018 by the Ecological Society of America.
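
    The vector-sum test can be sketched in a few lines (simulated azimuths stand in for field data; the plot aspect and concentration parameter are assumptions): project a unit vector along each log onto the downslope direction and average the dot products.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    downslope = np.deg2rad(120.0)                    # plot aspect, assumed
    log_azimuths = rng.vonmises(mu=downslope, kappa=1.0, size=200)

    # Dot product of each log's unit vector with the downslope unit vector.
    alignment = np.cos(log_azimuths - downslope)
    print(alignment.mean())                          # ~0.45; > 0 means downhill bias
    ```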

  12. Generalized ISAR--part I: an optimal method for imaging large naval vessels.

    PubMed

    Given, James A; Schmidt, William R

    2005-11-01

    We describe a generalized inverse synthetic aperture radar (ISAR) process that performs well under a wide variety of conditions common to the naval ISAR tests of large vessels. In particular, the generalized ISAR process performs well in the presence of moderate intensity ship roll. The process maps localized scatterers onto peaks on the ISAR plot. However, in a generalized ISAR plot, each of the two coordinates of a peak is a fixed linear combination of the three ship coordinates of the scatterer causing the peak. Combining this process with interferometry will then provide high-accuracy three-dimensional location of the important scatterers on a ship. We show that ISAR can be performed in the presence of simultaneous roll and aspect change, provided the two Doppler rates are not too close in magnitude. We derive the equations needed for generalized ISAR, both roll driven and aspect driven, and test them against simulations performed in a variety of conditions, including large roll amplitudes.

  13. Information theoretic analysis of linear shift-invariant edge-detection operators

    NASA Astrophysics Data System (ADS)

    Jiang, Bo; Rahman, Zia-ur

    2012-06-01

    Generally, the designs of digital image processing algorithms and image gathering devices remain separate. Consequently, the performance of digital image processing algorithms is evaluated without taking into account the influence of the image gathering process. However, experiments show that the image gathering process has a profound impact on the performance of digital image processing and the quality of the resulting images. Huck et al. proposed a definitive theoretical analysis of visual communication channels, where the different parts, such as image gathering, processing, and display, are assessed in an integrated manner using Shannon's information theory. We perform an end-to-end, information-theory-based system analysis to assess linear shift-invariant edge-detection algorithms. We evaluate the performance of the different algorithms as a function of the characteristics of the scene and of the parameters, such as sampling and additive noise, that define the image gathering system. The edge-detection algorithm is regarded as having high performance only if the information rate from the scene to the edge image approaches its maximum possible value. This goal can be achieved only by jointly optimizing all processes. Our information-theoretic assessment provides a new tool that allows us to compare different linear shift-invariant edge detectors in a common environment.

  14. Permanent-magnet linear alternators. I - Fundamental equations. II - Design guidelines

    NASA Astrophysics Data System (ADS)

    Boldea, I.; Nasar, S. A.

    1987-01-01

    The general equations of permanent-magnet heteropolar three-phase and single-phase linear alternators, powered by free-piston Stirling engines, are presented, with applications ranging from space power stations to domestic uses such as solar power plants. The equations are applied to no-load and short-circuit conditions, illustrating the end effect caused by the speed-reversal process. In the second part, basic design guidelines for a three-phase tubular linear alternator are given, and the procedure is demonstrated with the numerical example of the design of a 25-kVA, 14.4-m/s, 120/220-V, 60-Hz alternator.

  15. Linear Mixed Models: GUM and Beyond

    NASA Astrophysics Data System (ADS)

    Arendacká, Barbora; Täubner, Angelika; Eichstädt, Sascha; Bruns, Thomas; Elster, Clemens

    2014-04-01

    In Annex H.5, the Guide to the Evaluation of Uncertainty in Measurement (GUM) [1] recognizes the necessity of analyzing certain types of experiments by applying random effects ANOVA models. These belong to the more general family of linear mixed models that we focus on in the current paper. Extending the short introduction provided by the GUM, our aim is to show that the more general linear mixed models cover a wider range of situations occurring in practice and can be beneficial when employed in the data analysis of long-term repeated experiments. Namely, we point out their potential as an aid in establishing an uncertainty budget and as a means for gaining more insight into the measurement process. We also comment on computational issues and, to make the explanations less abstract, illustrate all the concepts with the help of a measurement campaign conducted in order to challenge the uncertainty budget in the calibration of accelerometers.
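
    By way of illustration (not the paper's campaign data), a random-intercept linear mixed model for a simulated long-term repeated experiment can be fitted with statsmodels; the group structure and variance values below are assumptions of this sketch.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Simulated calibration campaign: 5 repeat measurements on each of 8 days,
    # with a random day effect on top of the repeatability noise.
    rng = np.random.default_rng(0)
    days = np.repeat(np.arange(8), 5)
    day_effect = rng.normal(0.0, 0.02, 8)[days]          # between-day component
    y = 9.81 + day_effect + rng.normal(0.0, 0.01, days.size)
    df = pd.DataFrame({"y": y, "day": days})

    # Random-intercept model y_ij = mu + b_i + eps_ij, the simplest linear mixed model
    fit = smf.mixedlm("y ~ 1", df, groups=df["day"]).fit()
    print(fit.summary())     # the variance components feed the uncertainty budget
    ```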

  16. QMR: A Quasi-Minimal Residual method for non-Hermitian linear systems

    NASA Technical Reports Server (NTRS)

    Freund, Roland W.; Nachtigal, Noel M.

    1990-01-01

    The biconjugate gradient (BCG) method is the natural generalization of the classical conjugate gradient algorithm for Hermitian positive definite matrices to general non-Hermitian linear systems. Unfortunately, the original BCG algorithm is susceptible to possible breakdowns and numerical instabilities. A novel BCG-like approach, the quasi-minimal residual (QMR) method, is presented, which overcomes the problems of BCG. An implementation of QMR based on a look-ahead version of the nonsymmetric Lanczos algorithm is proposed. It is shown how BCG iterates can be recovered stably from the QMR process. Some further properties of the QMR approach are given and an error bound is presented. Finally, numerical experiments are reported.
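
    For readers who want to experiment, SciPy ships a QMR implementation; the sketch below applies it to an invented nonsymmetric tridiagonal system (the matrix and its size are arbitrary, and SciPy's routine is not the look-ahead Lanczos variant described in the paper).

    ```python
    import numpy as np
    from scipy import sparse
    from scipy.sparse.linalg import qmr

    # A small non-Hermitian test system: a convection-diffusion-like tridiagonal
    # matrix with unequal off-diagonals, solved with QMR.
    n = 200
    A = sparse.diags(
        [-1.3 * np.ones(n - 1), 2.0 * np.ones(n), -0.7 * np.ones(n - 1)],
        offsets=[-1, 0, 1], format="csr")
    b = np.ones(n)

    x, info = qmr(A, b)                       # info == 0 signals convergence
    print(info, np.linalg.norm(A @ x - b))    # residual norm of the solution
    ```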

  17. EEG feature selection method based on decision tree.

    PubMed

    Duan, Lijuan; Ge, Hui; Ma, Wei; Miao, Jun

    2015-01-01

    This paper aims to solve the automated feature selection problem in brain computer interfaces (BCI). In order to automate the feature selection process, we propose a novel EEG feature selection method based on a decision tree (DT). During electroencephalogram (EEG) signal processing, a feature extraction method based on principal component analysis (PCA) was used, and the selection process based on the decision tree was performed by searching the feature space and automatically selecting optimal features. Considering that EEG signals are non-linear, a generalized linear classifier, the support vector machine (SVM), was chosen. In order to test the validity of the proposed method, we applied the EEG feature selection method based on the decision tree to BCI Competition II data set Ia, and the experiment showed encouraging results.
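
    A hedged sketch of the described pipeline (PCA extraction, decision-tree-driven feature selection, SVM classification) using scikit-learn on synthetic stand-in data; the component counts and thresholds are illustrative, not the authors' configuration.

    ```python
    import numpy as np
    from sklearn.pipeline import Pipeline
    from sklearn.decomposition import PCA
    from sklearn.feature_selection import SelectFromModel
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    # Stand-in for windowed EEG trials: 200 trials x 64 features, binary labels.
    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 64))
    y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=200) > 0).astype(int)

    # PCA extraction -> tree-importance feature selection -> SVM classification
    clf = Pipeline([
        ("pca", PCA(n_components=20)),
        ("dt_select", SelectFromModel(DecisionTreeClassifier(random_state=0))),
        ("svm", SVC(kernel="rbf")),
    ])
    print(cross_val_score(clf, X, y, cv=5).mean())
    ```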

  18. Universal ideal behavior and macroscopic work relation of linear irreversible stochastic thermodynamics

    NASA Astrophysics Data System (ADS)

    Ma, Yi-An; Qian, Hong

    2015-06-01

    We revisit the Ornstein-Uhlenbeck (OU) process as the fundamental mathematical description of linear irreversible phenomena, with fluctuations, near an equilibrium. By identifying the underlying circulating dynamics in a stationary process as the natural generalization of classical conservative mechanics, a bridge between a family of OU processes with equilibrium fluctuations and thermodynamics is established through the celebrated Helmholtz theorem. The Helmholtz theorem provides an emergent macroscopic ‘equation of state’ of the entire system, which exhibits a universal ideal thermodynamic behavior. Fluctuating macroscopic quantities are studied from the stochastic thermodynamic point of view and a non-equilibrium work relation is obtained in the macroscopic picture, which may facilitate experimental study and application of the equalities due to Jarzynski, Crooks, and Hatano and Sasa.
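
    As a concrete illustration (not from the paper), an OU process can be simulated with the Euler-Maruyama scheme and its stationary variance checked against the closed form sigma^2 / (2 theta); all parameters below are arbitrary.

    ```python
    import numpy as np

    # Euler-Maruyama simulation of dX = -theta (X - mu) dt + sigma dW, the
    # linear model of fluctuations relaxing toward an equilibrium.
    rng = np.random.default_rng(2)
    theta, mu, sigma, dt, n = 1.0, 0.0, 0.5, 1e-3, 100_000
    x = np.empty(n)
    x[0] = 2.0
    for k in range(n - 1):
        x[k + 1] = x[k] - theta * (x[k] - mu) * dt + sigma * np.sqrt(dt) * rng.normal()

    # Discard the transient; the variance should approach 0.5**2 / 2 = 0.125
    print(x[n // 2:].var())
    ```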

  19. Toward intelligent information system

    NASA Astrophysics Data System (ADS)

    Onodera, Natsuo

    "Hypertext" means a concept of a novel computer-assisted tool for storage and retrieval of text information based on human association. Structure of knowledge in our idea processing is generally complicated and networked, but traditional paper documents merely express it in essentially linear and sequential forms. However, recent advances in work-station technology have allowed us to process easily electronic documents containing non-linear structure such as references or hierarchies. This paper describes concept, history and basic organization of hypertext, and shows the outline and features of existing main hypertext systems. Particularly, use of the hypertext database is illustrated by an example of Intermedia developed by Brown University.

  20. Time reversibility of intracranial human EEG recordings in mesial temporal lobe epilepsy

    NASA Astrophysics Data System (ADS)

    van der Heyden, M. J.; Diks, C.; Pijn, J. P. M.; Velis, D. N.

    1996-02-01

    Intracranial electroencephalograms from patients suffering from mesial temporal lobe epilepsy were tested for time reversibility. If the recorded time series is irreversible, the input of the recording system cannot be a realisation of a linear Gaussian random process. We confirmed experimentally that the measurement equipment did not introduce irreversibility in the recorded output when the input was a realisation of a linear Gaussian random process. In general, the non-seizure recordings are reversible, whereas the seizure recordings are irreversible. These results suggest that time reversibility is a useful property for the characterisation of human intracranial EEG recordings in mesial temporal lobe epilepsy.

  1. Nonlinear magnetoacoustic wave propagation with chemical reactions

    NASA Astrophysics Data System (ADS)

    Margulies, Timothy Scott

    2002-11-01

    The magnetoacoustic problem, with application to sound wave propagation through electrically conducting fluids such as the ocean in the Earth's magnetic field, liquid metals, or plasmas, has been addressed taking into account several simultaneous chemical reactions. Using continuum balance equations for total mass, linear momentum, and energy, as well as Maxwell's electrodynamic equations, a nonlinear beam equation has been developed that generalizes the Khokhlov-Zabolotskaya-Kuznetsov (KZK) equation to a fluid with linear viscosity together with nonlinear and diffraction effects. Thermodynamic parameters are used and are not tailored to the adiabatic fluid case only. The chemical kinetic equations build on a relaxing-media approach presented, for example, by K. Naugolnykh and L. Ostrovsky [Nonlinear Wave Processes in Acoustics (Cambridge Univ. Press, Cambridge, 1998)] for a linearized single reaction and a thermodynamic pressure equation of state. Approximations for large and small relaxation times and for magnetohydrodynamic parameters [Korsunskii, Sov. Phys. Acoust. 36 (1990)] are examined. Additionally, Cattaneo's equation for heat conduction, and its generalization to a memory process rather than a Fourier law, are taken into account; in this generalization the heat flux depends on the temperature gradient at an earlier time, which generates heat pulses of finite speed.

  2. Noncoherent parallel optical processor for discrete two-dimensional linear transformations.

    PubMed

    Glaser, I

    1980-10-01

    We describe a parallel optical processor, based on a lenslet array, that provides general linear two-dimensional transformations using noncoherent light. Such a processor could become useful in image- and signal-processing applications in which the throughput requirements cannot be adequately satisfied by state-of-the-art digital processors. Experimental results that illustrate the feasibility of the processor by demonstrating its use in parallel optical computation of the two-dimensional Walsh-Hadamard transformation are presented.

  3. An Index and Test of Linear Moderated Mediation.

    PubMed

    Hayes, Andrew F

    2015-01-01

    I describe a test of linear moderated mediation in path analysis based on an interval estimate of the parameter of a function linking the indirect effect to values of a moderator, a parameter that I call the index of moderated mediation. This test can be used for models that integrate moderation and mediation in which the relationship between the indirect effect and the moderator is estimated as linear, including many of the models described by Edwards and Lambert (2007) and Preacher, Rucker, and Hayes (2007), as well as extensions of these models to processes involving multiple mediators operating in parallel or in serial. Generalization of the method to latent variable models is straightforward. Three empirical examples describe the computation of the index and the test, and its implementation is illustrated using Mplus and the PROCESS macro for SPSS and SAS.
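
    A minimal sketch of the index for the simple two-equation linear model named in the comments (variable names and the case-resampling bootstrap are illustrative; this is not the Mplus/PROCESS implementation).

    ```python
    import numpy as np

    # Model: M = a0 + a1*X + a2*W + a3*X*W  and  Y = b0 + c*X + b*M.
    # The indirect effect at moderator value w is (a1 + a3*w) * b, linear in w;
    # its slope a3 * b is the index of moderated mediation.
    def index_of_modmed(X, W, M, Y):
        D1 = np.column_stack([np.ones_like(X), X, W, X * W])
        a = np.linalg.lstsq(D1, M, rcond=None)[0]        # a0, a1, a2, a3
        D2 = np.column_stack([np.ones_like(X), X, M])
        b = np.linalg.lstsq(D2, Y, rcond=None)[0]        # b0, c, b
        return a[3] * b[2]

    # Percentile bootstrap interval for the index (resampling cases)
    def bootstrap_ci(X, W, M, Y, n_boot=2000, seed=0):
        rng = np.random.default_rng(seed)
        idx = rng.integers(0, len(X), size=(n_boot, len(X)))
        stats = [index_of_modmed(X[i], W[i], M[i], Y[i]) for i in idx]
        return np.percentile(stats, [2.5, 97.5])
    ```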

  4. Regional variations in the observed morphology and activity of martian linear gullies

    NASA Astrophysics Data System (ADS)

    Morales, Kimberly Marie; Diniega, Serina; Austria, Mia; Ochoa, Vincent; HiRISE Science and Instrument Team

    2017-10-01

    The formation mechanism of martian linear gullies has been much debated, because they have been suggested as possible evidence of liquid water on Mars. This class of dune gullies is defined by long (up to 2 km), narrow channels that are relatively uniform in width and that vary in sinuosity. Unlike other gullies on Earth and Mars that end in depositional aprons, linear gullies end in circular depressions referred to as terminal pits. This particular morphological difference, along with the difficulty of identifying a source of water to form these features, has led to several ‘dry’ hypotheses. Recent observations of the morphology, distribution, and present-day activity of linear gullies suggest that they could be formed by subliming blocks of seasonal CO2 ice (“dry ice”) sliding downslope on dune faces. In our study, we aimed to further constrain the possible mechanism(s) responsible for the formation of linear gullies by using HiRISE images to collect morphological data and track seasonal activity within three regions in the southern hemisphere, Hellespontus (~45°S, 40°E), Aonia Terra (~50°S, 290°E), and Jeans (~70°S, 155°E), over the last four Mars years. General similarities in these observations were reflective of the proposed formation process (sliding CO2 blocks), while differences were correlated with regional environmental conditions related to latitude or the general geologic setting. This presentation describes the observed regional differences in linear gully morphology and activity, and investigates how environmental factors such as surface properties and local levels of frost may explain these variations while still supporting the proposed model. Determining the mechanism that forms these martian features can improve our understanding of both the climatic and geological processes that shape the martian surface.

  5. Quantifying the evolution of flow boiling bubbles by statistical testing and image analysis: toward a general model.

    PubMed

    Xiao, Qingtai; Xu, Jianxin; Wang, Hua

    2016-08-16

    A new index, the estimate of the error variance, which can be used to quantify the evolution of flow patterns when multiphase components or tracers are difficult to distinguish, was proposed. The homogeneity degree of the luminance space distribution behind the viewing windows in the direct contact boiling heat transfer process was explored. With image analysis and a linear statistical model, the F-test of the statistical analysis was used to test whether the light was uniform, and a non-linear method was used to determine the direction and position of a fixed source light. The experimental results showed that the inflection point of the new index was approximately equal to the mixing time. The new index has been further applied to a multiphase macro-mixing process by top blowing in a stirred tank. Moreover, a general quantifying model was introduced to demonstrate the relationship between the flow patterns of the bubble swarms and heat transfer. The results can be applied to investigate other mixing processes in which the target is difficult to recognize.

  6. Dual-systems and the development of reasoning: competence-procedural systems.

    PubMed

    Overton, Willis F; Ricco, Robert B

    2011-03-01

    Dual-system (dual-process) accounts of adult cognitive processing are examined in the context of a self-organizing, relational developmental systems approach to cognitive growth. Contemporary adult dual-process accounts describe a linear architecture of mind entailing two split-off but interacting systems: a domain-general, content-free 'analytic' system (system 2) and a domain-specific, highly contextualized 'heuristic' system (system 1). In the developmental literature on deductive reasoning, a similar distinction has been made between a domain-general competence (reflective, algorithmic) system and a domain-specific procedural system. In contrast to the linear accounts offered by empiricist, nativist, and/or evolutionary explanations, the dual competence-procedural developmental perspective argues that the mature systems emerge through developmental transformations, as differentiations and intercoordinations of an early, relatively undifferentiated action matrix. This development, whose microscopic mechanism is action-in-the-world, is characterized as embodied, nonlinear, and epigenetic. WIREs Cogn Sci 2011 2 231-237 DOI: 10.1002/wcs.120 For further resources related to this article, please visit the WIREs website. © 2010 John Wiley & Sons, Ltd.

  7. Quantifying the evolution of flow boiling bubbles by statistical testing and image analysis: toward a general model

    PubMed Central

    Xiao, Qingtai; Xu, Jianxin; Wang, Hua

    2016-01-01

    A new index, the estimate of the error variance, which can be used to quantify the evolution of flow patterns when multiphase components or tracers are difficult to distinguish, was proposed. The homogeneity degree of the luminance space distribution behind the viewing windows in the direct contact boiling heat transfer process was explored. With image analysis and a linear statistical model, the F-test of the statistical analysis was used to test whether the light was uniform, and a non-linear method was used to determine the direction and position of a fixed source light. The experimental results showed that the inflection point of the new index was approximately equal to the mixing time. The new index has been further applied to a multiphase macro-mixing process by top blowing in a stirred tank. Moreover, a general quantifying model was introduced to demonstrate the relationship between the flow patterns of the bubble swarms and heat transfer. The results can be applied to investigate other mixing processes in which the target is difficult to recognize. PMID:27527065

  8. 20 CFR 416.924b - Age as a factor of evaluation in the sequential evaluation process for children.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... infants. We generally use chronological age (that is, a child's age based on birth date) when we decide... chronological age. When we evaluate the development or linear growth of a child born prematurely, we may use a... sequential evaluation process for children. 416.924b Section 416.924b Employees' Benefits SOCIAL SECURITY...

  9. 20 CFR 416.924b - Age as a factor of evaluation in the sequential evaluation process for children.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... infants. We generally use chronological age (that is, a child's age based on birth date) when we decide... chronological age. When we evaluate the development or linear growth of a child born prematurely, we may use a... sequential evaluation process for children. 416.924b Section 416.924b Employees' Benefits SOCIAL SECURITY...

  10. 20 CFR 416.924b - Age as a factor of evaluation in the sequential evaluation process for children.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... infants. We generally use chronological age (that is, a child's age based on birth date) when we decide... chronological age. When we evaluate the development or linear growth of a child born prematurely, we may use a... sequential evaluation process for children. 416.924b Section 416.924b Employees' Benefits SOCIAL SECURITY...

  11. 20 CFR 416.924b - Age as a factor of evaluation in the sequential evaluation process for children.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... infants. We generally use chronological age (that is, a child's age based on birth date) when we decide... chronological age. When we evaluate the development or linear growth of a child born prematurely, we may use a... sequential evaluation process for children. 416.924b Section 416.924b Employees' Benefits SOCIAL SECURITY...

  12. 20 CFR 416.924b - Age as a factor of evaluation in the sequential evaluation process for children.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... infants. We generally use chronological age (that is, a child's age based on birth date) when we decide... chronological age. When we evaluate the development or linear growth of a child born prematurely, we may use a... sequential evaluation process for children. 416.924b Section 416.924b Employees' Benefits SOCIAL SECURITY...

  13. Some aspects of mathematical and chemical modeling of complex chemical processes

    NASA Technical Reports Server (NTRS)

    Nemes, I.; Botar, L.; Danoczy, E.; Vidoczy, T.; Gal, D.

    1983-01-01

    Some theoretical questions involved in the mathematical modeling of the kinetics of complex chemical processes are discussed. The analysis is carried out for the homogeneous oxidation of ethylbenzene in the liquid phase. Particular attention is given to the determination of the general characteristics of chemical systems from an analysis of mathematical models developed on the basis of linear algebra.

  14. Approximate reduction of linear population models governed by stochastic differential equations: application to multiregional models.

    PubMed

    Sanz, Luis; Alonso, Juan Antonio

    2017-12-01

    In this work we develop approximate aggregation techniques in the context of slow-fast linear population models governed by stochastic differential equations and apply the results to the treatment of populations with spatial heterogeneity. Approximate aggregation techniques allow one to transform a complex system involving many coupled variables, in which there are processes with different time scales, into a simpler reduced model with a smaller number of 'global' variables, in such a way that the dynamics of the former can be approximated by that of the latter. In our model we contemplate a linear fast deterministic process together with a linear slow process in which the parameters are affected by additive noise, and give conditions for the solutions corresponding to positive initial conditions to remain positive for all times. By letting the fast process reach equilibrium we build a reduced system with a smaller number of variables, and provide results relating the asymptotic behaviour of the first- and second-order moments of the population vector for the original and the reduced system. The general technique is illustrated by analysing a multiregional stochastic system in which dispersal is deterministic and the growth rate of the populations in each patch is affected by additive noise.

  15. A high performance linear equation solver on the VPP500 parallel supercomputer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nakanishi, Makoto; Ina, Hiroshi; Miura, Kenichi

    1994-12-31

    This paper describes the implementation of two high performance linear equation solvers developed for the Fujitsu VPP500, a distributed memory parallel supercomputer system. The solvers take advantage of the key architectural features of the VPP500: (1) scalability for an arbitrary number of processors up to 222 processors, (2) flexible data transfer among processors provided by a crossbar interconnection network, (3) vector processing capability on each processor, and (4) overlapped computation and transfer. The general linear equation solver based on the blocked LU decomposition method achieves 120.0 GFLOPS performance with 100 processors in the LINPACK Highly Parallel Computing benchmark.

  16. Statistical Methods for Quality Control of Steel Coils Manufacturing Process using Generalized Linear Models

    NASA Astrophysics Data System (ADS)

    García-Díaz, J. Carlos

    2009-11-01

    Fault detection and diagnosis is an important problem in process engineering: process equipment is subject to malfunctions during operation. Galvanized steel is a value-added product, furnishing effective performance by combining the corrosion resistance of zinc with the strength and formability of steel. Fault detection and diagnosis is an important problem in continuous hot dip galvanizing, and the increasingly stringent quality requirements of the automotive industry have also demanded ongoing efforts in process control to make the process more robust. When faults occur, they change the relationships among the observed variables. This work compares different statistical regression models proposed in the literature for estimating the quality of galvanized steel coils on the basis of short time histories. Data for 26 batches were available. Five variables were selected for monitoring the process: the steel strip velocity, four bath temperatures, and the bath level. The entire data set, consisting of 48 galvanized steel coils, was divided into two sets: the first consisted of 25 conforming coils and the second of 23 nonconforming coils. Logistic regression is a modeling tool in which the dependent variable is categorical; in most applications, the dependent variable is binary. The results show that logistic generalized linear models provide good estimates of coil quality and can be useful for quality control in the manufacturing process.
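
    As an illustration of the modeling approach only (not the authors' variables or data), a binomial GLM with a logit link can be fitted with statsmodels; the simulated covariates and coefficients below are assumptions of the sketch.

    ```python
    import numpy as np
    import statsmodels.api as sm

    # Invented per-coil process variables; the response is 1 for a nonconforming coil.
    rng = np.random.default_rng(3)
    n = 48
    X = np.column_stack([
        rng.normal(1.5, 0.2, n),       # strip velocity
        rng.normal(460.0, 5.0, n),     # a bath temperature
        rng.normal(0.5, 0.05, n),      # bath level
    ])
    eta = 0.8 * (X[:, 1] - 460.0) / 5.0 - 1.0 * (X[:, 0] - 1.5) / 0.2
    y = (rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-eta))).astype(int)

    # Binomial GLM with the (default) logit link: a logistic quality model
    fit = sm.GLM(y, sm.add_constant(X), family=sm.families.Binomial()).fit()
    print(fit.summary())
    ```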

  17. Explicit criteria for prioritization of cataract surgery

    PubMed Central

    Ma Quintana, José; Escobar, Antonio; Bilbao, Amaia

    2006-01-01

    Background Consensus techniques have been used previously to create explicit criteria to prioritize cataract extraction; however, the appropriateness of the intervention was not included explicitly in previous studies. We developed a prioritization tool for cataract extraction according to the RAND method. Methods Criteria were developed using a modified Delphi panel judgment process. A panel of 11 ophthalmologists was assembled. Ratings were analyzed regarding the level of agreement among panelists. We studied the effect of all variables on the final panel score using general linear and logistic regression models. Priority scoring systems were developed by means of optimal scaling and general linear models. The explicit criteria developed were summarized by means of regression tree analysis. Results Eight variables were considered to create the indications. Of the 310 indications that the panel evaluated, 22.6% were considered high priority, 52.3% intermediate priority, and 25.2% low priority. Agreement was reached for 31.9% of the indications and disagreement for 0.3%. Logistic regression and general linear models showed that the preoperative visual acuity of the cataractous eye, visual function, and anticipated visual acuity postoperatively were the most influential variables. Alternative and simple scoring systems were obtained by optimal scaling and general linear models where the previous variables were also the most important. The decision tree also shows the importance of the previous variables and the appropriateness of the intervention. Conclusion Our results showed acceptable validity as an evaluation and management tool for prioritizing cataract extraction. It also provides easy algorithms for use in clinical practice. PMID:16512893

  18. Fuel cell plates with improved arrangement of process channels for enhanced pressure drop across the plates

    DOEpatents

    Spurrier, Francis R.; Pierce, Bill L.; Wright, Maynard K.

    1986-01-01

    A plate for a fuel cell has an arrangement of ribs defining an improved configuration of process gas channels and slots on a surface of the plate which provides a modified serpentine gas flow pattern across the plate surface. The channels are generally linear and arranged parallel to one another, while the spaced slots allow cross-channel flow of process gas in a staggered fashion, creating a plurality of generally mini-serpentine flow paths extending transverse to the longitudinal gas flow along the channels. Adjacent pairs of the channels are interconnected to one another in flow communication. Also, a bipolar plate has the aforementioned process gas channel configuration on one surface and another configuration on the opposite surface. In the other configuration, there are no slots and the gas flow channels have a generally serpentine configuration.

  19. A mathematical theory of learning control for linear discrete multivariable systems

    NASA Technical Reports Server (NTRS)

    Phan, Minh; Longman, Richard W.

    1988-01-01

    When tracking control systems are used in repetitive operations, such as robots in various manufacturing processes, the controller will make the same errors repeatedly. Here, consideration is given to learning controllers that look at the tracking errors in each repetition of the process and adjust the control to decrease these errors in the next repetition. A general formalism is developed for learning control of discrete-time (time-varying or time-invariant) linear multivariable systems. Methods of specifying a desired trajectory (such that the trajectory can actually be performed by the discrete system) are discussed, and learning controllers are developed. Stability criteria are obtained which are relatively easy to use to ensure convergence of the learning process, and proper gain settings are discussed in light of measurement noise and system uncertainties.
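
    A toy illustration of the learning-control idea on an invented first-order SISO system (far simpler than the paper's multivariable formalism): each repetition corrects the stored input with the time-shifted tracking error.

    ```python
    import numpy as np

    # Discrete system x[t+1] = a x[t] + b u[t], y[t] = c x[t]; the learning update
    # u[t] += gamma * e[t+1] converges when |1 - gamma * c * b| < 1.
    a, b, c = 0.9, 0.5, 1.0
    T = 50
    y_des = np.sin(np.linspace(0.0, np.pi, T))   # desired trajectory
    u = np.zeros(T)
    gamma = 0.8                                  # learning gain

    for rep in range(30):                        # repetitions of the same task
        x, y = 0.0, np.zeros(T)
        for t in range(T):
            y[t] = c * x
            x = a * x + b * u[t]
        e = y_des - y
        u[:-1] += gamma * e[1:]                  # u[t] influences y[t+1]

    print(np.abs(e).max())                       # tracking error after learning
    ```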

  20. Linearized Programming of Memristors for Artificial Neuro-Sensor Signal Processing

    PubMed Central

    Yang, Changju; Kim, Hyongsuk

    2016-01-01

    A linearized programming method for memristor-based neural weights is proposed. The memristor is known as an ideal element with which to implement a neural synapse, due to its embedded functions of analog memory and analog multiplication. Its resistance variation with a voltage input is generally a nonlinear function of time, and linearizing the memristance variation with respect to time is very important for the ease of memristor programming. In this paper, a method utilizing an anti-serial architecture for linear programming is proposed. The anti-serial architecture is composed of two memristors with opposite polarities; it linearizes the variation of memristance due to the complementary actions of the two memristors. For programming a memristor, an additional memristor with opposite polarity is employed. The linearization effect of weight programming in an anti-serial architecture is investigated, and the memristor bridge synapse, which is built with two sets of the anti-serial memristor architecture, is taken as an application example of the proposed method. Simulations are performed with memristors of both the linear drift model and a nonlinear model. PMID:27548186

  1. Linearized Programming of Memristors for Artificial Neuro-Sensor Signal Processing.

    PubMed

    Yang, Changju; Kim, Hyongsuk

    2016-08-19

    A linearized programming method for memristor-based neural weights is proposed. The memristor is known as an ideal element with which to implement a neural synapse, due to its embedded functions of analog memory and analog multiplication. Its resistance variation with a voltage input is generally a nonlinear function of time, and linearizing the memristance variation with respect to time is very important for the ease of memristor programming. In this paper, a method utilizing an anti-serial architecture for linear programming is proposed. The anti-serial architecture is composed of two memristors with opposite polarities; it linearizes the variation of memristance due to the complementary actions of the two memristors. For programming a memristor, an additional memristor with opposite polarity is employed. The linearization effect of weight programming in an anti-serial architecture is investigated, and the memristor bridge synapse, which is built with two sets of the anti-serial memristor architecture, is taken as an application example of the proposed method. Simulations are performed with memristors of both the linear drift model and a nonlinear model.

  2. Simplified model to describe the dissociative recombination of linear polyatomic ions of astrophysical interest

    NASA Astrophysics Data System (ADS)

    Fonseca Dos Santos, Samantha; Douguet, Nicolas; Kokoouline, Viatcheslav; Orel, Ann

    2013-05-01

    We will present theoretical results on the dissociative recombination (DR) of the linear polyatomic ions HCNH+, HCO+ and N2H+. Besides their astrophysical importance, they also share the characteristic that at low electron-impact energies their DR proceeds via the indirect DR mechanism. We apply a general simplified model, successfully implemented to treat the DR process of the highly symmetric non-linear molecules H3+, CH3+, H3O+ and NH4+, to calculate cross sections and DR rates for these ions. The model is based on multichannel quantum defect theory and accounts for all the main ingredients of indirect DR. New perspectives on the dissociative recombination of HCO+ will also be discussed, including the possible role of HOC+ in storage ring experimental results. This work is supported by the DOE Office of Basic Energy Science and the National Science Foundation, Grant Nos. PHY-11-60611 and PHY-10-68785.

  3. Co-Prime Frequency and Aperture Design for HF Surveillance, Wideband Radar Imaging, and Nonstationary Array Processing

    DTIC Science & Technology

    2018-03-01

    Co-prime frequency and aperture offset designs are considered; in particular, the proposed CA-CFO design is compared with a uniform linear array with uniform frequency offset (ULA-UFO). [Only a fragment of this report is recoverable; the remainder is title-page and table-of-contents residue: Grant No. N00014-13-1-0061, an executive summary, generalized co-prime array design, and wideband radar imaging.]

  4. Optimal Estimation of Clock Values and Trends from Finite Data

    NASA Technical Reports Server (NTRS)

    Greenhall, Charles

    2005-01-01

    We show how to solve two problems of optimal linear estimation from a finite set of phase data. Clock noise is modeled as a stochastic process with stationary dth increments. The covariance properties of such a process are contained in the generalized autocovariance function (GACV). We set up two principles for optimal estimation: with the help of the GACV, these principles lead to a set of linear equations for the regression coefficients and some auxiliary parameters. The mean square errors of the estimators are easily calculated. The method can be used to check the results of other methods and to find good suboptimal estimators based on a small subset of the available data.
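
    As a generic illustration (not the paper's GACV construction), once the data covariance is known, the best linear unbiased estimate of clock offset and drift follows from the generalized least squares normal equations; the covariance model and numbers below are invented.

    ```python
    import numpy as np

    # Toy phase data: offset + drift plus correlated noise with covariance C.
    rng = np.random.default_rng(4)
    t = np.arange(20.0)
    C = 1e-2 * 0.1 ** np.abs(np.subtract.outer(t, t))    # assumed AR(1)-like covariance
    y = 3e-9 + 1e-10 * t + rng.multivariate_normal(np.zeros_like(t), C)

    X = np.column_stack([np.ones_like(t), t])            # offset and drift regressors
    Ci = np.linalg.inv(C)
    beta = np.linalg.solve(X.T @ Ci @ X, X.T @ Ci @ y)   # GLS estimate
    cov_beta = np.linalg.inv(X.T @ Ci @ X)               # its mean-square errors
    print(beta, np.sqrt(np.diag(cov_beta)))
    ```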

  5. Estimation for general birth-death processes

    PubMed Central

    Crawford, Forrest W.; Minin, Vladimir N.; Suchard, Marc A.

    2013-01-01

    Birth-death processes (BDPs) are continuous-time Markov chains that track the number of “particles” in a system over time. While widely used in population biology, genetics and ecology, statistical inference of the instantaneous particle birth and death rates remains largely limited to restrictive linear BDPs in which per-particle birth and death rates are constant. Researchers often observe the number of particles at discrete times, necessitating data augmentation procedures such as expectation-maximization (EM) to find maximum likelihood estimates. For BDPs on finite state-spaces, there are powerful matrix methods for computing the conditional expectations needed for the E-step of the EM algorithm. For BDPs on infinite state-spaces, closed-form solutions for the E-step are available for some linear models, but most previous work has resorted to time-consuming simulation. Remarkably, we show that the E-step conditional expectations can be expressed as convolutions of computable transition probabilities for any general BDP with arbitrary rates. This important observation, along with a convenient continued fraction representation of the Laplace transforms of the transition probabilities, allows for novel and efficient computation of the conditional expectations for all BDPs, eliminating the need for truncation of the state-space or costly simulation. We use this insight to derive EM algorithms that yield maximum likelihood estimation for general BDPs characterized by various rate models, including generalized linear models. We show that our Laplace convolution technique outperforms competing methods when they are available and demonstrate a technique to accelerate EM algorithm convergence. We validate our approach using synthetic data and then apply our methods to cancer cell growth and estimation of mutation parameters in microsatellite evolution. PMID:25328261

  6. Estimation for general birth-death processes.

    PubMed

    Crawford, Forrest W; Minin, Vladimir N; Suchard, Marc A

    2014-04-01

    Birth-death processes (BDPs) are continuous-time Markov chains that track the number of "particles" in a system over time. While widely used in population biology, genetics and ecology, statistical inference of the instantaneous particle birth and death rates remains largely limited to restrictive linear BDPs in which per-particle birth and death rates are constant. Researchers often observe the number of particles at discrete times, necessitating data augmentation procedures such as expectation-maximization (EM) to find maximum likelihood estimates. For BDPs on finite state-spaces, there are powerful matrix methods for computing the conditional expectations needed for the E-step of the EM algorithm. For BDPs on infinite state-spaces, closed-form solutions for the E-step are available for some linear models, but most previous work has resorted to time-consuming simulation. Remarkably, we show that the E-step conditional expectations can be expressed as convolutions of computable transition probabilities for any general BDP with arbitrary rates. This important observation, along with a convenient continued fraction representation of the Laplace transforms of the transition probabilities, allows for novel and efficient computation of the conditional expectations for all BDPs, eliminating the need for truncation of the state-space or costly simulation. We use this insight to derive EM algorithms that yield maximum likelihood estimation for general BDPs characterized by various rate models, including generalized linear models. We show that our Laplace convolution technique outperforms competing methods when they are available and demonstrate a technique to accelerate EM algorithm convergence. We validate our approach using synthetic data and then apply our methods to cancer cell growth and estimation of mutation parameters in microsatellite evolution.
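
    For intuition only (this is not part of the paper's EM machinery), a general BDP with arbitrary state-dependent rates can be simulated exactly with the Gillespie algorithm; the logistic-growth rates below are illustrative.

    ```python
    import numpy as np

    # Exact stochastic simulation of a BDP with per-state rates birth(n), death(n).
    def simulate_bdp(n0, t_max, birth, death, seed=0):
        rng = np.random.default_rng(seed)
        t, n = 0.0, n0
        times, states = [0.0], [n0]
        while t < t_max and n > 0:
            lam, mu = birth(n), death(n)
            total = lam + mu
            if total == 0.0:
                break
            t += rng.exponential(1.0 / total)            # waiting time to next event
            n += 1 if rng.uniform() < lam / total else -1
            times.append(t)
            states.append(n)
        return np.array(times), np.array(states)

    ts, ns = simulate_bdp(
        10, 50.0,
        birth=lambda n: 0.5 * n,                          # linear birth
        death=lambda n: 0.1 * n + 0.004 * n * n)          # density-dependent death
    print(ns[-1])
    ```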

  7. Process Setting through General Linear Model and Response Surface Method

    NASA Astrophysics Data System (ADS)

    Senjuntichai, Angsumalin

    2010-10-01

    The objective of this study is to improve the efficiency of the flow-wrap packaging process in the soap industry through the reduction of defectives. At the 95% confidence level, with regression analysis, the sealing temperature and the temperatures of the upper and lower crimpers are found to be the significant factors for the flow-wrap process with respect to the number/percentage of defectives. Twenty-seven experiments were designed and performed according to three levels of each controllable factor. With the general linear model (GLM), the suggested values for the sealing temperature and the temperatures of the upper and lower crimpers are 185, 85 and 85 °C, respectively, while the response surface method (RSM) places the optimal process conditions at 186, 89 and 88 °C. Because the two methods make different assumptions about the relationship between the percentage of defectives and the three temperature parameters, their suggested conditions differ slightly. Fortunately, the estimated percentage of defectives of 5.51% under the GLM process condition and the predicted percentage of 4.62% under the RSM condition are not significantly different; however, at the 95% confidence level, the percentage of defectives under the RSM condition can be as low as approximately 2.16%, below that under the GLM condition, in accordance with its wider variation. Lastly, the percentages of defectives under the conditions suggested by GLM and RSM are reduced by 55.81% and 62.95%, respectively.

  8. Beyond the spectral theorem: Spectrally decomposing arbitrary functions of nondiagonalizable operators

    NASA Astrophysics Data System (ADS)

    Riechers, Paul M.; Crutchfield, James P.

    2018-06-01

    Nonlinearities in finite dimensions can be linearized by projecting them into infinite dimensions. Unfortunately, the familiar linear operator techniques that one would then hope to use often fail since the operators cannot be diagonalized. The curse of nondiagonalizability also plays an important role even in finite-dimensional linear operators, leading to analytical impediments that occur across many scientific domains. We show how to circumvent it via two tracks. First, using the well-known holomorphic functional calculus, we develop new practical results about spectral projection operators and the relationship between left and right generalized eigenvectors. Second, we generalize the holomorphic calculus to a meromorphic functional calculus that can decompose arbitrary functions of nondiagonalizable linear operators in terms of their eigenvalues and projection operators. This simultaneously simplifies and generalizes functional calculus so that it is readily applicable to analyzing complex physical systems. Together, these results extend the spectral theorem of normal operators to a much wider class, including circumstances in which poles and zeros of the function coincide with the operator spectrum. By allowing the direct manipulation of individual eigenspaces of nonnormal and nondiagonalizable operators, the new theory avoids spurious divergences. As such, it yields novel insights and closed-form expressions across several areas of physics in which nondiagonalizable dynamics arise, including memoryful stochastic processes, open nonunitary quantum systems, and far-from-equilibrium thermodynamics. The technical contributions include the first full treatment of arbitrary powers of an operator, highlighting the special role of the zero eigenvalue. Furthermore, we show that the Drazin inverse, previously only defined axiomatically, can be derived as the negative-one power of singular operators within the meromorphic functional calculus and we give a new general method to construct it. We provide new formulae for constructing spectral projection operators and delineate the relations among projection operators, eigenvectors, and left and right generalized eigenvectors. By way of illustrating its application, we explore several, rather distinct examples. First, we analyze stochastic transition operators in discrete and continuous time. Second, we show that nondiagonalizability can be a robust feature of a stochastic process, induced even by simple counting. As a result, we directly derive distributions of the time-dependent Poisson process and point out that nondiagonalizability is intrinsic to it and the broad class of hidden semi-Markov processes. Third, we show that the Drazin inverse arises naturally in stochastic thermodynamics and that applying the meromorphic functional calculus provides closed-form solutions for the dynamics of key thermodynamic observables. Finally, we draw connections to the Ruelle-Frobenius-Perron and Koopman operators for chaotic dynamical systems and propose how to extract eigenvalues from a time-series.

  9. Optimal generalized multistep integration formulae for real-time digital simulation

    NASA Technical Reports Server (NTRS)

    Moerder, D. D.; Halyo, N.

    1985-01-01

    The problem of discretizing a dynamical system for real-time digital simulation is considered. Treating the system and its simulation as stochastic processes leads to a statistical characterization of simulator fidelity. A plant discretization procedure based on an efficient matrix generalization of explicit linear multistep discrete integration formulae is introduced, which minimizes a weighted sum of the mean squared steady-state and transient error between the system and simulator outputs.
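
    As a plain illustration of the explicit linear multistep family being generalized (this is the classical two-step Adams-Bashforth rule with scalar coefficients, not the paper's fidelity-optimized matrix formula):

    ```python
    import numpy as np

    # Two-step Adams-Bashforth discretization of the linear system x' = A x.
    A = np.array([[0.0, 1.0], [-4.0, -0.4]])   # a lightly damped oscillator
    h, n = 0.01, 1000
    x = np.zeros((n, 2))
    x[0] = [1.0, 0.0]
    x[1] = x[0] + h * (A @ x[0])               # bootstrap the first step with Euler
    for k in range(1, n - 1):
        x[k + 1] = x[k] + h * (1.5 * (A @ x[k]) - 0.5 * (A @ x[k - 1]))
    print(x[-1])                                # state after n steps
    ```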

  10. On the linear relation between the mean and the standard deviation of a response time distribution.

    PubMed

    Wagenmakers, Eric-Jan; Brown, Scott

    2007-07-01

    Although it is generally accepted that the spread of a response time (RT) distribution increases with the mean, the precise nature of this relation remains relatively unexplored. The authors show that in several descriptive RT distributions, the standard deviation increases linearly with the mean. Results from a wide range of tasks from different experimental paradigms support a linear relation between RT mean and RT standard deviation. Both R. Ratcliff's (1978) diffusion model and G. D. Logan's (1988) instance theory of automatization provide explanations for this linear relation. The authors identify and discuss 3 specific boundary conditions for the linear law to hold. The law constrains RT models and supports the use of the coefficient of variation to (a) compare variability while controlling for differences in baseline speed of processing and (b) assess whether changes in performance with practice are due to quantitative speedup or qualitative reorganization. Copyright 2007 APA.
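
    A quick illustration of checking the linear law on simulated data (the gamma-distributed RTs and condition shifts are invented; by construction the standard deviation here grows linearly with the mean):

    ```python
    import numpy as np

    # Simulate RT-like data for several conditions and regress SD on mean.
    rng = np.random.default_rng(5)
    means, sds = [], []
    for shift in np.linspace(300.0, 700.0, 8):     # hypothetical conditions (ms)
        rt = shift + rng.gamma(shape=2.0, scale=0.15 * shift, size=500)
        means.append(rt.mean())
        sds.append(rt.std())

    b, a = np.polyfit(means, sds, 1)               # slope, intercept
    print(f"SD ~ {a:.1f} + {b:.2f} * mean")
    ```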

  11. An extended basis inexact shift-invert Lanczos for the efficient solution of large-scale generalized eigenproblems

    NASA Astrophysics Data System (ADS)

    Rewieński, M.; Lamecki, A.; Mrozowski, M.

    2013-09-01

    This paper proposes a technique, based on the Inexact Shift-Invert Lanczos (ISIL) method with Inexact Jacobi Orthogonal Component Correction (IJOCC) refinement, and a preconditioned conjugate-gradient (PCG) linear solver with multilevel preconditioner, for finding several eigenvalues for generalized symmetric eigenproblems. Several eigenvalues are found by constructing (with the ISIL process) an extended projection basis. Presented results of numerical experiments confirm the technique can be effectively applied to challenging, large-scale problems characterized by very dense spectra, such as resonant cavities with spatial dimensions which are large with respect to wavelengths of the resonating electromagnetic fields. It is also shown that the proposed scheme based on inexact linear solves delivers superior performance, as compared to methods which rely on exact linear solves, indicating tremendous potential of the 'inexact solve' concept. Finally, the scheme which generates an extended projection basis is found to provide a cost-efficient alternative to classical deflation schemes when several eigenvalues are computed.
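
    For comparison with standard practice, SciPy's shift-invert mode for symmetric generalized eigenproblems factorizes (A - sigma*B) exactly; the sketch below is that classical baseline, not the inexact ISIL/IJOCC scheme proposed in the paper.

    ```python
    import numpy as np
    from scipy import sparse
    from scipy.sparse.linalg import eigsh

    # Generalized symmetric eigenproblem A x = lambda B x; with sigma given,
    # eigsh returns the eigenvalues nearest the shift (shift-invert Lanczos).
    n = 1000
    main = 2.0 * np.ones(n)
    off = -1.0 * np.ones(n - 1)
    A = sparse.diags([off, main, off], [-1, 0, 1], format="csc")
    B = sparse.identity(n, format="csc")

    vals, vecs = eigsh(A, k=5, M=B, sigma=0.0, which="LM")   # 5 eigenvalues near 0
    print(np.sort(vals))
    ```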

  12. Quantum description of light propagation in generalized media

    NASA Astrophysics Data System (ADS)

    Häyrynen, Teppo; Oksanen, Jani

    2016-02-01

    Linear quantum input-output relation based models are widely applied to describe light propagation in a lossy medium. The details of the interaction and the associated added noise depend on whether the device is configured to operate as an amplifier or an attenuator. Using the traveling wave (TW) approach, we generalize the linear material model to simultaneously account for both the emission and absorption processes and to have point-wise defined noise field statistics and intensity-dependent interaction strengths. Thus, our approach describes the quantum input-output relations of linear media with net attenuation, amplification or transparency without pre-selection of the operation point. The TW approach is then applied to investigate materials at thermal equilibrium, inverted materials, the transparency limit where losses are compensated, and saturating amplifiers. We also apply the approach to investigate media in nonuniform states, which can arise, e.g., from a temperature gradient over the medium or a position-dependent inversion of the amplifier. Furthermore, using the generalized model we investigate devices with intensity-dependent interactions and show how an initial thermal field transforms into a field having coherent statistics due to gain saturation.

  13. Threat Appeals: The Fear-Persuasion Relationship is Linear and Curvilinear.

    PubMed

    Dillard, James Price; Li, Ruobing; Huang, Yan

    2017-11-01

    Drive theory may be seen as the first scientific theory of health and risk communication. However, its prediction of a curvilinear association between fear and persuasion is generally held to be incorrect. A close rereading of Hovland et al. reveals that within- and between-persons processes were conflated. Using a message that advocated obtaining a screening colonoscopy, this study (N = 259) tested both forms of the inverted-U hypothesis. In the between-persons data, analyses revealed a linear effect that was consistent with earlier investigations. However, an inverted-U relationship emerged in the within-persons data. Hence, the relationship between fear and persuasion is linear or curvilinear depending on the level of analysis.

  14. The analysis of decimation and interpolation in the linear canonical transform domain.

    PubMed

    Xu, Shuiqing; Chai, Yi; Hu, Youqiang; Huang, Lei; Feng, Li

    2016-01-01

    Decimation and interpolation are the two basic building blocks of multirate digital signal processing systems. As the linear canonical transform (LCT) has been shown to be a powerful tool for optics and signal processing, it is worthwhile and interesting to analyze decimation and interpolation in the LCT domain. In this paper, the definition of the equivalent filter in the LCT domain is given first. Then, by applying the definition, the direct implementation structure and polyphase networks for the decimator and interpolator in the LCT domain are proposed. Finally, the perfect reconstruction expressions for differential filters in the LCT domain are presented as an application. The theorems proposed in this study are the basis for generalizations of multirate signal processing in the LCT domain, which can advance filter bank theory in the LCT domain.
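
    As a baseline illustration, classical (Fourier-domain) decimation and interpolation via polyphase filtering are available in SciPy; the LCT-domain equivalent-filter and polyphase structures of the paper are not implemented in this sketch.

    ```python
    import numpy as np
    from scipy.signal import resample_poly

    # A two-tone test signal sampled at 1 kHz.
    fs = 1000
    t = np.arange(0, 1, 1 / fs)
    x = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)

    y_dec = resample_poly(x, up=1, down=4)   # decimate by 4 (anti-alias filtered)
    y_int = resample_poly(x, up=3, down=1)   # interpolate by 3
    print(len(x), len(y_dec), len(y_int))
    ```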

  15. [Analysis of variance of repeated data measured by water maze with SPSS].

    PubMed

    Qiu, Hong; Jin, Guo-qin; Jin, Ru-feng; Zhao, Wei-kang

    2007-01-01

    To introduce a method of analyzing repeated data measured by the water maze with SPSS 11.0, and to offer a reference statistical method to clinical and basic medicine researchers who use repeated-measures designs. The repeated-measures and multivariate analysis of variance (ANOVA) procedures of the general linear model in SPSS were used, with pairwise comparisons among different groups and among different measurement times. First, Mauchly's test of sphericity should be used to judge whether there are relations among the repeatedly measured data. If any (P

  16. Exact Solutions of Linear Reaction-Diffusion Processes on a Uniformly Growing Domain: Criteria for Successful Colonization

    PubMed Central

    Simpson, Matthew J

    2015-01-01

    Many processes during embryonic development involve transport and reaction of molecules, or transport and proliferation of cells, within growing tissues. Mathematical models of such processes usually take the form of a reaction-diffusion partial differential equation (PDE) on a growing domain. Previous analyses of such models have mainly involved solving the PDEs numerically. Here, we present a framework for calculating the exact solution of a linear reaction-diffusion PDE on a growing domain. We derive an exact solution for a general class of one-dimensional linear reaction-diffusion process on 0

  17. Exact solutions of linear reaction-diffusion processes on a uniformly growing domain: criteria for successful colonization.

    PubMed

    Simpson, Matthew J

    2015-01-01

    Many processes during embryonic development involve transport and reaction of molecules, or transport and proliferation of cells, within growing tissues. Mathematical models of such processes usually take the form of a reaction-diffusion partial differential equation (PDE) on a growing domain. Previous analyses of such models have mainly involved solving the PDEs numerically. Here, we present a framework for calculating the exact solution of a linear reaction-diffusion PDE on a growing domain. We derive an exact solution for a general class of one-dimensional linear reaction-diffusion process on 0

  18. Development of a piecewise linear omnidirectional 3D image registration method

    NASA Astrophysics Data System (ADS)

    Bae, Hyunsoo; Kang, Wonjin; Lee, SukGyu; Kim, Youngwoo

    2016-12-01

    This paper proposes a new piecewise linear omnidirectional image registration method. The proposed method segments an image captured by multiple cameras into 2D segments defined by feature points of the image and then stitches each segment geometrically by considering the inclination of the segment in 3D space. Depending on the intended use of image registration, the proposed method can be used to improve image registration accuracy or to reduce the computation time, because the trade-off between computation time and registration accuracy can be controlled. In general, nonlinear image registration methods have been used in 3D omnidirectional image registration processes to reduce the image distortion introduced by camera lenses. The proposed method depends on a linear transformation process for omnidirectional image registration; it can therefore enhance the effectiveness of the geometry recognition process, increase image registration accuracy by increasing the number of cameras or feature points of each image, increase the image registration speed by reducing the number of cameras or feature points of each image, and provide simultaneous information on the shapes and colors of captured objects.
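
    A sketch of the linear building block only (the correspondences are invented and the paper's 3D-inclination handling is omitted): a least-squares 2D affine transform estimated from feature correspondences, one such transform per image segment.

    ```python
    import numpy as np

    # Solve for the 2x3 affine map that best sends src points to dst points.
    def fit_affine(src, dst):
        n = len(src)
        A = np.zeros((2 * n, 6))
        A[0::2, 0:2], A[0::2, 2] = src, 1.0    # x' = a11 x + a12 y + tx
        A[1::2, 3:5], A[1::2, 5] = src, 1.0    # y' = a21 x + a22 y + ty
        p, *_ = np.linalg.lstsq(A, dst.reshape(-1), rcond=None)
        return p.reshape(2, 3)

    src = np.array([[0, 0], [1, 0], [0, 1], [1, 1]], float)
    dst = src @ np.array([[1.02, 0.05], [-0.03, 0.98]]).T + [0.1, -0.2]
    print(fit_affine(src, dst))                # recovers the generating transform
    ```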

  19. LEDGF/p75 Deficiency Increases Deletions at the HIV-1 cDNA Ends.

    PubMed

    Bueno, Murilo T D; Reyes, Daniel; Llano, Manuel

    2017-09-15

    Processing of unintegrated linear HIV-1 cDNA by the host DNA repair system results in its degradation and/or circularization. As a consequence, deficient viral cDNA integration generally leads to an increase in the levels of HIV-1 cDNA circles containing one or two long terminal repeats (LTRs). Intriguingly, impaired HIV-1 integration in LEDGF/p75-deficient cells does not result in a corresponding increase in viral cDNA circles. We postulate that increased degradation of unintegrated linear viral cDNA in cells lacking the lens epithelium-derived growth factor (LEDGF/p75) accounts for this inconsistency. To evaluate this hypothesis, we characterized the nucleotide sequence spanning 2-LTR junctions isolated from LEDGF/p75-deficient and control cells. LEDGF/p75 deficiency resulted in a significant increase in the frequency of 2-LTRs harboring large deletions. Of note, these deletions were dependent on the 3' processing activity of integrase and were not originated by aberrant reverse transcription. Our findings suggest a novel role of LEDGF/p75 in protecting the unintegrated, 3'-processed linear HIV-1 cDNA from exonucleolytic degradation.

  20. Stochastic search, optimization and regression with energy applications

    NASA Astrophysics Data System (ADS)

    Hannah, Lauren A.

    Designing clean energy systems will be an important task over the next few decades. One of the major roadblocks is a lack of mathematical tools to economically evaluate those energy systems. However, solutions to these mathematical problems are also of interest to the operations research and statistical communities in general. This thesis studies three problems that are of interest to the energy community itself or provide support for solution methods: R&D portfolio optimization, nonparametric regression and stochastic search with an observable state variable. First, we consider the one-stage R&D portfolio optimization problem to avoid the sequential decision process associated with the multi-stage version. The one-stage problem is still difficult because of a non-convex, combinatorial decision space and a non-convex objective function. We propose a heuristic solution method that uses marginal project values, which depend on the selected portfolio, to create a linear objective function. In conjunction with the 0-1 decision space, this new problem can be solved as a knapsack linear program. This method scales well to large decision spaces. We also propose an alternate, provably convergent algorithm that does not exploit problem structure. These methods are compared on a solid oxide fuel cell R&D portfolio problem. Next, we propose Dirichlet Process mixtures of Generalized Linear Models (DP-GLM), a new method of nonparametric regression that accommodates continuous and categorical inputs, and responses that can be modeled by a generalized linear model. We prove conditions for the asymptotic unbiasedness of the DP-GLM regression mean function estimate. We also give examples for when those conditions hold, including models for compactly supported continuous distributions and a model with continuous covariates and categorical response. We empirically analyze the properties of the DP-GLM and why it provides better results than existing Dirichlet process mixture regression models. We evaluate DP-GLM on several data sets, comparing it to modern methods of nonparametric regression like CART, Bayesian trees and Gaussian processes. Compared to existing techniques, the DP-GLM provides a single model (and corresponding inference algorithms) that performs well in many regression settings. Finally, we study convex stochastic search problems where a noisy objective function value is observed after a decision is made. There are many stochastic search problems whose behavior depends on an exogenous state variable which affects the shape of the objective function. Currently, there is no general purpose algorithm to solve this class of problems. We use nonparametric density estimation to take observations from the joint state-outcome distribution and use them to infer the optimal decision for a given query state. We propose two solution methods that depend on the problem characteristics: function-based and gradient-based optimization. We examine two weighting schemes, kernel-based weights and Dirichlet process-based weights, for use with the solution methods. The weights and solution methods are tested on a synthetic multi-product newsvendor problem and the hour-ahead wind commitment problem. Our results show that in some cases Dirichlet process weights offer substantial benefits over kernel-based weights and, more generally, that nonparametric estimation methods provide good solutions to otherwise intractable problems.
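
    A sketch of the knapsack linear-program step described above (project values and costs are invented; in the heuristic, the marginal values would be recomputed as the selected portfolio changes), using the LP relaxation solvable by scipy.optimize.linprog:

    ```python
    import numpy as np
    from scipy.optimize import linprog

    # Maximize sum(v_i x_i) s.t. sum(c_i x_i) <= budget, 0 <= x_i <= 1.
    values = np.array([4.0, 2.5, 6.0, 1.0, 3.5])   # marginal project values
    costs = np.array([2.0, 1.0, 3.0, 0.5, 2.5])    # project costs
    budget = 5.0

    res = linprog(c=-values,                       # linprog minimizes, so negate
                  A_ub=costs[None, :], b_ub=[budget],
                  bounds=[(0, 1)] * len(values))
    print(res.x, -res.fun)   # a knapsack LP has at most one fractional project
    ```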

  1. EMG prediction from Motor Cortical Recordings via a Non-Negative Point Process Filter

    PubMed Central

    Nazarpour, Kianoush; Ethier, Christian; Paninski, Liam; Rebesco, James M.; Miall, R. Chris; Miller, Lee E.

    2012-01-01

    A constrained point process filtering mechanism for prediction of electromyogram (EMG) signals from multi-channel neural spike recordings is proposed here. Filters from the Kalman family are inherently sub-optimal in dealing with non-Gaussian observations, or a state evolution that deviates from the Gaussianity assumption. To address these limitations, we modeled the non-Gaussian neural spike train observations using a generalized linear model (GLM) that encapsulates covariates of neural activity, including the neurons’ own spiking history, concurrent ensemble activity, and extrinsic covariates (EMG signals). In order to predict the envelopes of EMGs, we reformulated the Kalman filter (KF) in an optimization framework and utilized a non-negativity constraint. This structure characterizes the non-linear correspondence between neural activity and EMG signals reasonably well. The EMGs were recorded from twelve forearm and hand muscles of a behaving monkey during a grip-force task. For the case of limited training data, the constrained point process filter improved the prediction accuracy when compared to a conventional Wiener cascade filter (a linear causal filter followed by a static non-linearity) for different bin sizes and delays between input spikes and EMG output. For longer training data sets, the results of the proposed filter and those of the Wiener cascade filter were comparable. PMID:21659018
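
    The role of the non-negativity constraint can be illustrated with a far simpler decoder than the paper's constrained point-process filter: a non-negative least-squares map from binned spike counts to the EMG envelope. The sketch below is a stand-in under stated assumptions (synthetic Poisson spike counts, scipy's nnls in place of the reformulated Kalman filter).

        import numpy as np
        from scipy.optimize import nnls

        # Toy data: binned spike counts from 12 "neurons" over 1000 time bins
        rng = np.random.default_rng(1)
        T, N = 1000, 12
        spikes = rng.poisson(2.0, size=(T, N)).astype(float)
        true_w = rng.uniform(0, 1, N)
        emg = np.clip(spikes @ true_w + rng.normal(0, 0.1, T), 0, None)  # envelopes are non-negative

        # Non-negative decoding weights: minimize ||spikes @ w - emg|| subject to w >= 0
        w, rnorm = nnls(spikes, emg)
        print("residual norm:", rnorm)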

  2. Solvent resistant thermoplastic aromatic poly(imidesulfone) and process for preparing same

    NASA Technical Reports Server (NTRS)

    St.clair, T. L.; Yamaki, D. A. (Inventor)

    1983-01-01

    A process for preparing a thermoplastic poly(imidesulfone) is disclosed. The resulting material has thermoplastic properties, which are generally associated with polysulfones but not polyimides, and solvent resistance, which is generally associated with polyimides but not polysulfones. This system is processable in the 250 to 350 C range for molding, adhesive and laminating applications. This unique thermoplastic poly(imidesulfone) is obtained by incorporating an aromatic sulfone moiety into the backbone of an aromatic linear polyimide by dissolving a quantity of 3,3',4,4'-benzophenonetetracarboxylic dianhydride (BTDA) in a solution of 3,3'-diaminodiphenylsulfone and bis(2-methoxyethyl)ether, precipitating the reaction product in water, filtering and drying the recovered poly(amide-acid sulfone), and converting it to the poly(imidesulfone) by heating.

  3. Process for preparing solvent resistant, thermoplastic aromatic poly(imidesulfone)

    NASA Technical Reports Server (NTRS)

    St.clair, T. L.; Yamaki, D. A. (Inventor)

    1984-01-01

    A process for preparing a thermoplastic poly(imidesulfone) is disclosed. The resulting material has thermoplastic properties, which are generally associated with polysulfones but not polyimides, and solvent resistance, which is generally associated with polyimides but not polysulfones. This system is processable in the 250 to 350 C range for molding, adhesive and laminating applications. This unique thermoplastic poly(imidesulfone) is obtained by incorporating an aromatic sulfone moiety into the backbone of an aromatic linear polyimide by dissolving a quantity of 3,3',4,4'-benzophenonetetracarboxylic dianhydride (BTDA) in a solution of 3,3'-diaminodiphenylsulfone and bis(2-methoxyethyl)ether, precipitating the reaction product in water, filtering and drying the recovered poly(amide-acid sulfone), and converting it to the poly(imidesulfone) by heating.

  4. Generalized Nonlinear Chirp Scaling Algorithm for High-Resolution Highly Squint SAR Imaging.

    PubMed

    Yi, Tianzhu; He, Zhihua; He, Feng; Dong, Zhen; Wu, Manqing

    2017-11-07

    This paper presents a modified approach for high-resolution, highly squint synthetic aperture radar (SAR) data processing. Several nonlinear chirp scaling (NLCS) algorithms have been proposed to handle the azimuth variance of the frequency modulation rates that is caused by the linear range walk correction (LRWC). However, the azimuth depth of focusing (ADOF) is not handled well by these algorithms. The generalized nonlinear chirp scaling (GNLCS) algorithm proposed in this paper uses the method of series reversion (MSR) to improve the ADOF and focusing precision. It also introduces a high order processing kernel to avoid range block processing. Simulation results show that the GNLCS algorithm can enlarge the ADOF and improve the focusing precision for high-resolution highly squint SAR data.

  5. The neural mechanisms of word order processing revisited: electrophysiological evidence from Japanese.

    PubMed

    Wolff, Susann; Schlesewsky, Matthias; Hirotani, Masako; Bornkessel-Schlesewsky, Ina

    2008-11-01

    We present two ERP studies on the processing of word order variations in Japanese, a language that is suited to shedding further light on the implications of word order freedom for neurocognitive approaches to sentence comprehension. Experiment 1 used auditory presentation and revealed that initial accusative objects elicit increased processing costs in comparison to initial subjects (in the form of a transient negativity) only when followed by a prosodic boundary. A similar effect was observed with visual presentation in Experiment 2, though only for accusative and not for dative objects. These results support a relational account of word order processing, in which the costs of comprehending an object-initial word order are determined by the linearization properties of the initial object in relation to the linearization properties of possible upcoming arguments. In the absence of a prosodic boundary, the possibility of subject omission in Japanese renders it likely that the initial accusative is the only argument in the clause. Hence, no upcoming arguments are expected and no linearization problem can arise. A prosodic boundary or visual segmentation, by contrast, indicates an object-before-subject word order, thereby leading to a mismatch between argument "prominence" (e.g. in terms of thematic roles) and linear order. This mismatch is alleviated when the initial object is highly prominent itself (e.g. in the case of a dative, which can bear the higher-ranking thematic role in a two-argument relation). We argue that the processing mechanism at work here can be distinguished from more general aspects of "dependency processing" in object-initial sentences.

  6. Generalization and capacity of extensively large two-layered perceptrons.

    PubMed

    Rosen-Zvi, Michal; Engel, Andreas; Kanter, Ido

    2002-09-01

    The generalization ability and storage capacity of a treelike two-layered neural network with a number of hidden units scaling as the input dimension is examined. The mapping from the input to the hidden layer is via Boolean functions; the mapping from the hidden layer to the output is done by a perceptron. The analysis is within the replica framework where an order parameter characterizing the overlap between two networks in the combined space of Boolean functions and hidden-to-output couplings is introduced. The maximal capacity of such networks is found to scale linearly with the logarithm of the number of Boolean functions per hidden unit. The generalization process exhibits a first-order phase transition from poor to perfect learning for the case of discrete hidden-to-output couplings. The critical number of examples per input dimension, alpha(c), at which the transition occurs, again scales linearly with the logarithm of the number of Boolean functions. In the case of continuous hidden-to-output couplings, the generalization error decreases according to the same power law as for the perceptron, with the prefactor being different.

  7. The Islamic State Battle Plan: Press Release Natural Language Processing

    DTIC Science & Technology

    2016-06-01

    Keywords: natural language processing, text mining, corpus, generalized linear model, cascade, R Shiny, leaflet, data visualization. Abbreviations: TDM (term-document matrix), TF (term frequency), TF-IDF (term frequency-inverse document frequency), tm (the R text mining package). Cited software: Feinerer I, Hornik K (2015) Text Mining Package "tm," Version 0.6-2, https://cran.r-project.org/web/packages/tm/tm.pdf; the R leaflet package, https://cran.r-project.org/package=leaflet.

  8. Predicting oropharyngeal tumor volume throughout the course of radiation therapy from pretreatment computed tomography data using general linear models.

    PubMed

    Yock, Adam D; Rao, Arvind; Dong, Lei; Beadle, Beth M; Garden, Adam S; Kudchadker, Rajat J; Court, Laurence E

    2014-05-01

    The purpose of this work was to develop and evaluate the accuracy of several predictive models of variation in tumor volume throughout the course of radiation therapy. Nineteen patients with oropharyngeal cancers were imaged daily with CT-on-rails for image-guided alignment per an institutional protocol. The daily volumes of 35 tumors in these 19 patients were determined and used to generate (1) a linear model in which tumor volume changed at a constant rate, (2) a general linear model that utilized the power-fit relationship between the daily and initial tumor volumes, and (3) a functional general linear model that identified and exploited the primary modes of variation between time series describing the changing tumor volumes. Primary and nodal tumor volumes were examined separately. The accuracy of these models in predicting daily tumor volumes was compared with that of static and linear reference models using leave-one-out cross-validation. In predicting the daily volume of primary tumors, the general linear model and the functional general linear model were more accurate than the static reference model by 9.9% (range: -11.6%-23.8%) and 14.6% (range: -7.3%-27.5%), respectively, and were more accurate than the linear reference model by 14.2% (range: -6.8%-40.3%) and 13.1% (range: -1.5%-52.5%), respectively. In predicting the daily volume of nodal tumors, only the 14.4% (range: -11.1%-20.5%) improvement in accuracy of the functional general linear model compared to the static reference model was statistically significant. A general linear model and a functional general linear model trained on data from a small population of patients can predict the primary tumor volume throughout the course of radiation therapy with greater accuracy than standard reference models. These more accurate models may increase the prognostic value of information about the tumor garnered from pretreatment computed tomography images and facilitate improved treatment management.
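
    A hedged sketch of the evaluation logic: predict each day's tumor volume from the initial volume through a power fit (linear in log-log space), scored by leave-one-out cross-validation. The synthetic volumes, shrinkage rate, and per-day refitting below are invented for illustration, and the authors' functional general linear model is not reproduced.

        import numpy as np
        from sklearn.model_selection import LeaveOneOut

        rng = np.random.default_rng(2)
        n_tumors, n_days = 35, 30
        v0 = rng.uniform(5.0, 40.0, n_tumors)        # initial volumes
        days = np.arange(n_days)
        vols = v0[:, None] * np.exp(-0.02 * days) * rng.normal(1.0, 0.02, (n_tumors, n_days))

        errors = []
        for train, test in LeaveOneOut().split(v0):
            for t in range(n_days):
                # Power-fit model: log V_t = a_t + b_t * log V_0, one fit per day
                b, a = np.polyfit(np.log(v0[train]), np.log(vols[train, t]), 1)
                pred = np.exp(a + b * np.log(v0[test]))
                errors.append(np.abs(pred - vols[test, t]) / vols[test, t])
        print("mean leave-one-out relative error:", float(np.mean(errors)))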

  9. Quadratic correlation filters for optical correlators

    NASA Astrophysics Data System (ADS)

    Mahalanobis, Abhijit; Muise, Robert R.; Vijaya Kumar, Bhagavatula V. K.

    2003-08-01

    Linear correlation filters have been implemented in optical correlators and successfully used for a variety of applications. The output of an optical correlator is usually sensed with a square-law device (such as a CCD array), which forces the output to be the squared magnitude of the desired correlation. It is, however, not traditional practice to factor the effect of the square-law detector into the design of the linear correlation filters. In fact, the input-output relationship of an optical correlator is more accurately modeled as a quadratic operation than a linear operation. Quadratic correlation filters (QCFs) operate directly on the image data without the need for feature extraction or segmentation. In this sense, QCFs retain the main advantages of conventional linear correlation filters while offering significant improvements in other respects. Not only is more processing required to detect peaks in the outputs of multiple linear filters, but choosing a winner among them is an error-prone task. In contrast, all channels in a QCF work together to optimize the same performance metric and produce a combined output, which leads to considerable simplification of the post-processing. In this paper, we propose a novel approach to the design of quadratic correlation filters based on the Fukunaga-Koontz transform. Although quadratic filters are known to be optimal when the data are Gaussian, it is expected that they will perform as well as or better than linear filters in general. Preliminary performance results are provided that show that quadratic correlation filters perform better than their linear counterparts.
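
    The Fukunaga-Koontz transform at the heart of this design can be sketched in a few lines: whiten by the summed class covariance, then eigendecompose one transformed class covariance; the two classes then share eigenvectors, with eigenvalues that pairwise sum to one. The code below is a generic illustration on synthetic "target" and "clutter" samples, not the authors' filter-design pipeline.

        import numpy as np

        def fukunaga_koontz(X1, X2):
            # Whiten by the summed covariance, then diagonalize class 1 in that space;
            # directions dominant for class 1 are automatically weakest for class 2
            S1 = np.cov(X1, rowvar=False)
            S2 = np.cov(X2, rowvar=False)
            evals, evecs = np.linalg.eigh(S1 + S2)
            W = evecs @ np.diag(evals ** -0.5) @ evecs.T   # (S1 + S2)^(-1/2)
            lam, V = np.linalg.eigh(W @ S1 @ W)
            return W @ V, lam    # columns are filters; feature_i = (W v_i)^T x

        rng = np.random.default_rng(3)
        X1 = rng.normal(size=(500, 8)) @ rng.normal(size=(8, 8))   # "target" class
        X2 = rng.normal(size=(500, 8))                             # "clutter" class
        basis, lam = fukunaga_koontz(X1, X2)
        print("class-1 eigenvalues (class-2 values are 1 minus these):", np.round(lam, 3))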

  10. Retrieval of all effective susceptibilities in nonlinear metamaterials

    NASA Astrophysics Data System (ADS)

    Larouche, Stéphane; Radisic, Vesna

    2018-04-01

    Electromagnetic metamaterials offer a great avenue to engineer and amplify the nonlinear response of materials. Their electric, magnetic, and magnetoelectric linear and nonlinear responses are related to their structure, providing unprecedented liberty to control those properties. Both the linear and the nonlinear properties of metamaterials are typically anisotropic. While the methods to retrieve the effective linear properties are well established, existing nonlinear retrieval methods have serious limitations. In this work, we generalize a nonlinear transfer matrix approach to account for all nonlinear susceptibility terms and show how to use this approach to retrieve all effective nonlinear susceptibilities of metamaterial elements. The approach is demonstrated using sum frequency generation, but can be applied to other second-order or higher-order processes.

  11. Fitting a Point Cloud to a 3d Polyhedral Surface

    NASA Astrophysics Data System (ADS)

    Popov, E. V.; Rotkov, S. I.

    2017-05-01

    The ability to measure parameters of large-scale objects in a contactless fashion has tremendous potential in a number of industrial applications. However, this problem is usually associated with the ambiguous task of comparing two data sets specified in two different coordinate systems. This paper deals with the study of fitting a set of unorganized points to a polyhedral surface. The developed approach uses Principal Component Analysis (PCA) and the Stretched Grid Method (SGM) to substitute several linear steps for a non-linear problem solution. The squared distance (SD) is the general criterion used to control the convergence of the set of points to the target surface. The described numerical experiment concerns the remote measurement of a large-scale aerial in the form of a frame with a parabolic shape. The experiment shows that the process of fitting a point cloud to the target surface converges in several linear steps. The method is applicable to the remote measurement of the geometry of large-scale objects in a contactless fashion.
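
    The PCA stage, which brings an unorganized cloud into a common frame before the SGM fitting, can be sketched as follows. The slab-shaped synthetic cloud and its rotation are assumptions, and the SGM step itself is not shown.

        import numpy as np

        def pca_align(points):
            # Center the cloud and rotate it into its principal-axis frame
            centered = points - points.mean(axis=0)
            _, _, vt = np.linalg.svd(centered, full_matrices=False)
            return centered @ vt.T

        rng = np.random.default_rng(4)
        cloud = rng.normal(size=(2000, 3)) * np.array([10.0, 2.0, 0.2])  # elongated slab
        theta = 0.7                                                      # arbitrary rotation
        R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                      [np.sin(theta),  np.cos(theta), 0.0],
                      [0.0, 0.0, 1.0]])
        aligned = pca_align(cloud @ R.T)
        print("axis spreads after alignment:", aligned.std(axis=0).round(2))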

  12. Identification of Linear and Nonlinear Sensory Processing Circuits from Spiking Neuron Data.

    PubMed

    Florescu, Dorian; Coca, Daniel

    2018-03-01

    Inferring mathematical models of sensory processing systems directly from input-output observations, while making the fewest assumptions about the model equations and the types of measurements available, is still a major issue in computational neuroscience. This letter introduces two new approaches for identifying sensory circuit models consisting of linear and nonlinear filters in series with spiking neuron models, based only on the sampled analog input to the filter and the recorded spike train output of the spiking neuron. For an ideal integrate-and-fire neuron model, the first algorithm can identify the spiking neuron parameters as well as the structure and parameters of an arbitrary nonlinear filter connected to it. The second algorithm can identify the parameters of the more general leaky integrate-and-fire spiking neuron model, as well as the parameters of an arbitrary linear filter connected to it. Numerical studies involving simulated and real experimental recordings are used to demonstrate the applicability and evaluate the performance of the proposed algorithms.
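
    A forward simulation of the cascade that these algorithms invert (a linear filter feeding an integrate-and-fire neuron) makes the identification problem concrete. The exponential filter and all parameter values below are illustrative assumptions, not values from the letter.

        import numpy as np

        def lif_spikes(stimulus, h, dt=1e-3, tau=0.02, v_th=1.0):
            # Linear filter stage followed by leaky integrate-and-fire dynamics
            drive = np.convolve(stimulus, h)[: len(stimulus)]
            v, spikes = 0.0, []
            for t, u in enumerate(drive):
                v += dt / tau * (-v + u)      # leaky membrane integration
                if v >= v_th:                 # threshold crossing: spike and reset
                    spikes.append(t * dt)
                    v = 0.0
            return spikes

        rng = np.random.default_rng(5)
        stim = rng.normal(0, 3.0, 2000)
        h = np.exp(-np.arange(50) / 10.0)     # assumed exponential linear filter
        print(len(lif_spikes(stim, h)), "spikes from 2 s of simulated input")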

  13. Item Purification in Differential Item Functioning Using Generalized Linear Mixed Models

    ERIC Educational Resources Information Center

    Liu, Qian

    2011-01-01

    For this dissertation, four item purification procedures were implemented onto the generalized linear mixed model for differential item functioning (DIF) analysis, and the performance of these item purification procedures was investigated through a series of simulations. Among the four procedures, forward and generalized linear mixed model (GLMM)…

  14. Emergence of a fluctuation relation for heat in nonequilibrium Landauer processes

    NASA Astrophysics Data System (ADS)

    Taranto, Philip; Modi, Kavan; Pollock, Felix A.

    2018-05-01

    In a generalized framework for the Landauer erasure protocol, we study bounds on the heat dissipated in typical nonequilibrium quantum processes. In contrast to thermodynamic processes, quantum fluctuations are not suppressed in the nonequilibrium regime and cannot be ignored, making such processes difficult to understand and treat. Here we derive an emergent fluctuation relation that virtually guarantees the average heat produced to be dissipated into the reservoir either when the system or reservoir is large (or both) or when the temperature is high. The implication of our result is that for nonequilibrium processes, fluctuations of the heat away from its average value are suppressed independently of the underlying dynamics, exponentially quickly in the dimension of the larger subsystem and linearly in the inverse temperature. We achieve these results by generalizing a concentration-of-measure relation for subsystem states to the case where the global state is mixed.

  15. Baldovin-Stella stochastic volatility process and Wiener process mixtures

    NASA Astrophysics Data System (ADS)

    Peirano, P. P.; Challet, D.

    2012-08-01

    Starting from inhomogeneous time scaling and linear decorrelation between successive price returns, Baldovin and Stella recently proposed a powerful and consistent way to build a model describing the time evolution of a financial index. We first make it fully explicit by using Student distributions instead of power-law-truncated Lévy distributions, show that the analytic tractability of the model extends to the larger class of symmetric generalized hyperbolic distributions, and provide a full computation of their multivariate characteristic functions; more generally, we show that the stochastic processes arising in this framework are representable as mixtures of Wiener processes. The basic Baldovin and Stella model, while mimicking well volatility relaxation phenomena such as the Omori law, fails to reproduce other stylized facts such as the leverage effect or some time-reversal asymmetries. We discuss how to modify the dynamics of this process in order to reproduce real data more accurately.

  16. General purpose graphic processing unit implementation of adaptive pulse compression algorithms

    NASA Astrophysics Data System (ADS)

    Cai, Jingxiao; Zhang, Yan

    2017-07-01

    This study introduces a practical approach to implementing real-time signal processing algorithms for general surveillance radar based on NVIDIA graphics processing units (GPUs). The pulse compression algorithms are implemented using compute unified device architecture (CUDA) libraries such as the CUDA basic linear algebra subroutines and the CUDA fast Fourier transform library, which are adopted from open source libraries and optimized for NVIDIA GPUs. For more advanced, adaptive processing algorithms such as adaptive pulse compression, customized kernel optimization is needed, and this is investigated here. A statistical optimization approach is developed for this purpose without needing much knowledge of the physical configurations of the kernels. It was found that the kernel optimization approach can significantly improve the performance. Benchmark performance is compared with CPU performance in terms of processing acceleration. The proposed implementation framework can be used in various radar systems, including ground-based phased array radar, airborne sense-and-avoid radar, and aerospace surveillance radar.
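
    The non-adaptive core of pulse compression is a frequency-domain matched filter, two FFTs and a pointwise product, which is exactly the structure that maps onto the CUDA FFT and linear-algebra calls described above. Below is a CPU sketch with assumed waveform parameters; a GPU port could swap numpy.fft for a CUDA FFT library.

        import numpy as np

        def pulse_compress(rx, chirp):
            # Matched filtering via FFT: correlate the received signal with the pulse
            n = len(rx) + len(chirp) - 1
            return np.fft.ifft(np.fft.fft(rx, n) * np.conj(np.fft.fft(chirp, n)))

        fs, T, B = 1e6, 100e-6, 200e3                  # sample rate, pulse width, bandwidth
        t = np.arange(int(fs * T)) / fs
        chirp = np.exp(1j * np.pi * (B / T) * t ** 2)  # linear FM pulse
        rng = np.random.default_rng(6)
        rx = 0.05 * (rng.normal(size=2000) + 1j * rng.normal(size=2000))
        rx[400:400 + len(chirp)] += 0.5 * chirp        # echo starting at bin 400
        print("compressed peak at bin", int(np.argmax(np.abs(pulse_compress(rx, chirp)))))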

  17. Field dynamics inference via spectral density estimation

    NASA Astrophysics Data System (ADS)

    Frank, Philipp; Steininger, Theo; Enßlin, Torsten A.

    2017-11-01

    Stochastic differential equations are of utmost importance in various scientific and industrial areas. They are the natural description of dynamical processes whose precise equations of motion are either not known or too expensive to solve, e.g., when modeling Brownian motion. In some cases, the equations governing the dynamics of a physical system on macroscopic scales turn out to be unknown, since they typically cannot be deduced from general principles. In this work, we describe how the underlying laws of a stochastic process can be approximated by the spectral density of the corresponding process. Furthermore, we show how the density can be inferred from possibly very noisy and incomplete measurements of the dynamical field. Generally, inverse problems like these can be tackled with the help of Information Field Theory. For now, we restrict ourselves to linear and autonomous processes. To demonstrate its applicability, we employ our reconstruction algorithm on time series and spatiotemporal processes.

  18. Field dynamics inference via spectral density estimation.

    PubMed

    Frank, Philipp; Steininger, Theo; Enßlin, Torsten A

    2017-11-01

    Stochastic differential equations are of utmost importance in various scientific and industrial areas. They are the natural description of dynamical processes whose precise equations of motion are either not known or too expensive to solve, e.g., when modeling Brownian motion. In some cases, the equations governing the dynamics of a physical system on macroscopic scales turn out to be unknown, since they typically cannot be deduced from general principles. In this work, we describe how the underlying laws of a stochastic process can be approximated by the spectral density of the corresponding process. Furthermore, we show how the density can be inferred from possibly very noisy and incomplete measurements of the dynamical field. Generally, inverse problems like these can be tackled with the help of Information Field Theory. For now, we restrict ourselves to linear and autonomous processes. To demonstrate its applicability, we employ our reconstruction algorithm on time series and spatiotemporal processes.
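
    The spectral-density estimate at the center of this approach can be illustrated with a standard Welch periodogram applied to a simulated linear autonomous process. The AR(2) surrogate below is an assumption, and none of the Information Field Theory machinery is reproduced.

        import numpy as np
        from scipy import signal

        rng = np.random.default_rng(8)
        n, a1, a2 = 20000, 1.5, -0.9          # stable AR(2): a resonant linear process
        x = np.zeros(n)
        eps = rng.normal(size=n)
        for t in range(2, n):
            x[t] = a1 * x[t - 1] + a2 * x[t - 2] + eps[t]

        # Welch estimate of the power spectral density from the noisy time series
        f, pxx = signal.welch(x, fs=1.0, nperseg=1024)
        print("spectral peak near f =", f[np.argmax(pxx)])   # resonance of the AR(2)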

  19. Evaluating a Policing Strategy Intended to Disrupt an Illicit Street-Level Drug Market

    ERIC Educational Resources Information Center

    Corsaro, Nicholas; Brunson, Rod K.; McGarrell, Edmund F.

    2010-01-01

    The authors examined a strategic policing initiative that was implemented in a high crime Nashville, Tennessee neighborhood by utilizing a mixed-methodological evaluation approach in order to provide (a) a descriptive process assessment of program fidelity; (b) an interrupted time-series analysis relying upon generalized linear models; (c)…

  20. Modeling of active transmembrane transport in a mixture theory framework.

    PubMed

    Ateshian, Gerard A; Morrison, Barclay; Hung, Clark T

    2010-05-01

    This study formulates governing equations for active transport across semi-permeable membranes within the framework of the theory of mixtures. In mixture theory, which models the interactions of any number of fluid and solid constituents, a supply term appears in the conservation of linear momentum to describe momentum exchanges among the constituents. In past applications, this momentum supply was used to model frictional interactions only, thereby describing passive transport processes. In this study, it is shown that active transport processes, which impart momentum to solutes or solvent, may also be incorporated in this term. By projecting the equation of conservation of linear momentum along the normal to the membrane, a jump condition is formulated for the mechano-electrochemical potential of fluid constituents which is generally applicable to nonequilibrium processes involving active transport. The resulting relations are simple and easy to use, and address an important need in the membrane transport literature.
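
    For readers who want the starting point in symbols, here is a standard-form sketch of the constituent momentum balance with the supply term discussed above; the notation is assumed rather than copied from the paper.

        % Momentum balance for constituent \alpha in a mixture:
        % T^\alpha is the partial Cauchy stress, b^\alpha the body force,
        % and \hat{p}^\alpha the momentum supplied by the other constituents
        % (frictional and, as argued above, active contributions), which
        % must cancel when summed over the mixture.
        \nabla \cdot \mathbf{T}^\alpha + \rho^\alpha \mathbf{b}^\alpha
          + \hat{\mathbf{p}}^\alpha = \rho^\alpha \mathbf{a}^\alpha ,
        \qquad \sum_\alpha \hat{\mathbf{p}}^\alpha = \mathbf{0} .

    Projecting this balance along the membrane normal, with an active contribution retained in the supply term, is what yields the jump condition on the mechano-electrochemical potential described in the abstract.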

  1. Optimal estimation for discrete time jump processes

    NASA Technical Reports Server (NTRS)

    Vaca, M. V.; Tretter, S. A.

    1977-01-01

    Optimum estimates of nonobservable random variables or random processes which influence the rate functions of a discrete time jump process (DTJP) are obtained. The approach is based on the a posteriori probability of a nonobservable event expressed in terms of the a priori probability of that event and of the sample function probability of the DTJP. A general representation for optimum estimates and recursive equations for minimum mean squared error (MMSE) estimates are obtained. MMSE estimates are, in general, nonlinear functions of the observations. We also consider the problem of estimating the rate of a DTJP when the rate is a random variable with a probability density function of the form cx^k(1-x)^m, and show that the MMSE estimates are linear in this case. This class of density functions explains why there are insignificant differences between optimum unconstrained and linear MMSE estimates in a variety of problems.

  2. Generalized Linear Covariance Analysis

    NASA Technical Reports Server (NTRS)

    Carpenter, James R.; Markley, F. Landis

    2014-01-01

    This talk presents a comprehensive approach to filter modeling for generalized covariance analysis of both batch least-squares and sequential estimators. We review and extend in two directions the results of prior work that allowed for partitioning of the state space into "solve-for" and "consider" parameters, accounted for differences between the formal values and the true values of the measurement noise, process noise, and a priori solve-for and consider covariances, and explicitly partitioned the errors into subspaces containing only the influence of the measurement noise, process noise, and solve-for and consider covariances. In this work, we explicitly add sensitivity analysis to this prior work, and relax an implicit assumption that the batch estimator's epoch time occurs prior to the definitive span. We also apply the method to an integrated orbit and attitude problem, in which gyro and accelerometer errors, though not estimated, influence the orbit determination performance. We illustrate our results using two graphical presentations, which we call the "variance sandpile" and the "sensitivity mosaic," and we compare the linear covariance results to confidence intervals associated with ensemble statistics from a Monte Carlo analysis.

  3. Linear-time general decoding algorithm for the surface code

    NASA Astrophysics Data System (ADS)

    Darmawan, Andrew S.; Poulin, David

    2018-05-01

    A quantum error correcting protocol can be substantially improved by taking into account features of the physical noise process. We present an efficient decoder for the surface code which can account for general noise features, including coherences and correlations. We demonstrate that the decoder significantly outperforms the conventional matching algorithm on a variety of noise models, including non-Pauli noise and spatially correlated noise. The algorithm is based on an approximate calculation of the logical channel using a tensor-network description of the noisy state.

  4. Smoothed Residual Plots for Generalized Linear Models. Technical Report #450.

    ERIC Educational Resources Information Center

    Brant, Rollin

    Methods for examining the viability of assumptions underlying generalized linear models are considered. By appealing to the likelihood, a natural generalization of the raw residual plot for normal theory models is derived and is applied to investigating potential misspecification of the linear predictor. A smooth version of the plot is also…

  5. SIMD Optimization of Linear Expressions for Programmable Graphics Hardware

    PubMed Central

    Bajaj, Chandrajit; Ihm, Insung; Min, Jungki; Oh, Jinsang

    2009-01-01

    The increased programmability of graphics hardware allows efficient graphics processing unit (GPU) implementations of a wide range of general computations on commodity PCs. An important factor in such implementations is how to fully exploit the SIMD computing capacities offered by modern graphics processors. Linear expressions of the form ȳ = Ax̄ + b̄, where A is a matrix and x̄, ȳ and b̄ are vectors, constitute one of the most basic operations in many scientific computations. In this paper, we propose a SIMD code optimization technique that enables efficient shader codes to be generated for evaluating linear expressions. It is shown that performance can be improved considerably by efficiently packing arithmetic operations into four-wide SIMD instructions through reordering of the operations in linear expressions. We demonstrate that the presented technique can be used effectively for programming both vertex and pixel shaders for a variety of mathematical applications, including integrating differential equations and solving a sparse linear system of equations using iterative methods. PMID:19946569

  6. Generalized Nonlinear Chirp Scaling Algorithm for High-Resolution Highly Squint SAR Imaging

    PubMed Central

    He, Zhihua; He, Feng; Dong, Zhen; Wu, Manqing

    2017-01-01

    This paper presents a modified approach for high-resolution, highly squint synthetic aperture radar (SAR) data processing. Several nonlinear chirp scaling (NLCS) algorithms have been proposed to handle the azimuth variance of the frequency modulation rates that is caused by the linear range walk correction (LRWC). However, the azimuth depth of focusing (ADOF) is not handled well by these algorithms. The generalized nonlinear chirp scaling (GNLCS) algorithm proposed in this paper uses the method of series reversion (MSR) to improve the ADOF and focusing precision. It also introduces a high order processing kernel to avoid range block processing. Simulation results show that the GNLCS algorithm can enlarge the ADOF and improve the focusing precision for high-resolution highly squint SAR data. PMID:29112151

  7. High order volume-preserving algorithms for relativistic charged particles in general electromagnetic fields

    NASA Astrophysics Data System (ADS)

    He, Yang; Sun, Yajuan; Zhang, Ruili; Wang, Yulei; Liu, Jian; Qin, Hong

    2016-09-01

    We construct high order symmetric volume-preserving methods for the relativistic dynamics of a charged particle using the splitting technique with processing. By expanding the phase space to include the time t, we give a more general construction of volume-preserving methods that can be applied to systems with time-dependent electromagnetic fields. The newly derived methods provide numerical solutions with good accuracy and conservation properties over long simulation times. Furthermore, because of the use of an accuracy-enhancing processing technique, the explicit methods obtain high-order accuracy and are more efficient than methods derived from standard compositions. The results are verified by numerical experiments. Linear stability analysis of the methods shows that the high order processed method allows larger time step sizes in numerical integration.

  8. Ergodicity-breaking bifurcations and tunneling in hyperbolic transport models

    NASA Astrophysics Data System (ADS)

    Giona, M.; Brasiello, A.; Crescitelli, S.

    2015-11-01

    One of the main differences between parabolic transport, associated with Langevin equations driven by Wiener processes, and hyperbolic models related to generalized Kac equations driven by Poisson processes, is the occurrence in the latter of multiple stable invariant densities (Frobenius multiplicity) in certain regions of the parameter space. This phenomenon is associated with the occurrence in linear hyperbolic balance equations of a typical bifurcation, referred to as the ergodicity-breaking bifurcation, the properties of which are thoroughly analyzed.

  9. Anomaly General Circulation Models.

    NASA Astrophysics Data System (ADS)

    Navarra, Antonio

    The feasibility of the anomaly model is assessed using barotropic and baroclinic models. In the barotropic case, both a stationary and a time-dependent model have been formulated and constructed, whereas only the stationary, linear case is considered for the baroclinic model. Results from the barotropic model indicate that a relation between the stationary solution and the time-averaged nonlinear solution exists. The stationary linear baroclinic solution can therefore be considered with some confidence. The linear baroclinic anomaly model poses a formidable mathematical problem because it is necessary to solve a gigantic linear system to obtain the solution. A new method for finding solutions of large linear systems, based on a projection onto the Krylov subspace, is shown to be successful when applied to the linearized baroclinic anomaly model. The scheme consists of projecting the original linear system onto the Krylov subspace, thereby reducing the dimensionality of the matrix to be inverted to obtain the solution. With an appropriate setting of the damping parameters, the iterative Krylov method reaches a solution even using a Krylov subspace ten times smaller than the original space of the problem. This generality allows the treatment of the important problem of linear waves in the atmosphere. A larger class of (nonzonally symmetric) basic states can now be treated for the baroclinic primitive equations. These problems lead to large unsymmetric linear systems of order 10000 and more, which can now be successfully tackled by the Krylov method. The (R7) linear anomaly model is used to investigate extensively the linear response to equatorial and mid-latitude prescribed heating. The results indicate that the solution is deeply affected by the presence of stationary waves in the basic state. The instability of asymmetric flows, first pointed out by Simmons et al. (1983), is active also in the baroclinic case. However, the presence of baroclinic processes modifies the dominant response. The most sensitive areas are identified; they correspond to northern Japan, the Pole and the Greenland regions. A limited set of higher-resolution (R15) experiments indicates that this situation persists and is enhanced at higher resolution. The linear anomaly model is also applied to a realistic case. (Abstract shortened with permission of author.)
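
    The Krylov projection idea survives in today's standard libraries; a restarted GMRES solve of a large sparse nonsymmetric system shows the mechanics. The tridiagonal operator below is a toy stand-in for the linearized anomaly model, which is an assumption of this sketch.

        import numpy as np
        from scipy.sparse import diags
        from scipy.sparse.linalg import gmres

        n = 10000
        # Diagonally dominant, nonsymmetric tridiagonal operator (toy problem)
        A = diags([-1.0, 2.5, -1.2], offsets=[-1, 0, 1], shape=(n, n), format="csr")
        b = np.ones(n)

        # Restarted GMRES: builds a Krylov subspace of dimension 50, then restarts
        x, info = gmres(A, b, restart=50)
        print("converged:", info == 0, " residual:", np.linalg.norm(A @ x - b))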

  10. Random variable transformation for generalized stochastic radiative transfer in finite participating slab media

    NASA Astrophysics Data System (ADS)

    El-Wakil, S. A.; Sallah, M.; El-Hanbaly, A. M.

    2015-10-01

    The stochastic radiative transfer problem is studied in a participating planar finite continuously fluctuating medium. The problem is considered for specularly and diffusely reflecting boundaries with linear anisotropic scattering. The random variable transformation (RVT) technique is used to get the complete average of the solution functions, which are represented by the probability density function (PDF) of the solution process. In the RVT algorithm, a simple integral transformation is applied to the input stochastic process (the extinction function of the medium). This linear transformation enables us to rewrite the stochastic transport equations in terms of the optical random variable (x) and the optical random thickness (L). The transport equation is then solved deterministically to get a closed form for the solution as a function of x and L. This solution is used to obtain the PDF of the solution functions by applying the RVT technique between the input random variable (L) and the output process (the solution functions). The obtained averages of the solution functions are used to get complete analytical averages for some interesting physical quantities, namely the reflectivity and transmissivity at the medium boundaries. In terms of the average reflectivity and transmissivity, the average of the partial heat fluxes for the generalized problem with an internal radiation source is obtained and represented graphically.

  11. Automating approximate Bayesian computation by local linear regression.

    PubMed

    Thornton, Kevin R

    2009-07-07

    In several biological contexts, parameter inference often relies on computationally intensive techniques. "Approximate Bayesian Computation", or ABC, methods based on summary statistics have become increasingly popular. A particular flavor of ABC, based on using a linear regression to approximate the posterior distribution of the parameters conditional on the summary statistics, is computationally appealing, yet no standalone tool exists to automate the procedure. Here, I describe a program to implement the method. The software package ABCreg implements the local linear-regression approach to ABC. The advantages are: 1. The code is standalone and fully documented. 2. The program will automatically process multiple data sets, and create unique output files for each (which may be processed immediately in R), facilitating the testing of inference procedures on simulated data, or the analysis of multiple data sets. 3. The program implements two different transformation methods for the regression step. 4. Analysis options are controlled on the command line by the user, and the program is designed to output warnings for cases where the regression fails. 5. The program does not depend on any particular simulation machinery (coalescent, forward-time, etc.), and therefore is a general tool for processing the results from any simulation. 6. The code is open-source and modular. Examples of applying the software to empirical data from Drosophila melanogaster, and of testing the procedure on simulated data, are shown. In practice, ABCreg simplifies implementing ABC based on local linear regression.
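
    The regression-adjustment step that ABCreg automates can be sketched directly. The code below is a Beaumont-style local linear adjustment under simplifying assumptions: a toy two-statistic model, plain rejection within the tolerance, and no Epanechnikov weighting or parameter transformation.

        import numpy as np

        def abc_loclinear(params, stats, observed, keep=0.05):
            # Accept draws whose statistics are nearest the observed ones...
            d = np.linalg.norm(stats - observed, axis=1)
            sel = d <= np.quantile(d, keep)
            # ...then regress parameters on statistics and shift the accepted
            # draws to the observed statistics: theta* = theta - (s - s_obs) @ beta
            X = np.column_stack([np.ones(sel.sum()), stats[sel] - observed])
            beta, *_ = np.linalg.lstsq(X, params[sel], rcond=None)
            return params[sel] - (stats[sel] - observed) @ beta[1:]

        rng = np.random.default_rng(9)
        theta = rng.uniform(0, 10, 100000)        # draws from a uniform prior
        stats = np.column_stack([theta + rng.normal(0, 1, theta.size),
                                 0.5 * theta + rng.normal(0, 1, theta.size)])
        post = abc_loclinear(theta, stats, observed=np.array([5.0, 2.5]))
        print("adjusted posterior mean ~", round(float(post.mean()), 2))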

  12. Permitted and forbidden sets in symmetric threshold-linear networks.

    PubMed

    Hahnloser, Richard H R; Seung, H Sebastian; Slotine, Jean-Jacques

    2003-03-01

    The richness and complexity of recurrent cortical circuits is an inexhaustible source of inspiration for thinking about high-level biological computation. In past theoretical studies, constraints on the synaptic connection patterns of threshold-linear networks were found that guaranteed bounded network dynamics, convergence to attractive fixed points, and multistability, all fundamental aspects of cortical information processing. However, these conditions were only sufficient, and it remained unclear which were the minimal (necessary) conditions for convergence and multistability. We show that symmetric threshold-linear networks converge to a set of attractive fixed points if and only if the network matrix is copositive. Furthermore, the set of attractive fixed points is nonconnected (the network is multiattractive) if and only if the network matrix is not positive semidefinite. There are permitted sets of neurons that can be coactive at a stable steady state and forbidden sets that cannot. Permitted sets are clustered in the sense that subsets of permitted sets are permitted and supersets of forbidden sets are forbidden. By viewing permitted sets as memories stored in the synaptic connections, we provide a formulation of long-term memory that is more general than the traditional perspective of fixed-point attractor networks. There is a close correspondence between threshold-linear networks and networks defined by the generalized Lotka-Volterra equations.
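
    A toy simulation makes the dichotomy concrete. Reading the abstract's "network matrix" as I - W (an assumption of this sketch), the two-neuron mutual-inhibition network below has a network matrix that is copositive (all entries nonnegative) but not positive semidefinite, so the dynamics converge yet support two attractors, one per permitted set.

        import numpy as np

        def simulate(W, b, x0, dt=0.01, steps=20000):
            # Threshold-linear rate dynamics: dx/dt = -x + [W x + b]_+
            x = x0.copy()
            for _ in range(steps):
                x += dt * (-x + np.maximum(W @ x + b, 0.0))
            return x

        W = np.array([[0.0, -2.0], [-2.0, 0.0]])   # strong mutual inhibition
        b = np.ones(2)
        print("eigenvalues of I - W:", np.linalg.eigvalsh(np.eye(2) - W))  # not PSD
        for x0 in (np.array([0.6, 0.4]), np.array([0.4, 0.6])):
            print("from", x0, "->", simulate(W, b, x0).round(3))   # two winner-take-all states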

  13. Development of orientation tuning in simple cells of primary visual cortex

    PubMed Central

    Moore, Bartlett D.

    2012-01-01

    Orientation selectivity and its development are basic features of visual cortex. The original model of orientation selectivity proposes that elongated simple cell receptive fields are constructed from convergent input of an array of lateral geniculate nucleus neurons. However, orientation selectivity of simple cells in the visual cortex is generally greater than the linear contributions based on projections from spatial receptive field profiles. This implies that additional selectivity may arise from intracortical mechanisms. The hierarchical processing idea implies mainly linear connections, whereas cortical contributions are generally considered to be nonlinear. We have explored development of orientation selectivity in visual cortex with a focus on linear and nonlinear factors in a population of anesthetized 4-wk postnatal kittens and adult cats. Linear contributions are estimated from receptive field maps by which orientation tuning curves are generated and bandwidth is quantified. Nonlinear components are estimated as the magnitude of the power function relationship between responses measured from drifting sinusoidal gratings and those predicted from the spatial receptive field. Measured bandwidths for kittens are slightly larger than those in adults, whereas predicted bandwidths are substantially broader. These results suggest that relatively strong nonlinearities in early postnatal stages are substantially involved in the development of orientation tuning in visual cortex. PMID:22323631

  14. Cumulative Repetition Effects across Multiple Readings of a Word: Evidence from Eye Movements

    ERIC Educational Resources Information Center

    Kamienkowski, Juan E.; Carbajal, M. Julia; Bianchi, Bruno; Sigman, Mariano; Shalom, Diego E.

    2018-01-01

    When a word is read more than once, reading time generally decreases in the successive occurrences. This Repetition Effect has been used to study word encoding and memory processes in a variety of experimental measures. We studied naturally occurring repetitions of words within normal texts (stories of around 3,000 words). Using linear mixed…

  15. The Convergence Model of Communication. Papers of the East-West Communication Institute, No. 18.

    ERIC Educational Resources Information Center

    Kincaid, D. Lawrence

    Expressing the need for a description of communication that is equally applicable to all the social sciences, this report develops a general model of the communication process based upon the principle of convergence as derived from basic information theory and cybernetics. It criticizes the linear, one-way models of communication that have…

  16. Dynamical Analysis in the Mathematical Modelling of Human Blood Glucose

    ERIC Educational Resources Information Center

    Bae, Saebyok; Kang, Byungmin

    2012-01-01

    We want to apply the geometrical method to a dynamical system of human blood glucose. Due to the educational importance of model building, we show a relatively general modelling process using observational facts. Next, two models of some concrete forms are analysed in the phase plane by means of linear stability, phase portrait and vector…

  17. Linear and Non-linear Information Flows In Rainfall Field

    NASA Astrophysics Data System (ADS)

    Molini, A.; La Barbera, P.; Lanza, L. G.

    The rainfall process is the result of a complex framework of non-linear dynamical interactions between the different components of the atmosphere. It preserves the complexity and the intermittent features of the generating system in space and time, as well as the strong dependence of these properties on the scale of observation. The understanding and quantification of how the non-linearity of the generating process comes to influence single rain events constitute relevant research issues in the field of hydro-meteorology, especially in those applications where timely and effective forecasting of heavy rain events can reduce the risk of failure. This work focuses on the characterization of the non-linear properties of the observed rain process and on the influence of these features on hydrological models. Among the goals of this survey are the search for regular structures in the rainfall phenomenon and the study of the information flows within the rain field. The research focuses on three basic evolution directions for the system: in time, in space, and between the different scales. In fact, the information flows that force the system to evolve represent in general a connection between different locations in space, different instants in time and, unless the hypothesis of scale invariance is verified a priori, the different characteristic scales. A first phase of the analysis is carried out by means of classic statistical methods; then a survey of the information flows within the field is developed by means of techniques borrowed from Information Theory; finally, an analysis of the rain signal in the time and frequency domains is performed, with particular reference to its intermittent structure. The methods adopted in this last part of the work are both the classic techniques of statistical inference and a few procedures for the detection of non-linear and non-stationary features within the process starting from measured data.

  18. A Java-based fMRI processing pipeline evaluation system for assessment of univariate general linear model and multivariate canonical variate analysis-based pipelines.

    PubMed

    Zhang, Jing; Liang, Lichen; Anderson, Jon R; Gatewood, Lael; Rottenberg, David A; Strother, Stephen C

    2008-01-01

    As functional magnetic resonance imaging (fMRI) becomes widely used, the demand for evaluation of fMRI processing pipelines and validation of fMRI analysis results is increasing rapidly. The current NPAIRS package, an IDL-based fMRI processing pipeline evaluation framework, lacks system interoperability and the ability to evaluate general linear model (GLM)-based pipelines using prediction metrics. Thus, it cannot fully evaluate fMRI analytical software modules such as FSL.FEAT and NPAIRS.GLM. In order to overcome these limitations, a Java-based fMRI processing pipeline evaluation system was developed. It integrated YALE (a machine learning environment) into Fiswidgets (an fMRI software environment) to obtain system interoperability and applied an algorithm to measure GLM prediction accuracy. The results demonstrated that the system can evaluate fMRI processing pipelines with univariate GLM and multivariate canonical variates analysis (CVA)-based models on real fMRI data, based on prediction accuracy (classification accuracy) and statistical parametric image (SPI) reproducibility. In addition, a preliminary study was performed in which four fMRI processing pipelines with GLM and CVA modules, such as FSL.FEAT and NPAIRS.CVA, were evaluated with the system. The results indicated that (1) the system can compare different fMRI processing pipelines with heterogeneous models (NPAIRS.GLM, NPAIRS.CVA and FSL.FEAT) and rank their performance by automatic performance scoring, and (2) the ranking of pipeline performance is highly dependent on the preprocessing operations. These results suggest that the system will be of value for the comparison, validation, standardization and optimization of functional neuroimaging software packages and fMRI processing pipelines.

  19. An integrated structural and geochemical study of fracture aperture growth in the Campito Formation of eastern California

    NASA Astrophysics Data System (ADS)

    Doungkaew, N.; Eichhubl, P.

    2015-12-01

    Processes of fracture formation control flow of fluid in the subsurface and the mechanical properties of the brittle crust. Understanding of fundamental fracture growth mechanisms is essential for understanding fracture formation and cementation in chemically reactive systems with implications for seismic and aseismic fault and fracture processes, migration of hydrocarbons, long-term CO2 storage, and geothermal energy production. A recent study on crack-seal veins in deeply buried sandstone of east Texas provided evidence for non-linear fracture growth, which is indicated by non-elliptical kinematic fracture aperture profiles. We hypothesize that similar non-linear fracture growth also occurs in other geologic settings, including under higher temperature where solution-precipitation reactions are kinetically favored. To test this hypothesis, we investigate processes of fracture growth in quartzitic sandstone of the Campito Formation, eastern California, by combining field structural observations, thin section petrography, and fluid inclusion microthermometry. Fracture aperture profile measurements of cemented opening-mode fractures show both elliptical and non-elliptical kinematic aperture profiles. In general, fractures that contain fibrous crack-seal cement have elliptical aperture profiles. Fractures filled with blocky cement have linear aperture profiles. Elliptical fracture aperture profiles are consistent with linear-elastic or plastic fracture mechanics. Linear aperture profiles may reflect aperture growth controlled by solution-precipitation creep, with the aperture distribution controlled by solution-precipitation kinetics. We hypothesize that synkinematic crack-seal cement preserves the elliptical aperture profiles of elastic fracture opening increments. Blocky cement, on the other hand, may form postkinematically relative to fracture opening, with fracture opening accommodated by continuous solution-precipitation creep.

  20. Generic approach to access barriers in dehydrogenation reactions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu, Liang; Vilella, Laia; Abild-Pedersen, Frank

    The introduction of linear energy correlations, which explicitly relate adsorption energies of reaction intermediates and activation energies in heterogeneous catalysis, has proven to be a key component in the computational search for new and promising catalysts. A simple linear approach to estimate activation energies still requires a significant computational effort. To simplify this process and at the same time incorporate the need for enhanced complexity of reaction intermediates, we generalize a recently proposed approach that evaluates transition state energies based entirely on bond-order conservation arguments. Here, we show that similar variation of the local electronic structure along the reaction coordinate introduces a set of general functions that accurately define the transition state energy and are transferable to other reactions of similar bonding nature. With such an approach, more complex reaction intermediates can be targeted with an insignificant increase in computational effort and without loss of accuracy.

  1. Generic approach to access barriers in dehydrogenation reactions

    DOE PAGES

    Yu, Liang; Vilella, Laia; Abild-Pedersen, Frank

    2018-03-08

    The introduction of linear energy correlations, which explicitly relate adsorption energies of reaction intermediates and activation energies in heterogeneous catalysis, has proven to be a key component in the computational search for new and promising catalysts. A simple linear approach to estimate activation energies still requires a significant computational effort. To simplify this process and at the same time incorporate the need for enhanced complexity of reaction intermediates, we generalize a recently proposed approach that evaluates transition state energies based entirely on bond-order conservation arguments. Here, we show that similar variation of the local electronic structure along the reaction coordinate introduces a set of general functions that accurately define the transition state energy and are transferable to other reactions of similar bonding nature. With such an approach, more complex reaction intermediates can be targeted with an insignificant increase in computational effort and without loss of accuracy.

  2. MSC products for the simulation of tire behavior

    NASA Technical Reports Server (NTRS)

    Muskivitch, John C.

    1995-01-01

    The modeling of tires and the simulation of tire behavior are complex problems. The MacNeal-Schwendler Corporation (MSC) has a number of finite element analysis products that can be used to address the complexities of tire modeling and simulation. While there are many similarities between the products, each product has a number of capabilities that uniquely enable it to be used for a specific aspect of tire behavior. This paper discusses the following programs: (1) MSC/NASTRAN - general purpose finite element program for linear and nonlinear static and dynamic analysis; (2) MSC/ABAQUS - nonlinear statics and dynamics finite element program; (3) MSC/PATRAN AFEA (Advanced Finite Element Analysis) - general purpose finite element program with a subset of linear and nonlinear static and dynamic analysis capabilities with an integrated version of MSC/PATRAN for pre- and post-processing; and (4) MSC/DYTRAN - nonlinear explicit transient dynamics finite element program.

  3. Constraints to solve parallelogram grid problems in 2D non separable linear canonical transform

    NASA Astrophysics Data System (ADS)

    Zhao, Liang; Healy, John J.; Muniraj, Inbarasan; Cui, Xiao-Guang; Malallah, Ra'ed; Ryle, James P.; Sheridan, John T.

    2017-05-01

    The 2D non-separable linear canonical transform (2D-NS-LCT) can model a wide range of paraxial optical systems. Digital algorithms to evaluate 2D-NS-LCTs are important in modeling light field propagation and are also of interest in many digital signal processing applications. In [Zhao 14] we reported that a given 2D input image with a rectangular shape/boundary in general results in a parallelogram output sampling grid (in affine rather than Cartesian coordinates), thus limiting further calculations, e.g. the inverse transform. One possible solution is to use interpolation techniques; however, this reduces the speed and accuracy of the numerical approximations. To alleviate this problem, in this paper some constraints are derived under which the output samples are located on a Cartesian grid. Therefore, no interpolation operation is required and the calculation error can be significantly reduced.

  4. Massive parallelization of serial inference algorithms for a complex generalized linear model

    PubMed Central

    Suchard, Marc A.; Simpson, Shawn E.; Zorych, Ivan; Ryan, Patrick; Madigan, David

    2014-01-01

    Following a series of high-profile drug safety disasters in recent years, many countries are redoubling their efforts to ensure the safety of licensed medical products. Large-scale observational databases such as claims databases or electronic health record systems are attracting particular attention in this regard, but present significant methodological and computational concerns. In this paper we show how high-performance statistical computation, including graphics processing units (relatively inexpensive, highly parallel computing devices), can enable complex methods in large databases. We focus on optimization and massive parallelization of cyclic coordinate descent approaches to fit a conditioned generalized linear model involving tens of millions of observations and thousands of predictors in a Bayesian context. We find orders-of-magnitude improvement in overall run-time. Coordinate descent approaches are ubiquitous in high-dimensional statistics, and the algorithms we propose open up exciting new methodological possibilities with the potential to significantly improve drug safety. PMID:25328363
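
    The serial kernel being parallelized is cyclic coordinate descent, in which each coordinate has a closed-form update. A numpy sketch for the lasso-penalized least-squares case, a simpler stand-in for the paper's conditioned GLM with invented data and penalty, shows the per-coordinate soft-threshold that GPU threads can batch across predictors.

        import numpy as np

        def lasso_ccd(X, y, lam, sweeps=200):
            # Cyclic coordinate descent: visit each predictor in turn and apply
            # the closed-form soft-threshold update, keeping the residual in sync
            n, p = X.shape
            beta = np.zeros(p)
            resid = y.copy()
            col_sq = (X ** 2).sum(axis=0)
            for _ in range(sweeps):
                for j in range(p):
                    rho = X[:, j] @ resid + col_sq[j] * beta[j]
                    new = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
                    resid += X[:, j] * (beta[j] - new)
                    beta[j] = new
            return beta

        rng = np.random.default_rng(10)
        X = rng.normal(size=(500, 50))
        true = np.zeros(50)
        true[:5] = [3.0, -2.0, 1.5, 0.0, 4.0]
        y = X @ true + rng.normal(0, 0.5, 500)
        print(np.round(lasso_ccd(X, y, lam=50.0)[:8], 2))   # recovers the sparse signal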

  5. A note about high blood pressure in childhood

    NASA Astrophysics Data System (ADS)

    Teodoro, M. Filomena; Simão, Carla

    2017-06-01

    In medical, behavioral and social sciences it is usual to obtain a binary outcome. In the present work, information is collected where some of the outcomes are binary variables (1 = 'yes' / 0 = 'no'). In [14], a preliminary study about caregivers' perception of pediatric hypertension was introduced. An experimental questionnaire was designed to be answered by the caregivers of routine pediatric consultation attendees at Santa Maria Hospital (HSM). The collected data were statistically analyzed, with a descriptive analysis and a predictive model. Significant relations between some socio-demographic variables and the assessed knowledge were obtained. In [14] a statistical analysis using part of the questionnaire's information can be found. The present article completes that approach by estimating a model for the relevant remaining questions of the questionnaire using Generalized Linear Models (GLM). Exploring the binary outcome issue, we intend to extend this approach using Generalized Linear Mixed Models (GLMM); that work is still ongoing.
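
    A minimal sketch of fitting such a binary-outcome GLM in Python follows; the covariates (caregiver age and education level) and the simulated answers are hypothetical, not the study's data or model specification.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(11)
        n = 400
        age = rng.uniform(20, 60, n)                       # hypothetical caregiver age
        educ = rng.integers(0, 3, n)                       # hypothetical education level
        logit = -4.0 + 0.05 * age + 0.8 * educ
        y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))  # 1 = 'yes', 0 = 'no'

        # Logistic GLM: binomial family with the default logit link
        X = sm.add_constant(np.column_stack([age, educ]))
        fit = sm.GLM(y, X, family=sm.families.Binomial()).fit()
        print(fit.params)   # intercept and covariate effects on the log-odds scale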

  6. Occupation Time Laws for Birth and Death Processes

    DTIC Science & Technology

    1960-07-30

    given consists of the birth and death rates λ_n, μ_n. We present a theory in which the hypotheses (3), (4), and (6) are derived from knowledge of the... asymptotic behavior of the birth and death rates λ_n, μ_n as n → ∞. Both the methods and the results can be extended to general diffusion processes and... Linear growth. Let λ_n = n + a, n ≥ 0, μ_n = n + b, n ≥ 1, μ_0 = 0. This describes a model of biological growth where the birth and death rates are
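
    For the linear-growth rates quoted in the snippet, a short Gillespie-style simulation sketch (parameter values are arbitrary):

```python
import numpy as np

def simulate_linear_bd(a=1.0, b=0.5, n0=5, t_max=10.0, seed=0):
    """Gillespie simulation of the linear-growth birth-death process
    with rates lambda_n = n + a (n >= 0), mu_n = n + b (n >= 1), mu_0 = 0."""
    rng = np.random.default_rng(seed)
    t, n = 0.0, n0
    times, states = [t], [n]
    while t < t_max:
        lam = n + a
        mu = n + b if n >= 1 else 0.0
        total = lam + mu
        t += rng.exponential(1.0 / total)      # waiting time to next event
        n += 1 if rng.random() < lam / total else -1
        times.append(t); states.append(n)
    return np.array(times), np.array(states)

times, states = simulate_linear_bd()
print(states[-5:])
```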

  7. Application of Mathematical Signal Processing Techniques to Mission Systems. (l’Application des techniques mathematiques du traitement du signal aux systemes de conduite des missions)

    DTIC Science & Technology

    1999-11-01

    represents the linear time invariant (LTI) response of the combined analysis/synthesis system while the second represents the aliasing introduced into... effectively to implement voice scrambling systems based on time-frequency permutation. The most general form of such a system is shown in Fig. 22 where... 92201 NEUILLY-SUR-SEINE CEDEX, FRANCE RTO LECTURE SERIES 216 Application of Mathematical Signal Processing Techniques to Mission Systems (1

  8. CORDIC-based digital signal processing (DSP) element for adaptive signal processing

    NASA Astrophysics Data System (ADS)

    Bolstad, Gregory D.; Neeld, Kenneth B.

    1995-04-01

    The High Performance Adaptive Weight Computation (HAWC) processing element is a CORDIC-based, application-specific DSP element that, when connected in a linear array, can perform extremely high-throughput (hundreds of GFLOPS) matrix arithmetic operations on linear systems of equations in real time. In particular, it very efficiently performs the numerically intense computation of optimal least-squares solutions for large, over-determined linear systems. Most techniques for computing solutions to these types of problems have used either a hard-wired, non-programmable systolic array approach or, more commonly, programmable DSP or microprocessor approaches. The custom logic methods can be efficient, but are generally inflexible. Approaches using multiple programmable generic DSP devices are very flexible, but suffer from poor efficiency and high computation latencies, primarily due to the large number of DSP devices that must be utilized to achieve the necessary arithmetic throughput. The HAWC processor is implemented as a highly optimized systolic array, yet retains some of the flexibility of a programmable data-flow system, allowing efficient implementation of algorithm variations. This provides flexible matrix processing capabilities that are one to three orders of magnitude less expensive and more dense than the current state of the art, and, more importantly, allows a realizable solution to matrix processing problems that were previously considered impractical to physically implement. HAWC has direct applications in radar, sonar, communications, and image processing, as well as in many other types of systems.
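
    The rotation-mode CORDIC iteration at the heart of such elements, in generic textbook form (a software illustration, not the HAWC implementation):

```python
import math

def cordic_rotate(x, y, angle, n_iter=32):
    """Rotate the vector (x, y) by `angle` radians using CORDIC:
    only shifts, adds and a final constant scaling are needed, which is
    why the algorithm maps so well onto systolic hardware."""
    # Precomputed arctangents of 2^-i and the accumulated gain K.
    angles = [math.atan(2.0 ** -i) for i in range(n_iter)]
    K = 1.0
    for i in range(n_iter):
        K *= 1.0 / math.sqrt(1.0 + 2.0 ** (-2 * i))
    z = angle
    for i in range(n_iter):
        d = 1.0 if z >= 0 else -1.0
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * angles[i]
    return K * x, K * y

print(cordic_rotate(1.0, 0.0, math.pi / 4))   # ~ (0.7071, 0.7071)
```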

  9. Dual-scale topology optoelectronic processor.

    PubMed

    Marsden, G C; Krishnamoorthy, A V; Esener, S C; Lee, S H

    1991-12-15

    The dual-scale topology optoelectronic processor (D-STOP) is a parallel optoelectronic architecture for matrix algebraic processing. The architecture can be used for matrix-vector multiplication and two types of vector outer product. The computations are performed electronically, which allows multiplication and summation concepts in linear algebra to be generalized to various nonlinear or symbolic operations. This generalization permits the application of D-STOP to many computational problems. The architecture uses a minimum number of optical transmitters, which thereby reduces fabrication requirements while maintaining area-efficient electronics. The necessary optical interconnections are space invariant, minimizing space-bandwidth requirements.

  10. Operational method of solution of linear non-integer ordinary and partial differential equations.

    PubMed

    Zhukovsky, K V

    2016-01-01

    We propose an operational method, with recourse to generalized forms of orthogonal polynomials, for the solution of a variety of differential equations of mathematical physics. Operational definitions of generalized families of orthogonal polynomials are used in this context. Integral transforms and the operational exponent, together with some special functions, are also employed in the solutions. Solutions of physical problems related to heat propagation in various models, evolutionary processes, Black-Scholes-like equations, etc., are demonstrated by the operational technique.
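
    As one standard instance of the operational-exponent technique (textbook background, not the paper's generalized-polynomial machinery): the heat equation with initial datum f(x) is solved formally by the evolution operator, realized as the Gauss-Weierstrass transform:

```latex
% Heat equation \partial_t F = \partial_x^2 F with F(x, 0) = f(x):
\[
  F(x,t) \;=\; e^{\,t\,\partial_x^{2}} f(x)
         \;=\; \frac{1}{2\sqrt{\pi t}}
               \int_{-\infty}^{\infty} e^{-\frac{(x-\xi)^{2}}{4t}}\, f(\xi)\,\mathrm{d}\xi .
\]
```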

  11. Implications of discontinuous elevation gradients on fragmentation and restoration in patterned wetlands

    USGS Publications Warehouse

    Zweig, Christa L.; Reichert, Brian E.; Kitchens, Wiley M.

    2011-01-01

    Large wetlands around the world face the possibility of degradation, not only from complete conversion, but also from subtle changes in their structure and function. While fragmentation and isolation of wetlands within heterogeneous landscapes has received much attention, the disruption of spatial patterns/processes within large wetland systems and the resulting fragmentation of community components are less well documented. A greater understanding of pattern/process relationships and landscape gradients, and what occurs when they are altered, could help avoid undesirable consequences of restoration actions. The objective of this study is to determine the amount of fragmentation of sawgrass ridges due to artificial impoundment of water and how that may be differentially affected by spatial position relative to north and south levees. We also introduce groundbreaking evidence of landscape-level discontinuous elevation gradients within WCA3AS by comparing generalized linear and generalized additive models. These relatively abrupt breaks in elevation may have non-linear effects on hydrology and vegetation communities and would be crucial in restoration considerations. Modeling suggests there are abrupt breaks in elevation as a function of northing (Y-coordinate). Fragmentation indices indicate that fragmentation is a function of elevation and easting (X-coordinate), and that fragmentation increased from 1988 to 2002. When landscapes change and the changes are compounded by non-linear landscape variables that are described herein, the maintenance processes change with them, creating a degraded feedback loop that alters the system's response to structuring variables and diminishes our ability to predict the effects of restoration projects or climate change. Only when these landscape variables and linkages are clearly defined can we predict the response to potential perturbations and apply the knowledge to other landscape-level wetland systems in need of future restoration.

  12. Generalized Multilevel Structural Equation Modeling

    ERIC Educational Resources Information Center

    Rabe-Hesketh, Sophia; Skrondal, Anders; Pickles, Andrew

    2004-01-01

    A unifying framework for generalized multilevel structural equation modeling is introduced. The models in the framework, called generalized linear latent and mixed models (GLLAMM), combine features of generalized linear mixed models (GLMM) and structural equation models (SEM) and consist of a response model and a structural model for the latent…

  13. Quasi-linear versus potential-based formulations of force-flux relations and the GENERIC for irreversible processes: comparisons and examples

    NASA Astrophysics Data System (ADS)

    Hütter, Markus; Svendsen, Bob

    2013-11-01

    An essential part in modeling out-of-equilibrium dynamics is the formulation of irreversible dynamics. In the latter, the major task consists in specifying the relations between thermodynamic forces and fluxes. In the literature, mainly two distinct approaches are used for the specification of force-flux relations. On the one hand, quasi-linear relations are employed, which are based on the physics of transport processes and fluctuation-dissipation theorems (de Groot and Mazur in Non-equilibrium thermodynamics, North Holland, Amsterdam, 1962, Lifshitz and Pitaevskii in Physical kinetics. Volume 10, Landau and Lifshitz series on theoretical physics, Pergamon Press, Oxford, 1981). On the other hand, force-flux relations are also often represented in potential form with the help of a dissipation potential (Šilhavý in The mechanics and thermodynamics of continuous media, Springer, Berlin, 1997). We address the question of how these two approaches are related. The main result of this presentation states that the class of models formulated by quasi-linear relations is larger than what can be described in a potential-based formulation. The relation between the two methods is shown in general terms and is also demonstrated with the help of three examples. The finding that quasi-linear force-flux relations are more general than dissipation-based ones also has ramifications for the general equation for non-equilibrium reversible-irreversible coupling (GENERIC: e.g., Grmela and Öttinger in Phys Rev E 56:6620-6632, 6633-6655, 1997, Öttinger in Beyond equilibrium thermodynamics, Wiley Interscience Publishers, Hoboken, 2005). This framework has been formulated and used in two different forms, namely a quasi-linear (Öttinger and Grmela in Phys Rev E 56:6633-6655, 1997, Öttinger in Beyond equilibrium thermodynamics, Wiley Interscience Publishers, Hoboken, 2005) and a dissipation potential-based (Grmela in Adv Chem Eng 39:75-129, 2010, Grmela in J Non-Newton Fluid Mech 165:980-986, 2010, Mielke in Continuum Mech Therm 23:233-256, 2011) form, respectively, relating the irreversible evolution to the entropy gradient. It is found that, also in the case of GENERIC, the quasi-linear representation encompasses a wider class of phenomena than the dissipation-potential-based formulation. Furthermore, it is found that a potential exists for the irreversible part of the GENERIC if and only if one does for the underlying force-flux relations.
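
    In schematic form (our notation, not the authors'), the two formulations being compared are:

```latex
% Quasi-linear: fluxes linear in the forces with a state-dependent matrix.
% Potential-based: fluxes derived from a dissipation potential Xi.
\[
  J_\alpha = \sum_\beta M_{\alpha\beta}(x)\, X_\beta
  \qquad \text{versus} \qquad
  J_\alpha = \frac{\partial \Xi(x;X)}{\partial X_\alpha}.
\]
```

    A dissipation potential exists only when the flux-force map satisfies integrability (symmetry-of-derivatives) conditions, which is one way to see why the quasi-linear class is strictly larger.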

  14. Entropy production and nonlinear Fokker-Planck equations.

    PubMed

    Casas, G A; Nobre, F D; Curado, E M F

    2012-12-01

    The entropy time rate of systems described by nonlinear Fokker-Planck equations, which are directly related to generalized entropic forms, is analyzed. Both entropy production, associated with irreversible processes, and entropy flux from the system to its surroundings are studied. Some examples of known generalized entropic forms are considered, and particularly, the flux and production of the Boltzmann-Gibbs entropy, obtained from the linear Fokker-Planck equation, are recovered as particular cases. Since nonlinear Fokker-Planck equations are appropriate for the dynamical behavior of several physical phenomena in nature, like many within the realm of complex systems, the present analysis should be applicable to irreversible processes in a large class of nonlinear systems, such as those described by Tsallis and Kaniadakis entropies.

  15. From point process observations to collective neural dynamics: Nonlinear Hawkes process GLMs, low-dimensional dynamics and coarse graining

    PubMed Central

    Truccolo, Wilson

    2017-01-01

    This review presents a perspective on capturing collective dynamics in recorded neuronal ensembles based on multivariate point process models, inference of low-dimensional dynamics and coarse graining of spatiotemporal measurements. A general probabilistic framework for continuous-time point processes is reviewed, with an emphasis on multivariate nonlinear Hawkes processes with exogenous inputs. A point process generalized linear model (PP-GLM) framework for the estimation of discrete time multivariate nonlinear Hawkes processes is described. The approach is illustrated with the modeling of collective dynamics in neocortical neuronal ensembles recorded in human and non-human primates, and prediction of single-neuron spiking. A complementary approach to capture collective dynamics based on low-dimensional dynamics (“order parameters”) inferred via latent state-space models with point process observations is presented. The approach is illustrated by inferring and decoding low-dimensional dynamics in primate motor cortex during naturalistic reach and grasp movements. Finally, we briefly review hypothesis tests based on conditional inference and spatiotemporal coarse graining for assessing collective dynamics in recorded neuronal ensembles. PMID:28336305
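
    A minimal sketch of a discrete-time PP-GLM of the kind described: one neuron's bin counts are regressed on the lagged spike history of the ensemble with a Poisson GLM (surrogate data; the lag and neuron counts are arbitrary choices):

```python
import numpy as np
import statsmodels.api as sm

# Surrogate ensemble: 2 hypothetical neurons, 3 history lags.
rng = np.random.default_rng(2)
T, lags = 2000, 3
spikes = rng.poisson(0.2, size=(T, 2))         # binned spike counts

# Design matrix: lagged counts of both neurons.
cols = [spikes[lags - k:T - k, j] for k in range(1, lags + 1) for j in range(2)]
X = sm.add_constant(np.column_stack(cols))
y = spikes[lags:, 0]                            # target neuron's counts

fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
print(fit.params.round(3))                      # baseline + history filters
```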

  16. From point process observations to collective neural dynamics: Nonlinear Hawkes process GLMs, low-dimensional dynamics and coarse graining.

    PubMed

    Truccolo, Wilson

    2016-11-01

    This review presents a perspective on capturing collective dynamics in recorded neuronal ensembles based on multivariate point process models, inference of low-dimensional dynamics and coarse graining of spatiotemporal measurements. A general probabilistic framework for continuous-time point processes is reviewed, with an emphasis on multivariate nonlinear Hawkes processes with exogenous inputs. A point process generalized linear model (PP-GLM) framework for the estimation of discrete time multivariate nonlinear Hawkes processes is described. The approach is illustrated with the modeling of collective dynamics in neocortical neuronal ensembles recorded in human and non-human primates, and prediction of single-neuron spiking. A complementary approach to capture collective dynamics based on low-dimensional dynamics ("order parameters") inferred via latent state-space models with point process observations is presented. The approach is illustrated by inferring and decoding low-dimensional dynamics in primate motor cortex during naturalistic reach and grasp movements. Finally, we briefly review hypothesis tests based on conditional inference and spatiotemporal coarse graining for assessing collective dynamics in recorded neuronal ensembles. Published by Elsevier Ltd.

  17. An acceleration framework for synthetic aperture radar algorithms

    NASA Astrophysics Data System (ADS)

    Kim, Youngsoo; Gloster, Clay S.; Alexander, Winser E.

    2017-04-01

    Algorithms for radar signal processing, such as Synthetic Aperture Radar (SAR), are computationally intensive and require considerable execution time on a general-purpose processor. Reconfigurable logic can be used to off-load the primary computational kernel onto a custom computing machine in order to reduce execution time by an order of magnitude as compared to kernel execution on a general-purpose processor. Specifically, Field Programmable Gate Arrays (FPGAs) can be used to accelerate these kernels using hardware-based custom logic implementations. In this paper, we demonstrate a framework for algorithm acceleration. We used SAR as a case study to illustrate the potential for algorithm acceleration offered by FPGAs. Initially, we profiled the SAR algorithm and implemented a homomorphic filter using a hardware implementation of the natural logarithm. Experimental results show a linear speedup from adding reasonably small processing elements in the FPGA as opposed to using a software implementation running on a typical general-purpose processor.
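
    As background on the homomorphic filtering step (a software illustration only; the paper's contribution is the FPGA hardware logarithm): multiplicative noise, as in SAR speckle, becomes additive under a log, is suppressed by a linear smoother, and the exp maps back:

```python
import numpy as np

def homomorphic_smooth(img, k=5, eps=1e-6):
    """Toy homomorphic filter: log -> moving-average smoothing -> exp."""
    logged = np.log(img + eps)                 # multiplicative -> additive
    kernel = np.ones((k, k)) / (k * k)
    pad = k // 2
    padded = np.pad(logged, pad, mode="edge")
    out = np.empty_like(logged)
    H, W = logged.shape
    for i in range(H):
        for j in range(W):
            out[i, j] = (padded[i:i + k, j:j + k] * kernel).sum()
    return np.exp(out)

img = np.random.default_rng(3).gamma(4.0, 0.25, size=(64, 64))  # speckle-like
print(homomorphic_smooth(img).mean())
```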

  18. A road map for multi-way calibration models.

    PubMed

    Escandar, Graciela M; Olivieri, Alejandro C

    2017-08-07

    A large number of experimental applications of multi-way calibration are known, and a variety of chemometric models are available for the processing of multi-way data. While the main focus has been directed towards three-way data, due to the availability of various instrumental matrix measurements, a growing number of reports are being produced on higher-order signals of increasing complexity. The purpose of this review is to present a general scheme for selecting the appropriate data processing model, according to the properties exhibited by the multi-way data. In spite of the complexity of the multi-way instrumental measurements, simple criteria can be proposed for model selection, based on the presence and number of the so-called multi-linearity breaking modes (instrumental modes that break the low-rank multi-linearity of the multi-way arrays), and also on the existence of mutually dependent instrumental modes. Recent literature reports on multi-way calibration are reviewed, with emphasis on the models that were selected for data processing.

  19. Social complexity, modernity and suicide: an assessment of Durkheim's suicide from the perspective of a non-linear analysis of complex social systems.

    PubMed

    Condorelli, Rosalia

    2016-01-01

    Can we still share today the vision of modernity that Durkheim left us through his analysis of suicide, or can society 'surprise us'? An answer to these questions is suggested by several studies which found that, beginning in the second half of the twentieth century, suicides in the more industrialized and modernized western countries do not increase in a constant, linear way as the modernization and social fragmentation process advances, as Durkheim's theory would seem to predict. Despite a continuing modernization process, these studies found stabilizing or falling overall suicide rate trends. A gradual process of adaptation to the stress of modernization, associated with low levels of social integration, therefore seems to be at work in modern society. Assuming this perspective, the paper shows how this tendency may be understood in the light of the concept of social systems as complex adaptive systems: systems that are able to adapt to environmental perturbations and, as a whole, generate surprising, emergent effects due to nonlinear interactions among their components. In the frame of nonlinear dynamical system modeling, we formalize the logic of the suicide decision-making process responsible for changes in aggregate suicide growth rates by a nonlinear differential equation structured in a logistic way. In doing so, we attempt to capture the mechanism underlying the change in the suicide growth rate and to test the hypothesis that the system's dynamics exhibit a restrained increase, as an expression of an adaptation process to the liquidity of social ties in modern society. In particular, a nonlinear logistic map is applied to suicide data from a modern society, Italy, from 1875 to 2010. The analytic results, which seem to confirm the idea of an adaptation process to the liquidity of social ties, provide an opportunity for a more general reflection on the current configuration of modern society, relating Durkheimian theory to Halbwachs' theory and to more recent visions of modernity such as Bauman's. Complexity completes the interpretative framework by rooting the generating mechanism of the adaptation process in a new general theory of systems that makes nonlinearity of social-system interactions, and surprise, the functioning and evolution rule of social systems.

  20. A modelling approach to assessing the timescale uncertainties in proxy series with chronological errors

    NASA Astrophysics Data System (ADS)

    Divine, D.; Godtliebsen, F.; Rue, H.

    2012-04-01

    Detailed knowledge of past climate variations is of high importance for gaining a better insight into possible future climate scenarios. The relative shortness of available high-quality instrumental climate data necessitates the use of various climate proxy archives in making inference about past climate evolution. This, however, requires an accurate assessment of timescale errors in proxy-based paleoclimatic reconstructions. We here propose an approach to the assessment of timescale errors in proxy-based series with chronological uncertainties. The method relies on approximating the physical process(es) forming a proxy archive by a random Gamma process. Parameters of the process are partly data-driven and partly determined from prior assumptions. For the particular case of a linear accumulation model and absolutely dated tie points, an analytical solution is found, yielding a Beta-distributed probability density for the age estimates along the length of a proxy archive. In the general situation of uncertainty in the ages of the tie points, the proposed method employs MCMC simulations of age-depth profiles, yielding empirical confidence intervals on the constructed piecewise-linear best-guess timescale. It is suggested that the approach can be further extended to the more general case of a time-varying expected accumulation between the tie points. The approach is illustrated using two ice cores and two lake/marine sediment cores representing typical examples of paleoproxy archives with age models constructed using tie points of mixed origin.
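
    A Monte Carlo sketch of the Beta result quoted for the linear accumulation case: independent Gamma increments conditioned on two absolutely dated tie points give a Dirichlet accumulation profile, so the normalized age reached at an intermediate depth is Beta-distributed (the shape value here is hypothetical):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
n_layers, alpha = 100, 0.8          # alpha: hypothetical shape per layer
inc = rng.gamma(alpha, size=(20000, n_layers))
frac = inc[:, :50].sum(axis=1) / inc.sum(axis=1)   # age fraction at mid-depth

# Theory: the halfway fraction is Beta(50 * alpha, 50 * alpha).
print(frac.mean(), stats.beta(50 * alpha, 50 * alpha).mean())
print(stats.kstest(frac, stats.beta(50 * alpha, 50 * alpha).cdf).pvalue)
```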

  1. Continuous functional magnetic resonance imaging reveals dynamic nonlinearities of "dose-response" curves for finger opposition.

    PubMed

    Berns, G S; Song, A W; Mao, H

    1999-07-15

    Linear experimental designs have dominated the field of functional neuroimaging, but although successful at mapping regions of relative brain activation, the technique assumes that both cognition and brain activation are linear processes. To test these assumptions, we performed a continuous functional magnetic resonance imaging (fMRI) experiment of finger opposition. Subjects performed a visually paced bimanual finger-tapping task. The frequency of finger tapping was continuously varied between 1 and 5 Hz, without any rest blocks. After continuous acquisition of fMRI images, the task-related brain regions were identified with independent components analysis (ICA). When the time courses of the task-related components were plotted against tapping frequency, nonlinear "dose-response" curves were obtained for most subjects. Nonlinearities appeared in both the static and dynamic sense, with hysteresis being prominent in several subjects. The ICA decomposition also demonstrated the spatial dynamics with different components active at different times. These results suggest that the brain response to tapping frequency does not scale linearly, and that it is history-dependent even after accounting for the hemodynamic response function. This implies that finger tapping, as measured with fMRI, is a nonstationary process. When analyzed with a conventional general linear model, a strong correlation to tapping frequency was identified, but the spatiotemporal dynamics were not apparent.

  2. Young children make their gestural communication systems more language-like: segmentation and linearization of semantic elements in motion events.

    PubMed

    Clay, Zanna; Pople, Sally; Hood, Bruce; Kita, Sotaro

    2014-08-01

    Research on Nicaraguan Sign Language, created by deaf children, has suggested that young children use gestures to segment the semantic elements of events and linearize them in ways similar to those used in signed and spoken languages. However, it is unclear whether this is due to children's learning processes or to a more general effect of iterative learning. We investigated whether typically developing children, without iterative learning, segment and linearize information. Gestures produced in the absence of speech to express a motion event were examined in 4-year-olds, 12-year-olds, and adults (all native English speakers). We compared the proportions of gestural expressions that segmented semantic elements into linear sequences and that encoded them simultaneously. Compared with adolescents and adults, children reshaped the holistic stimuli by segmenting and recombining their semantic features into linearized sequences. A control task on recognition memory ruled out the possibility that this was due to different event perception or memory. Young children spontaneously bring fundamental properties of language into their communication system. © The Author(s) 2014.

  3. Machine Learning-based discovery of closures for reduced models of dynamical systems

    NASA Astrophysics Data System (ADS)

    Pan, Shaowu; Duraisamy, Karthik

    2017-11-01

    Despite the successful application of machine learning (ML) in fields such as image processing and speech recognition, only a few attempts have been made toward employing ML to represent the dynamics of complex physical systems. Previous attempts have mostly focused on parameter calibration or data-driven augmentation of existing models. In this work we present a ML framework to discover closure terms in reduced models of dynamical systems and provide insights into potential problems associated with data-driven modeling. Based on exact closure models for linear systems, we propose a general linear closure framework from the viewpoint of optimization. The framework is based on a trapezoidal approximation of the convolution (memory) term. Hyperparameters that need to be determined include the temporal length of the memory effect, the number of sampling points, and the dimensions of hidden states. To circumvent the explicit specification of the memory effect, a general framework inspired by neural networks is also proposed. We conduct both a priori and a posteriori evaluations of the resulting models on a number of non-linear dynamical systems. This work was supported in part by AFOSR under the project ``LES Modeling of Non-local effects using Statistical Coarse-graining'' with Dr. Jean-Luc Cambier as the technical monitor.
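
    A minimal sketch of the trapezoidal memory-closure discretization described above, on a toy two-state system (operator, kernel and horizon are hypothetical):

```python
import numpy as np

def step_with_memory(u_hist, A, K, dt):
    """One explicit Euler step of a reduced model with a convolution
    closure, du/dt = A u + int_0^tau K(s) u(t - s) ds, where the memory
    integral over a finite horizon tau is evaluated with the trapezoidal
    rule. u_hist[k] holds u(t - k*dt); K[k] holds the kernel at lag k*dt."""
    m = len(K)
    w = np.ones(m); w[0] = w[-1] = 0.5          # trapezoid weights
    memory = dt * sum(w[k] * K[k] @ u_hist[k] for k in range(m))
    return u_hist[0] + dt * (A @ u_hist[0] + memory)

dt, m = 0.01, 50
A = np.array([[0.0, 1.0], [-1.0, -0.1]])
K = [np.exp(-0.2 * k * dt) * np.eye(2) * -0.5 for k in range(m)]
u_hist = [np.array([1.0, 0.0])] * m             # constant initial history
for _ in range(100):
    u_new = step_with_memory(u_hist, A, K, dt)
    u_hist = [u_new] + u_hist[:-1]              # shift the history buffer
print(u_new)
```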

  4. A general science-based framework for dynamical spatio-temporal models

    USGS Publications Warehouse

    Wikle, C.K.; Hooten, M.B.

    2010-01-01

    Spatio-temporal statistical models are increasingly being used across a wide variety of scientific disciplines to describe and predict spatially-explicit processes that evolve over time. Correspondingly, in recent years there has been a significant amount of research on new statistical methodology for such models. Although descriptive models that approach the problem from the second-order (covariance) perspective are important, and innovative work is being done in this regard, many real-world processes are dynamic, and it can be more efficient in some cases to characterize the associated spatio-temporal dependence by the use of dynamical models. The chief challenge with the specification of such dynamical models has been related to the curse of dimensionality. Even in fairly simple linear, first-order Markovian, Gaussian error settings, statistical models are often overparameterized. Hierarchical models have proven invaluable in their ability to deal with this issue to some extent by allowing dependency among groups of parameters. In addition, this framework has allowed for the specification of science-based parameterizations (and associated prior distributions) in which classes of deterministic dynamical models (e.g., partial differential equations (PDEs), integro-difference equations (IDEs), matrix models, and agent-based models) are used to guide specific parameterizations. Most of the focus for the application of such models in statistics has been in the linear case. The problems mentioned above with linear dynamic models are compounded in the case of nonlinear models. In this sense, the need for coherent and sensible model parameterizations is not only helpful, it is essential. Here, we present an overview of a framework for incorporating scientific information to motivate dynamical spatio-temporal models. First, we illustrate the methodology with the linear case. We then develop a general nonlinear spatio-temporal framework that we call general quadratic nonlinearity and demonstrate that it accommodates many different classes of scientific-based parameterizations as special cases. The model is presented in a hierarchical Bayesian framework and is illustrated with examples from ecology and oceanography. © 2010 Sociedad de Estadística e Investigación Operativa.

  5. Effect of Processing Conditions on Fracture Resistance and Cohesive Laws of Binderfree All-Cellulose Composites

    NASA Astrophysics Data System (ADS)

    Goutianos, S.; Arévalo, R.; Sørensen, B. F.; Peijs, T.

    2014-12-01

    The fracture properties of all-cellulose composites without a matrix were studied using Double Cantilever Beam (DCB) sandwich specimens loaded with pure, monotonically increasing bending moments, which give stable crack growth. The experiments were conducted in an environmental scanning electron microscope to (a) perform accurate measurements of both the fracture energy for crack initiation and the fracture resistance and (b) observe the microscale failure mechanisms, especially in the wake of the crack tip. Since the mechanical behaviour of the all-cellulose composites was non-linear, a general method was first developed to obtain fracture resistance values from the DCB specimens taking into account the non-linear material response. The binder-free all-cellulose composites were prepared by a mechanical refinement process that allows the formation of intramolecular bonds between the cellulose molecules during the drying process. Defibrillation of the raw cellulose material is done in a wet medium in a paper-like process. Panels with different refining times were tested, and it was found that an increase in fibre fibrillation results in a lower fracture resistance.

  6. Estimation of group means when adjusting for covariates in generalized linear models.

    PubMed

    Qu, Yongming; Luo, Junxiang

    2015-01-01

    Generalized linear models are commonly used to analyze categorical data such as binary, count, and ordinal outcomes. Adjusting for important prognostic factors or baseline covariates in generalized linear models may improve estimation efficiency. The model-based mean for a treatment group produced by most software packages estimates the response at the mean covariate, not the mean response for this treatment group in the studied population. Although this is not an issue for linear models, the model-based group mean estimates in generalized linear models can be seriously biased for the true group means. We propose a new method to estimate the group means consistently, with a corresponding variance estimator. Simulation showed that the proposed method produces an unbiased estimator for the group means and provides the correct coverage probability. The proposed method was applied to analyze hypoglycemia data from clinical trials in diabetes. Copyright © 2014 John Wiley & Sons, Ltd.
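
    A small simulation makes the paper's central distinction concrete: under a logit link, the response at the mean covariate differs from the mean response over the covariate distribution (the population-averaged group mean). This sketch uses statsmodels with synthetic data:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 2000
x = rng.normal(0, 2, n)                       # prognostic covariate
trt = rng.integers(0, 2, n)                   # treatment indicator
p = 1 / (1 + np.exp(-(-0.5 + 1.2 * trt + 1.5 * x)))
y = (rng.random(n) < p).astype(int)

X = sm.add_constant(np.column_stack([trt, x]))
fit = sm.GLM(y, X, family=sm.families.Binomial()).fit()

# "Model-based" mean: response at the mean covariate (what many packages report).
at_mean = fit.predict([[1, 1, x.mean()]])[0]
# Population-averaged mean: average the predicted response over all covariates.
averaged = fit.predict(np.column_stack([np.ones(n), np.ones(n), x])).mean()
print(at_mean, averaged)                      # these differ under a nonlinear link
```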

  7. A chaotic view of behavior change: a quantum leap for health promotion.

    PubMed

    Resnicow, Ken; Vaughan, Roger

    2006-09-12

    The study of health behavior change, including nutrition and physical activity behaviors, has been rooted in a cognitive-rational paradigm. Change is conceptualized as a linear, deterministic process in which individuals weigh pros and cons, and at the point at which the benefits outweigh the costs, change occurs. Consistent with this paradigm, the associated statistical models have almost exclusively assumed a linear relationship between psychosocial predictors and behavior. Such a perspective, however, fails to account for non-linear, quantum influences on human thought and action. Consider why, after years of false starts and failed attempts, a person succeeds at increasing their physical activity, eating healthier, or losing weight; or why, after years of success, a person relapses. This paper discusses a competing view of health behavior change that was presented at the 2006 annual ISBNPA meeting in Boston. Rather than viewing behavior change from a linear perspective, it can be viewed as a quantum event that can be understood through the lens of Chaos Theory and Complex Dynamic Systems. Key principles of Chaos Theory and Complex Dynamic Systems relevant to understanding health behavior change include: 1) chaotic systems can be mathematically modeled but are nearly impossible to predict; 2) chaotic systems are sensitive to initial conditions; 3) complex systems involve multiple component parts that interact in a nonlinear fashion; and 4) the results of complex systems are often greater than the sum of their parts. Accordingly, small changes in knowledge, attitude, efficacy, etc. may dramatically alter motivation and behavioral outcomes, and the interaction of such variables can yield almost infinite potential patterns of motivation and behavior change. In the linear paradigm, unaccounted-for variance is generally relegated to the catch-all "error" term, when in fact such "error" may represent the chaotic component of the process. The linear and chaotic paradigms are, however, not mutually exclusive, as behavior change may include both chaotic and cognitive processes. Studies of addiction suggest that many decisions to change are quantum rather than planned events; motivation arrives as opposed to being planned. Moreover, changes made through quantum processes appear more enduring than those that involve more rational, planned processes. How such processes may apply to nutrition and physical activity behavior and related interventions merits examination.
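
    Principle 2 (sensitivity to initial conditions) is easy to demonstrate with the logistic map, a standard toy chaotic system (our illustration, not from the paper):

```python
import numpy as np

def logistic_traj(x0, r=3.9, steps=40):
    """Iterate the logistic map x_{t+1} = r x_t (1 - x_t) in its chaotic regime."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return np.array(xs)

a = logistic_traj(0.500000)
b = logistic_traj(0.500001)          # nearly identical initial condition
print(np.abs(a - b)[::10])           # the tiny difference grows explosively
```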

  8. Unified theory for stochastic modelling of hydroclimatic processes: Preserving marginal distributions, correlation structures, and intermittency

    NASA Astrophysics Data System (ADS)

    Papalexiou, Simon Michael

    2018-05-01

    Hydroclimatic processes come in all "shapes and sizes". They are characterized by different spatiotemporal correlation structures and probability distributions that can be continuous, mixed-type, discrete or even binary. Simulating such processes by reproducing precisely their marginal distribution and linear correlation structure, including features like intermittency, can greatly improve hydrological analysis and design. Traditionally, modelling schemes are case specific and typically attempt to preserve few statistical moments providing inadequate and potentially risky distribution approximations. Here, a single framework is proposed that unifies, extends, and improves a general-purpose modelling strategy, based on the assumption that any process can emerge by transforming a specific "parent" Gaussian process. A novel mathematical representation of this scheme, introducing parametric correlation transformation functions, enables straightforward estimation of the parent-Gaussian process yielding the target process after the marginal back transformation, while it provides a general description that supersedes previous specific parameterizations, offering a simple, fast and efficient simulation procedure for every stationary process at any spatiotemporal scale. This framework, also applicable for cyclostationary and multivariate modelling, is augmented with flexible parametric correlation structures that parsimoniously describe observed correlations. Real-world simulations of various hydroclimatic processes with different correlation structures and marginals, such as precipitation, river discharge, wind speed, humidity, extreme events per year, etc., as well as a multivariate example, highlight the flexibility, advantages, and complete generality of the method.
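
    A minimal sketch of the parent-Gaussian idea: simulate a correlated standard-normal "parent" process and back-transform its marginal to a target distribution (here a Gamma, giving a skewed, non-negative series reminiscent of streamflow). Parameters are illustrative; the paper's correlation-transformation functions, which correct the parent's autocorrelation so the target matches a prescribed structure, are omitted:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
rho, n = 0.7, 5000
z = np.empty(n); z[0] = rng.normal()
for t in range(1, n):                       # AR(1) parent-Gaussian process
    z[t] = rho * z[t - 1] + np.sqrt(1 - rho**2) * rng.normal()

u = stats.norm.cdf(z)                       # probability integral transform
x = stats.gamma(a=2.0, scale=1.5).ppf(u)    # back-transform to target marginal
print(x.mean(), np.corrcoef(x[:-1], x[1:])[0, 1])
```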

  9. The hydrodeoxygenation of bioderived furans into alkanes.

    PubMed

    Sutton, Andrew D; Waldie, Fraser D; Wu, Ruilian; Schlaf, Marcel; Silks, Louis A Pete; Gordon, John C

    2013-05-01

    The conversion of biomass into fuels and chemical feedstocks is one part of a drive to reduce the world's dependence on crude oil. For transportation fuels in particular, wholesale replacement of a fuel is logistically problematic, not least because of the infrastructure that is already in place. Here, we describe the catalytic defunctionalization of a series of biomass-derived molecules to provide linear alkanes suitable for use as transportation fuels. These biomass-derived molecules contain a variety of functional groups, including olefins, furan rings and carbonyl groups. We describe the removal of these in either a stepwise process or a one-pot process using common reagents and catalysts under mild reaction conditions to provide n-alkanes in good yields and with high selectivities. Our general synthetic approach is applicable to a range of precursors with different carbon content (chain length). This allows the selective generation of linear alkanes with carbon chain lengths between eight and sixteen carbons.

  10. The hydrodeoxygenation of bioderived furans into alkanes

    NASA Astrophysics Data System (ADS)

    Sutton, Andrew D.; Waldie, Fraser D.; Wu, Ruilian; Schlaf, Marcel; 'Pete' Silks, Louis A.; Gordon, John C.

    2013-05-01

    The conversion of biomass into fuels and chemical feedstocks is one part of a drive to reduce the world's dependence on crude oil. For transportation fuels in particular, wholesale replacement of a fuel is logistically problematic, not least because of the infrastructure that is already in place. Here, we describe the catalytic defunctionalization of a series of biomass-derived molecules to provide linear alkanes suitable for use as transportation fuels. These biomass-derived molecules contain a variety of functional groups, including olefins, furan rings and carbonyl groups. We describe the removal of these in either a stepwise process or a one-pot process using common reagents and catalysts under mild reaction conditions to provide n-alkanes in good yields and with high selectivities. Our general synthetic approach is applicable to a range of precursors with different carbon content (chain length). This allows the selective generation of linear alkanes with carbon chain lengths between eight and sixteen carbons.

  11. Controlled Folding, Motional, and Constitutional Dynamic Processes of Polyheterocyclic Molecular Strands.

    PubMed

    Barboiu, Mihail; Stadler, Adrian-Mihail; Lehn, Jean-Marie

    2016-03-18

    General design principles have been developed for the control of the structural features of polyheterocyclic strands and their effector-modulated shape changes. Induced defined molecular motions permit designed enforcement of helical as well as linear molecular shapes. The ability of such molecular strands to bind metal cations allows the generation of coiling/uncoiling processes between helically folded and extended linear states. Large molecular motions are produced on coordination of metal ions, which may be made reversible by competition with an ancillary complexing agent and fueled by sequential acid/base neutralization energy. The introduction of hydrazone units into the strands confers upon them constitutional dynamics, whereby interconversion between different strand compositions is achieved through component exchange. These features have relevance for nanomechanical devices. We present a morphological and functional analysis of such systems developed in our laboratories. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  12. A study of the linear free energy model for DNA structures using the generalized Hamiltonian formalism

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yavari, M., E-mail: yavari@iaukashan.ac.ir

    2016-06-15

    We generalize the results of Nesterenko [13, 14] and Gogilidze and Surovtsev [15] for DNA structures. Using the generalized Hamiltonian formalism, we investigate solutions of the equilibrium shape equations for the linear free energy model.

  13. Interpreting the g loadings of intelligence test composite scores in light of Spearman's law of diminishing returns.

    PubMed

    Reynolds, Matthew R

    2013-03-01

    The linear loadings of intelligence test composite scores on a general factor (g) have been investigated recently in factor-analytic studies. Spearman's law of diminishing returns (SLODR), however, implies that the g loadings of test scores likely decrease in magnitude as g increases; that is, they are nonlinear. The purpose of this study was to (a) investigate whether the g loadings of composite scores from the Differential Ability Scales (2nd ed.; DAS-II; C. D. Elliott, 2007a, San Antonio, TX: Pearson) were nonlinear and (b) if they were nonlinear, to compare them with linear g loadings to demonstrate how SLODR alters the interpretation of these loadings. Linear and nonlinear confirmatory factor analysis (CFA) models were used to model Nonverbal Reasoning, Verbal Ability, Visual Spatial Ability, Working Memory, and Processing Speed composite scores in four age groups (5-6, 7-8, 9-13, and 14-17) from the DAS-II norming sample. The nonlinear CFA models provided better fit to the data than did the linear models. In support of SLODR, estimates obtained from the nonlinear CFAs indicated that g loadings decreased as g level increased, although the nonlinear portion of the nonverbal reasoning loading was not statistically significant across the age groups. Knowledge of general ability level informs composite score interpretation because g is less likely to produce differences, or is measured less, in those scores at higher g levels. One implication is that it may be more important to examine the pattern of specific abilities at higher general ability levels. PsycINFO Database Record (c) 2013 APA, all rights reserved.

  14. Attitude Determination Error Analysis System (ADEAS) mathematical specifications document

    NASA Technical Reports Server (NTRS)

    Nicholson, Mark; Markley, F.; Seidewitz, E.

    1988-01-01

    The mathematical specifications of Release 4.0 of the Attitude Determination Error Analysis System (ADEAS), which provides a general-purpose linear error analysis capability for various spacecraft attitude geometries and determination processes, are presented. The analytical basis of the system is presented, and detailed equations are provided for both three-axis-stabilized and spin-stabilized attitude sensor models.

  15. Equivalence between a generalized dendritic network and a set of one-dimensional networks as a ground of linear dynamics.

    PubMed

    Koda, Shin-ichi

    2015-05-28

    It has been shown by existing studies that some linear dynamical systems defined on a dendritic network are, in special cases, equivalent to systems defined on a set of one-dimensional networks, and that this transformation to the simpler picture, which we call linear chain (LC) decomposition, has a significant advantage for understanding properties of dendrimers. In this paper, we expand the class of LC decomposable systems with some generalizations. In addition, we propose two general sufficient conditions for LC decomposability, together with a procedure to systematically realize the LC decomposition. Some examples of LC decomposable linear dynamical systems are also presented with their graphs. The generalization of the LC decomposition is implemented in the following three aspects: (i) the type of linear operators; (ii) the shape of dendritic networks on which linear operators are defined; and (iii) the type of symmetry operations representing the symmetry of the systems. In generalization (iii), symmetry groups that represent the symmetry of dendritic systems are defined. The LC decomposition is realized by changing the basis of a linear operator defined on a dendritic network into bases of irreducible representations of the symmetry group. The results of this paper make it easier to utilize the LC decomposition in various cases, which may lead to a further understanding of the relation between the structure and functions of dendrimers in future studies.

  16. 500 GHz Optical Sampler for Advancing Nonlinear Processing with Generalized Optical Pulses

    DTIC Science & Technology

    2015-10-19

    that obtainable with electronics. Wide bandwidth pulses have a variety of applications such as in microwave signal processing, ultra-wideband... fiber-based entangled photon source, the first ultra-fast low-loss single photon switch, and the first telecom-band linear optics C-NOT gate. We

  17. Parallel dynamics between non-Hermitian and Hermitian systems

    NASA Astrophysics Data System (ADS)

    Wang, P.; Lin, S.; Jin, L.; Song, Z.

    2018-06-01

    We reveal a connection between non-Hermitian and Hermitian systems by studying a family of non-Hermitian and Hermitian Hamiltonians based on exact solutions. In general, for a dynamic process in a non-Hermitian system H, there always exists a parallel dynamic process governed by the corresponding Hermitian-conjugate system H†. We show that a linear superposition of the two parallel dynamics is exactly equivalent to the time evolution of a state under a Hermitian Hamiltonian ℋ, and we present the relations between {H, ℋ, H†}.

  18. Proposing a new iterative learning control algorithm based on a non-linear least square formulation - Minimising draw-in errors

    NASA Astrophysics Data System (ADS)

    Endelt, B.

    2017-09-01

    Forming operations are subject to external disturbances and changing operating conditions, e.g. a new material batch or increasing tool temperature due to plastic work; material properties and lubrication are sensitive to tool temperature. It is generally accepted that forming operations are not stable over time, and it is not uncommon to adjust the process parameters during the first half hour of production, indicating that process instability develops gradually over time. Thus, an in-process feedback control scheme might not be necessary to stabilize the process; an alternative approach is to apply an iterative learning algorithm which can learn from previously produced parts, i.e. a self-learning system which gradually reduces the error based on historical process information. What is proposed in the paper is a simple algorithm which can be applied to a wide range of sheet-metal forming processes. The input to the algorithm is the final flange edge geometry, and the basic idea is to reduce the least-squares error between the current flange geometry and a reference geometry using a non-linear least-squares algorithm. The ILC scheme is applied to a square deep-drawing and the Numisheet’08 S-rail benchmark problem; the numerical tests show that the proposed control scheme is able to control and stabilise both processes.
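
    A stripped-down sketch of the part-to-part iterative learning idea (a gradient-like stand-in for the paper's non-linear least-squares update; the process model and gain are hypothetical):

```python
import numpy as np

def process(u, disturbance):
    """Hypothetical stand-in for the forming process: maps a vector of
    process parameters u to a measured flange-edge geometry."""
    return 2.0 * u + 0.3 * np.sin(u) + disturbance

reference = np.full(8, 1.0)                 # target flange geometry
u = np.zeros(8)                             # initial process parameters
gain = 0.35                                 # learning gain (tuning choice)
rng = np.random.default_rng(7)
disturbance = rng.normal(0, 0.05, 8)        # slowly varying shop-floor drift

# Part-to-part iterative learning: after each part, nudge the parameters
# against the measured geometry error.
for part in range(30):
    error = reference - process(u, disturbance)
    u = u + gain * error
print(np.abs(reference - process(u, disturbance)).max())
```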

  19. A general and robust strategy for the synthesis of nearly monodisperse colloidal nanocrystals

    NASA Astrophysics Data System (ADS)

    Pang, Xinchang; Zhao, Lei; Han, Wei; Xin, Xukai; Lin, Zhiqun

    2013-06-01

    Colloidal nanocrystals exhibit a wide range of size- and shape-dependent properties and have found application in myriad fields, including optics, electronics, mechanics, drug delivery and catalysis, to name but a few. Synthetic protocols that enable the simple and convenient production of colloidal nanocrystals with controlled size, shape and composition are therefore of key general importance. Current strategies include organic solution-phase synthesis, thermolysis of organometallic precursors, sol-gel processes, hydrothermal reactions and biomimetic and dendrimer templating. Often, however, these procedures require stringent experimental conditions, are difficult to generalize, or necessitate tedious multistep reactions and purification. Recently, linear amphiphilic block co-polymer micelles have been used as templates to synthesize functional nanocrystals, but the thermodynamic instability of these micelles limits the scope of this approach. Here, we report a general strategy for crafting a large variety of functional nanocrystals with precisely controlled dimensions, compositions and architectures by using star-like block co-polymers as nanoreactors. This new class of co-polymers forms unimolecular micelles that are structurally stable, therefore overcoming the intrinsic instability of linear block co-polymer micelles. Our approach enables the facile synthesis of organic solvent- and water-soluble nearly monodisperse nanocrystals with desired composition and architecture, including core-shell and hollow nanostructures. We demonstrate the generality of our approach by describing, as examples, the synthesis of various sizes and architectures of metallic, ferroelectric, magnetic, semiconductor and luminescent colloidal nanocrystals.

  20. A quasi-likelihood approach to non-negative matrix factorization

    PubMed Central

    Devarajan, Karthik; Cheung, Vincent C.K.

    2017-01-01

    A unified approach to non-negative matrix factorization based on the theory of generalized linear models is proposed. This approach embeds a variety of statistical models, including the exponential family, within a single theoretical framework and provides a unified view of such factorizations from the perspective of quasi-likelihood. Using this framework, a family of algorithms for handling signal-dependent noise is developed and its convergence proven using the Expectation-Maximization algorithm. In addition, a measure to evaluate the goodness-of-fit of the resulting factorization is described. The proposed methods allow modeling of non-linear effects via appropriate link functions and are illustrated using an application in biomedical signal processing. PMID:27348511
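
    As one classical member of this family (the generalized KL divergence, which corresponds to a Poisson likelihood), the Lee-Seung multiplicative updates serve as a concrete reference point (a generic textbook version, not the paper's signal-dependent-noise algorithms):

```python
import numpy as np

def nmf_kl(V, rank, iters=200, seed=0):
    """Multiplicative updates for NMF under the generalized KL divergence."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, rank)) + 1e-3
    H = rng.random((rank, m)) + 1e-3
    for _ in range(iters):
        WH = W @ H + 1e-12
        W *= ((V / WH) @ H.T) / H.sum(axis=1)          # Lee-Seung update for W
        WH = W @ H + 1e-12
        H *= (W.T @ (V / WH)) / W.sum(axis=0)[:, None] # Lee-Seung update for H
    return W, H

V = np.random.default_rng(8).poisson(5, size=(30, 20)).astype(float)
W, H = nmf_kl(V, rank=4)
print(np.abs(V - W @ H).mean())
```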

  1. Accurate ocean bottom seismometer positioning method inspired by multilateration technique

    USGS Publications Warehouse

    Benazzouz, Omar; Pinheiro, Luis M.; Matias, Luis M. A.; Afilhado, Alexandra; Herold, Daniel; Haines, Seth S.

    2018-01-01

    The positioning of ocean bottom seismometers (OBS) is a key step in the processing flow of OBS data, especially in the case of self-popup OBS instruments. The use of first arrivals from airgun shots, rather than relying on the acoustic transponders mounted on the OBS, is becoming a trend and generally leads to more accurate positioning due to the statistics from a large number of shots. In this paper, a linearization of the OBS positioning problem via the multilateration technique is discussed. The discussed linear solution solves jointly for the average water-layer velocity and the OBS position using only shot locations and first-arrival times as input data.
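
    One standard way to set up such a multilateration linearization (the paper's exact formulation may differ): squaring the travel-time equations v²tᵢ² = ||x - sᵢ||² for sea-surface shots sᵢ makes the system linear in the horizontal OBS coordinates, an auxiliary unknown k = ||x||², and the squared velocity v²; depth then follows from k. All values here are synthetic:

```python
import numpy as np

rng = np.random.default_rng(9)
v_true = 1.5                                     # km/s average water velocity
obs = np.array([2.0, -1.0, 3.0])                 # true OBS position (km), z = depth
shots = np.column_stack([rng.uniform(-5, 5, (40, 2)), np.zeros(40)])
t = np.linalg.norm(shots - obs, axis=1) / v_true
t += rng.normal(0, 1e-3, t.size)                 # first-arrival picking noise

# Each shot gives: 2 sx * x + 2 sy * y - k + t^2 * v2 = sx^2 + sy^2
A = np.column_stack([2 * shots[:, 0], 2 * shots[:, 1], -np.ones(40), t**2])
b = (shots[:, :2] ** 2).sum(axis=1)
x, y, k, v2 = np.linalg.lstsq(A, b, rcond=None)[0]
depth = np.sqrt(k - x**2 - y**2)                 # recover depth from k
print((x, y, depth), np.sqrt(v2))                # recovered position and velocity
```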

  2. Unreliable gut feelings can lead to correct decisions: the somatic marker hypothesis in non-linear decision chains.

    PubMed

    Bedia, Manuel G; Di Paolo, Ezequiel

    2012-01-01

    Dual-process approaches of decision-making examine the interaction between affective/intuitive and deliberative processes underlying value judgment. From this perspective, decisions are supported by a combination of relatively explicit capabilities for abstract reasoning and relatively implicit evolved domain-general as well as learned domain-specific affective responses. One such approach, the somatic markers hypothesis (SMH), expresses these implicit processes as a system of evolved primary emotions supplemented by associations between affect and experience that accrue over lifetime, or somatic markers. In this view, somatic markers are useful only if their local capability to predict the value of an action is above a baseline equal to the predictive capability of the combined rational and primary emotional subsystems. We argue that decision-making has often been conceived of as a linear process: the effect of decision sequences is additive, local utility is cumulative, and there is no strong environmental feedback. This widespread assumption can have consequences for answering questions regarding the relative weight between the systems and their interaction within a cognitive architecture. We introduce a mathematical formalization of the SMH and study it in situations of dynamic, non-linear decision chains using a discrete-time stochastic model. We find, contrary to expectations, that decision-making events can interact non-additively with the environment in apparently paradoxical ways. We find that in non-lethal situations, primary emotions are represented globally over and above their local weight, showing a tendency for overcautiousness in situated decision chains. We also show that because they tend to counteract this trend, poorly attuned somatic markers that by themselves do not locally enhance decision-making, can still produce an overall positive effect. This result has developmental and evolutionary implications since, by promoting exploratory behavior, somatic markers would seem to be beneficial even at early stages when experiential attunement is poor. Although the model is formulated in terms of the SMH, the implications apply to dual systems theories in general since it makes minimal assumptions about the nature of the processes involved.

  3. Unreliable Gut Feelings Can Lead to Correct Decisions: The Somatic Marker Hypothesis in Non-Linear Decision Chains

    PubMed Central

    Bedia, Manuel G.; Di Paolo, Ezequiel

    2012-01-01

    Dual-process approaches of decision-making examine the interaction between affective/intuitive and deliberative processes underlying value judgment. From this perspective, decisions are supported by a combination of relatively explicit capabilities for abstract reasoning and relatively implicit evolved domain-general as well as learned domain-specific affective responses. One such approach, the somatic markers hypothesis (SMH), expresses these implicit processes as a system of evolved primary emotions supplemented by associations between affect and experience that accrue over lifetime, or somatic markers. In this view, somatic markers are useful only if their local capability to predict the value of an action is above a baseline equal to the predictive capability of the combined rational and primary emotional subsystems. We argue that decision-making has often been conceived of as a linear process: the effect of decision sequences is additive, local utility is cumulative, and there is no strong environmental feedback. This widespread assumption can have consequences for answering questions regarding the relative weight between the systems and their interaction within a cognitive architecture. We introduce a mathematical formalization of the SMH and study it in situations of dynamic, non-linear decision chains using a discrete-time stochastic model. We find, contrary to expectations, that decision-making events can interact non-additively with the environment in apparently paradoxical ways. We find that in non-lethal situations, primary emotions are represented globally over and above their local weight, showing a tendency for overcautiousness in situated decision chains. We also show that because they tend to counteract this trend, poorly attuned somatic markers that by themselves do not locally enhance decision-making, can still produce an overall positive effect. This result has developmental and evolutionary implications since, by promoting exploratory behavior, somatic markers would seem to be beneficial even at early stages when experiential attunement is poor. Although the model is formulated in terms of the SMH, the implications apply to dual systems theories in general since it makes minimal assumptions about the nature of the processes involved. PMID:23087655

  4. Research on On-Line Modeling of Fed-Batch Fermentation Process Based on v-SVR

    NASA Astrophysics Data System (ADS)

    Ma, Yongjun

    The fermentation process is very complex and non-linear; many parameters are not easy to measure directly on-line, so soft-sensor modeling is a good solution. This paper introduces v-support vector regression (v-SVR) for soft-sensor modeling of a fed-batch fermentation process. v-SVR is a novel type of learning machine that can control the fitting accuracy and prediction error by adjusting the parameter v. An on-line training algorithm is discussed in detail to reduce the training complexity of v-SVR. The experimental results show that v-SVR has a low error rate and better generalization with an appropriate v.
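
    A minimal soft-sensor sketch with scikit-learn's NuSVR on synthetic stand-in data (the paper trains on-line during fed-batch runs; this is the plain batch estimator):

```python
import numpy as np
from sklearn.svm import NuSVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Predict a hard-to-measure quality variable from easily measured
# process variables (names are illustrative stand-ins).
rng = np.random.default_rng(10)
X = rng.normal(size=(200, 4))                 # e.g. temperature, pH, feed, DO
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.1 * rng.normal(size=200)

model = make_pipeline(StandardScaler(), NuSVR(nu=0.5, C=10.0, kernel="rbf"))
model.fit(X[:150], y[:150])
print(model.score(X[150:], y[150:]))          # nu bounds the support-vector fraction
```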

  5. Sea surface temperature anomalies, planetary waves, and air-sea feedback in the middle latitudes

    NASA Technical Reports Server (NTRS)

    Frankignoul, C.

    1985-01-01

    Current analytical models for large-scale air-sea interactions in the middle latitudes are reviewed in terms of known sea-surface temperature (SST) anomalies. The scales and strength of different atmospheric forcing mechanisms are discussed, along with the damping and feedback processes controlling the evolution of the SST. Difficulties with effective SST modeling are described in terms of the techniques and results of case studies, numerical simulations of mixed-layer variability and statistical modeling. The relationship between SST and diabatic heating anomalies is considered and a linear model is developed for the response of the stationary atmosphere to the air-sea feedback. The results obtained with linear wave models are compared with the linear model results. Finally, sample data are presented from experiments with general circulation models into which specific SST anomaly data for the middle latitudes were introduced.

  6. A one-dimensional nonlinear problem of thermoelasticity in extended thermodynamics

    NASA Astrophysics Data System (ADS)

    Rawy, E. K.

    2018-06-01

    We solve a nonlinear, one-dimensional initial boundary-value problem of thermoelasticity in generalized thermodynamics. A Cattaneo-type evolution equation for the heat flux is used, which differs from the one used extensively in the literature. The hyperbolic nature of the associated linear system is clarified through a study of the characteristic curves. Progressive wave solutions with two finite speeds are noted. A numerical treatment is presented for the nonlinear system using a three-step, quasi-linearization, iterative finite-difference scheme for which the linear system of equations is the initial step in the iteration. The obtained results are discussed in detail. They clearly show the hyperbolic nature of the system, and may be of interest in investigating thermoelastic materials, not only at low temperatures, but also during high temperature processes involving rapid changes in temperature as in laser treatment of surfaces.

  7. A Bivariate Generalized Linear Item Response Theory Modeling Framework to the Analysis of Responses and Response Times.

    PubMed

    Molenaar, Dylan; Tuerlinckx, Francis; van der Maas, Han L J

    2015-01-01

    A generalized linear modeling framework to the analysis of responses and response times is outlined. In this framework, referred to as bivariate generalized linear item response theory (B-GLIRT), separate generalized linear measurement models are specified for the responses and the response times that are subsequently linked by cross-relations. The cross-relations can take various forms. Here, we focus on cross-relations with a linear or interaction term for ability tests, and cross-relations with a curvilinear term for personality tests. In addition, we discuss how popular existing models from the psychometric literature are special cases in the B-GLIRT framework depending on restrictions in the cross-relation. This allows us to compare existing models conceptually and empirically. We discuss various extensions of the traditional models motivated by practical problems. We also illustrate the applicability of our approach using various real data examples, including data on personality and cognitive ability.

  8. Evaluation of a Nonlinear Finite Element Program - ABAQUS.

    DTIC Science & Technology

    1983-03-15

    The DTIC indexing fragment lists ABAQUS material and utility subroutines: MATEXP (linearly elastic thermal expansions with isotropic, orthotropic and anisotropic properties), MATELG (linearly elastic materials for general sections, with options available for beam and shell elements), MATEXG (linearly elastic thermal expansions for general sections), plus utility routines for matrix decomposition, the Q-R algorithm, and vector normalization. Excerpt: "Obviously, by consolidating all the utility subroutines in a library, ABAQUS has…"

  9. Method for transition prediction in high-speed boundary layers, phase 2

    NASA Astrophysics Data System (ADS)

    Herbert, T.; Stuckert, G. K.; Lin, N.

    1993-09-01

    The parabolized stability equations (PSE) are a new and more reliable approach to analyzing the stability of streamwise-varying flows such as boundary layers. This approach has previously been validated for idealized incompressible flows. Here, the PSE are formulated for highly compressible flows in general curvilinear coordinates to permit the analysis of high-speed boundary-layer flows over fairly general bodies. Rigorous numerical studies are carried out to study the convergence and accuracy of the linear-stability code LSH and the linear/nonlinear PSE code PSH. Physical interfaces are set up to analyze the M = 8 boundary layer over a blunt cone calculated using a thin-layer Navier-Stokes (TNLS) code and the flow over a sharp cone at angle of attack calculated using the AFWAL parabolized Navier-Stokes (PNS) code. While stability and transition studies at high speeds are far from routine, the method developed here is the best tool available for researching the physical processes in high-speed boundary layers.

  10. Novel and general approach to linear filter design for contrast-to-noise ratio enhancement of magnetic resonance images with multiple interfering features in the scene

    NASA Astrophysics Data System (ADS)

    Soltanian-Zadeh, Hamid; Windham, Joe P.

    1992-04-01

    Maximizing the minimum absolute contrast-to-noise ratios (CNRs) between a desired feature and multiple interfering processes, by linear combination of images in a magnetic resonance imaging (MRI) scene sequence, is attractive for MRI analysis and interpretation. A general formulation of the problem is presented, along with a novel solution utilizing the simple and numerically stable method of Gram-Schmidt orthogonalization. We derive explicit solutions for the case of two interfering features first, then for three interfering features, and, finally, using a typical example, for an arbitrary number of interfering features. For the case of two interfering features, we also provide simplified analytical expressions for the signal-to-noise ratios (SNRs) and CNRs of the filtered images. The technique is demonstrated through its application to simulated and acquired MRI scene sequences of a human brain with a cerebral infarction. For these applications, a 50 to 100% improvement in the smallest absolute CNR is obtained.
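
    A minimal sketch of the core linear-combination idea, assuming made-up feature signatures and i.i.d. noise: Gram-Schmidt is used here to null the interfering signatures outright, which is a simpler objective than the paper's maximization of the minimum absolute CNR.

    ```python
    import numpy as np

    def gram_schmidt(vectors):
        """Return an orthonormal basis for the span of the given vectors."""
        basis = []
        for v in vectors:
            v = np.asarray(v, dtype=float)
            for b in basis:
                v = v - (v @ b) * b
            norm = np.linalg.norm(v)
            if norm > 1e-12:
                basis.append(v / norm)
        return np.array(basis)

    # Signatures across a 4-image MRI scene sequence (one intensity per image).
    d = np.array([1.0, 0.2, 0.8, 0.1])    # desired feature (e.g. the infarction)
    f1 = np.array([0.9, 0.9, 0.1, 0.2])   # interfering feature 1
    f2 = np.array([0.1, 0.7, 0.6, 0.9])   # interfering feature 2

    B = gram_schmidt([f1, f2])            # orthonormal basis of the interferers
    w = d - B.T @ (B @ d)                 # component of d orthogonal to them
    w /= np.linalg.norm(w)                # unit norm fixes the filtered noise level

    sigma = 0.05                          # per-image noise std (assumed i.i.d.)
    cnrs = [(w @ d - w @ f) / sigma for f in (f1, f2)]
    print("filter weights:", w.round(3), "CNRs vs interferers:", np.round(cnrs, 2))
    ```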

  11. Linear discrete systems with memory: a generalization of the Langmuir model

    NASA Astrophysics Data System (ADS)

    Băleanu, Dumitru; Nigmatullin, Raoul R.

    2013-10-01

    In this manuscript we analyze a general solution of the linear nonlocal Langmuir model within time scale calculus. Several generalizations of the Langmuir model are presented together with their exact corresponding solutions. The physical meaning of the proposed models is investigated and their corresponding geometries are reported.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Qichun; Zhang, Xuesong; Xu, Xingya

    Riverine carbon cycling is an important but insufficiently investigated component of the global carbon cycle. Analyses of environmental controls on riverine carbon cycling are critical for improved understanding of the mechanisms regulating carbon processing and storage along the terrestrial-aquatic continuum. Here, we compile and analyze riverine dissolved organic carbon (DOC) concentration data from 1402 United States Geological Survey (USGS) gauge stations to examine the spatial variability and environmental controls of DOC concentrations in United States (U.S.) surface waters. DOC concentrations exhibit high spatial variability, with an average of 6.42 ± 6.47 mg C/L (mean ± standard deviation). In general, high DOC concentrations occur in the Upper Mississippi River basin and the Southeastern U.S., while low concentrations are mainly distributed in the Western U.S. Single-factor analysis indicates that the slope of drainage areas, wetlands, forests, the percentage of first-order streams, and instream nutrients (such as nitrogen and phosphorus) pronouncedly influence DOC concentrations, but the explanatory power of each bivariate model is lower than 35%. Analyses based on general multi-linear regression models suggest that DOC concentrations are jointly impacted by multiple factors. Soil properties mainly show positive correlations with DOC concentrations; forest and shrub lands correlate positively with DOC concentrations, while urban areas and croplands demonstrate negative impacts; total instream phosphorus and dam density correlate positively with DOC concentrations. Notably, the relative importance of these environmental controls varies substantially across major U.S. water resource regions. In addition, DOC concentrations and environmental controls also show significant variability from small streams to large rivers, which may be caused by changing carbon sources and removal rates across river orders. In sum, our results reveal that general multi-linear regression analysis of twenty-one terrestrial and aquatic environmental factors can partially explain (56%) the DOC concentration variation. This study highlights the complexity of the interactions among these environmental factors in determining DOC concentrations, and thus calls for process-based, non-linear methodologies to constrain uncertainties in riverine DOC cycling.

  13. Global GNSS processing based on the raw observation approach

    NASA Astrophysics Data System (ADS)

    Strasser, Sebastian; Zehentner, Norbert; Mayer-Gürr, Torsten

    2017-04-01

    Many global navigation satellite system (GNSS) applications, e.g. Precise Point Positioning (PPP), require high-quality GNSS products, such as precise GNSS satellite orbits and clocks. These products are routinely determined by analysis centers of the International GNSS Service (IGS). The current processing methods of the analysis centers make use of the ionosphere-free linear combination to reduce the ionospheric influence. Some of the analysis centers also form observation differences, in general double-differences, to eliminate several additional error sources. The raw observation approach is a new GNSS processing approach that was developed at Graz University of Technology for kinematic orbit determination of low Earth orbit (LEO) satellites and subsequently adapted to global GNSS processing in general. This new approach offers some benefits compared to well-established approaches, such as a straightforward incorporation of new observables due to the avoidance of observation differences and linear combinations. This becomes especially important in view of the changing GNSS landscape with two new systems, the European system Galileo and the Chinese system BeiDou, currently in deployment. GNSS products generated at Graz University of Technology using the raw observation approach currently comprise precise GNSS satellite orbits and clocks, station positions and clocks, code and phase biases, and Earth rotation parameters. To evaluate the new approach, products generated using the Global Positioning System (GPS) constellation and observations from the global IGS station network are compared to those of the IGS analysis centers. The comparisons show that the products generated at Graz University of Technology are on a similar level of quality to the products determined by the IGS analysis centers. This confirms that the raw observation approach is applicable to global GNSS processing. Some areas requiring further work have been identified, enabling future improvements of the method.

  14. The Effect of a Workplace-Based Early Intervention Program on Work-Related Musculoskeletal Compensation Outcomes at a Poultry Meat Processing Plant.

    PubMed

    Donovan, Michael; Khan, Asaduzzaman; Johnston, Venerina

    2017-03-01

    Introduction The aim of this study is to determine whether a workplace-based early intervention injury prevention program reduces work-related musculoskeletal compensation outcomes in poultry meat processing workers. Methods A poultry meatworks in Queensland, Australia implemented an onsite early intervention which included immediate reporting and triage, reassurance, multidisciplinary participatory consultation, workplace modification and onsite physiotherapy. Secondary pre-post analyses of the meatworks' compensation data over 4 years were performed, with the intervention commencing 2 years into the study period. Outcome measures included rate of claims, costs per claim and work days absent at an individual claim level. Where possible, similar analyses were performed on data for Queensland's poultry meat processing industry (excluding the meatworks used in this study). Results At the intervention meatworks, an 18% reduction in claims per 1 million working hours (p = 0.017) was observed in the post-intervention period. Generalized linear modelling revealed a significant reduction in average costs per claim of $831 (OR 0.74; 95% CI 0.59-0.93; p = 0.009). Median days absent was reduced by 37% (p = 0.024). For the poultry meat processing industry over the same period, generalized linear modelling revealed no significant change in average costs per claim (OR 1.02; 95% CI 0.76-1.36; p = 0.91). Median days absent was unchanged (p = 0.93). Conclusion The introduction of an onsite, workplace-based early intervention injury prevention program demonstrated positive effects on compensation outcomes for work-related musculoskeletal disorders in poultry meat processing workers. Prospective studies are needed to confirm the findings of the present study.

  15. An approximate generalized linear model with random effects for informative missing data.

    PubMed

    Follmann, D; Wu, M

    1995-03-01

    This paper develops a class of models to deal with missing data from longitudinal studies. We assume that separate models for the primary response and missingness (e.g., number of missed visits) are linked by a common random parameter. Such models have been developed in the econometrics (Heckman, 1979, Econometrica 47, 153-161) and biostatistics (Wu and Carroll, 1988, Biometrics 44, 175-188) literature for a Gaussian primary response. We allow the primary response, conditional on the random parameter, to follow a generalized linear model and approximate the generalized linear model by conditioning on the data that describes missingness. The resultant approximation is a mixed generalized linear model with possibly heterogeneous random effects. An example is given to illustrate the approximate approach, and simulations are performed to critique the adequacy of the approximation for repeated binary data.

  16. Structured penalties for functional linear models-partially empirical eigenvectors for regression.

    PubMed

    Randolph, Timothy W; Harezlak, Jaroslaw; Feng, Ziding

    2012-01-01

    One of the challenges with functional data is incorporating geometric structure, or local correlation, into the analysis. This structure is inherent in the output from an increasing number of biomedical technologies, and a functional linear model is often used to estimate the relationship between the predictor functions and scalar responses. Common approaches to the problem of estimating a coefficient function typically involve two stages: regularization and estimation. Regularization is usually done via dimension reduction, projecting onto a predefined span of basis functions or a reduced set of eigenvectors (principal components). In contrast, we present a unified approach that directly incorporates geometric structure into the estimation process by exploiting the joint eigenproperties of the predictors and a linear penalty operator. In this sense, the components in the regression are 'partially empirical' and the framework is provided by the generalized singular value decomposition (GSVD). The form of the penalized estimation is not new, but the GSVD clarifies the process and informs the choice of penalty by making explicit the joint influence of the penalty and predictors on the bias, variance and performance of the estimated coefficient function. Laboratory spectroscopy data and simulations are used to illustrate the concepts.
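
    The estimator under discussion has the generic penalized form beta = argmin_b ||y - Xb||^2 + lambda ||Lb||^2, whose bias/variance trade-off the GSVD makes explicit. The sketch below, with invented data, a second-difference penalty L, and an arbitrary lambda, simply solves the corresponding normal equations; it does not reproduce the paper's GSVD analysis.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n, p = 60, 100                         # n curves, each sampled at p points
    t = np.linspace(0.0, 1.0, p)
    X = rng.normal(size=(n, p))            # predictor functions, one per row
    beta_true = np.sin(2 * np.pi * t)      # smooth coefficient function
    y = X @ beta_true + rng.normal(0, 0.5, n)

    # Second-difference penalty operator: encodes the local correlation
    # (smoothness) structure of the coefficient function.
    L = np.diff(np.eye(p), n=2, axis=0)    # shape (p - 2, p)

    lam = 10.0
    beta_hat = np.linalg.solve(X.T @ X + lam * L.T @ L, X.T @ y)
    print("max abs deviation from true coefficient:", np.abs(beta_hat - beta_true).max())
    ```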

  17. A survey of the state of the art and focused research in range systems, task 2

    NASA Technical Reports Server (NTRS)

    Yao, K.

    1986-01-01

    Many communication, control, and information processing subsystems are modeled by linear systems incorporating tapped delay lines (TDL). Such optimized subsystems require full-precision multiplications in the TDL. In order to reduce complexity and cost in a microprocessor implementation, these multiplications can be replaced by single-shift instructions, which are equivalent to multiplications by powers of two. Since the obvious operation of rounding the infinite-precision TDL coefficients to the nearest powers of two usually yields quite poor system performance, the optimum powers-of-two coefficient solution was considered. Detailed explanations of the use of branch-and-bound algorithms for finding the optimum powers-of-two solutions are given. A specific demonstration of this methodology in the design of a linear data equalizer, and its implementation in assembly language on an 8080 microprocessor with a 12-bit A/D converter, is reported. This simple microprocessor implementation with optimized TDL coefficients achieves system performance comparable to optimum linear equalization with full-precision multiplications for an input data rate of 300 baud. The philosophy demonstrated in this implementation is fully applicable to many other microprocessor-controlled information processing systems.
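
    A toy version of the point, under invented taps and input correlation: because the equalizer's mean-squared error couples the taps through the input autocorrelation, jointly optimized power-of-two coefficients can beat naive per-tap rounding. Exhaustive search stands in for the report's branch-and-bound.

    ```python
    import itertools
    import numpy as np
    from scipy.linalg import toeplitz

    h = np.array([0.83, -0.31, 0.12, -0.045])      # ideal full-precision TDL taps
    R = toeplitz([1.0, 0.7, 0.45, 0.3])            # input autocorrelation (assumed)

    def excess_mse(v):
        d = np.asarray(v) - h                      # coefficient error vector
        return d @ R @ d                           # R couples the taps together

    # Candidate coefficients: zero or a signed power of two, 2^-6 .. 2^0.
    cands = [0.0] + [s * 2.0 ** e for s in (1.0, -1.0) for e in range(-6, 1)]

    naive = np.array([min(cands, key=lambda c: abs(c - x)) for x in h])
    best = min(itertools.product(cands, repeat=len(h)), key=excess_mse)

    print("naive rounding:", naive, "excess MSE:", excess_mse(naive))
    print("joint optimum :", np.array(best), "excess MSE:", excess_mse(best))
    ```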

  18. Detection and Characterization of Exoplanets using Projections on Karhunen-Loeve Eigenimages: Forward Modeling

    NASA Astrophysics Data System (ADS)

    Pueyo, Laurent

    2016-01-01

    A new class of high-contrast image analysis algorithms that empirically fit and subtract systematic noise has led to recent discoveries of faint exoplanet/substellar companions and scattered-light images of circumstellar disks. The consensus emerging in the community is that these methods are extremely efficient at enhancing the detectability of faint astrophysical signals, but they generally create systematic biases in the observed properties. This poster provides a solution to this outstanding problem. We present an analytical derivation of a linear expansion that captures the impact of astrophysical over/self-subtraction in current image analysis techniques. We examine the general case, for which the reference images of the astrophysical scene move azimuthally and/or radially across the field of view as a result of the observation strategy. Our new method is based on perturbing the covariance matrix underlying any least-squares speckle problem and propagating this perturbation through the data analysis algorithm. This work is presented in the framework of Karhunen-Loeve Image Processing (KLIP), but it can be easily generalized to methods relying on linear combinations of images (instead of eigen-modes). Based on this linear expansion, obtained in the most general case, we then demonstrate practical applications of this new algorithm. We first consider the case of the spectral extraction of faint point sources in IFS data and illustrate, using public Gemini Planet Imager commissioning data, that our novel perturbation-based Forward Modeling (which we named KLIP-FM) can indeed alleviate algorithmic biases. We then apply KLIP-FM to the detection of point sources and show how it decreases the rate of false negatives while keeping the rate of false positives unchanged when compared to classical KLIP. This can potentially have important consequences for the design of follow-up strategies of ongoing direct imaging surveys.
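
    For orientation, the sketch below shows the unperturbed KLIP step that the forward model linearizes: build Karhunen-Loeve eigenimages from a reference library, project the science frame onto the leading modes, and subtract. Everything is synthetic, and the printed throughput illustrates the self-subtraction bias that KLIP-FM is designed to model; the perturbation expansion itself is not reproduced here.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n_ref, npix, k = 30, 400, 5              # reference frames, pixels, KL modes kept

    refs = rng.normal(size=(n_ref, npix))    # reference library (speckle realizations)
    refs -= refs.mean(axis=0)                # mean-subtract before the KL decomposition

    # Eigenimages: right singular vectors of the reference matrix.
    _, _, vt = np.linalg.svd(refs, full_matrices=False)
    Z = vt[:k]                               # (k, npix), orthonormal rows

    planet = np.zeros(npix)
    planet[200:205] = 1.0                    # faint localized astrophysical signal
    science = 0.8 * refs[3] + 0.3 * planet   # speckles plus planet

    residual = science - Z.T @ (Z @ science) # KLIP: subtract projection on KL modes

    # Throughput below 1 quantifies over/self-subtraction of the planet signal.
    print("planet throughput:", (residual @ planet) / (planet @ planet))
    ```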

  19. Resultant as the determinant of a Koszul complex

    NASA Astrophysics Data System (ADS)

    Anokhina, A. S.; Morozov, A. Yu.; Shakirov, Sh. R.

    2009-09-01

    The determinant is a very important characteristic of a linear map between vector spaces. Two generalizations of linear maps are intensively used in modern theory: linear complexes (nilpotent chains of linear maps) and nonlinear maps. The determinant of a complex and the resultant are then the corresponding generalizations of the determinant of a linear map. It turns out that these two quantities are related: the resultant of a nonlinear map is the determinant of the corresponding Koszul complex. We give an elementary introduction into these notions and relations, which will definitely play a role in the future development of theoretical physics.

  20. Towards a non-linear theory for fluid pressure and osmosis in shales

    NASA Astrophysics Data System (ADS)

    Droghei, Riccardo; Salusti, Ettore

    2015-04-01

    In exploiting deep hydrocarbon reservoirs, injections of fluid and/or solute are often used. To control and avoid troubles such as unexpected fluid and gas diffusion, a reservoir characterization can also be obtained from observations of the space and time evolution of the micro-earthquake clouds resulting from such injections. This is important since several of the processes caused by fluid injections can modify the deep matrix. Information about the evolution of such micro-seismicity clouds therefore plays a realistic role in reservoir analyses. To reach a better insight into such processes, and to obtain better system control, we here analyze the initial stress necessary to originate strong non-linear transients of combined fluid pressure and solute density (osmosis) in a porous matrix. These can perturb the rock structure in a mild (i.e., linear diffusion) or dramatic non-linear way, eventually inducing rock deformations, micro-earthquakes or fractures. In more detail, we first assume a linear Hooke law relating strain, stress, solute density and fluid pressure, and analyze their effect on the porous rock dynamics. We then analyze its generalization, i.e., the further non-linear effect of a stronger external pressure, also in the presence of a trend of pressure or solute in the whole region. We moreover characterize the zones where the sudden arrival of such a front can cause micro-earthquakes or fractures. All this allows us to reach a novel, more realistic insight into the control of rock evolution in the presence of strong pressure fronts, and thus a more efficient reservoir control that avoids large geological perturbations. It is of interest that our results are very similar to those found by Shapiro et al. (2013) with a different approach.

  1. Power and process: The politics of electricity sector reform in Uganda

    NASA Astrophysics Data System (ADS)

    Gore, Christopher David

    In 2007, Uganda had one of the lowest levels of access to electricity in the world. Given the influence of multilateral and bilateral agencies in Uganda; the strong international reputation and domestic influence of its President; the country's historic achievements in public sector and economic reform; and the intimate connection between economic performance, social well-being and access to electricity, the problems with Uganda's electricity sector have proven deeply frustrating and, indeed, puzzling. Following increased scholarly attention to the relationship between political change, policymaking, and public sector reform in sub-Saharan Africa and the developing world generally, this thesis examines the multilevel politics of Uganda's electricity sector reform process. This study contends that explanations for Uganda's electricity sector reform problems generally, and hydroelectric dam construction efforts specifically, must move beyond technical and financial factors. Problems in this sector have also been the result of a model of reform (promoted by the World Bank) that failed adequately to account for the character of political change. Indeed, the model of reform that was promoted and implemented was risky and it was deeply antagonistic to domestic and international civil society organizations. In addition, it was presented as a linear, technical, apolitical exercise. Finally the model was inconsistent with key principles the Bank itself, and public policy literature generally, suggest are needed for success. Based on this analysis, the thesis contends that policymaking and reform must be understood as deeply political processes, which not only define access to services, but also participation in, and exclusion from, national debates. Future approaches to reform and policymaking must anticipate the complex, multilevel, non-linear character of 'second-generation' policy issues like electricity, and the political and institutional capacity needed to increase the potential for success. At the heart of this approach is a need to carefully consider how the character of state-society relations in the country---"governance"---will influence reform processes and outcomes.

  2. Bisimulation equivalence of differential-algebraic systems

    NASA Astrophysics Data System (ADS)

    Megawati, Noorma Yulia; Schaft, Arjan van der

    2018-01-01

    In this paper, the notion of bisimulation relation for linear input-state-output systems is extended to general linear differential-algebraic (DAE) systems. Geometric control theory is used to derive a linear-algebraic characterisation of bisimulation relations, and an algorithm for computing the maximal bisimulation relation between two linear DAE systems. The general definition is specialised to the case where the matrix pencil sE - A is regular. Furthermore, by developing a one-sided version of bisimulation, characterisations of simulation and abstraction are obtained.

  3. Assessment of bias correction under transient climate change

    NASA Astrophysics Data System (ADS)

    Van Schaeybroeck, Bert; Vannitsem, Stéphane

    2015-04-01

    Calibration of climate simulations is necessary since large systematic discrepancies are generally found between the model climate and the observed climate. Recent studies have cast doubt upon the common assumption of the bias being stationary when the climate changes. This led to the development of new methods, mostly based on linear sensitivity of the biases as a function of time or forcing (Kharin et al. 2012). However, recent studies uncovered more fundamental problems using both low-order systems (Vannitsem 2011) and climate models, showing that biases may display complicated non-linear variations under climate change. The latter analysis focused on biases derived from the equilibrium climate sensitivity, thereby ignoring the effect of the transient climate sensitivity. Based on linear response theory, a general method of bias correction is therefore proposed that can be applied to any climate forcing scenario. The validity of the method is addressed using twin experiments with the climate model of intermediate complexity LOVECLIM (Goosse et al., 2010). We evaluate to what extent the bias change is sensitive to the structure (frequency) of the applied forcing (here greenhouse gases) and whether the linear response theory is valid for global and/or local variables. To answer these questions, we perform large-ensemble simulations using different 300-year scenarios of forced carbon-dioxide concentrations. Reality and simulations are assumed to differ by a model error emulated as a parametric error in the wind drag or in the radiative scheme. References [1] H. Goosse et al., 2010: Description of the Earth system model of intermediate complexity LOVECLIM version 1.2, Geosci. Model Dev., 3, 603-633. [2] S. Vannitsem, 2011: Bias correction and post-processing under climate change, Nonlin. Processes Geophys., 18, 911-924. [3] V.V. Kharin, G. J. Boer, W. J. Merryfield, J. F. Scinocca, and W.-S. Lee, 2012: Statistical adjustment of decadal predictions in a changing climate, Geophys. Res. Lett., 39, L19705.

  4. Using Linear and Quadratic Functions to Teach Number Patterns in Secondary School

    ERIC Educational Resources Information Center

    Kenan, Kok Xiao-Feng

    2017-01-01

    This paper outlines an approach to definitively find the general term in a number pattern, of either a linear or quadratic form, by using the general equation of a linear or quadratic function. This approach is governed by four principles: (1) identifying the position of the term (input) and the term itself (output); (2) recognising that each…

  5. How to characterize a nonlinear elastic material? A review on nonlinear constitutive parameters in isotropic finite elasticity

    PubMed Central

    2017-01-01

    The mechanical response of a homogeneous isotropic linearly elastic material can be fully characterized by two physical constants, the Young’s modulus and the Poisson’s ratio, which can be derived by simple tensile experiments. Any other linear elastic parameter can be obtained from these two constants. By contrast, the physical responses of nonlinear elastic materials are generally described by parameters which are scalar functions of the deformation, and their particular choice is not always clear. Here, we review in a unified theoretical framework several nonlinear constitutive parameters, including the stretch modulus, the shear modulus and the Poisson function, that are defined for homogeneous isotropic hyperelastic materials and are measurable under axial or shear experimental tests. These parameters represent changes in the material properties as the deformation progresses, and can be identified with their linear equivalent when the deformations are small. Universal relations between certain of these parameters are further established, and then used to quantify nonlinear elastic responses in several hyperelastic models for rubber, soft tissue and foams. The general parameters identified here can also be viewed as a flexible basis for coupling elastic responses in multi-scale processes, where an open challenge is the transfer of meaningful information between scales. PMID:29225507

  6. Zero-dynamics principle for perfect quantum memory in linear networks

    NASA Astrophysics Data System (ADS)

    Yamamoto, Naoki; James, Matthew R.

    2014-07-01

    In this paper, we study a general linear networked system that contains a tunable memory subsystem; that is, it is decoupled from an optical field for state transportation during the storage process, while it couples to the field during the writing or reading process. The input is given by a single photon state or a coherent state in a pulsed light field. We then completely and explicitly characterize the condition required on the pulse shape achieving the perfect state transfer from the light field to the memory subsystem. The key idea to obtain this result is the use of zero-dynamics principle, which in our case means that, for perfect state transfer, the output field during the writing process must be a vacuum. A useful interpretation of the result in terms of the transfer function is also given. Moreover, a four-node network composed of atomic ensembles is studied as an example, demonstrating how the input field state is transferred to the memory subsystem and what the input pulse shape to be engineered for perfect memory looks like.

  7. Quantum corrections to the generalized Proca theory via a matter field

    NASA Astrophysics Data System (ADS)

    Amado, André; Haghani, Zahra; Mohammadi, Azadeh; Shahidi, Shahab

    2017-09-01

    We study the quantum corrections to the generalized Proca theory via matter loops. We consider two types of interactions, linear and nonlinear in the vector field. Calculating the one-loop correction to the vector field propagator, three- and four-point functions, we show that the non-linear interactions are harmless, although they renormalize the theory. The linear matter-vector field interactions introduce ghost degrees of freedom to the generalized Proca theory. Treating the theory as an effective theory, we calculate the energy scale up to which the theory remains healthy.

  8. Hybrid General Pattern Search and Simulated Annealing for Industrial Production Planning Problems

    NASA Astrophysics Data System (ADS)

    Vasant, P.; Barsoum, N.

    2010-06-01

    In this paper, the hybridization of the GPS (General Pattern Search) method with SA (Simulated Annealing) is incorporated into the optimization process in order to find the global optimal solution for the fitness function and decision variables with minimum computational CPU time. The real strength of the SA approach is tested on this case-study problem of industrial production planning, owing to SA's great advantage of easily escaping local minima by accepting up-hill moves through a probabilistic procedure in the final stages of optimization. Vasant [1] in his Ph.D. thesis provided 16 different heuristic and meta-heuristic techniques for solving industrial production problems with non-linear cubic objective functions, eight decision variables and 29 constraints. In this paper, fuzzy technological problems have been solved using hybrid techniques of general pattern search and simulated annealing. The simulation and computational results are compared to those of various other evolutionary techniques.
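
    A compact sketch of one way such a hybrid can be wired together, assuming a toy multimodal objective rather than the thesis's cubic production-planning model: proposals come from pattern-search-style coordinate polls on a mesh that is refined on rejection, and acceptance follows the simulated-annealing rule that permits occasional up-hill moves.

    ```python
    import math
    import random

    def objective(x):
        # Toy multimodal test objective (Himmelblau); the real planning model
        # with its constraints is not reproduced here.
        return (x[0] ** 2 + x[1] - 11) ** 2 + (x[0] + x[1] ** 2 - 7) ** 2

    def hybrid_gps_sa(x, temp=5.0, cooling=0.995, mesh=1.0, iters=5000, seed=0):
        rng = random.Random(seed)
        fx = objective(x)
        best, fbest = list(x), fx
        for _ in range(iters):
            i = rng.randrange(len(x))              # pattern-search poll direction
            y = list(x)
            y[i] += mesh * rng.choice((-1.0, 1.0))
            fy = objective(y)
            # SA acceptance: downhill always, uphill with Boltzmann probability,
            # which is what lets the search escape local minima.
            if fy < fx or rng.random() < math.exp((fx - fy) / max(temp, 1e-12)):
                x, fx = y, fy
                if fx < fbest:
                    best, fbest = list(x), fx
            else:
                mesh *= 0.999                      # GPS-style mesh refinement
            temp *= cooling
        return best, fbest

    print(hybrid_gps_sa([0.0, 0.0]))   # one minimum is near (3, 2) with value 0
    ```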

  9. How multiplicity determines entropy and the derivation of the maximum entropy principle for complex systems.

    PubMed

    Hanel, Rudolf; Thurner, Stefan; Gell-Mann, Murray

    2014-05-13

    The maximum entropy principle (MEP) is a method for obtaining the most likely distribution functions of observables from statistical systems by maximizing entropy under constraints. The MEP has found hundreds of applications in ergodic and Markovian systems in statistical mechanics, information theory, and statistics. For several decades there has been an ongoing controversy over whether the notion of the maximum entropy principle can be extended in a meaningful way to nonextensive, nonergodic, and complex statistical systems and processes. In this paper we start by reviewing how Boltzmann-Gibbs-Shannon entropy is related to multiplicities of independent random processes. We then show how the relaxation of independence naturally leads to the most general entropies that are compatible with the first three Shannon-Khinchin axioms, the (c,d)-entropies. We demonstrate that the MEP is a perfectly consistent concept for nonergodic and complex statistical systems if their relative entropy can be factored into a generalized multiplicity and a constraint term. The problem of finding such a factorization reduces to finding an appropriate representation of relative entropy in a linear basis. In a particular example we show that path-dependent random processes with memory naturally require specific generalized entropies. The example is to our knowledge the first exact derivation of a generalized entropy from the microscopic properties of a path-dependent random process.
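
    For readers who want the baseline case the paper generalizes, here is the textbook MEP computation on a finite state space: maximizing Boltzmann-Gibbs-Shannon entropy under a mean constraint yields an exponential-family distribution whose Lagrange multiplier can be found numerically. The die example is illustrative; the paper's (c,d)-entropies are not implemented.

    ```python
    import numpy as np
    from scipy.optimize import brentq

    x = np.arange(1, 7)          # states of a die
    target_mean = 4.5            # observed constraint on the mean

    def mean_of(lam):
        w = np.exp(-lam * x)     # unnormalized maxent weights
        return (x * w).sum() / w.sum()

    # Solve for the Lagrange multiplier matching the constraint.
    lam = brentq(lambda l: mean_of(l) - target_mean, -5.0, 5.0)
    p = np.exp(-lam * x)
    p /= p.sum()
    print("maxent distribution:", p.round(4), "entropy:", -(p * np.log(p)).sum())
    ```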

  10. Adiabatic reduction of a model of stochastic gene expression with jump Markov process.

    PubMed

    Yvinec, Romain; Zhuge, Changjing; Lei, Jinzhi; Mackey, Michael C

    2014-04-01

    This paper considers adiabatic reduction in a model of stochastic gene expression with bursting transcription considered as a jump Markov process. In this model, the process of gene expression with auto-regulation is described by fast/slow dynamics. The production of mRNA is assumed to follow a compound Poisson process occurring at a rate depending on protein levels (the phenomenon called bursting in molecular biology) and the production of protein is a linear function of mRNA numbers. When the dynamics of mRNA is assumed to be a fast process (due to faster mRNA degradation than that of protein) we prove that, with appropriate scalings in the burst rate, jump size or translational rate, the bursting phenomena can be transmitted to the slow variable. We show that, depending on the scaling, the reduced equation is either a stochastic differential equation with a jump Poisson process or a deterministic ordinary differential equation. These results are significant because adiabatic reduction techniques seem not to have been rigorously justified for a stochastic differential system containing a jump Markov process. We expect that the results can be generalized to adiabatic methods in more general stochastic hybrid systems.
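
    A small simulation of the reduced regime described above, with invented rates: bursts arrive as a Poisson process, each burst adds an exponentially distributed jump to the protein level, and the protein decays deterministically in between, i.e. a differential equation driven by a jump process. Sampling just before each jump makes the empirical mean comparable to the stationary value k*b/gamma.

    ```python
    import numpy as np

    def simulate(t_end=2000.0, k=0.5, b=20.0, gamma=0.05, seed=3):
        """Jump process: dp/dt = -gamma*p between bursts; bursts ~ Exp(b) at rate k."""
        rng = np.random.default_rng(seed)
        t, p, pre_jump = 0.0, 0.0, []
        while True:
            dt = rng.exponential(1.0 / k)     # waiting time to the next burst
            if t + dt > t_end:
                break
            t += dt
            p *= np.exp(-gamma * dt)          # deterministic protein decay
            pre_jump.append(p)                # state seen by the arriving burst
            p += rng.exponential(b)           # exponential burst size
        return np.array(pre_jump)

    samples = simulate()
    # By PASTA, Poisson arrivals see the stationary distribution, so the pre-jump
    # mean should approach E[p] = k*b/gamma = 200 for these illustrative rates.
    print("empirical mean:", samples.mean().round(1), "theoretical:", 0.5 * 20.0 / 0.05)
    ```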

  11. On the interpretation of weight vectors of linear models in multivariate neuroimaging.

    PubMed

    Haufe, Stefan; Meinecke, Frank; Görgen, Kai; Dähne, Sven; Haynes, John-Dylan; Blankertz, Benjamin; Bießmann, Felix

    2014-02-15

    The increase in spatiotemporal resolution of neuroimaging devices is accompanied by a trend towards more powerful multivariate analysis methods. Often it is desired to interpret the outcome of these methods with respect to the cognitive processes under study. Here we discuss which methods allow for such interpretations, and provide guidelines for choosing an appropriate analysis for a given experimental goal: For a surgeon who needs to decide where to remove brain tissue it is most important to determine the origin of cognitive functions and associated neural processes. In contrast, when communicating with paralyzed or comatose patients via brain-computer interfaces, it is most important to accurately extract the neural processes specific to a certain mental state. These equally important but complementary objectives require different analysis methods. Determining the origin of neural processes in time or space from the parameters of a data-driven model requires what we call a forward model of the data; such a model explains how the measured data was generated from the neural sources. Examples are general linear models (GLMs). Methods for the extraction of neural information from data can be considered as backward models, as they attempt to reverse the data generating process. Examples are multivariate classifiers. Here we demonstrate that the parameters of forward models are neurophysiologically interpretable in the sense that significant nonzero weights are only observed at channels the activity of which is related to the brain process under study. In contrast, the interpretation of backward model parameters can lead to wrong conclusions regarding the spatial or temporal origin of the neural signals of interest, since significant nonzero weights may also be observed at channels the activity of which is statistically independent of the brain process under study. As a remedy for the linear case, we propose a procedure for transforming backward models into forward models. This procedure enables the neurophysiological interpretation of the parameters of linear backward models. We hope that this work raises awareness for an often encountered problem and provides a theoretical basis for conducting better interpretable multivariate neuroimaging analyses. Copyright © 2013 The Authors. Published by Elsevier Inc. All rights reserved.
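
    The proposed remedy admits a one-line linear-algebra form: given extraction filters W with estimated sources s_hat = W^T x, the corresponding activation pattern is A = cov(X) W cov(s_hat)^-1. A minimal synthetic demonstration with made-up data: a channel that carries only a distractor receives a nonzero filter weight, while the pattern correctly assigns it a near-zero value.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    n, channels = 5000, 8
    s = rng.normal(size=(n, 1))                       # source of interest
    z = rng.normal(size=(n, 1))                       # distractor source
    a_true = np.array([[1.0, 0.8, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]])
    d_true = np.array([[0.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0]])
    X = s @ a_true + z @ d_true + 0.5 * rng.normal(size=(n, channels))

    # Backward model: least-squares spatial filter extracting s from X. It puts
    # weight on channel 2, which carries no signal of interest, only the distractor.
    W = np.linalg.lstsq(X, s, rcond=None)[0]          # shape (channels, 1)
    s_hat = X @ W

    # Forward transformation for the linear case: A = cov(X) W cov(s_hat)^-1.
    A = np.cov(X.T) @ W / np.cov(s_hat.T)
    print("filter weights (misleading):   ", W.ravel().round(2))
    print("activation pattern (faithful): ", A.ravel().round(2))
    ```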

  12. System and method for generating 3D images of non-linear properties of rock formation using surface seismic or surface to borehole seismic or both

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vu, Cung Khac; Nihei, Kurt Toshimi; Johnson, Paul A.

    A system and method of characterizing properties of a medium from a non-linear interaction include generating, by first and second acoustic sources disposed on a surface of the medium on a first line, first and second acoustic waves. The first and second acoustic sources are controllable such that trajectories of the first and second acoustic waves intersect in a mixing zone within the medium. The method further includes receiving, by a receiver positioned in a plane containing the first and second acoustic sources, a third acoustic wave generated by a non-linear mixing process from the first and second acoustic waves in the mixing zone; and creating a first two-dimensional image of non-linear properties, or a first ratio of compressional velocity and shear velocity, or both, of the medium in a first plane generally perpendicular to the surface and containing the first line, based on the received third acoustic wave.

  13. Nonlinear Dynamic Models in Advanced Life Support

    NASA Technical Reports Server (NTRS)

    Jones, Harry

    2002-01-01

    To facilitate analysis, ALS systems are often assumed to be linear and time invariant, but they usually have important nonlinear and dynamic aspects. Nonlinear dynamic behavior can be caused by time varying inputs, changes in system parameters, nonlinear system functions, closed loop feedback delays, and limits on buffer storage or processing rates. Dynamic models are usually cataloged according to the number of state variables. The simplest dynamic models are linear, using only integration, multiplication, addition, and subtraction of the state variables. A general linear model with only two state variables can produce all the possible dynamic behavior of linear systems with many state variables, including stability, oscillation, or exponential growth and decay. Linear systems can be described using mathematical analysis. Nonlinear dynamics can be fully explored only by computer simulations of models. Unexpected behavior is produced by simple models having only two or three state variables with simple mathematical relations between them. Closed loop feedback delays are a major source of system instability. Exceeding limits on buffer storage or processing rates forces systems to change operating mode. Different equilibrium points may be reached from different initial conditions. Instead of one stable equilibrium point, the system may have several equilibrium points, oscillate at different frequencies, or even behave chaotically, depending on the system inputs and initial conditions. The frequency spectrum of an output oscillation may contain harmonics and the sums and differences of input frequencies, but it may also contain a stable limit cycle oscillation not related to input frequencies. We must investigate the nonlinear dynamic aspects of advanced life support systems to understand and counter undesirable behavior.

  14. A unified view on weakly correlated recurrent networks

    PubMed Central

    Grytskyy, Dmytro; Tetzlaff, Tom; Diesmann, Markus; Helias, Moritz

    2013-01-01

    The diversity of neuron models used in contemporary theoretical neuroscience to investigate specific properties of covariances in the spiking activity raises the question how these models relate to each other. In particular it is hard to distinguish between generic properties of covariances and peculiarities due to the abstracted model. Here we present a unified view on pairwise covariances in recurrent networks in the irregular regime. We consider the binary neuron model, the leaky integrate-and-fire (LIF) model, and the Hawkes process. We show that linear approximation maps each of these models to either of two classes of linear rate models (LRM), including the Ornstein–Uhlenbeck process (OUP) as a special case. The distinction between both classes is the location of additive noise in the rate dynamics, which is located on the output side for spiking models and on the input side for the binary model. Both classes allow closed form solutions for the covariance. For output noise it separates into an echo term and a term due to correlated input. The unified framework enables us to transfer results between models. For example, we generalize the binary model and the Hawkes process to the situation with synaptic conduction delays and simplify derivations for established results. Our approach is applicable to general network structures and suitable for the calculation of population averages. The derived averages are exact for fixed out-degree network architectures and approximate for fixed in-degree. We demonstrate how taking into account fluctuations in the linearization procedure increases the accuracy of the effective theory and we explain the class dependent differences between covariances in the time and the frequency domain. Finally we show that the oscillatory instability emerging in networks of LIF models with delayed inhibitory feedback is a model-invariant feature: the same structure of poles in the complex frequency plane determines the population power spectra. PMID:24151463

  15. Powerless fluxes and forces, and change of scale in irreversible thermodynamics

    NASA Astrophysics Data System (ADS)

    Ostoja-Starzewski, M.; Zubelewicz, A.

    2011-08-01

    We show that the dissipation function of linear processes in continuum thermomechanics may be treated as the average of the statistically fluctuating dissipation rate on either coarse or small spatial scales. The first case involves thermodynamic orthogonality due to Ziegler, while the second one involves powerless forces in a general solution of the Clausius-Duhem inequality according to Poincaré and Edelen. This formulation is demonstrated using the example of parabolic versus hyperbolic heat conduction. The existence of macroscopic powerless heat fluxes is traced here to the hidden dissipative processes at lower temporal and spatial scales.

  16. Comparing Multiple-Group Multinomial Log-Linear Models for Multidimensional Skill Distributions in the General Diagnostic Model. Research Report. ETS RR-08-35

    ERIC Educational Resources Information Center

    Xu, Xueli; von Davier, Matthias

    2008-01-01

    The general diagnostic model (GDM) utilizes located latent classes for modeling a multidimensional proficiency variable. In this paper, the GDM is extended by employing a log-linear model for multiple populations that assumes constraints on parameters across multiple groups. This constrained model is compared to log-linear models that assume…

  17. Real-Time Exponential Curve Fits Using Discrete Calculus

    NASA Technical Reports Server (NTRS)

    Rowe, Geoffrey

    2010-01-01

    An improved solution for curve fitting data to an exponential equation (y = A e^(Bt) + C) has been developed. This improvement is in four areas: speed, stability, determinant processing time, and the removal of limits. The solution presented avoids iterative techniques and their stability errors by using three mathematical ideas: discrete calculus, a special relationship (between exponential curves and the Mean Value Theorem for Derivatives), and a simple linear curve fit algorithm. This method can also be applied to fitting data to the general power law equation y = A x^B + C and the general geometric growth equation y = A k^(Bt) + C.
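
    A hedged reconstruction of the flavor of such a non-iterative fit, not NASA's exact algorithm: on uniformly sampled data the first differences eliminate C, the ratio of successive differences isolates e^(B*dt), and with B fixed the remaining parameters are a plain linear least-squares problem.

    ```python
    import numpy as np

    def fit_exponential(t, y):
        """Non-iterative fit of y = A*exp(B*t) + C on uniformly spaced samples."""
        dt = t[1] - t[0]
        dy = np.diff(y)                      # first differences eliminate C
        ratio = dy[1:] / dy[:-1]             # each ratio estimates exp(B*dt)
        B = np.log(np.median(ratio)) / dt    # median adds a little noise robustness
        # With B fixed, the model is linear in (A, C): plain least squares.
        M = np.column_stack([np.exp(B * t), np.ones_like(t)])
        A, C = np.linalg.lstsq(M, y, rcond=None)[0]
        return A, B, C

    rng = np.random.default_rng(5)
    t = np.linspace(0.0, 2.0, 50)
    y = 3.0 * np.exp(-1.7 * t) + 0.5 + rng.normal(0.0, 1e-3, 50)
    print(fit_exponential(t, y))             # approximately (3.0, -1.7, 0.5)
    ```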

  18. Diffusion Coefficients of a Non-Linear Astrophysical Process: Luis Carrasco's Scientific (and other) Contributions

    NASA Astrophysics Data System (ADS)

    Aguilar, L. A.

    2009-11-01

    The Luis Carrasco phenomenon in Astrophysics is a widespread event that has appeared in many branches of theoretical and observational Astronomy, as well as in astronomical instrumentation. It is an ubiquitous and highly non-linear effect with multiple coupling constants. To understand it, it is necessary to dwell, not only into many areas of Astronomy, but of human culture and knowledge in general. Some authors believe that it is only through the ``many-worlds'' interpretation of Quantum Mechanics, that this effect can be understood. In this work, we will demonstrate its fractal nature, present a panoramic view of this global effect, and estimate its diffusion coefficients in the regular and irregular regimes. Connections with areas outside Astronomy will be shown.

  19. Block Gauss elimination followed by a classical iterative method for the solution of linear systems

    NASA Astrophysics Data System (ADS)

    Alanelli, Maria; Hadjidimos, Apostolos

    2004-02-01

    In the last two decades many papers have appeared in which the application of an iterative method for the solution of a linear system is preceded by a step of the Gauss elimination process, in the hope that this will increase the rate of convergence of the iterative method. This combination of methods has proven successful especially when the matrix A of the system is an M-matrix. The purpose of this paper is to extend the idea from one to more Gauss elimination steps, to consider other classes of matrices A, e.g., p-cyclic consistently ordered, and to generalize and improve the asymptotic convergence rates of some of the methods known so far.
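
    A toy rendering of the scheme with one elimination step and an invented 4x4 M-matrix: eliminate the first unknown exactly, run classical Jacobi on the reduced system, and recover the eliminated unknown by back-substitution.

    ```python
    import numpy as np

    A = np.array([[ 4.0, -1.0,  0.0, -1.0],
                  [-1.0,  4.0, -1.0,  0.0],
                  [ 0.0, -1.0,  4.0, -1.0],
                  [-1.0,  0.0, -1.0,  4.0]])
    b = np.array([1.0, 2.0, 0.0, 1.0])

    # One Gauss elimination step: eliminate x0 from the last three equations.
    m = A[1:, 0] / A[0, 0]
    A1 = A[1:, 1:] - np.outer(m, A[0, 1:])   # reduced 3x3 system
    b1 = b[1:] - m * b[0]

    # Classical Jacobi iteration on the reduced system.
    D = np.diag(A1)
    x = np.zeros(3)
    for _ in range(100):
        x = (b1 - (A1 @ x - D * x)) / D

    x0 = (b[0] - A[0, 1:] @ x) / A[0, 0]     # back-substitute the eliminated unknown
    sol = np.concatenate([[x0], x])
    print("solution:", sol.round(6), "residual:", (A @ sol - b).round(9))
    ```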

  20. Dimeric spectra analysis in Microsoft Excel: a comparative study.

    PubMed

    Gilani, A Ghanadzadeh; Moghadam, M; Zakerhamidi, M S

    2011-11-01

    The purpose of this work is to introduce the reader to an Add-in implementation, Decom. This implementation provides all the processing required for the analysis of dimeric spectra. General linear and nonlinear decomposition algorithms were integrated as an Excel Add-in for easy installation and usage. In this work, the results of several sample investigations were compared to those obtained by Datan. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

  1. UAV Swarm Tactics: An Agent-Based Simulation and Markov Process Analysis

    DTIC Science & Technology

    2013-06-01

    The DTIC indexing fragment preserves the report's acronym list: CRN (Common Random Numbers), CSV (Comma Separated Values), DoE (Design of Experiment), GLM (Generalized Linear Model), HVT (High Value Target), JAR (Java ARchive), JMF (Java Media Framework), JRE (Java Runtime Environment), Mason (Multi-Agent Simulator Of Networks), MOE (Measure Of Effectiveness), MOP (Measures Of Performance). Excerpt: "…with every set several times, and to write a CSV file with the results. Rather than scripting the agent behavior deterministically, the agents should…"

  2. Quantum stochastic calculus associated with quadratic quantum noises

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ji, Un Cig, E-mail: uncigji@chungbuk.ac.kr; Sinha, Kalyan B., E-mail: kbs-jaya@yahoo.co.in

    2016-02-15

    We first study a class of fundamental quantum stochastic processes induced by the generators of a six-dimensional non-solvable Lie †-algebra consisting of all linear combinations of the generalized Gross Laplacian and its adjoint, the annihilation operator, creation operator, conservation, and time. We then study the quantum stochastic integrals associated with this class of fundamental quantum stochastic processes, and the quantum Itô formula is revisited. The existence and uniqueness of the solution of a quantum stochastic differential equation is proved. The unitarity conditions of solutions of quantum stochastic differential equations associated with the fundamental processes are examined. The quantum stochastic calculus extends the Hudson-Parthasarathy quantum stochastic calculus.

  3. An efficient method for generalized linear multiplicative programming problem with multiplicative constraints.

    PubMed

    Zhao, Yingfeng; Liu, Sanyang

    2016-01-01

    We present a practical branch-and-bound algorithm for globally solving the generalized linear multiplicative programming problem with multiplicative constraints. To solve the problem, a relaxation programming problem, which is equivalent to a linear program, is proposed by utilizing a new two-phase relaxation technique. In the algorithm, lower and upper bounds are simultaneously obtained by solving some linear relaxation programming problems. Global convergence has been proved, and the results of some sample examples and a small random experiment show that the proposed algorithm is feasible and efficient.

  4. Angle-domain inverse scattering migration/inversion in isotropic media

    NASA Astrophysics Data System (ADS)

    Li, Wuqun; Mao, Weijian; Li, Xuelei; Ouyang, Wei; Liang, Quan

    2018-07-01

    The classical seismic asymptotic inversion can be transformed into a problem of inversion of the generalized Radon transform (GRT). In such methods, the combined parameters are linearly attached to the scattered wave-field by the Born approximation and recovered by applying an inverse GRT operator to the scattered wave-field data. A typical GRT-style true-amplitude inversion procedure contains an amplitude-compensation process after the weighted migration, dividing by an illumination-associated matrix whose elements are integrals over scattering angles. It is intuitive to some extent to perform the generalized linear inversion and the inversion of the GRT together in this process for direct inversion. However, such an operation is imprecise when the illumination at the image point is limited, which easily leads to inaccuracy and instability of the matrix. This paper formulates the GRT true-amplitude inversion framework in an angle-domain version, which naturally removes the external integral term related to the illumination in the conventional case. We solve the linearized integral equation for combined parameters at different fixed scattering-angle values. With this step, we obtain high-quality angle-domain common-image gathers (CIGs) in the migration loop, which provide correct amplitude-versus-angle (AVA) behavior and a reasonable illumination range for subsurface image points. Then we solve the over-determined problem for each parameter in the combination by a standard optimization operation. The angle-domain GRT inversion method avoids calculating the inaccurate and unstable illumination matrix. Compared with the conventional method, the angle-domain method can obtain more accurate amplitude information and a wider amplitude-preserved range. Several model tests demonstrate the effectiveness and practicability.

  5. The application of information theory for the research of aging and aging-related diseases.

    PubMed

    Blokh, David; Stambler, Ilia

    2017-10-01

    This article reviews the application of information-theoretical analysis, employing measures of entropy and mutual information, for the study of aging and aging-related diseases. The research of aging and aging-related diseases is particularly suitable for the application of information theory methods, as aging processes and related diseases are multi-parametric, with continuous parameters coexisting alongside discrete parameters, and with the relations between the parameters being as a rule non-linear. Information theory provides unique analytical capabilities for the solution of such problems, with unique advantages over common linear biostatistics. Among the age-related diseases, information theory has been used in the study of neurodegenerative diseases (particularly using EEG time series for diagnosis and prediction), cancer (particularly for establishing individual and combined cancer biomarkers), diabetes (mainly utilizing mutual information to characterize the diseased and aging states), and heart disease (mainly for the analysis of heart rate variability). Few works have employed information theory for the analysis of general aging processes and frailty, as underlying determinants and possible early preclinical diagnostic measures for aging-related diseases. Generally, the use of information-theoretical analysis permits not only establishing the (non-linear) correlations between diagnostic or therapeutic parameters of interest, but may also provide a theoretical insight into the nature of aging and related diseases by establishing the measures of variability, adaptation, regulation or homeostasis, within a system of interest. It may be hoped that the increased use of such measures in research may considerably increase diagnostic and therapeutic capabilities and the fundamental theoretical mathematical understanding of aging and disease. Copyright © 2016 Elsevier Ltd. All rights reserved.
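
    As a minimal concrete instance of the advantage claimed here, the sketch below computes a plug-in (binned) mutual information between a simulated biomarker and a binary status variable with a U-shaped dependence, which a linear correlation coefficient almost entirely misses. Data and binning are illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(8)
    status = rng.integers(0, 2, size=2000)            # 0 = healthy, 1 = impaired
    # U-shaped dependence: the marker moves away from 0 in either direction when
    # impaired, so linear correlation is near 0 while the association is strong.
    marker = np.where(status == 1, rng.choice([-2.0, 2.0], 2000), 0.0)
    marker = marker + rng.normal(0.0, 0.5, 2000)

    edges = np.quantile(marker, np.linspace(0.0, 1.0, 9))   # 8 equal-mass bins
    bins = np.digitize(marker, edges[1:-1])                 # bin index 0..7
    joint = np.zeros((2, 8))
    for st, bn in zip(status, bins):
        joint[st, bn] += 1.0
    joint /= joint.sum()
    px = joint.sum(axis=1, keepdims=True)
    py = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    mi = float(np.sum(joint[nz] * np.log(joint[nz] / (px @ py)[nz])))
    print("Pearson r:", np.corrcoef(marker, status)[0, 1].round(3), "MI (nats):", round(mi, 3))
    ```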

  6. Unobtrusive Detection of Mild Cognitive Impairment in Older Adults Through Home Monitoring.

    PubMed

    Akl, Ahmad; Snoek, Jasper; Mihailidis, Alex

    2017-03-01

    The early detection of dementias such as Alzheimer's disease can in some cases reverse, stop, or slow cognitive decline and in general greatly reduce the burden of care. This is of increasing significance as demographic studies warn of an aging population in North America and worldwide. Various smart homes and systems have been developed to detect cognitive decline through continuous monitoring of high-risk individuals. However, the majority of these smart homes and systems use a number of predefined heuristics to detect changes in cognition, an approach that has been demonstrated to focus on the idiosyncratic nuances of individual subjects and thus does not generalize. In this paper, we address this problem by building generalized linear models of the home activity of older adults monitored using unobtrusive sensing technologies. We use inhomogeneous Poisson processes to model the presence of the recruited older adults within different rooms throughout the day. We employ an information-theoretic approach to compare the generalized linear models learned, and we observe significant statistical differences between the cognitively intact and impaired older adults. Using a simple thresholding approach, we were able to detect mild cognitive impairment in older adults with an average area under the ROC curve of 0.716 and an average area under the precision-recall curve of 0.706, using activity models estimated over a time window of 12 weeks.
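
    A stripped-down sketch of this modeling idea under invented data: presence events get a piecewise-constant (hourly) Poisson intensity estimated per subject, and two subjects' fitted daily profiles are compared with a symmetrised KL divergence, standing in for the paper's information-theoretic model comparison.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)

    def hourly_rates(event_hours, n_days):
        """MLE of a piecewise-constant intensity: events per hour-of-day per day."""
        counts = np.bincount(event_hours, minlength=24).astype(float)
        return (counts + 1e-3) / n_days          # small floor avoids log(0)

    # Simulate 12 weeks of room-entry timestamps (hour of day) for two subjects.
    days = 84
    hours = np.arange(24)
    lam_a = 0.2 + 1.5 * np.exp(-((hours - 12) ** 2) / 8.0)    # sharp midday peak
    lam_b = 0.2 + 1.5 * np.exp(-((hours - 18) ** 2) / 18.0)   # flatter, later peak
    ev_a = np.repeat(hours, rng.poisson(lam_a * days))
    ev_b = np.repeat(hours, rng.poisson(lam_b * days))

    ra, rb = hourly_rates(ev_a, days), hourly_rates(ev_b, days)
    pa, pb = ra / ra.sum(), rb / rb.sum()        # normalized daily profiles
    kl = 0.5 * (np.sum(pa * np.log(pa / pb)) + np.sum(pb * np.log(pb / pa)))
    print("symmetrised KL between daily activity profiles:", round(float(kl), 3))
    ```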

  7. Uncertainty Analysis of the Grazing Flow Impedance Tube

    NASA Technical Reports Server (NTRS)

    Brown, Martha C.; Jones, Michael G.; Watson, Willie R.

    2012-01-01

    This paper outlines a methodology to identify the measurement uncertainty of NASA Langley's Grazing Flow Impedance Tube (GFIT) over its operating range, and to identify the parameters that most significantly contribute to the acoustic impedance prediction. Two acoustic liners are used for this study. The first is a single-layer, perforate-over-honeycomb liner that is nonlinear with respect to sound pressure level. The second consists of a wire-mesh facesheet and a honeycomb core, and is linear with respect to sound pressure level. These liners allow for evaluation of the effects of measurement uncertainty on impedances educed with linear and nonlinear liners. In general, the measurement uncertainty is observed to be larger for the nonlinear liners, with the largest uncertainty occurring near anti-resonance. A sensitivity analysis of the aerodynamic parameters (Mach number, static temperature, and static pressure) used in the impedance eduction process is also conducted using a Monte-Carlo approach. This sensitivity analysis demonstrates that the impedance eduction process is virtually insensitive to each of these parameters.

  8. Temperature and neuronal circuit function: compensation, tuning and tolerance.

    PubMed

    Robertson, R Meldrum; Money, Tomas G A

    2012-08-01

    Temperature has widespread and diverse effects on different subcellular components of neuronal circuits making it difficult to predict precisely the overall influence on output. Increases in temperature generally increase the output rate in either an exponential or a linear manner. Circuits with a slow output tend to respond exponentially with relatively high Q(10)s, whereas those with faster outputs tend to respond in a linear fashion with relatively low temperature coefficients. Different attributes of the circuit output can be compensated by virtue of opposing processes with similar temperature coefficients. At the extremes of the temperature range, differences in the temperature coefficients of circuit mechanisms cannot be compensated and the circuit fails, often with a reversible loss of ion homeostasis. Prior experience of temperature extremes activates conserved processes of phenotypic plasticity that tune neuronal circuits to be better able to withstand the effects of temperature and to recover more rapidly from failure. Copyright © 2012 Elsevier Ltd. All rights reserved.

  9. Fractional Diffusion Processes: Probability Distributions and Continuous Time Random Walk

    NASA Astrophysics Data System (ADS)

    Gorenflo, R.; Mainardi, F.

    A physical-mathematical approach to anomalous diffusion may be based on generalized diffusion equations (containing derivatives of fractional order in space or/and time) and related random walk models. By the space-time fractional diffusion equation we mean an evolution equation obtained from the standard linear diffusion equation by replacing the second-order space derivative with a Riesz-Feller derivative of order alpha in (0,2] and skewness theta (with |theta| <= min{alpha, 2 - alpha}), and the first-order time derivative with a Caputo derivative of order beta in (0,1]. The fundamental solution (for the Cauchy problem) of the fractional diffusion equation can be interpreted as a probability density evolving in time of a peculiar self-similar stochastic process. We view it as a generalized diffusion process that we call fractional diffusion process, and present an integral representation of the fundamental solution. A more general approach to anomalous diffusion is however known to be provided by the master equation for a continuous time random walk (CTRW). We show how this equation reduces to our fractional diffusion equation by a properly scaled passage to the limit of compressed waiting times and jump widths. Finally, we describe a method of simulation and display (via graphics) results of a few numerical case studies.
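
    The CTRW reduction can be illustrated with a short simulation. The sketch below (assumed parameters; not the authors' code) draws power-law waiting times, which yield the Caputo time derivative of order beta in the scaling limit, and symmetric Levy-stable jumps, which yield the Riesz space derivative of order alpha.

        # Sketch: continuous-time random walk whose scaling limit is the
        # space-time fractional diffusion equation (illustrative parameters).
        import numpy as np
        from scipy.stats import levy_stable

        alpha, beta = 1.5, 0.8        # space order in (0,2], time order in (0,1]
        n_walkers, n_steps = 200, 500
        rng = np.random.default_rng(1)

        # Waiting times with a power-law tail ~ t^(-1-beta) (Pareto-type).
        waits = rng.pareto(beta, size=(n_walkers, n_steps)) + 1.0
        # Symmetric jumps: the stable skewness argument is 0 here (theta = 0);
        # `beta` above is the time-fractional order, not the stable skewness.
        jumps = levy_stable.rvs(alpha, 0.0, size=(n_walkers, n_steps),
                                random_state=2)

        t = np.cumsum(waits, axis=1)
        x = np.cumsum(jumps, axis=1)

        # Position of each walker at a fixed observation time T (walkers with
        # no event before T are an edge case ignored in this sketch).
        T = np.median(t[:, -1]) / 2
        idx = (t <= T).sum(axis=1) - 1
        positions = x[np.arange(n_walkers), np.clip(idx, 0, None)]
        print(positions.std())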

  10. Hyperspectral processing in graphical processing units

    NASA Astrophysics Data System (ADS)

    Winter, Michael E.; Winter, Edwin M.

    2011-06-01

    With the advent of the commercial 3D video card in the mid-1990s, we have seen an order of magnitude performance increase with each generation of new video cards. While these cards were designed primarily for visualization and video games, it became apparent after a short while that they could be used for scientific purposes. These Graphical Processing Units (GPUs) are rapidly being incorporated into data processing tasks usually reserved for general purpose computers. It has been found that many image processing problems scale well to modern GPU systems. We have implemented four popular hyperspectral processing algorithms (N-FINDR, linear unmixing, Principal Components, and the RX anomaly detection algorithm). These algorithms show an across-the-board speedup of at least a factor of 10, with some special cases showing extreme speedups of a hundred times or more.
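
    Of the four algorithms, linear unmixing is the easiest to sketch: each pixel spectrum is modeled as a nonnegative combination of endmember spectra. The toy NumPy/SciPy version below (hypothetical endmember library; a serial stand-in for the GPU kernels) shows the per-pixel solve that a GPU implementation batches across pixels.

        # Sketch: hyperspectral linear unmixing with nonnegative least squares.
        # Endmember spectra E (bands x materials) are assumed known, e.g. from N-FINDR.
        import numpy as np
        from scipy.optimize import nnls

        rng = np.random.default_rng(0)
        bands, materials, pixels = 100, 4, 1000
        E = rng.random((bands, materials))               # hypothetical endmember library
        A_true = rng.dirichlet(np.ones(materials), pixels)
        Y = A_true @ E.T + 0.01 * rng.standard_normal((pixels, bands))

        # Per-pixel abundance estimate; a GPU version maps this batched solve to kernels.
        abundances = np.array([nnls(E, y)[0] for y in Y])
        print(abundances[0].round(2))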

  11. Automatic classification of artifactual ICA-components for artifact removal in EEG signals.

    PubMed

    Winkler, Irene; Haufe, Stefan; Tangermann, Michael

    2011-08-02

    Artifacts contained in EEG recordings hamper both the visual interpretation by experts and the algorithmic processing and analysis (e.g. for Brain-Computer Interfaces (BCI) or for Mental State Monitoring). While hand-optimized selection of source components derived from Independent Component Analysis (ICA) to clean EEG data is widespread, the field could greatly profit from automated solutions based on Machine Learning methods. Existing ICA-based removal strategies depend on explicit recordings of an individual's artifacts or have not been shown to reliably identify muscle artifacts. We propose an automatic method for the classification of general artifactual source components. They are estimated by TDSEP, an ICA method that takes temporal correlations into account. The linear classifier is based on an optimized feature subset determined by a Linear Programming Machine (LPM). The subset is composed of features from the frequency, spatial, and temporal domains. A subject-independent classifier was trained on 640 TDSEP components (reaction time (RT) study, n = 12) that were hand-labeled by experts as artifactual or brain sources and tested on 1080 new components of RT data from the same study. Generalization was tested on new data from two studies (auditory Event Related Potential (ERP) paradigm, n = 18; motor imagery BCI paradigm, n = 80) that used data with different channel setups and from new subjects. Based on six features only, the optimized linear classifier performed on a level with the inter-expert disagreement (<10% Mean Squared Error (MSE)) on the RT data. On data of the auditory ERP study, the same pre-calculated classifier generalized well and achieved 15% MSE. On data of the motor imagery paradigm, we demonstrate that the discriminant information used for BCI is preserved when removing up to 60% of the most artifactual source components. We propose a universal and efficient classifier of ICA components for the subject-independent removal of artifacts from EEG data. Based on linear methods, it is applicable for different electrode placements and supports the introspection of results. Trained on expert ratings of large data sets, it is not restricted to the detection of eye and muscle artifacts. Its performance and generalization ability are demonstrated on data of different EEG studies.

  12. Multipole analysis in the radiation field for linearized f (R ) gravity with irreducible Cartesian tensors

    NASA Astrophysics Data System (ADS)

    Wu, Bofeng; Huang, Chao-Guang

    2018-04-01

    The 1/r expansion in the distance to the source is applied to linearized f(R) gravity, and its multipole expansion in the radiation field with irreducible Cartesian tensors is presented. Then, the energy, momentum, and angular momentum in the gravitational waves are provided for linearized f(R) gravity. All of these results have two parts, which are associated with the tensor part and the scalar part in the multipole expansion of linearized f(R) gravity, respectively. The former is the same as that in General Relativity, and the latter, as the correction to the result in General Relativity, is caused by the massive scalar degree of freedom and plays an important role in distinguishing General Relativity and f(R) gravity.

  13. Critical time scales for advection-diffusion-reaction processes.

    PubMed

    Ellery, Adam J; Simpson, Matthew J; McCue, Scott W; Baker, Ruth E

    2012-04-01

    The concept of local accumulation time (LAT) was introduced by Berezhkovskii and co-workers to give a finite measure of the time required for the transient solution of a reaction-diffusion equation to approach the steady-state solution [A. M. Berezhkovskii, C. Sample, and S. Y. Shvartsman, Biophys. J. 99, L59 (2010); A. M. Berezhkovskii, C. Sample, and S. Y. Shvartsman, Phys. Rev. E 83, 051906 (2011)]. Such a measure is referred to as a critical time. Here, we show that LAT is, in fact, identical to the concept of mean action time (MAT) that was first introduced by McNabb [A. McNabb and G. C. Wake, IMA J. Appl. Math. 47, 193 (1991)]. Although McNabb's initial argument was motivated by considering the mean particle lifetime (MPLT) for a linear death process, he applied the ideas to study diffusion. We extend the work of these authors by deriving expressions for the MAT for a general one-dimensional linear advection-diffusion-reaction problem. Using a combination of continuum and discrete approaches, we show that MAT and MPLT are equivalent for certain uniform-to-uniform transitions; these results provide a practical interpretation for MAT by directly linking the stochastic microscopic processes to a meaningful macroscopic time scale. We find that for more general transitions, the equivalence between MAT and MPLT does not hold. Unlike other critical time definitions, we show that it is possible to evaluate the MAT without solving the underlying partial differential equation (PDE). This makes MAT a simple and attractive quantity for practical situations. Finally, our work explores the accuracy of certain approximations derived using MAT, showing that useful approximations for nonlinear kinetic processes can be obtained, again without treating the governing PDE directly.

  14. Optical Processing Techniques For Pseudorandom Sequence Prediction

    NASA Astrophysics Data System (ADS)

    Gustafson, Steven C.

    1983-11-01

    Pseudorandom sequences are series of apparently random numbers generated, for example, by linear or nonlinear feedback shift registers. An important application of these sequences is in spread spectrum communication systems, in which, for example, the transmitted carrier phase is digitally modulated rapidly and pseudorandomly and in which the information to be transmitted is incorporated as a slow modulation in the pseudorandom sequence. In this case the transmitted information can be extracted only by a receiver that uses for demodulation the same pseudorandom sequence used by the transmitter, and thus this type of communication system has a very high immunity to third-party interference. However, if a third party can predict in real time the probable future course of the transmitted pseudorandom sequence given past samples of this sequence, then interference immunity can be significantly reduced. In this application, effective pseudorandom sequence prediction techniques should be (1) applicable in real time to rapid (e.g., megahertz) sequence generation rates, (2) applicable to both linear and nonlinear pseudorandom sequence generation processes, and (3) applicable to error-prone past sequence samples of limited number and continuity. Certain optical processing techniques that may meet these requirements are discussed in this paper. In particular, techniques based on incoherent optical processors that perform general linear transforms or (more specifically) matrix-vector multiplications are considered. Computer simulation examples are presented which indicate that significant prediction accuracy can be obtained using these transforms for simple pseudorandom sequences. However, the useful prediction of more complex pseudorandom sequences will probably require the application of more sophisticated optical processing techniques.
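
    For the linear case, prediction amounts to linear algebra over GF(2): 2n output bits of an n-stage linear feedback shift register determine its taps. The sketch below (a purely numerical stand-in for the optical matrix-vector processor; register size and taps are chosen for illustration) recovers the recurrence and predicts the remaining bits.

        # Sketch: recover the taps of an n-bit LFSR from 2n output bits by solving
        # a linear system over GF(2), then predict future bits.
        import numpy as np

        def lfsr_bits(coeffs, init, n_out):
            # Linear recurrence s[k] = sum_i coeffs[i] * s[k-n+i]  (mod 2)
            s = list(init)
            while len(s) < n_out:
                s.append(int(np.dot(coeffs, s[-len(coeffs):]) % 2))
            return np.array(s, dtype=np.uint8)

        def gf2_solve(A, b):
            # Gauss-Jordan elimination over GF(2) on the augmented matrix
            M = np.column_stack([A, b]).astype(np.uint8)
            n = M.shape[1] - 1
            row = 0
            for col in range(n):
                piv = np.flatnonzero(M[row:, col])
                if piv.size == 0:
                    continue
                M[[row, row + piv[0]]] = M[[row + piv[0], row]]
                for r in np.flatnonzero(M[:, col]):
                    if r != row:
                        M[r] ^= M[row]
                row += 1
            return M[:n, -1]

        n = 8
        taps = np.array([1, 0, 1, 1, 1, 0, 0, 0], dtype=np.uint8)  # x^8+x^4+x^3+x^2+1 (primitive)
        bits = lfsr_bits(taps, [1, 0, 1, 1, 0, 0, 1, 0], 3 * n)

        # Each window of n bits maps linearly to the next bit: 2n bits fix the taps.
        A = np.array([bits[k:k + n] for k in range(n)])
        b = bits[n:2 * n]
        taps_hat = gf2_solve(A, b)

        pred = lfsr_bits(taps_hat, bits[:n], 3 * n)   # re-generate and predict
        print(np.array_equal(taps_hat, taps), np.array_equal(pred, bits))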

  15. Generalized Bezout's Theorem and its applications in coding theory

    NASA Technical Reports Server (NTRS)

    Berg, Gene A.; Feng, Gui-Liang; Rao, T. R. N.

    1996-01-01

    This paper presents a generalized Bezout theorem which can be used to determine a tighter lower bound on the number of distinct points of intersection of two or more curves for a large class of plane curves. A new approach to determine a lower bound on the minimum distance (and also the generalized Hamming weights) for algebraic-geometric codes defined from a class of plane curves is introduced, based on the generalized Bezout theorem. Examples of more efficient linear codes are constructed using the generalized Bezout theorem and the new approach. For d = 4, the linear codes obtained by the new construction are better than or equal to the known linear codes. For d greater than 5, these new codes are better than the known codes. The Klein code over GF(2(sup 3)) is also constructed.

  16. A dimensional approach to determine common and specific neurofunctional markers for depression and social anxiety during emotional face processing.

    PubMed

    Luo, Lizhu; Becker, Benjamin; Zheng, Xiaoxiao; Zhao, Zhiying; Xu, Xiaolei; Zhou, Feng; Wang, Jiaojian; Kou, Juan; Dai, Jing; Kendrick, Keith M

    2018-02-01

    Major depression disorder (MDD) and anxiety disorder are both prevalent and debilitating. High rates of comorbidity between MDD and social anxiety disorder (SAD) suggest common pathological pathways, including aberrant neural processing of interpersonal signals. In patient populations, the determination of common and distinct neurofunctional markers of MDD and SAD is often hampered by confounding factors, such as generally elevated anxiety levels and disorder-specific brain structural alterations. This study employed a dimensional disorder approach to map neurofunctional markers associated with levels of depression and social anxiety symptoms in a cohort of 91 healthy subjects using an emotional face processing paradigm. Examining linear associations between levels of depression and social anxiety, while controlling for trait anxiety, revealed that both were associated with exaggerated dorsal striatal reactivity to fearful and sad expression faces, respectively. Exploratory analysis revealed that depression scores were positively correlated with dorsal striatal functional connectivity during processing of fearful faces, whereas those of social anxiety showed a negative association during processing of sad faces. No linear relationships between levels of depression and social anxiety were observed during a facial-identity matching task or with brain structure. Together, the present findings indicate that dorsal striatal neurofunctional alterations might underlie aberrant interpersonal processing associated with both increased levels of depression and social anxiety. © 2017 Wiley Periodicals, Inc.

  17. Transverse Cascade and Sustenance of Turbulence in Keplerian Disks with an Azimuthal Magnetic Field

    NASA Astrophysics Data System (ADS)

    Gogichaishvili, D.; Mamatsashvili, G.; Horton, W.; Chagelishvili, G.; Bodo, G.

    2017-10-01

    The magnetorotational instability (MRI) in sheared rotational Keplerian flows bears on fundamental problems in both astrophysics and toroidal laboratory plasmas. The turbulence occurs below the threshold for the linear eigenmodes. This work shows that the turbulence occurs in a nonzero toroidal magnetic field with a sheared toroidal flow velocity. We analyze the turbulence in Fourier k-space and x-space at each time step to clarify the nonlinear energy-momentum transfers that produce the sustenance in the linearly stable plasma. The nonlinear process is a type of 3D angular redistribution of modes in Fourier space, a transverse cascade, rather than the direct/inverse cascades. The turbulence is sustained by an interplay of the linear transient growth from the radial gradient of the toroidal velocity (which is the only energy supply for the turbulence) and the transverse cascade. A relatively small "vital area" in Fourier space is crucial for the sustenance; outside the vital area the direct cascade dominates. The interplay of the linear and nonlinear processes is generally too intertwined in k-space for a classical turbulence characterization. Subcycles arise from the interactions that maintain the self-organized nonlinear turbulence. The spectral characteristics in four simulations are similar, showing the universality of the sustenance mechanism of shear-flow-driven MHD turbulence. Funded by the US Department of Energy under Grant DE-FG02-04ER54742 and the Space and Geophysics Laboratory at the University of Texas at Austin. G. Mamatsashvili is supported by the Alexander von Humboldt Foundation, Germany.

  18. Adapting generalization tools to physiographic diversity for the united states national hydrography dataset

    USGS Publications Warehouse

    Buttenfield, B.P.; Stanislawski, L.V.; Brewer, C.A.

    2011-01-01

    This paper reports on generalization and data modeling to create reduced scale versions of the National Hydrographic Dataset (NHD) for dissemination through The National Map, the primary data delivery portal for USGS. Our approach distinguishes local differences in physiographic factors, to demonstrate that knowledge about varying terrain (mountainous, hilly or flat) and varying climate (dry or humid) can support decisions about algorithms, parameters, and processing sequences to create generalized, smaller scale data versions which preserve distinct hydrographic patterns in these regions. We work with multiple subbasins of the NHD that provide a range of terrain and climate characteristics. Specifically tailored generalization sequences are used to create simplified versions of the high resolution data, which was compiled for 1:24,000 scale mapping. Results are evaluated cartographically and metrically against a medium resolution benchmark version compiled for 1:100,000, developing coefficients of linear and areal correspondence.

  19. Multi-disease analysis of maternal antibody decay using non-linear mixed models accounting for censoring.

    PubMed

    Goeyvaerts, Nele; Leuridan, Elke; Faes, Christel; Van Damme, Pierre; Hens, Niel

    2015-09-10

    Biomedical studies often generate repeated measures of multiple outcomes on a set of subjects. It may be of interest to develop a biologically intuitive model for the joint evolution of these outcomes while assessing inter-subject heterogeneity. Even though it is common for biological processes to entail non-linear relationships, examples of multivariate non-linear mixed models (MNMMs) are still fairly rare. We contribute to this area by jointly analyzing the maternal antibody decay for measles, mumps, rubella, and varicella, allowing for a different non-linear decay model for each infectious disease. We present a general modeling framework to analyze multivariate non-linear longitudinal profiles subject to censoring, by combining multivariate random effects, non-linear growth and Tobit regression. We explore the hypothesis of a common infant-specific mechanism underlying maternal immunity using a pairwise correlated random-effects approach and evaluating different correlation matrix structures. The implied marginal correlation between maternal antibody levels is estimated using simulations. The mean duration of passive immunity was less than 4 months for all diseases with substantial heterogeneity between infants. The maternal antibody levels against rubella and varicella were found to be positively correlated, while little to no correlation could be inferred for the other disease pairs. For some pairs, computational issues occurred with increasing correlation matrix complexity, which underlines the importance of further developing estimation methods for MNMMs. Copyright © 2015 John Wiley & Sons, Ltd.
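
    One ingredient of such models, the Tobit handling of censored titres, can be sketched in isolation. The fragment below (single outcome, exponential decay on the log scale, synthetic data; a simplification of the multivariate model) maximizes a likelihood in which observations below an assay detection limit contribute through the normal CDF rather than the density.

        # Sketch: Tobit-type fit of an exponential antibody decay curve with
        # left-censoring at an assay detection limit (illustrative, single outcome).
        import numpy as np
        from scipy.optimize import minimize
        from scipy.stats import norm

        rng = np.random.default_rng(0)
        t = rng.uniform(0, 12, 200)                        # infant age in months
        log_titer = 3.0 - 0.4 * t + rng.normal(0, 0.5, 200)
        LIMIT = 0.5                                        # detection limit (log scale)
        censored = log_titer < LIMIT
        y = np.where(censored, LIMIT, log_titer)

        def negloglik(p):
            a, b, s = p[0], p[1], np.exp(p[2])
            mu = a - b * t
            ll_obs = norm.logpdf(y[~censored], mu[~censored], s)
            ll_cens = norm.logcdf(LIMIT, mu[censored], s)  # P(Y < limit)
            return -(ll_obs.sum() + ll_cens.sum())

        fit = minimize(negloglik, x0=[2.0, 0.1, 0.0], method="Nelder-Mead")
        print(fit.x[:2], np.exp(fit.x[2]))                 # decay parameters and sigma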

  20. Performance analysis of a generalized upset detection procedure

    NASA Technical Reports Server (NTRS)

    Blough, Douglas M.; Masson, Gerald M.

    1987-01-01

    A general procedure for upset detection in complex systems, called the data block capture and analysis upset monitoring process is described and analyzed. The process consists of repeatedly recording a fixed amount of data from a set of predetermined observation lines of the system being monitored (i.e., capturing a block of data), and then analyzing the captured block in an attempt to determine whether the system is functioning correctly. The algorithm which analyzes the data blocks can be characterized in terms of the amount of time it requires to examine a given length data block to ascertain the existence of features/conditions that have been predetermined to characterize the upset-free behavior of the system. The performance of linear, quadratic, and logarithmic data analysis algorithms is rigorously characterized in terms of three performance measures: (1) the probability of correctly detecting an upset; (2) the expected number of false alarms; and (3) the expected latency in detecting upsets.

  1. Charge loss (or the lack thereof) for AdS black holes

    NASA Astrophysics Data System (ADS)

    Ong, Yen Chin; Chen, Pisin

    2014-06-01

    The evolution of evaporating charged black holes is complicated to model in general, but is nevertheless important since the hints to the Information Loss Paradox and its recent firewall incarnation may lie in understanding more generic geometries than that of Schwarzschild spacetime. Fortunately, for sufficiently large asymptotically flat Reissner-Nordström black holes, the evaporation process can be modeled via a system of coupled linear ordinary differential equations, with the charge loss rate governed by the Schwinger pair-production process. The same model can be generalized to study the evaporation of AdS Reissner-Nordström black holes with flat horizon. It was recently found that such black holes always evolve towards extremality since charge loss is inefficient. This property is completely opposite to the asymptotically flat case, in which the black hole eventually loses its charge and tends towards the Schwarzschild limit. We clarify the underlying reason for this different behavior.

  2. Responsive linear-dendritic block copolymers.

    PubMed

    Blasco, Eva; Piñol, Milagros; Oriol, Luis

    2014-06-01

    The combination of dendritic and linear polymeric structures in the same macromolecule opens up new possibilities for the design of block copolymers and for applications of functional polymers that have self-assembly properties. There are three main strategies for the synthesis of linear-dendritic block copolymers (LDBCs) and, in particular, the emergence of click chemistry has made the coupling of preformed blocks one of the most efficient ways of obtaining libraries of LDBCs. In these materials, the periphery of the dendron can be precisely functionalised to obtain functional LDBCs with self-assembly properties of interest in different technological areas. The incorporation of stimuli-responsive moieties gives rise to smart materials that are generally processed as self-assemblies of amphiphilic LDBCs with a morphology that can be controlled by an external stimulus. Particular emphasis is placed on light-responsive LDBCs. Furthermore, a brief review of the biomedical or materials science applications of LDBCs is presented. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  3. A comparison of optimal MIMO linear and nonlinear models for brain machine interfaces

    NASA Astrophysics Data System (ADS)

    Kim, S.-P.; Sanchez, J. C.; Rao, Y. N.; Erdogmus, D.; Carmena, J. M.; Lebedev, M. A.; Nicolelis, M. A. L.; Principe, J. C.

    2006-06-01

    The field of brain-machine interfaces requires the estimation of a mapping from spike trains collected in motor cortex areas to the hand kinematics of the behaving animal. This paper presents a systematic investigation of several linear (Wiener filter, LMS adaptive filters, gamma filter, subspace Wiener filters) and nonlinear models (time-delay neural network and local linear switching models) applied to datasets from two experiments in monkeys performing motor tasks (reaching for food and target hitting). Ensembles of 100-200 cortical neurons were simultaneously recorded in these experiments, and even larger neuronal samples are anticipated in the future. Due to the large size of the models (thousands of parameters), the major issue studied was the generalization performance. Every parameter of the models (not only the weights) was selected optimally using signal processing and machine learning techniques. The models were also compared statistically with respect to the Wiener filter as the baseline. Each of the optimization procedures produced improvements over that baseline for either one of the two datasets or both.

  4. A comparison of optimal MIMO linear and nonlinear models for brain-machine interfaces.

    PubMed

    Kim, S-P; Sanchez, J C; Rao, Y N; Erdogmus, D; Carmena, J M; Lebedev, M A; Nicolelis, M A L; Principe, J C

    2006-06-01

    The field of brain-machine interfaces requires the estimation of a mapping from spike trains collected in motor cortex areas to the hand kinematics of the behaving animal. This paper presents a systematic investigation of several linear (Wiener filter, LMS adaptive filters, gamma filter, subspace Wiener filters) and nonlinear models (time-delay neural network and local linear switching models) applied to datasets from two experiments in monkeys performing motor tasks (reaching for food and target hitting). Ensembles of 100-200 cortical neurons were simultaneously recorded in these experiments, and even larger neuronal samples are anticipated in the future. Due to the large size of the models (thousands of parameters), the major issue studied was the generalization performance. Every parameter of the models (not only the weights) was selected optimally using signal processing and machine learning techniques. The models were also compared statistically with respect to the Wiener filter as the baseline. Each of the optimization procedures produced improvements over that baseline for either one of the two datasets or both.

  5. A Few New 2+1-Dimensional Nonlinear Dynamics and the Representation of Riemann Curvature Tensors

    NASA Astrophysics Data System (ADS)

    Wang, Yan; Zhang, Yufeng; Zhang, Xiangzhi

    2016-09-01

    We first introduce a linear stationary equation with a quadratic operator in ∂x and ∂y; a linear evolution equation is then given by N-th-order polynomials of eigenfunctions. As applications, by taking N=2, we derive a (2+1)-dimensional generalized linear heat equation with two constant parameters associated with a symmetric space. When taking N=3, a pair of generalized Kadomtsev-Petviashvili equations with the same eigenvalues as in the N=2 case is generated. Similarly, a second-order flow associated with a homogeneous space is derived from the integrability condition of the two linear equations, which is a (2+1)-dimensional hyperbolic equation. When N=3, the third flow associated with the homogeneous space is generated, which is a pair of new generalized Kadomtsev-Petviashvili equations. Finally, as an application of a Hermitian symmetric space, we establish a pair of spectral problems to obtain a new (2+1)-dimensional generalized Schrödinger equation, which is expressed in terms of the Riemann curvature tensors.

  6. A quasi-chemical model for the growth and death of microorganisms in foods by non-thermal and high-pressure processing.

    PubMed

    Doona, Christopher J; Feeherry, Florence E; Ross, Edward W

    2005-04-15

    Predictive microbial models generally rely on the growth of bacteria in laboratory broth to approximate the microbial growth kinetics expected to take place in actual foods under identical environmental conditions. Sigmoidal functions such as the Gompertz or logistics equation accurately model the typical microbial growth curve from the lag to the stationary phase and provide the mathematical basis for estimating parameters such as the maximum growth rate (MGR). Stationary phase data can begin to show a decline and make it difficult to discern which data to include in the analysis of the growth curve, a factor that influences the calculated values of the growth parameters. In contradistinction, the quasi-chemical kinetics model provides additional capabilities in microbial modelling and fits growth-death kinetics (all four phases of the microbial lifecycle continuously) for a general set of microorganisms in a variety of actual food substrates. The quasi-chemical model is a set of ordinary differential equations (ODEs) that derives from a hypothetical four-step chemical mechanism involving an antagonistic metabolite (quorum sensing) and successfully fits the kinetics of pathogens (Staphylococcus aureus, Escherichia coli and Listeria monocytogenes) in various foods (bread, turkey meat, ham and cheese) as functions of different hurdles (a(w), pH, temperature and anti-microbial lactate). The calculated value of the MGR depends on whether growth-death data or only growth data are used in the fitting procedure. The quasi-chemical kinetics model is also exploited for use with the novel food processing technology of high-pressure processing. The high-pressure inactivation kinetics of E. coli are explored in a model food system over the pressure (P) range of 207-345 MPa (30,000-50,000 psi) and the temperature (T) range of 30-50 degrees C. At relatively low combinations of P and T, the inactivation curves are non-linear and exhibit a shoulder prior to a more rapid rate of microbial destruction. In the higher P, T regime, the inactivation plots tend to be linear. In all cases, the quasi-chemical model successfully fit the linear and curvilinear inactivation plots for E. coli in model food systems. The experimental data and the quasi-chemical mathematical model described herein are candidates for inclusion in ComBase, the developing database that combines data and models from the USDA Pathogen Modeling Program and the UK Food MicroModel.
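
    A schematic version of such a growth-death ODE system can be sketched as follows. The mechanism and rate constants below are illustrative assumptions in the spirit of the four-step scheme (lag cells M activating to growing cells M*, multiplication, and death driven by an accumulating antagonistic metabolite A), not the authors' fitted model.

        # Sketch: a schematic quasi-chemical growth-death system; the specific
        # steps and rate constants are hypothetical illustrations.
        import numpy as np
        from scipy.integrate import solve_ivp

        k1, k2, k3, k4 = 0.05, 0.7, 0.001, 0.3    # hypothetical rate constants

        def rhs(t, s):
            M, Ms, A = s                          # lag cells, growing cells, antagonist
            dM = -k1 * M                          # M -> M*   (end of lag phase)
            dMs = k1 * M + k2 * Ms - k4 * Ms * A  # growth and antagonist-driven death
            dA = k3 * Ms                          # metabolite accumulation (quorum signal)
            return [dM, dMs, dA]

        sol = solve_ivp(rhs, (0, 60), [1e3, 0.0, 0.0], dense_output=True)
        tt = np.linspace(0, 60, 200)
        total = sol.sol(tt)[0] + sol.sol(tt)[1]   # lag, growth, stationary, death
        print(np.log10(total.max()))              # peak population before die-off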

  7. Next Linear Collider Home Page

    Science.gov Websites

    Welcome to the Next Linear Collider (NLC) home page. Learn about linear colliders in general and about this next-generation linear collider project's mission and design ideas.

  8. Control of Distributed Parameter Systems

    DTIC Science & Technology

    1990-08-01

    ... variant of the general Lotka-Volterra model for interspecific competition. The variant described the emergence of one subpopulation from another as a ... A unified approximation framework for parameter estimation in general linear PDE models has been completed. This framework has provided the theoretical basis for a number of ...

  9. Aztec user's guide. Version 1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hutchinson, S.A.; Shadid, J.N.; Tuminaro, R.S.

    1995-10-01

    Aztec is an iterative library that greatly simplifies the parallelization process when solving the linear systems of equations Ax = b, where A is a user-supplied n x n sparse matrix, b is a user-supplied vector of length n, and x is a vector of length n to be computed. Aztec is intended as a software tool for users who want to avoid cumbersome parallel programming details but who have large sparse linear systems which require an efficiently utilized parallel processing system. A collection of data transformation tools are provided that allow for easy creation of distributed sparse unstructured matrices for parallel solution. Once the distributed matrix is created, computation can be performed on any of the parallel machines running Aztec: nCUBE 2, IBM SP2 and Intel Paragon, MPI platforms as well as standard serial and vector platforms. Aztec includes a number of Krylov iterative methods such as conjugate gradient (CG), generalized minimum residual (GMRES) and stabilized biconjugate gradient (BICGSTAB) to solve systems of equations. These Krylov methods are used in conjunction with various preconditioners such as polynomial or domain decomposition methods using LU or incomplete LU factorizations within subdomains. Although the matrix A can be general, the package has been designed for matrices arising from the approximation of partial differential equations (PDEs). In particular, the Aztec package is oriented toward systems arising from PDE applications.
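
    The serial analogue of the workflow Aztec automates can be sketched with SciPy: assemble a sparse matrix from a PDE discretization, build an incomplete-LU preconditioner, and run a Krylov solver. This is only a single-processor illustration; Aztec's contribution is distributing these same steps across parallel machines.

        # Sketch: sparse PDE matrix + ILU-preconditioned GMRES (serial SciPy).
        import numpy as np
        import scipy.sparse as sp
        from scipy.sparse.linalg import gmres, spilu, LinearOperator

        n = 50                                    # 2D Poisson on an n x n grid
        I = sp.identity(n)
        T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n))
        A = (sp.kron(I, T) + sp.kron(T, I)).tocsc()
        b = np.ones(n * n)

        ilu = spilu(A, drop_tol=1e-4)
        M = LinearOperator(A.shape, ilu.solve)    # preconditioner M ~ A^-1
        x, info = gmres(A, b, M=M)
        print(info, np.linalg.norm(A @ x - b))    # info == 0 means converged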

  10. Linearized finite-element method solution of the ion-exchange nonlinear diffusion model

    NASA Astrophysics Data System (ADS)

    Badr, Mohamed M.; Swillam, Mohamed A.

    2017-04-01

    Ion-exchange process is one of the most common techniques used in glass waveguide fabrication. This has many advantages, such as low cost, ease of implementation, and simple equipment requirements. The technology is based on the substitution of some of the host ions in the glass (typically Na+) with other ions that possess different characteristics in terms of size and polarizability. The newly diffused ions produce a region with a relatively higher refractive index in which the light could be guided. A critical issue arises when it comes to designing such waveguides, which is carefully and precisely determining the resultant index profile. This task has proven to be demanding, as the process is generally governed by a nonlinear diffusion model with no direct general analytical solution. Furthermore, numerical solutions become unreliable (in terms of stability and mean squared error) in some cases, especially the K+-Na+ ion-exchanged waveguide, which is the best candidate to produce waveguides with refractive index differences compatible with those of the commercially available optical fibers. Linearized finite-element method formulations were used to provide a reliable tool that could solve the nonlinear diffusion model of the ion-exchange in both one- and two-dimensional spaces. Additionally, the annealed channel waveguide case has been studied. In all cases, unprecedented stability and minimum mean squared error could be achieved.
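
    The linearization idea can be illustrated with a minimal finite-difference sketch (finite differences stand in here for the paper's finite-element formulation): at each implicit time step, the concentration-dependent diffusivity is frozen at the previous solution, so only a linear system is solved. The D(C) form and all parameters below are assumptions for illustration.

        # Sketch: linearized implicit stepping for 1D nonlinear diffusion,
        # D(C) = D0 / (1 - a*C); form and parameters are illustrative.
        import numpy as np
        import scipy.sparse as sp
        from scipy.sparse.linalg import spsolve

        nx, dx, dt, steps = 201, 0.5e-6, 1.0, 600
        D0, a = 1e-13, 0.8
        C = np.zeros(nx)
        C[0] = 1.0                                # normalized surface concentration

        for _ in range(steps):
            D = D0 / (1.0 - a * C)                # freeze D at the previous step
            Dm = 0.5 * (D[1:] + D[:-1])           # face-centered diffusivities
            r = dt / dx**2
            main = np.ones(nx)
            lower = np.zeros(nx - 1); upper = np.zeros(nx - 1)
            main[1:-1] = 1 + r * (Dm[1:] + Dm[:-1])
            lower[:-1] = -r * Dm[:-1]
            upper[1:] = -r * Dm[1:]
            A = sp.diags([lower, main, upper], [-1, 0, 1], format="csc")
            rhs = C.copy(); rhs[0] = 1.0; rhs[-1] = 0.0   # Dirichlet boundaries
            C = spsolve(A, rhs)

        print(C[:5])                              # proxy for the index profile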

  11. Theoretical foundations of spatially-variant mathematical morphology part ii: gray-level images.

    PubMed

    Bouaynaya, Nidhal; Schonfeld, Dan

    2008-05-01

    In this paper, we develop a spatially-variant (SV) mathematical morphology theory for gray-level signals and images in the Euclidean space. The proposed theory preserves the geometrical concept of the structuring function, which provides the foundation of classical morphology and is essential in signal and image processing applications. We define the basic SV gray-level morphological operators (i.e., SV gray-level erosion, dilation, opening, and closing) and investigate their properties. We demonstrate the ubiquity of SV gray-level morphological systems by deriving a kernel representation for a large class of systems, called V-systems, in terms of the basic SV gray-level morphological operators. A V-system is defined to be a gray-level operator, which is invariant under gray-level (vertical) translations. Particular attention is focused on the class of SV flat gray-level operators. The kernel representation for increasing V-systems is a generalization of Maragos' kernel representation for increasing and translation-invariant function-processing systems. A representation of V-systems in terms of their kernel elements is established for increasing and upper-semi-continuous V-systems. This representation unifies a large class of spatially-variant linear and non-linear systems under the same mathematical framework. Finally, simulation results show the potential power of the general theory of gray-level spatially-variant mathematical morphology in several image analysis and computer vision applications.

  12. NONLINEAR REFLECTION PROCESS OF LINEARLY POLARIZED, BROADBAND ALFVÉN WAVES IN THE FAST SOLAR WIND

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shoda, M.; Yokoyama, T., E-mail: shoda@eps.s.u-tokyo.ac.jp

    2016-04-01

    Using one-dimensional numerical simulations, we study the elementary process of Alfvén wave reflection in a uniform medium, including nonlinear effects. In the linear regime, Alfvén wave reflection is triggered only by the inhomogeneity of the medium, whereas in the nonlinear regime, it can occur via nonlinear wave–wave interactions. Such nonlinear reflection (backscattering) is typified by decay instability. In most studies of decay instabilities, the initial condition has been a circularly polarized Alfvén wave. In this study we consider a linearly polarized Alfvén wave, which drives density fluctuations by its magnetic pressure force. For generality, we also assume a broadband wave with a red-noise spectrum. In the data analysis, we decompose the fluctuations into characteristic variables using local eigenvectors, thus revealing the behaviors of the individual modes. Different from the circular-polarization case, we find that the wave steepening produces a new energy channel from the parent Alfvén wave to the backscattered one. Such nonlinear reflection explains the observed increasing energy ratio of the sunward to the anti-sunward Alfvénic fluctuations in the solar wind with distance against the dynamical alignment effect.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mendelsohn, M.; Lowder, T.; Canavan, B.

    Over the last several years, solar energy technologies have been, or are in the process of being, deployed at unprecedented levels. A critical recent development, resulting from the massive scale of projects in progress or recently completed, is having the power sold directly to electric utilities. Such 'utility-scale' systems offer the opportunity to deploy solar technologies far faster than the traditional 'behind-the-meter' projects designed to offset retail load. Moreover, these systems have employed significant economies of scale during construction and operation, attracting financial capital, which in turn can reduce the delivered cost of power. This report is a summary of the current U.S. utility-scale solar state-of-the-market and development pipeline. Utility-scale solar energy systems are generally categorized as one of two basic designs: concentrating solar power (CSP) and photovoltaic (PV). CSP systems can be further delineated into four commercially available technologies: parabolic trough, central receiver (CR), parabolic dish, and linear Fresnel reflector. CSP systems can also be categorized as hybrid, which combine a solar-based system (generally parabolic trough, CR, or linear Fresnel) and a fossil fuel energy system to produce electric power or steam.

  14. The Mechanisms of Aberrant Protein Aggregation

    NASA Astrophysics Data System (ADS)

    Cohen, Samuel; Vendruscolo, Michele; Dobson, Chris; Knowles, Tuomas

    2012-02-01

    We discuss the development of a kinetic theory for understanding the aberrant loss of solubility of proteins. The failure to maintain protein solubility often results in the assembly of organized linear structures, commonly known as amyloid fibrils, the formation of which is associated with over 50 clinical disorders including Alzheimer's and Parkinson's diseases. A true microscopic understanding of the mechanisms that drive these aggregation processes has proved difficult to achieve. To address this challenge, we apply the methodologies of chemical kinetics to the biomolecular self-assembly pathways related to protein aggregation. We discuss the relevant master equation and analytical approaches to studying it. In particular, we derive the underlying rate laws in closed form using a self-consistent solution scheme; the solutions that we obtain reveal scaling behaviors that are very generally present in systems of growing linear aggregates, and, moreover, provide a general route through which to relate experimental measurements to mechanistic information. We conclude by outlining a study of the aggregation of the Alzheimer's amyloid-beta peptide. The study identifies the dominant microscopic mechanism of aggregation and reveals previously unidentified therapeutic strategies.

  15. Models for the propensity score that contemplate the positivity assumption and their application to missing data and causality.

    PubMed

    Molina, J; Sued, M; Valdora, M

    2018-06-05

    Generalized linear models are often assumed to fit propensity scores, which are used to compute inverse probability weighted (IPW) estimators. To derive the asymptotic properties of IPW estimators, the propensity score is supposed to be bounded away from zero. This condition is known in the literature as strict positivity (or the positivity assumption), and, in practice, when it does not hold, IPW estimators are very unstable and have a large variability. Although strict positivity is often assumed, it is not upheld when some of the covariates are unbounded. In real data sets, a data-generating process that violates the positivity assumption may lead to wrong inference because of the inaccuracy in the estimations. In this work, we attempt to reconcile the strict positivity condition with the theory of generalized linear models by incorporating an extra parameter, which results in an explicit lower bound for the propensity score. An additional parameter is added to fulfil the overlap assumption in the causal framework. Copyright © 2018 John Wiley & Sons, Ltd.
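
    A minimal sketch of the idea, under an assumed model and synthetic data (the exact construction in the article may differ), is to give the propensity score the form p(x) = delta + (1 - delta) * sigmoid(x'beta) and estimate delta together with beta by maximum likelihood, so the IPW weights are bounded by construction.

        # Sketch: IPW with an explicit lower bound on the propensity score.
        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(0)
        n = 5000
        x = rng.normal(size=n)                       # unbounded covariate
        p_true = 0.05 + 0.95 / (1 + np.exp(-(0.5 + x)))
        a = rng.binomial(1, p_true)                  # treatment / observation indicator
        y = 2.0 + x + rng.normal(size=n)             # outcome, used when a == 1

        def negloglik(theta):
            delta = 1 / (1 + np.exp(-theta[0]))      # keep the bound in (0, 1)
            p = delta + (1 - delta) / (1 + np.exp(-(theta[1] + theta[2] * x)))
            return -np.sum(a * np.log(p) + (1 - a) * np.log1p(-p))

        fit = minimize(negloglik, x0=np.zeros(3), method="Nelder-Mead")
        delta = 1 / (1 + np.exp(-fit.x[0]))
        p_hat = delta + (1 - delta) / (1 + np.exp(-(fit.x[1] + fit.x[2] * x)))

        # The bound keeps the weights a/p_hat stable even for extreme covariates.
        ipw_mean = np.sum(a * y / p_hat) / np.sum(a / p_hat)
        print(delta, ipw_mean)                       # estimate of E[Y] ~ 2.0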

  16. SAS macro programs for geographically weighted generalized linear modeling with spatial point data: applications to health research.

    PubMed

    Chen, Vivian Yi-Ju; Yang, Tse-Chuan

    2012-08-01

    An increasing interest in exploring spatial non-stationarity has generated several specialized analytic software programs; however, few of these programs can be integrated natively into a well-developed statistical environment such as SAS. We not only developed a set of SAS macro programs to fill this gap, but also expanded the geographically weighted generalized linear modeling (GWGLM) by integrating the strengths of SAS into the GWGLM framework. Three features distinguish our work. First, the macro programs of this study provide more kernel weighting functions than the existing programs. Second, with our codes the users are able to better specify the bandwidth selection process compared to the capabilities of existing programs. Third, the development of the macro programs is fully embedded in the SAS environment, providing great potential for future exploration of complicated spatially varying coefficient models in other disciplines. We provided three empirical examples to illustrate the use of the SAS macro programs and demonstrated the advantages explained above. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
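
    The computational core of geographically weighted regression is easy to state: each location gets its own regression, with observations down-weighted by a kernel of the distance to that location. The toy NumPy sketch below (Gaussian kernel, Gaussian-family model, synthetic data; not the SAS macros themselves) illustrates the loop the macros wrap with kernel choices and bandwidth selection.

        # Sketch: locally weighted regression with Gaussian spatial kernel weights.
        import numpy as np

        rng = np.random.default_rng(0)
        n = 300
        coords = rng.uniform(0, 10, size=(n, 2))          # point locations
        x = rng.normal(size=n)
        beta_true = 0.5 + 0.2 * coords[:, 0]              # spatially varying slope
        y = 1.0 + beta_true * x + rng.normal(0, 0.3, n)

        def gwr_coeffs(bandwidth):
            X = np.column_stack([np.ones(n), x])
            out = np.empty((n, 2))
            for i in range(n):
                d2 = np.sum((coords - coords[i])**2, axis=1)
                w = np.exp(-d2 / (2 * bandwidth**2))      # Gaussian kernel weights
                W = X * w[:, None]
                out[i] = np.linalg.solve(X.T @ W, W.T @ y)  # weighted least squares
            return out

        betas = gwr_coeffs(bandwidth=1.0)                 # local intercepts and slopes
        print(np.corrcoef(betas[:, 1], beta_true)[0, 1])  # slope tracks the true surface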

  17. Equivalence of linear canonical transform domains to fractional Fourier domains and the bicanonical width product: a generalization of the space-bandwidth product.

    PubMed

    Oktem, Figen S; Ozaktas, Haldun M

    2010-08-01

    Linear canonical transforms (LCTs) form a three-parameter family of integral transforms with wide application in optics. We show that LCT domains correspond to scaled fractional Fourier domains and thus to scaled oblique axes in the space-frequency plane. This allows LCT domains to be labeled and ordered by the corresponding fractional order parameter and provides insight into the evolution of light through an optical system modeled by LCTs. If a set of signals is highly confined to finite intervals in two arbitrary LCT domains, the space-frequency (phase space) support is a parallelogram. The number of degrees of freedom of this set of signals is given by the area of this parallelogram, which is equal to the bicanonical width product but usually smaller than the conventional space-bandwidth product. The bicanonical width product, which is a generalization of the space-bandwidth product, can provide a tighter measure of the actual number of degrees of freedom, and allows us to represent and process signals with fewer samples.

  18. The structural response of unsymmetrically laminated composite cylinders

    NASA Technical Reports Server (NTRS)

    Butler, T. A.; Hyer, M. W.

    1989-01-01

    The responses of an unsymmetrically laminated fiber-reinforced composite cylinder to an axial compressive load, a torsional load, and the temperature change associated with cooling from the processing temperature to the service temperature are investigated. These problems are considered axisymmetric and the response is studied in the context of linear elastic material behavior and geometrically linear kinematics. Four different laminates are studied: a general unsymmetric laminate; two unsymmetric but more conventional laminates; and a conventional quasi-isotropic symmetric laminate. The responses based on closed-form solutions for different boundary conditions are computed and studied in detail. Particular emphasis is directed at understanding the influence of elastic couplings in the laminates. The influence of coupling decreased from a large effect in the general unsymmetric laminate, to practically no effect in the quasi-isotropic laminate. For example, the torsional loading of the general unsymmetric laminate resulted in a radial displacement. The temperature change also caused a significant radial displacement to occur near the ends of the cylinder. On the other hand, the more conventional unsymmetric laminate and the quasi-isotropic cylinder did not deform radially when subjected to a torsional load. From the results obtained, it is clear the degree of elastic coupling can be controlled and indeed designed into a cylinder, the degree and character of the coupling being dictated by the application.

  19. Consistent linearization of the element-independent corotational formulation for the structural analysis of general shells

    NASA Technical Reports Server (NTRS)

    Rankin, C. C.

    1988-01-01

    A consistent linearization is provided for the element-dependent corotational formulation, providing the proper first and second variation of the strain energy. As a result, the warping problem that has plagued flat elements has been overcome, with beneficial effects carried over to linear solutions. True Newton quadratic convergence has been restored to the Structural Analysis of General Shells (STAGS) code for conservative loading using the full corotational implementation. Some implications for general finite element analysis are discussed, including what effect the automatic frame invariance provided by this work might have on the development of new, improved elements.

  20. Generalized prolate spheroidal wave functions for optical finite fractional Fourier and linear canonical transforms.

    PubMed

    Pei, Soo-Chang; Ding, Jian-Jiun

    2005-03-01

    Prolate spheroidal wave functions (PSWFs) are known to be useful for analyzing the properties of the finite-extension Fourier transform (fi-FT). We extend the theory of PSWFs for the finite-extension fractional Fourier transform, the finite-extension linear canonical transform, and the finite-extension offset linear canonical transform. These finite transforms are more flexible than the fi-FT and can model much more generalized optical systems. We also illustrate how to use the generalized prolate spheroidal functions we derive to analyze the energy-preservation ratio, the self-imaging phenomenon, and the resonance phenomenon of the finite-sized one-stage or multiple-stage optical systems.

  1. Validating the applicability of the GUM procedure

    NASA Astrophysics Data System (ADS)

    Cox, Maurice G.; Harris, Peter M.

    2014-08-01

    This paper is directed at practitioners seeking a degree of assurance in the quality of the results of an uncertainty evaluation when using the procedure in the Guide to the Expression of Uncertainty in Measurement (GUM) (JCGM 100:2008). Such assurance is required in adhering to general standards such as International Standard ISO/IEC 17025 or other sector-specific standards. We investigate the extent to which such assurance can be given. For many practical cases, a measurement result incorporating an evaluated uncertainty that is correct to one significant decimal digit would be acceptable. Any quantification of the numerical precision of an uncertainty statement is naturally relative to the adequacy of the measurement model and the knowledge used of the quantities in that model. For general univariate and multivariate measurement models, we emphasize the use of a Monte Carlo method, as recommended in GUM Supplements 1 and 2. One use of this method is as a benchmark in terms of which measurement results provided by the GUM can be assessed in any particular instance. We mainly consider measurement models that are linear in the input quantities, or have been linearized and the linearization process is deemed to be adequate. When the probability distributions for those quantities are independent, we indicate the use of other approaches such as convolution methods based on the fast Fourier transform and, particularly, Chebyshev polynomials as benchmarks.
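
    A minimal sketch of the benchmarking idea, using a toy measurement model rather than any model from the paper: propagate the input distributions by Monte Carlo, then compare against the linearized GUM law of propagation of uncertainty.

        # Sketch: Monte Carlo propagation (GUM Supplement 1 style) as a benchmark
        # for the linearized law of propagation; the model Y = X1*X2/X3 is a toy.
        import numpy as np

        rng = np.random.default_rng(0)
        N = 200_000
        x1 = rng.normal(10.0, 0.2, N)
        x2 = rng.normal(5.0, 0.1, N)
        x3 = rng.uniform(1.9, 2.1, N)        # rectangular, u = 0.2/sqrt(12)
        y = x1 * x2 / x3

        print("MC:  y =", y.mean(), "u(y) =", y.std(ddof=1))

        # Linearized GUM propagation: u^2(y) = sum (dY/dXi)^2 u^2(Xi),
        # which for a product/quotient is the relative-uncertainty sum.
        m1, m2, m3 = 10.0, 5.0, 2.0
        u1, u2, u3 = 0.2, 0.1, 0.2 / np.sqrt(12)
        y0 = m1 * m2 / m3
        u_lin = y0 * np.sqrt((u1/m1)**2 + (u2/m2)**2 + (u3/m3)**2)
        print("GUM: y =", y0, "u(y) =", u_lin)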

  2. Krylov Subspace Methods for Complex Non-Hermitian Linear Systems. Thesis

    NASA Technical Reports Server (NTRS)

    Freund, Roland W.

    1991-01-01

    We consider Krylov subspace methods for the solution of large sparse linear systems Ax = b with complex non-Hermitian coefficient matrices. Such linear systems arise in important applications, such as inverse scattering, numerical solution of time-dependent Schrodinger equations, underwater acoustics, eddy current computations, numerical computations in quantum chromodynamics, and numerical conformal mapping. Typically, the resulting coefficient matrices A exhibit special structures, such as complex symmetry, or they are shifted Hermitian matrices. In this paper, we first describe a Krylov subspace approach with iterates defined by a quasi-minimal residual property, the QMR method, for solving general complex non-Hermitian linear systems. Then, we study special Krylov subspace methods designed for the two families of complex symmetric respectively shifted Hermitian linear systems. We also include some results concerning the obvious approach to general complex linear systems by solving equivalent real linear systems for the real and imaginary parts of x. Finally, numerical experiments for linear systems arising from the complex Helmholtz equation are reported.
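
    The following SciPy sketch illustrates two of the ideas on a toy, well-conditioned complex symmetric system (a damped discrete Helmholtz-like operator; the matrix is an assumption for illustration): a Krylov solve of the complex system directly, and the equivalent doubled real formulation for the real and imaginary parts of x.

        # Sketch: complex non-Hermitian (here complex symmetric) Krylov solve,
        # plus the equivalent 2n x 2n real formulation.
        import numpy as np
        import scipy.sparse as sp
        from scipy.sparse.linalg import gmres

        n = 400
        shift = 2.0 + 2.0j                   # absorption keeps the toy well conditioned
        A = (sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n))
             + shift * sp.identity(n)).tocsr()
        b = np.ones(n, dtype=complex)

        x, info = gmres(A, b)
        print(info, np.linalg.norm(A @ x - b))

        # Equivalent real system for [Re(x); Im(x)]:
        Ar, Ai = A.real, A.imag
        Areal = sp.bmat([[Ar, -Ai], [Ai, Ar]]).tocsr()
        breal = np.concatenate([b.real, b.imag])
        xr, info2 = gmres(Areal, breal)
        print(info2, np.linalg.norm(x - (xr[:n] + 1j * xr[n:])))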

  3. Difference-based ridge-type estimator of parameters in restricted partial linear model with correlated errors.

    PubMed

    Wu, Jibo

    2016-01-01

    In this article, a generalized difference-based ridge estimator is proposed for the vector parameter in a partial linear model when the errors are dependent. It is supposed that some additional linear constraints may hold to the whole parameter space. Its mean-squared error matrix is compared with the generalized restricted difference-based estimator. Finally, the performance of the new estimator is explained by a simulation study and a numerical example.

  4. On Generalizations of Cochran’s Theorem and Projection Matrices.

    DTIC Science & Technology

    1980-08-01

    Definiteness of the Estimated Dispersion Matrix in a Multivariate Linear Model ," F. Pukelsheim and George P.H. Styan, May 1978. TECHNICAL REPORTS...with applications to the analysis of covariance," Proc. Cambridge Philos. Soc., 30, pp. 178-191. Graybill , F. A. and Marsaglia, G. (1957...34Idempotent matrices and quad- ratic forms in the general linear hypothesis," Ann. Math. Statist., 28, pp. 678-686. Greub, W. (1975). Linear Algebra (4th ed

  5. General rigid motion correction for computed tomography imaging based on locally linear embedding

    NASA Astrophysics Data System (ADS)

    Chen, Mianyi; He, Peng; Feng, Peng; Liu, Baodong; Yang, Qingsong; Wei, Biao; Wang, Ge

    2018-02-01

    Patient motion can degrade the quality of computed tomography images, which are typically acquired in cone-beam geometry. Rigid patient motion is characterized by six geometric parameters and is more challenging to correct than in fan-beam geometry. We extend our previous rigid patient motion correction method based on the principle of locally linear embedding (LLE) from fan-beam to cone-beam geometry and accelerate the computational procedure with the graphics processing unit (GPU)-based all scale tomographic reconstruction Antwerp toolbox. The major merit of our method is that we need neither fiducial markers nor motion-tracking devices. The numerical and experimental studies show that the LLE-based patient motion correction is capable of calibrating the six parameters of the patient motion simultaneously, reducing patient motion artifacts significantly.

  6. ISPAN (Interactive Stiffened Panel Analysis): A tool for quick concept evaluation and design trade studies

    NASA Technical Reports Server (NTRS)

    Hairr, John W.; Dorris, William J.; Ingram, J. Edward; Shah, Bharat M.

    1993-01-01

    Interactive Stiffened Panel Analysis (ISPAN) modules, written in FORTRAN, were developed to provide an easy-to-use tool for creating finite element models of composite material stiffened panels. The modules allow the user to interactively construct, solve and post-process finite element models of four general types of structural panel configurations using only the panel dimensions and properties as input data. Linear, buckling and post-buckling solution capability is provided. This interactive input allows rapid model generation and solution by users who are not finite element specialists. The results of a parametric study of a blade stiffened panel are presented to demonstrate the usefulness of the ISPAN modules. Also, a non-linear analysis of a test panel was conducted and the results compared to measured data and previous correlation analysis.

  7. Smooth function approximation using neural networks.

    PubMed

    Ferrari, Silvia; Stengel, Robert F

    2005-01-01

    An algebraic approach for representing multidimensional nonlinear functions by feedforward neural networks is presented. In this paper, the approach is implemented for the approximation of smooth batch data containing the function's input, output, and possibly, gradient information. The training set is associated with the network's adjustable parameters by nonlinear weight equations. The cascade structure of these equations reveals that they can be treated as sets of linear systems. Hence, the training process and the network approximation properties can be investigated via linear algebra. Four algorithms are developed to achieve exact or approximate matching of input-output and/or gradient-based training sets. Their application to the design of forward and feedback neurocontrollers shows that algebraic training is characterized by faster execution speeds and better generalization properties than contemporary optimization techniques.
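
    The linear-systems view of training can be sketched in a few lines (a simplified stand-in for the paper's exact-matching algorithms, with randomly fixed hidden parameters): once the hidden nonlinearities are fixed, the output weights satisfy a linear system and can be obtained by least squares rather than iterative descent.

        # Sketch: output weights of a one-hidden-layer network from a linear
        # least-squares solve; hidden parameters are fixed at random here.
        import numpy as np

        rng = np.random.default_rng(0)
        x = np.linspace(-1, 1, 200)[:, None]
        y = np.sin(3 * x).ravel()                      # smooth batch data

        H = 25
        W1 = rng.normal(0, 3, (1, H)); b1 = rng.normal(0, 1, H)
        Phi = np.tanh(x @ W1 + b1)                     # hidden-layer features

        # Output weights from a linear system, no iterative descent.
        design = np.column_stack([Phi, np.ones(len(x))])
        w2, *_ = np.linalg.lstsq(design, y, rcond=None)
        y_hat = design @ w2
        print(np.abs(y_hat - y).max())                 # approximation error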

  8. Generalized massive optimal data compression

    NASA Astrophysics Data System (ADS)

    Alsing, Justin; Wandelt, Benjamin

    2018-05-01

    In this paper, we provide a general procedure for optimally compressing N data down to n summary statistics, where n is equal to the number of parameters of interest. We show that compression to the score function - the gradient of the log-likelihood with respect to the parameters - yields n compressed statistics that are optimal in the sense that they preserve the Fisher information content of the data. Our method generalizes earlier work on linear Karhunen-Loève compression for Gaussian data whilst recovering both lossless linear compression and quadratic estimation as special cases when they are optimal. We give a unified treatment that also includes the general non-Gaussian case as long as mild regularity conditions are satisfied, producing optimal non-linear summary statistics when appropriate. As a worked example, we derive explicitly the n optimal compressed statistics for Gaussian data in the general case where both the mean and covariance depend on the parameters.
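
    For Gaussian data with a parameter-dependent mean and known covariance, the score compression has a closed form, t = (dmu/dtheta)^T C^{-1} (d - mu), evaluated at a fiducial parameter point. The sketch below (toy linear mean model; not the paper's worked example) compresses 500 data points to two summaries and recovers the parameters from them alone.

        # Sketch: score compression for Gaussian data, mu_i = theta1 + theta2*s_i.
        import numpy as np

        rng = np.random.default_rng(0)
        N = 500
        s = np.linspace(0, 1, N)
        C = 0.1 * np.eye(N)                          # known data covariance
        theta_fid = np.array([1.0, 2.0])             # fiducial expansion point
        mu = theta_fid[0] + theta_fid[1] * s
        dmu = np.column_stack([np.ones(N), s])       # dmu/dtheta, N x 2

        d = mu + np.sqrt(0.1) * rng.standard_normal(N)

        Cinv = np.linalg.inv(C)
        t = dmu.T @ Cinv @ (d - mu)                  # two numbers summarize 500 data
        F = dmu.T @ Cinv @ dmu                       # Fisher matrix of the summaries
        theta_mle = theta_fid + np.linalg.solve(F, t)
        print(t, theta_mle)                          # MLE recovered from t alone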

  9. Proposal of digital interface for the system of the air conditioner's remote control: analysis of the system of feedback.

    PubMed

    da Silva de Queiroz Pierre, Raisa; Kawada, Tarô Arthur Tavares; Fontes, André Guimarães

    2012-01-01

    We develop a proposal for a digital interface for the remote-control system that serves as a support system during air conditioner operation, adapted for users in general on the basis of ergonomic parameters, with the objective of reducing the problems faced by the user and improving the process. Twenty people were surveyed using a questionnaire with both qualitative and quantitative items. The linear method consists of a sequence of steps in which the input of each step depends on the output of the previous one, although the steps are independent. The feedback process, when necessary, must occur within each step separately.

  10. Genten: Software for Generalized Tensor Decompositions v. 1.0.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Phipps, Eric T.; Kolda, Tamara G.; Dunlavy, Daniel

    Tensors, or multidimensional arrays, are a powerful mathematical means of describing multiway data. This software provides computational means for decomposing or approximating a given tensor in terms of smaller tensors of lower dimension, focusing on decomposition of large, sparse tensors. These techniques have applications in many scientific areas, including signal processing, linear algebra, computer vision, numerical analysis, data mining, graph analysis, neuroscience and more. The software is designed to take advantage of parallelism present in emerging computer architectures, such as multi-core CPUs, many-core accelerators such as the Intel Xeon Phi, and computation-oriented GPUs, to enable efficient processing of large tensors.
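
    For orientation, a plain dense CP (CANDECOMP/PARAFAC) decomposition by alternating least squares is sketched below in Python; Genten's actual API, sparse storage, and parallel kernels are not reproduced here.

```python
import numpy as np

# Hedged sketch of CP decomposition by alternating least squares (ALS)
# for a dense 3-way tensor; each factor is solved in turn while the
# other two are held fixed.
rng = np.random.default_rng(2)

def khatri_rao(U, V):
    # Column-wise Kronecker product: row index = (row of U)*len(V) + (row of V).
    return (U[:, None, :] * V[None, :, :]).reshape(-1, U.shape[1])

def cp_als(T, rank, n_iter=200):
    I, J, K = T.shape
    A, B, C = (rng.normal(size=(d, rank)) for d in (I, J, K))
    for _ in range(n_iter):
        A = T.transpose(0, 2, 1).reshape(I, -1) @ khatri_rao(C, B) \
            @ np.linalg.pinv((C.T @ C) * (B.T @ B))
        B = T.transpose(1, 2, 0).reshape(J, -1) @ khatri_rao(C, A) \
            @ np.linalg.pinv((C.T @ C) * (A.T @ A))
        C = T.transpose(2, 1, 0).reshape(K, -1) @ khatri_rao(B, A) \
            @ np.linalg.pinv((B.T @ B) * (A.T @ A))
    return A, B, C

# Recover an exactly rank-3 tensor.
A0, B0, C0 = (rng.normal(size=(d, 3)) for d in (6, 7, 8))
T = np.einsum('ir,jr,kr->ijk', A0, B0, C0)
A, B, C = cp_als(T, rank=3)
print("fit error:", np.linalg.norm(T - np.einsum('ir,jr,kr->ijk', A, B, C)))
```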

  11. Correntropy-based partial directed coherence for testing multivariate Granger causality in nonlinear processes

    NASA Astrophysics Data System (ADS)

    Kannan, Rohit; Tangirala, Arun K.

    2014-06-01

    Identification of directional influences in multivariate systems is of prime importance in several applications of engineering and the sciences, such as plant topology reconstruction, fault detection and diagnosis, and neuroscience. A spectrum of related directionality measures, ranging from linear measures such as partial directed coherence (PDC) to nonlinear measures such as transfer entropy, has emerged over the past two decades. The PDC-based technique is simple and effective, but being a linear directionality measure it has limited applicability. On the other hand, transfer entropy, despite being a robust nonlinear measure, is computationally intensive and practically implementable only for bivariate processes. The objective of this work is to develop a nonlinear directionality measure, termed KPDC, that possesses the simplicity of PDC but is still applicable to nonlinear processes. The technique is founded on correntropy, a recently proposed generalized correlation measure. The proposed method is equivalent to constructing PDC in a kernel space, where the PDC is estimated using a vector autoregressive model built on correntropy. A consistent estimator of the KPDC is developed and important theoretical results are established. A permutation scheme combined with the sequential Bonferroni procedure is proposed for testing the hypothesis of absence of causality. It is demonstrated through several case studies that the proposed methodology effectively detects Granger causality in nonlinear processes.
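
    The building block of the method, the sample correntropy between two series, is easy to sketch; the full KPDC construction (kernel-space VAR model, permutation test with sequential Bonferroni correction) is not reproduced here, and the kernel width below is an illustrative choice.

```python
import numpy as np

# Sample correntropy with a Gaussian kernel:
# V_sigma(x, y) = mean over t of kappa_sigma(x_t - y_t).
def correntropy(x, y, sigma=1.0):
    u = x - y
    return np.mean(np.exp(-u**2 / (2 * sigma**2))) / (np.sqrt(2 * np.pi) * sigma)

rng = np.random.default_rng(3)
x = rng.normal(size=2000)
y = np.tanh(x) + 0.1 * rng.normal(size=2000)   # nonlinearly coupled series
z = rng.normal(size=2000)                      # independent series
print("coupled:", correntropy(x, y), "independent:", correntropy(x, z))
```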

  12. Non-Linear Approach in Kinesiology Should Be Preferred to the Linear--A Case of Basketball.

    PubMed

    Trninić, Marko; Jeličić, Mario; Papić, Vladan

    2015-07-01

    In kinesiology, medicine, biology and psychology, where research focuses on dynamical self-organized systems, complex connections exist between variables. The non-linear nature of complex systems is discussed and explained using the example of non-linear anthropometric predictors of performance in basketball. Previous studies interpreted relations between anthropometric features and measures of effectiveness in basketball by (a) using linear correlation models, and (b) including all basketball athletes in the same sample of participants regardless of their playing position. In this paper the significance and character of linear and non-linear relations between simple anthropometric predictors (AP) and performance criteria consisting of situation-related measures of effectiveness (SE) in basketball were determined and evaluated. The sample of participants consisted of top-level junior basketball players divided into three groups according to their playing time (8 minutes or more per game) and playing position: guards (N = 42), forwards (N = 26) and centers (N = 40). Linear (general model) and non-linear (general model) regressions were calculated simultaneously and separately for each group. The conclusion is clear: non-linear regressions are frequently superior to linear correlations when interpreting the actual association logic among research variables.

  13. Optical soliton solutions of the cubic-quintic non-linear Schrödinger's equation including an anti-cubic term

    NASA Astrophysics Data System (ADS)

    Kaplan, Melike; Hosseini, Kamyar; Samadani, Farzan; Raza, Nauman

    2018-07-01

    A wide range of problems in different fields of the applied sciences, especially non-linear optics, is described by non-linear Schrödinger's equations (NLSEs). In the present paper, a specific type of NLSE, the cubic-quintic non-linear Schrödinger's equation including an anti-cubic term, has been studied. The generalized Kudryashov method, along with a symbolic computation package, has been employed to carry out this objective. As a consequence, a series of optical soliton solutions has formally been retrieved. It is corroborated that the generalized form of the Kudryashov method is a direct, effectual, and reliable technique for dealing with various types of non-linear Schrödinger's equations.
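
    For reference, one common form of this equation in the literature reads as follows (an assumption on our part, since the abstract does not display it):

```latex
% Cubic-quintic NLSE with an anti-cubic term, as commonly written in
% this literature: q(x,t) is the complex envelope, a the group-velocity
% dispersion, b_1 the anti-cubic coefficient, and b_2, b_3 the cubic
% and quintic nonlinearities.
\[
  i q_t + a\,q_{xx} + \left( b_1 |q|^{-4} + b_2 |q|^{2} + b_3 |q|^{4} \right) q = 0 .
\]
```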

  14. A Logical Process Calculus

    NASA Technical Reports Server (NTRS)

    Cleaveland, Rance; Luettgen, Gerald; Bushnell, Dennis M. (Technical Monitor)

    2002-01-01

    This paper presents the Logical Process Calculus (LPC), a formalism that supports heterogeneous system specifications containing both operational and declarative subspecifications. Syntactically, LPC extends Milner's Calculus of Communicating Systems with operators from the alternation-free linear-time mu-calculus (LT(mu)). Semantically, LPC is equipped with a behavioral preorder that generalizes Hennessy's and DeNicola's must-testing preorder as well as LT(mu)'s satisfaction relation, while being compositional for all LPC operators. From a technical point of view, the new calculus is distinguished by the inclusion of: (1) both minimal and maximal fixed-point operators and (2) an unimplementability predicate on process terms, which tags inconsistent specifications. The utility of LPC is demonstrated by means of an example highlighting the benefits of heterogeneous system specification.

  15. Bayesian or Laplacian inference, entropy, information theory and information geometry in data and signal processing

    NASA Astrophysics Data System (ADS)

    Mohammad-Djafari, Ali

    2015-01-01

    The main object of this tutorial article is first to review the main inference tools based on the Bayesian approach, entropy, information theory and their corresponding geometries. This review is focused mainly on the ways these tools have been used in data, signal and image processing. After a short introduction of the different quantities related to Bayes' rule, entropy and the Maximum Entropy Principle (MEP), relative entropy and the Kullback-Leibler divergence, and Fisher information, we study their use in different fields of data and signal processing such as entropy in source separation, Fisher information in model order selection, different maximum entropy based methods in time series spectral estimation and, finally, general linear inverse problems.
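
    As a small concrete companion to the quantities reviewed (not taken from the article), two of them admit simple Gaussian closed forms:

```python
import numpy as np

# Closed-form Gaussian cases of two quantities the tutorial reviews:
# the Kullback-Leibler divergence and the Fisher information.
def kl_gaussian(m1, s1, m2, s2):
    # KL( N(m1, s1^2) || N(m2, s2^2) )
    return np.log(s2 / s1) + (s1**2 + (m1 - m2)**2) / (2 * s2**2) - 0.5

def fisher_gaussian_mean(sigma):
    # Fisher information carried by one sample about the mean of
    # N(mu, sigma^2) is 1 / sigma^2.
    return 1.0 / sigma**2

print(kl_gaussian(0.0, 1.0, 1.0, 2.0), fisher_gaussian_mean(0.5))
```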

  16. Online sequential Monte Carlo smoother for partially observed diffusion processes

    NASA Astrophysics Data System (ADS)

    Gloaguen, Pierre; Étienne, Marie-Pierre; Le Corff, Sylvain

    2018-12-01

    This paper introduces a new algorithm to approximate smoothed additive functionals of partially observed diffusion processes. This method relies on a new sequential Monte Carlo method which allows such approximations to be computed online, i.e., as the observations are received, with a computational complexity growing linearly with the number of Monte Carlo samples. The original algorithm cannot be used in the case of partially observed stochastic differential equations since the transition density of the latent data is usually unknown. We prove that it may be extended to partially observed continuous processes by replacing this unknown quantity with an unbiased estimator obtained, for instance, using general Poisson estimators. This estimator is proved to be consistent and its performance is illustrated using data from two models.
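
    For readers unfamiliar with the underlying machinery, a generic bootstrap particle filter, the basic sequential Monte Carlo step that such smoothers build on, is sketched below; the paper's online smoother and its unbiased transition-density estimators are not reproduced.

```python
import numpy as np

# Generic bootstrap particle filter for a linear-Gaussian toy state
# space model (illustrative only; the paper's algorithm is a smoother
# built on top of this kind of filter).
rng = np.random.default_rng(4)

def bootstrap_filter(ys, n_particles=500, rho=0.9, sig_x=0.5, sig_y=0.3):
    x = rng.normal(size=n_particles)                        # initial particles
    means = []
    for y in ys:
        x = rho * x + sig_x * rng.normal(size=n_particles)  # propagate
        logw = -0.5 * ((y - x) / sig_y) ** 2                # weight by likelihood
        w = np.exp(logw - logw.max()); w /= w.sum()
        means.append(np.sum(w * x))                         # filtered mean
        x = rng.choice(x, size=n_particles, p=w)            # resample
    return np.array(means)

# Simulate an AR(1) state observed in noise, then filter it.
T, rho = 100, 0.9
x_true = np.zeros(T)
for t in range(1, T):
    x_true[t] = rho * x_true[t - 1] + 0.5 * rng.normal()
ys = x_true + 0.3 * rng.normal(size=T)
print(bootstrap_filter(ys)[:5])
```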

  17. Conical scan impact study. Volume 1: General central data processing facility. [multispectral band scanner design alternatives for earth resources data

    NASA Technical Reports Server (NTRS)

    Ebert, D. H.; Eppes, T. A.; Thomas, D. J.

    1973-01-01

    The impact of a conical scan versus a linear scan multispectral scanner (MSS) instrument was studied in terms of: (1) design modifications required in framing and continuous image recording devices; and (2) changes in configurations of an all-digital precision image processor. A baseline system was defined to provide the framework for comparison, and included pertinent spacecraft parameters, a conical MSS, a linear MSS, an image recording system, and an all-digital precision processor. Lateral offset pointing of the sensors over a range of plus or minus 20 deg was considered. The study addressed the conical scan impact on geometric, radiometric, and aperture correction of MSS data in terms of hardware and software considerations, system complexity, quality of corrections, throughput, and cost of implementation. It was concluded that: (1) if the MSS data are only to be film recorded, then there is only a nominal conical scan impact on the ground data processing system; and (2) if digital data are to be provided to users on computer compatible tapes in rectilinear format, then there is a significant conical scan impact on the ground data processing system.

  18. Autoregressive processes with exponentially decaying probability distribution functions: applications to daily variations of a stock market index.

    PubMed

    Porto, Markus; Roman, H Eduardo

    2002-04-01

    We consider autoregressive conditional heteroskedasticity (ARCH) processes in which the variance σ²(y) depends linearly on the absolute value of the random variable y, σ²(y) = a + b|y|. While for the standard model, where σ²(y) = a + b y², the corresponding probability distribution function (PDF) P(y) decays as a power law for |y| → ∞, in the linear case it decays exponentially, P(y) ≈ exp(-α|y|), with α = 2/b. We extend these results to the more general case σ²(y) = a + b|y|^q, with 0 < q < 2. We find stretched exponential decay for 1 < q < 2 and stretched Gaussian behavior for 0 < q < 1. As an application, we consider the case q = 1 as our starting scheme for modeling the PDF of daily (logarithmic) variations of the Dow Jones stock market index. When the history of the ARCH process is taken into account, the resulting PDF becomes a stretched exponential even for q = 1, with a stretching exponent β = 2/3, in much better agreement with the empirical data.
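
    The linear-variance process is easy to simulate, and its exponential tail is easy to check numerically; a short sketch with illustrative parameter values:

```python
import numpy as np

# Simulating the linear-variance ARCH process of the abstract,
# sigma^2(y) = a + b*|y|, and checking the exponential tail
# P(y) ~ exp(-alpha*|y|) with alpha = 2/b.
rng = np.random.default_rng(5)
a, b, n = 0.1, 1.0, 200_000

y = np.zeros(n)
for t in range(1, n):
    sigma = np.sqrt(a + b * abs(y[t - 1]))
    y[t] = sigma * rng.normal()

# Tail exponent from a fit of log P(|y| > u) versus u.
u = np.linspace(1, 4, 13)
tail = [(np.abs(y) > ui).mean() for ui in u]
alpha_hat = -np.polyfit(u, np.log(tail), 1)[0]
print("alpha =", alpha_hat, "(theory: about", 2 / b, ")")
```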

  19. The microcomputer scientific software series 2: general linear model--regression.

    Treesearch

    Harold M. Rauscher

    1983-01-01

    The general linear model regression (GLMR) program provides the microcomputer user with a sophisticated regression analysis capability. The output provides a regression ANOVA table, estimators of the regression model coefficients, their confidence intervals, confidence intervals around the predicted Y-values, residuals for plotting, a check for multicollinearity, a...

  20. An Analysis of Terrestrial and Aquatic Environmental Controls of Riverine Dissolved Organic Carbon in the Conterminous United States

    DOE PAGES

    Yang, Qichun; Zhang, Xuesong; Xu, Xingya; ...

    2017-05-29

    Riverine carbon cycling is an important, but insufficiently investigated component of the global carbon cycle. Analyses of environmental controls on riverine carbon cycling are critical for improved understanding of mechanisms regulating carbon processing and storage along the terrestrial-aquatic continuum. Here, we compile and analyze riverine dissolved organic carbon (DOC) concentration data from 1402 United States Geological Survey (USGS) gauge stations to examine the spatial variability and environmental controls of DOC concentrations in the United States (U.S.) surface waters. DOC concentrations exhibit high spatial variability, with an average of 6.42 ± 6.47 mg C/L (mean ± standard deviation). In general, high DOC concentrations occur in the Upper Mississippi River basin and the Southeastern U.S., while low concentrations are mainly distributed in the Western U.S. Single-factor analysis indicates that slope of drainage areas, wetlands, forests, percentage of first-order streams, and instream nutrients (such as nitrogen and phosphorus) pronouncedly influence DOC concentrations, but the explanatory power of each bivariate model is lower than 35%. Analyses based on general multi-linear regression models suggest DOC concentrations are jointly impacted by multiple factors. Soil properties mainly show positive correlations with DOC concentrations; forest and shrub lands have positive correlations with DOC concentrations, but urban area and croplands demonstrate negative impacts; total instream phosphorus and dam density correlate positively with DOC concentrations. Notably, the relative importance of these environmental controls varies substantially across major U.S. water resource regions. In addition, DOC concentrations and environmental controls also show significant variability from small streams to large rivers, which may be caused by changing carbon sources and removal rates across river orders. In sum, our results reveal that a general multi-linear regression analysis of twenty-one terrestrial and aquatic environmental factors can partially explain (56%) the DOC concentration variation. This study highlights the complexity of the interactions among these environmental factors in determining DOC concentrations, and thus calls for process-based, non-linear methodologies to constrain uncertainties in riverine DOC cycling.

  1. Time and frequency domain characteristics of detrending-operation-based scaling analysis: Exact DFA and DMA frequency responses

    NASA Astrophysics Data System (ADS)

    Kiyono, Ken; Tsujimoto, Yutaka

    2016-07-01

    We develop a general framework to study the time and frequency domain characteristics of detrending-operation-based scaling analysis methods, such as detrended fluctuation analysis (DFA) and detrending moving average (DMA) analysis. In this framework, using either the time or frequency domain approach, the frequency responses of detrending operations are calculated analytically. Although the frequency domain approach based on conventional linear analysis techniques is only applicable to linear detrending operations, the time domain approach presented here is applicable to both linear and nonlinear detrending operations. Furthermore, using the relationship between the time and frequency domain representations of the frequency responses, the frequency domain characteristics of nonlinear detrending operations can be obtained. Based on the calculated frequency responses, it is possible to establish a direct connection between the root-mean-square deviation of the detrending-operation-based scaling analysis and the power spectrum for linear stochastic processes. Here, by applying our methods to DFA and DMA, including higher-order cases, exact frequency responses are calculated. In addition, we analytically investigate the cutoff frequencies of DFA and DMA detrending operations and show that these frequencies are not optimally adjusted to coincide with the corresponding time scale.
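
    A minimal first-order DFA, the detrending operation whose frequency response the paper derives, can be sketched as follows (illustrative code, not the authors'):

```python
import numpy as np

# Minimal detrended fluctuation analysis (DFA): integrate the series,
# remove an order-m polynomial trend in non-overlapping windows of
# length s, and collect the root-mean-square residual F(s).
def dfa(x, scales, order=1):
    y = np.cumsum(x - np.mean(x))          # integrated profile
    F = []
    for s in scales:
        rms = []
        for w in range(len(y) // s):
            seg = y[w * s:(w + 1) * s]
            t = np.arange(s)
            trend = np.polyval(np.polyfit(t, seg, order), t)
            rms.append(np.mean((seg - trend) ** 2))
        F.append(np.sqrt(np.mean(rms)))
    return np.array(F)

# White noise should give a scaling exponent near 0.5.
rng = np.random.default_rng(6)
x = rng.normal(size=10_000)
scales = np.array([16, 32, 64, 128, 256])
F = dfa(x, scales)
print("alpha =", np.polyfit(np.log(scales), np.log(F), 1)[0])
```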

  2. Time and frequency domain characteristics of detrending-operation-based scaling analysis: Exact DFA and DMA frequency responses.

    PubMed

    Kiyono, Ken; Tsujimoto, Yutaka

    2016-07-01

    We develop a general framework to study the time and frequency domain characteristics of detrending-operation-based scaling analysis methods, such as detrended fluctuation analysis (DFA) and detrending moving average (DMA) analysis. In this framework, using either the time or frequency domain approach, the frequency responses of detrending operations are calculated analytically. Although the frequency domain approach based on conventional linear analysis techniques is only applicable to linear detrending operations, the time domain approach presented here is applicable to both linear and nonlinear detrending operations. Furthermore, using the relationship between the time and frequency domain representations of the frequency responses, the frequency domain characteristics of nonlinear detrending operations can be obtained. Based on the calculated frequency responses, it is possible to establish a direct connection between the root-mean-square deviation of the detrending-operation-based scaling analysis and the power spectrum for linear stochastic processes. Here, by applying our methods to DFA and DMA, including higher-order cases, exact frequency responses are calculated. In addition, we analytically investigate the cutoff frequencies of DFA and DMA detrending operations and show that these frequencies are not optimally adjusted to coincide with the corresponding time scale.

  3. Visuo‐manual tracking: does intermittent control with aperiodic sampling explain linear power and non‐linear remnant without sensorimotor noise?

    PubMed Central

    Gawthrop, Peter J.; Lakie, Martin; Loram, Ian D.

    2017-01-01

    Key points: A human controlling an external system is described most easily and conventionally as linearly and continuously translating sensory input to motor output, with the inevitable output remnant, non-linearly related to the input, attributed to sensorimotor noise. Recent experiments show sustained manual tracking involves repeated refractoriness (insensitivity to sensory information for a certain duration), with the temporary 200-500 ms periods of irresponsiveness to sensory input making the control process intrinsically non-linear. This evidence calls for re-examination of the extent to which random sensorimotor noise is required to explain the non-linear remnant. This investigation of manual tracking shows how the full motor output (linear component and remnant) can be explained mechanistically by aperiodic sampling triggered by prediction error thresholds. Whereas broadband physiological noise is general to all processes, aperiodic sampling is associated with sensorimotor decision making within specific frontal, striatal and parietal networks; we conclude that manual tracking utilises such slow serial decision making pathways up to several times per second. Abstract: The human operator is described adequately by linear translation of sensory input to motor output. Motor output also always includes a non-linear remnant resulting from random sensorimotor noise from multiple sources and non-linear input transformations, for example thresholds or refractory periods. Recent evidence showed that manual tracking incurs substantial, serial refractoriness (insensitivity to sensory information of 350 and 550 ms for 1st and 2nd order systems respectively). Our two questions are: (i) What are the comparative merits of explaining the non-linear remnant using noise or non-linear transformations? (ii) Can non-linear transformations represent serial motor decision making within the sensorimotor feedback loop intrinsic to tracking? Twelve participants (instructed to act in three prescribed ways) manually controlled two systems (1st and 2nd order) subject to a periodic multi-sine disturbance. Joystick power was analysed using three models: continuous linear control (CC), continuous linear control with calculated noise spectrum (CCN), and intermittent control with aperiodic sampling triggered by prediction error thresholds (IC). Unlike the linear mechanism, the intermittent control mechanism explained the majority of total power (linear and remnant) (77-87% vs. 8-48%, IC vs. CC). Between conditions, IC used thresholds and distributions of open-loop intervals consistent with, respectively, instructions and previously measured, model-independent values, whereas CCN required changes in noise spectrum deviating from broadband, signal-dependent noise. We conclude that manual tracking uses open-loop predictive control with aperiodic sampling. Because aperiodic sampling is inherent to serial decision making within previously identified, specific frontal, striatal and parietal networks, we suggest that these structures are intimately involved in visuo-manual tracking. PMID:28833126

  4. Generalized Nonlinear Yule Models

    NASA Astrophysics Data System (ADS)

    Lansky, Petr; Polito, Federico; Sacerdote, Laura

    2016-11-01

    With the aim of considering models of random graph growth exhibiting persistent memory, we propose a fractional nonlinear modification of the classical Yule model often studied in the context of macroevolution. Here the model is analyzed and interpreted in the framework of the development of networks such as the World Wide Web. Nonlinearity is introduced by replacing the linear birth process governing the growth of the in-links of each specific webpage with a fractional nonlinear birth process with completely general birth rates. Among the main results, we derive the explicit distribution of the number of in-links of a webpage chosen uniformly at random, recognizing the contribution to the asymptotics and the finite-time correction. The mean value of the latter distribution is also calculated explicitly in the most general case. Furthermore, in order to show the usefulness of our results, we particularize them in the case of specific birth rates giving rise to a saturating behaviour, a property that is often observed in nature. The further specialization to the non-fractional case allows us to extend the Yule model to account for nonlinear growth.

  5. Meteorological influences on the interannual variability of meningitis incidence in northwest Nigeria.

    NASA Astrophysics Data System (ADS)

    Abdussalam, Auwal; Monaghan, Andrew; Dukic, Vanja; Hayden, Mary; Hopson, Thomas; Leckebusch, Gregor

    2013-04-01

    Northwest Nigeria is a region with high risk of bacterial meningitis. Since the first documented epidemic of meningitis in Nigeria in 1905, the disease has been endemic in the northern part of the country, with epidemics occurring regularly. In this study we examine the influence of climate on the interannual variability of meningitis incidence and epidemics. Monthly aggregate counts of clinically confirmed hospital-reported cases of meningitis were collected in northwest Nigeria for the 22-year period spanning 1990-2011. Several generalized linear statistical models were fit to the monthly meningitis counts, including generalized additive models. Explanatory variables included monthly records of temperature, humidity, rainfall, wind speed, sunshine and dustiness from weather stations nearest to the hospitals, and a time series of polysaccharide vaccination efficacy. The effects of other confounding factors -- i.e., mainly non-climatic factors for which records were not available -- were estimated as a smooth, monthly-varying function of time in the generalized additive models. Results reveal that the most important explanatory climatic variables are mean maximum monthly temperature, relative humidity and dustiness. Accounting for confounding factors (e.g., social processes) in the generalized additive models explains more of the year-to-year variation of meningococcal disease than generalized linear models that do not account for such factors. Promising results from several models that included only explanatory variables preceding the meningitis case data by one month suggest there may be potential for prediction of meningitis in northwest Nigeria on this time scale, to aid decision makers.
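
    A hedged sketch of the kind of count-data regression described here, a Poisson GLM of monthly cases on meteorological covariates, using synthetic data (the variable names and coefficients are illustrative, not the study's records):

```python
import numpy as np
import statsmodels.api as sm

# Poisson GLM of monthly case counts on meteorological covariates;
# everything below is simulated for illustration.
rng = np.random.default_rng(7)
n = 264                                        # 22 years of monthly data
tmax = rng.normal(35, 4, n)                    # max temperature (deg C)
rh = rng.uniform(10, 80, n)                    # relative humidity (%)
dust = rng.exponential(1.0, n)                 # dustiness index

lam = np.exp(0.5 + 0.08 * (tmax - 35) - 0.02 * (rh - 40) + 0.3 * dust)
cases = rng.poisson(lam)

X = sm.add_constant(np.column_stack([tmax, rh, dust]))
fit = sm.GLM(cases, X, family=sm.families.Poisson()).fit()
print(fit.summary())
```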

  6. The Use of Shrinkage Techniques in the Estimation of Attrition Rates for Large Scale Manpower Models

    DTIC Science & Technology

    1988-07-27

    auto regressive model combined with a linear program that solves for the coefficients using MAD. But this success has diminished with time (Rowe... 'Harrison-Stevens Forecasting and the Multiprocess Dynamic Linear Model', The American Statistician, v. 40, pp. 129-135, 1986. 8. Box, G. E. P. and... 1950. 40. McCullagh, P. and Nelder, J., Generalized Linear Models, Chapman and Hall, 1983. 41. McKenzie, E., General Exponential Smoothing and the

  7. A Generalization Strategy for Discrete Area Feature by Using Stroke Grouping and Polarization Transportation Selection

    NASA Astrophysics Data System (ADS)

    Wang, Xiao; Burghardt, Dirk

    2018-05-01

    This paper presents a new strategy for the generalization of discrete area features using a stroke grouping method and polarization transportation selection. The strokes are constructed from a refined proximity graph of the area features, with the refinement controlled by four constraints to meet different grouping requirements. Area features which belong to the same stroke are assigned to the same group. The stroke-based strategy decomposes the generalization process into two sub-processes according to whether the area features are related to strokes or not. Area features which belong to the same stroke normally present a linear-like pattern, and in order to preserve this kind of pattern, typification is chosen as the operator for the generalization. The remaining area features, which are not related by strokes, are still distributed randomly and discretely, and selection is chosen for the generalization operation. For the purpose of retaining their original distribution characteristics, a Polarization Transportation (PT) method is introduced to implement the selection operation. Buildings and lakes are selected as representatives of artificial and natural area features, respectively, for the experiments. The generalized results indicate that by adopting the proposed strategy, the original distribution characteristics of the building and lake data can be preserved, along with the original visual perception.

  8. Working memory span in mild cognitive impairment. Influence of processing speed and cognitive reserve.

    PubMed

    Facal, David; Juncos-Rabadán, Onésimo; Pereiro, Arturo X; Lojo-Seoane, Cristina

    2014-04-01

    Mild cognitive impairment (MCI) often includes episodic memory impairment, but can also involve other types of cognitive decline. Although previous studies have shown poorer performance of MCI patients in working memory (WM) span tasks, different MCI subgroups were not studied. In the present exploratory study, 145 participants underwent extensive cognitive evaluation, which included three different WM span tasks, and were classified into the following groups: multiple-domain amnestic MCI (mda-MCI), single-domain amnestic MCI (sda-MCI), and controls. A general linear model was fitted with the WM span tasks as the within-subject factor; the group (mda-MCI, sda-MCI, and controls) as the between-subject factor; and processing speed, vocabulary and age as covariates. Multiple linear regression models were also used to test the influence of processing speed, vocabulary, and other cognitive reserve (CR) proxies. Results indicate different levels of WM impairment, with more severe impairment in mda-MCI patients. The differences were still present when processing resources and CR were controlled. Between-group differences can be understood as a manifestation of the greater severity and more widespread memory impairment in mda-MCI patients, and may contribute to a better understanding of the continuum from normal controls to mda-MCI patients. Processing speed and CR have a limited influence on WM scores, reducing but not removing differences between groups.

  9. Reliable probabilities through statistical post-processing of ensemble predictions

    NASA Astrophysics Data System (ADS)

    Van Schaeybroeck, Bert; Vannitsem, Stéphane

    2013-04-01

    We develop post-processing, or calibration, approaches based on linear regression that make ensemble forecasts more reliable. First, we enforce climatological reliability in the sense that the total variability of the prediction is equal to the variability of the observations. Second, we impose ensemble reliability such that the spread of the observation around the ensemble mean coincides with that of the ensemble members. In general, the attractors of the model and of reality are inhomogeneous, so the ensemble spread displays a variability not taken into account in standard post-processing methods. We overcome this by weighting the ensemble by a variable error. The approaches are tested in the context of the Lorenz 96 model (Lorenz, 1996). The forecasts become more reliable at short lead times, as reflected by a flatter rank histogram. Our best method turns out to be superior to well-established methods like EVMOS (Van Schaeybroeck and Vannitsem, 2011) and Nonhomogeneous Gaussian Regression (Gneiting et al., 2005). References [1] Gneiting, T., Raftery, A. E., Westveld, A., Goldman, T., 2005: Calibrated probabilistic forecasting using ensemble model output statistics and minimum CRPS estimation. Mon. Weather Rev. 133, 1098-1118. [2] Lorenz, E. N., 1996: Predictability - a problem partly solved. Proceedings, Seminar on Predictability, ECMWF, 1, 1-18. [3] Van Schaeybroeck, B., and S. Vannitsem, 2011: Post-processing through linear regression, Nonlin. Processes Geophys., 18, 147.
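
    A simple variance-matching linear calibration in the spirit of the two reliability constraints can be sketched as follows; this is an illustration of the idea, not the authors' variable-error weighting scheme:

```python
import numpy as np

# Regress observations on the ensemble mean, then rescale the spread
# so the calibrated total variance matches the observed variability.
rng = np.random.default_rng(8)
n, m = 2000, 20                                   # forecast cases, members

truth = rng.normal(size=n)
ens = 0.7 * truth[:, None] + 0.6 * rng.normal(size=(n, m))  # biased ensemble
ens_mean = ens.mean(axis=1)

b, a = np.polyfit(ens_mean, truth, 1)             # linear regression
cal_mean = a + b * ens_mean
scale = np.sqrt(max(truth.var() - cal_mean.var(), 0.0)
                / ens.var(axis=1).mean())         # spread inflation factor
cal_ens = cal_mean[:, None] + scale * (ens - ens_mean[:, None])
print("obs var:", truth.var(), "calibrated total var:", cal_ens.var())
```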

  10. Cosmic non-TEM radiation and synthetic feed array sensor system in ASIC mixed signal technology

    NASA Astrophysics Data System (ADS)

    Centureli, F.; Scotti, G.; Tommasino, P.; Trifiletti, A.; Romano, F.; Cimmino, R.; Saitto, A.

    2014-08-01

    The paper deals with the opportunity to introduce a "not strictly TEM waves" synthetic detection method (NTSM), consisting of three-axis digital beam processing (3ADBP), to enhance the performance of radio telescope and sensor systems. Current radio telescopes generally use the classic 3D "TEM waves" approximation detection method, a linear tomography process (single- or dual-axis beam-forming processing) that neglects the small z component. The synthetic feed-array three-axis sensor system is an innovative technique that synthetically detects the generic "not strictly TEM" wave radiation coming from the cosmos and also processes the longitudinal component of the angular momentum. The simultaneous extraction from the radiation of both the linear and the quadratic information components may reduce the complexity of reconstructing the early Universe at the different scales of interest. This next-order approximation to the detection of the observed cosmological processes may improve the efficacy of the statistical numerical models used to process the acquired information. The present work focuses on the detection of such waves at carrier frequencies in bands ranging from LF to MMW. It describes in further detail the new generation of online-programmable and reconfigurable mixed-signal ASIC technology that made the innovative synthetic sensor possible, and shows the ability of this technique to increase the performance of radio telescope array antennas.

  11. Hybrid finite element and Brownian dynamics method for diffusion-controlled reactions.

    PubMed

    Bauler, Patricia; Huber, Gary A; McCammon, J Andrew

    2012-04-28

    Diffusion is often the rate determining step in many biological processes. Currently, the two main computational methods for studying diffusion are stochastic methods, such as Brownian dynamics, and continuum methods, such as the finite element method. This paper proposes a new hybrid diffusion method that couples the strengths of each of these two methods. The method is derived for a general multidimensional system, and is presented using a basic test case for 1D linear and radially symmetric diffusion systems.

  12. FAST TRACK COMMUNICATION: Quantum anomalies and linear response theory

    NASA Astrophysics Data System (ADS)

    Sela, Itamar; Aisenberg, James; Kottos, Tsampikos; Cohen, Doron

    2010-08-01

    The analysis of diffusive energy spreading in quantized chaotic driven systems leads to a universal paradigm for the emergence of a quantum anomaly. In the classical approximation, a driven chaotic system exhibits stochastic-like diffusion in energy space with a coefficient D that is proportional to the intensity ε² of the driving. In the corresponding quantized problem the coherent transitions are characterized by a generalized Wigner time t_ε, and a self-generated (intrinsic) dephasing process leads to a nonlinear dependence of D on ε².

  13. Nonclassical-light generation in a photonic-band-gap nonlinear planar waveguide

    NASA Astrophysics Data System (ADS)

    Peřina, Jan, Jr.; Sibilia, Concita; Tricca, Daniela; Bertolotti, Mario

    2004-10-01

    The optical parametric process occurring in a photonic-band-gap planar waveguide is studied from the point of view of nonclassical-light generation. The nonlinearly interacting optical fields are described by the generalized superposition of coherent signals and noise using the method of operator linear corrections to a classical strong solution. Scattered backward-propagating fields are taken into account. Squeezed light as well as light with sub-Poissonian statistics can be obtained in two-mode fields under the specified conditions.

  14. Gas evolution from spheres

    NASA Astrophysics Data System (ADS)

    Longhurst, G. R.

    1991-04-01

    Gas evolution from spherical solids or liquids where no convective processes are active is analyzed. Three problem classes are considered: (1) constant concentration boundary, (2) Henry's law (first order) boundary, and (3) Sieverts' law (second order) boundary. General expressions are derived for dimensionless times and transport parameters appropriate to each of the classes considered. However, in the second order case, the non-linearities of the problem require the presence of explicit dimensional variables in the solution. Sample problems are solved to illustrate the method.

  15. Synthesis of stiffened shells of revolution

    NASA Technical Reports Server (NTRS)

    Thornton, W. A.

    1974-01-01

    Computer programs for the synthesis of shells of various configurations were developed. The conditions considered are: (1) uniform shells (mainly cones) using a membrane buckling analysis, (2) completely uniform shells (cones, spheres, toroidal segments) using a linear bending prebuckling analysis, and (3) revision of the second design process to reduce the number of design variables to about 30 by considering piecewise-uniform designs. A perturbation formula was derived which allows exact derivatives of the general buckling load to be computed with little additional computer time.

  16. Using Advanced Analysis Approaches to Complete Long-Term Evaluations of Natural Attenuation Processes on the Remediation of Dissolved Chlorinated Solvent Contamination

    DTIC Science & Technology

    2008-10-01

    and UTCHEM (Clement et al., 1998). While all four of these software packages use conservation of mass as the basic principle for tracking NAPL... simulate dissolution of a single NAPL component. UTCHEM can be used to simulate dissolution of multiple NAPL components using either linear or first... parameters. No UTCHEM a/ 3D model, general-purpose NAPL simulator. Yes Virulo a/ Probabilistic model for predicting leaching of viruses in unsaturated

  17. Whole-body PET parametric imaging employing direct 4D nested reconstruction and a generalized non-linear Patlak model

    NASA Astrophysics Data System (ADS)

    Karakatsanis, Nicolas A.; Rahmim, Arman

    2014-03-01

    Graphical analysis is employed in the research setting to provide quantitative estimation of PET tracer kinetics from dynamic images at a single bed. Recently, we proposed a multi-bed dynamic acquisition framework enabling clinically feasible whole-body parametric PET imaging by employing post-reconstruction parameter estimation. In addition, by incorporating linear Patlak modeling within the system matrix, we enabled direct 4D reconstruction in order to effectively circumvent noise amplification in dynamic whole-body imaging. However, direct 4D Patlak reconstruction exhibits relatively slow convergence due to the presence of non-sparse spatial correlations in temporal kinetic analysis. In addition, the standard Patlak model does not account for reversible uptake, thus underestimating the influx rate Ki. We have developed a novel whole-body PET parametric reconstruction framework in the STIR platform, a widely employed open-source reconstruction toolkit, (a) enabling accelerated convergence of direct 4D multi-bed reconstruction by employing a nested algorithm to decouple the temporal parameter estimation from the spatial image update process, and (b) enhancing the quantitative performance, particularly in regions with reversible uptake, by pursuing a non-linear generalized Patlak 4D nested reconstruction algorithm. A set of published kinetic parameters and the XCAT phantom were employed for the simulation of dynamic multi-bed acquisitions. Quantitative analysis of the Ki images demonstrated considerable acceleration in the convergence of the nested 4D whole-body Patlak algorithm. In addition, our simulated and patient whole-body data in the post-reconstruction domain indicated the quantitative benefits of our extended generalized Patlak 4D nested reconstruction for tumor diagnosis and treatment response monitoring.
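
    For reference, the standard linear Patlak model and the generalized (reversible) form alluded to in the abstract are commonly written as follows (notation assumed from the Patlak literature, with C_p the plasma input and C_t the tissue activity):

```latex
% Standard (irreversible) linear Patlak model:
\[
  C_t(t) = K_i \int_0^t C_p(\tau)\, d\tau + V\, C_p(t),
\]
% Generalized non-linear Patlak model with an uptake-reversibility
% rate constant k_loss:
\[
  C_t(t) = K_i \int_0^t C_p(\tau)\, e^{-k_{\mathrm{loss}}\,(t - \tau)}\, d\tau + V\, C_p(t).
\]
```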

  18. Commentary on the statistical properties of noise and its implication on general linear models in functional near-infrared spectroscopy.

    PubMed

    Huppert, Theodore J

    2016-01-01

    Functional near-infrared spectroscopy (fNIRS) is a noninvasive neuroimaging technique that uses low levels of light to measure changes in cerebral blood oxygenation levels. In the majority of NIRS functional brain studies, analysis of this data is based on a statistical comparison of hemodynamic levels between a baseline and task or between multiple task conditions by means of a linear regression model: the so-called general linear model. Although these methods are similar to their implementation in other fields, particularly for functional magnetic resonance imaging, the specific application of these methods in fNIRS research differs in several key ways related to the sources of noise and artifacts unique to fNIRS. In this brief communication, we discuss the application of linear regression models in fNIRS and the modifications needed to generalize these models in order to deal with structured (colored) noise due to systemic physiology and noise heteroscedasticity due to motion artifacts. The objective of this work is to present an overview of these noise properties in the context of the linear model as it applies to fNIRS data. This work is aimed at explaining these mathematical issues to the general fNIRS experimental researcher but is not intended to be a complete mathematical treatment of these concepts.
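
    One standard remedy for the colored-noise problem discussed here is AR(1) prewhitening of the regression; a hedged sketch of the generic Cochrane-Orcutt-style procedure (illustrative, not the commentary's toolbox):

```python
import numpy as np

# Prewhitening a general linear model whose residuals are AR(1)
# correlated: estimate the AR coefficient from the naive residuals,
# filter both y and the design X, then refit by ordinary least squares.
rng = np.random.default_rng(9)
n = 500
X = np.column_stack([np.ones(n), np.sin(np.linspace(0, 20, n))])  # design

eps = np.zeros(n)                        # AR(1) noise, rho = 0.8
for t in range(1, n):
    eps[t] = 0.8 * eps[t - 1] + rng.normal(scale=0.3)
y = X @ np.array([1.0, 0.5]) + eps

beta, *_ = np.linalg.lstsq(X, y, rcond=None)      # naive OLS fit
r = y - X @ beta
rho = np.corrcoef(r[:-1], r[1:])[0, 1]            # AR(1) estimate

yw = y[1:] - rho * y[:-1]                         # prewhitened data
Xw = X[1:] - rho * X[:-1]
beta_w, *_ = np.linalg.lstsq(Xw, yw, rcond=None)
print("OLS:", beta, "prewhitened:", beta_w)
```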

  19. Cerebral microbleeds are associated with cognitive decline and dementia: the Rotterdam Study

    PubMed Central

    Akoudad, Saloua; Wolters, Frank J.; Viswanathan, Anand; de Bruijn, Renée F.; van der Lugt, Aad; Hofman, Albert; Koudstaal, Peter J.; Ikram, M. Arfan; Vernooij, Meike W.

    2018-01-01

    Importance: Cerebral microbleeds are hypothesized downstream markers of brain damage caused by both vascular and amyloid pathological mechanisms. To date, it remains unclear whether their presence is associated with cognitive deterioration in the general population. Objective: To determine whether microbleeds, and more specifically microbleed count and location, are associated with an increased risk of cognitive impairment and dementia in the general population. Design: Prospective population-based Rotterdam Study. Setting: General community. Participants: In the Rotterdam Study, we assessed presence, number, and location of microbleeds at baseline (2005-2011) on brain MRI of 4,841 participants aged ≥45 years. Participants underwent neuropsychological testing at two time points on average 5.9 years (SD 0.6) apart, and were followed for incident dementia throughout the study period until 2013. The association of microbleeds with cognitive decline and dementia was studied using multiple linear regression, linear mixed-effects modeling, and Cox proportional hazards. Exposure: Cerebral microbleed presence, location, and number. Main outcomes: Cognitive decline and dementia. Results: Microbleed prevalence was 15.3% (median count 1, range 1-88). Presence of more than four microbleeds was associated with cognitive decline. Lobar (with or without cerebellar) microbleeds were associated with decline in executive functions, information processing, and memory function, whereas microbleeds in other brain regions were associated with decline in information processing and motor speed. After a mean follow-up of 4.8 years (SD 1.4), 72 people developed dementia, of whom 53 had Alzheimer's disease. Presence of microbleeds was associated with an increased risk of dementia (age, sex, and education adjusted HR 2.02, 95% CI 1.25-3.24), including Alzheimer's dementia (HR 2.10, 95% CI 1.21-3.64). Conclusions and relevance: In the general population, a high microbleed count was associated with an increased risk of cognitive deterioration and dementia. Microbleeds thus mark the presence of diffuse vascular and neurodegenerative brain damage. PMID:27271785

  20. Implementing general quantum measurements on linear optical and solid-state qubits

    NASA Astrophysics Data System (ADS)

    Ota, Yukihiro; Ashhab, Sahel; Nori, Franco

    2013-03-01

    We show a systematic construction for implementing general measurements on a single qubit, including both strong (or projection) and weak measurements. We mainly focus on linear optical qubits. The present approach is composed of simple and feasible elements, i.e., beam splitters, wave plates, and polarizing beam splitters. We show how the parameters characterizing the measurement operators are controlled by the linear optical elements. We also propose a method for the implementation of general measurements in solid-state qubits. Furthermore, we show an interesting application of the general measurements, i.e., entanglement amplification. YO is partially supported by the SPDR Program, RIKEN. SA and FN acknowledge ARO, NSF grant No. 0726909, JSPS-RFBR contract No. 12-02-92100, Grant-in-Aid for Scientific Research (S), MEXT Kakenhi on Quantum Cybernetics, and the JSPS via its FIRST program.

  1. Linear systems with structure group and their feedback invariants

    NASA Technical Reports Server (NTRS)

    Martin, C.; Hermann, R.

    1977-01-01

    A general method described by Hermann and Martin (1976) for the study of the feedback invariants of linear systems is considered. It is shown that this method, which makes use of ideas from topology and algebraic geometry, is very useful in the investigation of feedback problems for which the classical methods are not suitable. The transfer function as a curve in the Grassmannian is examined. The general concepts studied in the context of specific systems and applications are organized in terms of the theory of Lie groups and algebraic geometry. Attention is given to linear systems which have a structure group, linear mechanical systems, and feedback invariants. The investigation shows that Lie group techniques are powerful and useful tools for the analysis of the feedback structure of linear systems.

  2. Confidence Intervals for Assessing Heterogeneity in Generalized Linear Mixed Models

    ERIC Educational Resources Information Center

    Wagler, Amy E.

    2014-01-01

    Generalized linear mixed models are frequently applied to data with clustered categorical outcomes. The effect of clustering on the response is often difficult to practically assess partly because it is reported on a scale on which comparisons with regression parameters are difficult to make. This article proposes confidence intervals for…

  3. Estimation of Complex Generalized Linear Mixed Models for Measurement and Growth

    ERIC Educational Resources Information Center

    Jeon, Minjeong

    2012-01-01

    Maximum likelihood (ML) estimation of generalized linear mixed models (GLMMs) is technically challenging because of the intractable likelihoods that involve high dimensional integrations over random effects. The problem is magnified when the random effects have a crossed design and thus the data cannot be reduced to small independent clusters. A…

  4. EVALUATING PREDICTIVE ERRORS OF A COMPLEX ENVIRONMENTAL MODEL USING A GENERAL LINEAR MODEL AND LEAST SQUARE MEANS

    EPA Science Inventory

    A General Linear Model (GLM) was used to evaluate the deviation of predicted values from expected values for a complex environmental model. For this demonstration, we used the default level interface of the Regional Mercury Cycling Model (R-MCM) to simulate epilimnetic total mer...

  5. Modeling containment of large wildfires using generalized linear mixed-model analysis

    Treesearch

    Mark Finney; Isaac C. Grenfell; Charles W. McHugh

    2009-01-01

    Billions of dollars are spent annually in the United States to contain large wildland fires, but the factors contributing to suppression success remain poorly understood. We used a regression model (generalized linear mixed-model) to model containment probability of individual fires, assuming that containment was a repeated-measures problem (fixed effect) and...

  6. Posterior propriety for hierarchical models with log-likelihoods that have norm bounds

    DOE PAGES

    Michalak, Sarah E.; Morris, Carl N.

    2015-07-17

    Statisticians often use improper priors to express ignorance or to provide good frequency properties, requiring that posterior propriety be verified. Our paper addresses generalized linear mixed models, GLMMs, when Level I parameters have Normal distributions, with many commonly-used hyperpriors. It provides easy-to-verify sufficient posterior propriety conditions based on dimensions, matrix ranks, and exponentiated norm bounds, ENBs, for the Level I likelihood. Since many familiar likelihoods have ENBs, which is often verifiable via log-concavity and MLE finiteness, our novel use of ENBs permits unification of posterior propriety results and posterior MGF/moment results for many useful Level I distributions, including those commonly used with multilevel generalized linear models, e.g., GLMMs and hierarchical generalized linear models, HGLMs. Furthermore, those who need to verify existence of posterior distributions or of posterior MGFs/moments for a multilevel generalized linear model given a proper or improper multivariate F prior as in Section 1 should find the required results in Sections 1 and 2 and Theorem 3 (GLMMs), Theorem 4 (HGLMs), or Theorem 5 (posterior MGFs/moments).

  7. The general linear inverse problem - Implication of surface waves and free oscillations for earth structure.

    NASA Technical Reports Server (NTRS)

    Wiggins, R. A.

    1972-01-01

    The discrete general linear inverse problem reduces to a set of m equations in n unknowns. There is generally no unique solution, but we can find k linear combinations of parameters for which restraints are determined. The parameter combinations are given by the eigenvectors of the coefficient matrix. The number k is determined by the ratio of the standard deviations of the observations to the allowable standard deviations in the resulting solution. Various linear combinations of the eigenvectors can be used to determine parameter resolution and information distribution among the observations. Thus we can determine where information comes from among the observations and exactly how it constrains the set of possible models. The application of such analyses to surface-wave and free-oscillation observations indicates that (1) phase, group, and amplitude observations for any particular mode provide basically the same type of information about the model; (2) observations of overtones can enhance the resolution considerably; and (3) the degree of resolution has generally been overestimated for many model determinations made from surface waves.

  8. Estimation and Selection via Absolute Penalized Convex Minimization And Its Multistage Adaptive Applications

    PubMed Central

    Huang, Jian; Zhang, Cun-Hui

    2013-01-01

    The ℓ1-penalized method, or the Lasso, has emerged as an important tool for the analysis of large data sets. Many important results have been obtained for the Lasso in linear regression which have led to a deeper understanding of high-dimensional statistical problems. In this article, we consider a class of weighted ℓ1-penalized estimators for convex loss functions of a general form, including the generalized linear models. We study the estimation, prediction, selection and sparsity properties of the weighted ℓ1-penalized estimator in sparse, high-dimensional settings where the number of predictors p can be much larger than the sample size n. Adaptive Lasso is considered as a special case. A multistage method is developed to approximate concave regularized estimation by applying an adaptive Lasso recursively. We provide prediction and estimation oracle inequalities for single- and multi-stage estimators, a general selection consistency theorem, and an upper bound for the dimension of the Lasso estimator. Important models including the linear regression, logistic regression and log-linear models are used throughout to illustrate the applications of the general results. PMID:24348100
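
    A hedged sketch of one-step adaptive (weighted ℓ1) estimation via the usual column-rescaling trick, using scikit-learn's plain Lasso; the paper's multistage procedure applies this recursively with weights from the previous stage, and all tuning values below are illustrative:

```python
import numpy as np
from sklearn.linear_model import Lasso

# One-step adaptive Lasso: divide each column of X by its weight, run
# a plain Lasso, then scale the coefficients back.
rng = np.random.default_rng(10)
n, p = 200, 50
X = rng.normal(size=(n, p))
beta = np.zeros(p); beta[:3] = [3.0, -2.0, 1.5]   # sparse truth
y = X @ beta + rng.normal(size=n)

init = Lasso(alpha=0.1).fit(X, y).coef_           # stage-1 estimate
w = 1.0 / (np.abs(init) + 1e-4)                   # adaptive weights
ada = Lasso(alpha=0.1).fit(X / w, y)              # weighted l1 via rescaling
beta_hat = ada.coef_ / w
print("selected predictors:", np.flatnonzero(np.abs(beta_hat) > 1e-8))
```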

  9. Energy-momentum tensors in linearized Einstein's theory and massive gravity: The question of uniqueness

    NASA Astrophysics Data System (ADS)

    Bičák, Jiří; Schmidt, Josef

    2016-01-01

    The question of the uniqueness of energy-momentum tensors in the linearized general relativity and in the linear massive gravity is analyzed without using variational techniques. We start from a natural ansatz for the form of the tensor (for example, that it is a linear combination of the terms quadratic in the first derivatives), and require it to be conserved as a consequence of field equations. In the case of the linear gravity in a general gauge we find a four-parametric system of conserved second-rank tensors which contains a unique symmetric tensor. This turns out to be the linearized Landau-Lifshitz pseudotensor employed often in full general relativity. We elucidate the relation of the four-parametric system to the expression proposed recently by Butcher et al. "on physical grounds" in harmonic gauge, and we show that the results coincide in the case of high-frequency waves in vacuum after a suitable averaging. In the massive gravity we show how one can arrive at the expression which coincides with the "generalized linear symmetric Landau-Lifshitz" tensor. However, there exists another uniquely given simpler symmetric tensor which can be obtained by adding the divergence of a suitable superpotential to the canonical energy-momentum tensor following from the Fierz-Pauli action. In contrast to the symmetric tensor derived by the Belinfante procedure which involves the second derivatives of the field variables, this expression contains only the field and its first derivatives. It is simpler than the generalized Landau-Lifshitz tensor but both yield the same total quantities since they differ by the divergence of a superpotential. We also discuss the role of the gauge conditions in the proofs of the uniqueness. In the Appendix, the symbolic tensor manipulation software cadabra is briefly described. It is very effective in obtaining various results which would otherwise require lengthy calculations.

  10. Accuracy assessment of the linear Poisson-Boltzmann equation and reparametrization of the OBC generalized Born model for nucleic acids and nucleic acid-protein complexes.

    PubMed

    Fogolari, Federico; Corazza, Alessandra; Esposito, Gennaro

    2015-04-05

    The generalized Born model in the Onufriev, Bashford, and Case (Onufriev et al., Proteins: Struct Funct Genet 2004, 55, 383) implementation has emerged as one of the best compromises between accuracy and speed of computation. For simulations of nucleic acids, however, a number of issues should be addressed: (1) the generalized Born model is based on a linear model, and the linearization of the reference Poisson-Boltzmann equation may be questioned for highly charged systems such as nucleic acids; (2) although much attention has been given to potentials, solvation forces could be much less sensitive to linearization than the potentials; and (3) the accuracy of the Onufriev-Bashford-Case (OBC) model for nucleic acids depends on fine tuning of parameters. Here, we show that the linearization of the Poisson-Boltzmann equation has mild effects on computed forces, and that with an optimal choice of the OBC model parameters, solvation forces, essential for molecular dynamics simulations, agree well with those computed using the reference Poisson-Boltzmann model. © 2015 Wiley Periodicals, Inc.

  11. Visuo-manual tracking: does intermittent control with aperiodic sampling explain linear power and non-linear remnant without sensorimotor noise?

    PubMed

    Gollee, Henrik; Gawthrop, Peter J; Lakie, Martin; Loram, Ian D

    2017-11-01

    A human controlling an external system is described most easily and conventionally as linearly and continuously translating sensory input to motor output, with the inevitable output remnant, non-linearly related to the input, attributed to sensorimotor noise. Recent experiments show sustained manual tracking involves repeated refractoriness (insensitivity to sensory information for a certain duration), with the temporary 200-500 ms periods of irresponsiveness to sensory input making the control process intrinsically non-linear. This evidence calls for re-examination of the extent to which random sensorimotor noise is required to explain the non-linear remnant. This investigation of manual tracking shows how the full motor output (linear component and remnant) can be explained mechanistically by aperiodic sampling triggered by prediction error thresholds. Whereas broadband physiological noise is general to all processes, aperiodic sampling is associated with sensorimotor decision making within specific frontal, striatal and parietal networks; we conclude that manual tracking utilises such slow serial decision making pathways up to several times per second. The human operator is described adequately by linear translation of sensory input to motor output. Motor output also always includes a non-linear remnant resulting from random sensorimotor noise from multiple sources and non-linear input transformations, for example thresholds or refractory periods. Recent evidence showed that manual tracking incurs substantial, serial refractoriness (insensitivity to sensory information of 350 and 550 ms for 1st and 2nd order systems respectively). Our two questions are: (i) What are the comparative merits of explaining the non-linear remnant using noise or non-linear transformations? (ii) Can non-linear transformations represent serial motor decision making within the sensorimotor feedback loop intrinsic to tracking? Twelve participants (instructed to act in three prescribed ways) manually controlled two systems (1st and 2nd order) subject to a periodic multi-sine disturbance. Joystick power was analysed using three models: continuous linear control (CC), continuous linear control with calculated noise spectrum (CCN), and intermittent control with aperiodic sampling triggered by prediction error thresholds (IC). Unlike the linear mechanism, the intermittent control mechanism explained the majority of total power (linear and remnant) (77-87% vs. 8-48%, IC vs. CC). Between conditions, IC used thresholds and distributions of open-loop intervals consistent with, respectively, instructions and previously measured, model-independent values, whereas CCN required changes in noise spectrum deviating from broadband, signal-dependent noise. We conclude that manual tracking uses open-loop predictive control with aperiodic sampling. Because aperiodic sampling is inherent to serial decision making within previously identified, specific frontal, striatal and parietal networks, we suggest that these structures are intimately involved in visuo-manual tracking. © 2017 The Authors. The Journal of Physiology published by John Wiley & Sons Ltd on behalf of The Physiological Society.

  12. Asymptotic aspect of derivations in Banach algebras.

    PubMed

    Roh, Jaiok; Chang, Ick-Soon

    2017-01-01

    We prove that every approximate linear left derivation on a semisimple Banach algebra is continuous. We also consider linear derivations on Banach algebras: we first study conditions on a linear derivation on a Banach algebra, then examine functional inequalities related to a linear derivation and their stability. Finally, we treat central linear derivations with radical ranges on semiprime Banach algebras and a continuous linear generalized left derivation on a semisimple Banach algebra.

  13. Plant uptake of elements in soil and pore water: field observations versus model assumptions.

    PubMed

    Raguž, Veronika; Jarsjö, Jerker; Grolander, Sara; Lindborg, Regina; Avila, Rodolfo

    2013-09-15

    Contaminant concentrations in various edible plant parts transfer hazardous substances from polluted areas to animals and humans. The accurate prediction of plant uptake of elements is therefore of significant importance. The processes involved contain many interacting factors and are, as such, complex. In contrast, the most common way to quantify element transfer from soils into plants is relatively simple, using an empirical soil-to-plant transfer factor (TF). This practice is based on theoretical assumptions that have previously been shown not to be generally valid. Using field data on concentrations of 61 basic elements in spring barley, soil and pore water at four agricultural sites in mid-eastern Sweden, we quantify element-specific TFs. Our aim is to investigate to what extent observed element-specific uptake is consistent with TF model assumptions and to what extent TFs can be used to predict observed differences in concentrations between different plant parts (root, stem and ear). Results show that for most elements, plant-ear concentrations are not linearly related to bulk soil concentrations, which is congruent with previous studies. This behaviour violates a basic TF model assumption of linearity. However, substantially better linear correlations are found when weighted average element concentrations in whole plants are used for TF estimation. The highest number of linearly behaving elements was found when relating average plant concentrations to soil pore-water concentrations. In contrast to other elements, essential elements (micronutrients and macronutrients) exhibited relatively small differences in concentration between different plant parts. Generally, the TF model was shown to work reasonably well for micronutrients, whereas it did not for macronutrients. The results also suggest that plant uptake of elements from sources other than the soil compartment (e.g. from air) may be non-negligible. Copyright © 2013 Elsevier Ltd. All rights reserved.
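
    To illustrate what testing the TF model's linearity assumption looks like in practice, here is a hedged sketch: site-wise TFs are just concentration ratios, and the linearity assumption can be probed with a simple regression. The concentration arrays are made-up stand-ins, not the study's data.

      import numpy as np

      # Hypothetical concentrations at four sites (mg/kg); illustrative only.
      c_soil = np.array([12.0, 25.0, 40.0, 61.0])    # bulk soil
      c_plant = np.array([0.9, 1.6, 3.1, 4.2])       # whole plant

      tf = c_plant / c_soil                           # site-wise transfer factors
      slope, intercept = np.polyfit(c_soil, c_plant, 1)  # C_plant = a*C_soil + b
      r = np.corrcoef(c_soil, c_plant)[0, 1]

      print("TFs per site:", np.round(tf, 3))
      print(f"slope={slope:.3f}, r={r:.3f}  (TF model assumes b ~ 0 and high r)")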

  14. Apparent multifractality of self-similar Lévy processes

    NASA Astrophysics Data System (ADS)

    Zamparo, Marco

    2017-07-01

    Scaling properties of time series are usually studied in terms of the scaling laws of empirical moments, which are the time-average estimates of moments of the dynamic variable. Nonlinearities in the scaling function of empirical moments are generally regarded as a sign of multifractality in the data. We show that, except for Brownian motion, this method fails to disclose the correct monofractal nature of self-similar Lévy processes. We prove that for this class of processes it produces apparent multifractality, characterised by a piecewise-linear scaling function with two different regimes which match at the stability index of the considered process. This result is motivated by previous numerical evidence. It is obtained by introducing an appropriate stochastic normalisation that is able to cure empirical moments, without hiding their dependence on time, when the moments they aim to estimate do not exist.
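
    The effect is easy to reproduce numerically. The following hedged sketch estimates the empirical scaling function of a simulated alpha-stable Lévy process; the stability index, sample size, and lag grid are illustrative choices, not the paper's setup.

      import numpy as np
      from scipy.stats import levy_stable

      # Simulate a symmetric alpha-stable Levy process (alpha is an assumption).
      alpha = 1.5
      incs = levy_stable.rvs(alpha, 0.0, size=100_000, random_state=42)
      x = np.cumsum(incs)

      lags = np.array([1, 2, 4, 8, 16, 32, 64])
      for q in (0.5, 1.0, 1.5, 2.0, 3.0):
          s = [np.mean(np.abs(x[lag:] - x[:-lag]) ** q) for lag in lags]
          zeta = np.polyfit(np.log(lags), np.log(s), 1)[0]  # scaling exponent
          # The monofractal prediction is q/alpha; the time-average estimator
          # bends towards ~1 for q > alpha, the apparent bifractality above.
          print(f"q={q:3.1f}  zeta_hat={zeta:5.2f}  q/alpha={q/alpha:4.2f}")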

  15. Ultrasonic probing of the fracture process zone in rock using surface waves

    NASA Technical Reports Server (NTRS)

    Swanson, P. L.; Spetzler, H.

    1984-01-01

    A microcrack process zone is frequently suggested to accompany macrofractures in rock and play an important role in the resistance to fracture propagation. Attenuation of surface waves propagating through mode I fractures in wedge-loaded double-cantilever beam specimens of Westerly granite has been recorded in an attempt to characterize the structure of the fracture process zone. The ultrasonic measurements do not support the generally accepted model of a macroscopic fracture that incrementally propagates with the accompaniment of a cloud of microcracks. Instead, fractures in Westerly granite appear to form as gradually separating surfaces within a zone having a width of a few millimeters and a length of several tens of millimeters. A fracture process zone of this size would necessitate the use of meter-sized specimens in order for linear elastic fracture mechanics to be applicable.

  16. SUPERPOSITION OF POLYTROPES IN THE INNER HELIOSHEATH

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Livadiotis, G., E-mail: glivadiotis@swri.edu

    2016-03-15

    This paper presents a possible generalization of the equation of state and Bernoulli's integral when a superposition of polytropic processes applies in space and astrophysical plasmas. The theory of polytropic thermodynamic processes for a fixed polytropic index is extended for a superposition of polytropic indices. In general, the superposition may be described by any distribution of polytropic indices, but emphasis is placed on a Gaussian distribution. The polytropic density–temperature relation has been used in numerous analyses of space plasma data. This linear relation on a log–log scale is now generalized to a concave-downward parabola that is able to describe the observations better. The model of the Gaussian superposition of polytropes is successfully applied in the proton plasma of the inner heliosheath. The estimated mean polytropic index is near zero, indicating the dominance of isobaric thermodynamic processes in the sheath, similar to other previously published analyses. By computing Bernoulli's integral and applying its conservation along the equator of the inner heliosheath, the magnetic field in the inner heliosheath is estimated, B ∼ 2.29 ± 0.16 μG. The constructed normalized histogram of the values of the magnetic field is similar to that derived from a different method that uses the concept of large-scale quantization, bringing incredible insights to this novel theory.

  17. Cohort Differences in Cognitive Aging in the Longitudinal Aging Study Amsterdam.

    PubMed

    Brailean, Anamaria; Huisman, Martijn; Prince, Martin; Prina, A Matthew; Deeg, Dorly J H; Comijs, Hannie

    2016-09-30

    This study aims to examine cohort differences in cognitive performance and rates of change in episodic memory, processing speed, inductive reasoning, and general cognitive performance, and to investigate whether these cohort effects may be accounted for by educational attainment. The first cohort (N = 705) was born between 1920 and 1930, whereas the second cohort (N = 646) was born between 1931 and 1941. Both birth cohorts were aged 65 to 75 years at baseline and were followed up 3 and 6 years later. Data were analyzed using linear mixed models. The later born cohort had better general cognitive performance, inductive reasoning, and processing speed at baseline, but cohort differences in inductive reasoning and general cognitive performance disappeared after adjusting for education. The later born cohort showed steeper decline in processing speed. Memory decline was steeper in the earlier born cohort, but only from Time 1 to Time 3, when the same memory test was administered. Education did not account for cohort differences in cognitive decline. The later born cohort showed better initial performance in certain cognitive abilities, but no better preservation of cognitive abilities over time compared with the earlier born cohort. These findings carry implications for healthy cognitive aging. © The Author 2016. Published by Oxford University Press on behalf of The Gerontological Society of America.

  18. Superposition of Polytropes in the Inner Heliosheath

    NASA Astrophysics Data System (ADS)

    Livadiotis, G.

    2016-03-01

    This paper presents a possible generalization of the equation of state and Bernoulli's integral when a superposition of polytropic processes applies in space and astrophysical plasmas. The theory of polytropic thermodynamic processes for a fixed polytropic index is extended for a superposition of polytropic indices. In general, the superposition may be described by any distribution of polytropic indices, but emphasis is placed on a Gaussian distribution. The polytropic density-temperature relation has been used in numerous analyses of space plasma data. This linear relation on a log-log scale is now generalized to a concave-downward parabola that is able to describe the observations better. The model of the Gaussian superposition of polytropes is successfully applied in the proton plasma of the inner heliosheath. The estimated mean polytropic index is near zero, indicating the dominance of isobaric thermodynamic processes in the sheath, similar to other previously published analyses. By computing Bernoulli's integral and applying its conservation along the equator of the inner heliosheath, the magnetic field in the inner heliosheath is estimated, B ˜ 2.29 ± 0.16 μG. The constructed normalized histogram of the values of the magnetic field is similar to that derived from a different method that uses the concept of large-scale quantization, bringing incredible insights to this novel theory.
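
    To make the generalization above concrete, the following is a hedged sketch of the log-log fitting forms involved; the coefficients c_0, c_1, c_2 are illustrative labels, not values or a derivation from the paper.

      % Single polytropic index a: T \propto n^{a-1}, i.e. linear in log-log:
      %   \ln T = c_0 + (a - 1)\,\ln n
      % Gaussian superposition of indices: a quadratic term appears, fitted
      % as the concave-downward parabola reported above:
      %   \ln T = c_0 + c_1\,\ln n + c_2\,(\ln n)^2 , \quad c_2 < 0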

  19. Is There a Critical Distance for Fickian Transport? - a Statistical Approach to Sub-Fickian Transport Modelling in Porous Media

    NASA Astrophysics Data System (ADS)

    Most, S.; Nowak, W.; Bijeljic, B.

    2014-12-01

    Transport processes in porous media are frequently simulated as particle movement. This process can be formulated as a stochastic process of particle position increments. At the pore scale, the geometry and micro-heterogeneities prohibit the commonly made assumption of independent and normally distributed increments to represent dispersion. Many recent particle methods seek to loosen this assumption. Recent experimental data suggest that we have not yet reached the end of the need to generalize, because particle increments show statistical dependency beyond linear correlation and over many time steps. The goal of this work is to better understand the validity regions of commonly made assumptions. We investigate after what transport distances one can observe: (1) a statistical dependence between increments that can be modelled as an order-k Markov process collapsing to order 1, which would be the Markovian distance for the process, where the validity of yet-unexplored non-Gaussian-but-Markovian random walks would start; (2) a bivariate statistical dependence that simplifies to a multi-Gaussian dependence based on simple linear correlation (validity of correlated PTRW); and (3) complete absence of statistical dependence (validity of classical PTRW/CTRW). The approach is to derive a statistical model for pore-scale transport from a powerful experimental data set via copula analysis. The model is formulated as a non-Gaussian, mutually dependent Markov process of higher order, which allows us to investigate the validity ranges of simpler models.

  20. General Tool for Evaluating High-Contrast Coronagraphic Telescope Performance Error Budgets

    NASA Technical Reports Server (NTRS)

    Marchen, Luis F.

    2011-01-01

    The Coronagraph Performance Error Budget (CPEB) tool automates many of the key steps required to evaluate the scattered starlight contrast in the dark hole of a space-based coronagraph. The tool uses a Code V prescription of the optical train, and uses MATLAB programs to call ray-trace code that generates linear beam-walk and aberration sensitivity matrices for motions of the optical elements and line-of-sight pointing, with and without controlled fine-steering mirrors (FSMs). The sensitivity matrices are imported by macros into Excel 2007, where the error budget is evaluated. The user specifies the particular optics of interest, and chooses the quality of each optic from a predefined set of PSDs. The spreadsheet creates a nominal set of thermal and jitter motions, and combines that with the sensitivity matrices to generate an error budget for the system. CPEB also contains a combination of form and ActiveX controls with Visual Basic for Applications code to allow for user interaction in which the user can perform trade studies such as changing engineering requirements, and identifying and isolating stringent requirements. It contains summary tables and graphics that can be instantly used for reporting results in view graphs. The entire process to obtain a coronagraphic telescope performance error budget has been automated into three stages: conversion of optical prescription from Zemax or Code V to MACOS (in-house optical modeling and analysis tool), a linear models process, and an error budget tool process. The first process was improved by developing a MATLAB package based on the Class Constructor Method with a number of user-defined functions that allow the user to modify the MACOS optical prescription. The second process was modified by creating a MATLAB package that contains user-defined functions that automate the process. The user interfaces with the process by utilizing an initialization file where the user defines the parameters of the linear model computations. Other than this, the process is fully automated. The third process was developed based on the Terrestrial Planet Finder coronagraph Error Budget Tool, but was fully automated by using VBA code, form, and ActiveX controls.

  1. Friendship Dissolution Within Social Networks Modeled Through Multilevel Event History Analysis

    PubMed Central

    Dean, Danielle O.; Bauer, Daniel J.; Prinstein, Mitchell J.

    2018-01-01

    A social network perspective can bring important insight into the processes that shape human behavior. Longitudinal social network data, measuring relations between individuals over time, has become increasingly common—as have the methods available to analyze such data. A friendship duration model utilizing discrete-time multilevel survival analysis with a multiple membership random effect structure is developed and applied here to study the processes leading to undirected friendship dissolution within a larger social network. While the modeling framework is introduced in terms of understanding friendship dissolution, it can be used to understand microlevel dynamics of a social network more generally. These models can be fit with standard generalized linear mixed-model software, after transforming the data to a pair-period data set. An empirical example highlights how the model can be applied to understand the processes leading to friendship dissolution between high school students, and a simulation study is used to test the use of the modeling framework under representative conditions that would be found in social network data. Advantages of the modeling framework are highlighted, and potential limitations and future directions are discussed. PMID:28463022
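
    The pair-period transformation described above lends itself to a compact illustration. The sketch below is a simplified, hedged version: the multiple-membership random effects are omitted, leaving a plain discrete-time logistic regression on synthetic dyad data; the covariate and coefficients are invented for illustration.

      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      # Build a synthetic pair-period data set: one row per friendship pair
      # per observation period, until the pair dissolves or follow-up ends.
      rng = np.random.default_rng(1)
      rows = []
      for pair in range(300):
          similarity = rng.normal()               # hypothetical dyad covariate
          for period in range(1, 5):
              p = 1 / (1 + np.exp(-(-2.0 + 0.3 * period - 0.8 * similarity)))
              dissolved = rng.random() < p
              rows.append(dict(pair=pair, period=period,
                               similarity=similarity, dissolved=int(dissolved)))
              if dissolved:
                  break                            # pair leaves the risk set

      df = pd.DataFrame(rows)
      # Discrete-time hazard: logit of dissolution in period t given survival.
      model = smf.logit("dissolved ~ period + similarity", data=df).fit(disp=0)
      print(model.params)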

  2. Image processing using Gallium Arsenide (GaAs) technology

    NASA Technical Reports Server (NTRS)

    Miller, Warner H.

    1989-01-01

    The need to increase the information return from space-borne imaging systems has grown in the past decade. The use of multi-spectral data has resulted in the need for finer spatial resolution and greater spectral coverage. Onboard signal processing will be necessary in order to utilize the available Tracking and Data Relay Satellite System (TDRSS) communication channel at high efficiency. A generally recognized approach to increasing the efficiency of channel usage is through data compression techniques. The compression technique implemented is a differential pulse code modulation (DPCM) scheme with a non-uniform quantizer. The need to advance the state of the art of onboard processing was recognized, and a GaAs integrated circuit technology was chosen. An Adaptive Programmable Processor (APP) chip set was developed, based on an 8-bit-slice general processor. The reason for choosing this compression technique for the Multi-spectral Linear Array (MLA) instrument is described. Also given is a description of the GaAs integrated circuit chip set, which demonstrates that data compression can be performed onboard in real time at data rates on the order of 500 Mb/s.
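
    A hedged sketch of the DPCM idea named above: each sample is predicted from the running reconstruction, and only the quantized prediction residual is transmitted. The quantizer levels and test signal are illustrative assumptions, not the flight design.

      import numpy as np

      levels = np.array([-24.0, -10.0, -4.0, -1.0, 1.0, 4.0, 10.0, 24.0])  # non-uniform

      def quantize(d):
          return levels[np.argmin(np.abs(levels - d))]

      def dpcm_encode(samples):
          pred, out = 0.0, []
          for s in samples:
              q = quantize(s - pred)   # quantize the prediction residual
              out.append(q)
              pred += q                # predictor tracks the decoder state
          return np.array(out)

      def dpcm_decode(residuals):
          return np.cumsum(residuals)  # reconstruct by accumulating residuals

      signal = np.round(20 * np.sin(np.linspace(0, 3, 64)))
      recon = dpcm_decode(dpcm_encode(signal))
      print("max reconstruction error:", np.max(np.abs(recon - signal)))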

  3. Development of Activity-based Cost Functions for Cellulase, Invertase, and Other Enzymes

    NASA Astrophysics Data System (ADS)

    Stowers, Chris C.; Ferguson, Elizabeth M.; Tanner, Robert D.

    As enzyme chemistry plays an increasingly important role in the chemical industry, cost analysis of these enzymes becomes a necessity. In this paper, we examine the aspects that affect the cost of enzymes based upon enzyme activity. The study builds on a previously developed objective function that quantifies the tradeoffs in enzyme purification via the foam fractionation process (Cherry et al., Braz J Chem Eng 17:233-238, 2000). A generalized cost function is developed from our results that could be used to aid in both industrial and lab-scale chemical processing. The generalized cost function shows several nonobvious results that could lead to significant savings. Additionally, the parameters involved in the operation and scaling up of enzyme processing could be optimized to minimize costs. We show that there are typically three regimes in the enzyme cost analysis function: the low-activity prelinear region, the moderate-activity linear region, and the high-activity power-law region. The overall form of the cost analysis function appears to robustly fit the power-law form.
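
    As a purely illustrative sketch of a three-regime cost function of the kind described above (the breakpoints, coefficients, and functional forms within each regime are invented assumptions, not the paper's fitted function):

      import numpy as np

      def enzyme_cost(activity, a1=10.0, a2=100.0, k_lin=2.0, c0=5.0, p=1.6):
          a = np.asarray(activity, dtype=float)
          cost = np.empty_like(a)
          low = a < a1                         # low-activity prelinear regime
          mid = (a >= a1) & (a < a2)           # moderate-activity linear regime
          high = a >= a2                       # high-activity power-law regime
          cost[low] = c0 + k_lin * a1 * (a[low] / a1) ** 2  # shallow start
          cost[mid] = c0 + k_lin * a[mid]
          c_break = c0 + k_lin * a2            # continuity at the breakpoint
          cost[high] = c_break * (a[high] / a2) ** p
          return cost

      print(enzyme_cost([1, 50, 500]))  # one value from each regime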

  4. General linear methods and friends: Toward efficient solutions of multiphysics problems

    NASA Astrophysics Data System (ADS)

    Sandu, Adrian

    2017-07-01

    Time-dependent multiphysics partial differential equations are of great practical importance, as they model diverse phenomena that appear in mechanical and chemical engineering, aeronautics, astrophysics, meteorology and oceanography, financial modeling, environmental sciences, etc. There is no single best time discretization for the complex multiphysics systems of practical interest. We discuss "multimethod" approaches that combine different time steps and discretizations using the rigorous frameworks provided by partitioned general linear methods and generalized-structure additive Runge-Kutta (GARK) methods.
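
    A far simpler relative of these partitioned methods conveys the "multimethod" idea: split the right-hand side additively and treat each part with a different discretization. The hedged sketch below is a basic IMEX Euler step (the stiff coefficient and non-stiff term are assumptions), not the GARK schemes themselves.

      import numpy as np

      lam = -50.0                        # stiff linear coefficient (assumption)
      f_nonstiff = lambda y: np.sin(y)   # mild non-linear part (assumption)

      def imex_euler(y0, dt, steps):
          y = y0
          for _ in range(steps):
              # Explicit on f_nonstiff, implicit on lam*y:
              #   y_new = y + dt*(lam*y_new + f_nonstiff(y))
              # The implicit solve is exact because the stiff part is linear.
              y = (y + dt * f_nonstiff(y)) / (1.0 - dt * lam)
          return y

      print(imex_euler(1.0, dt=0.05, steps=200))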

  5. Quantum learning of classical stochastic processes: The completely positive realization problem

    NASA Astrophysics Data System (ADS)

    Monràs, Alex; Winter, Andreas

    2016-01-01

    Among several tasks in Machine Learning, a specially important one is the problem of inferring the latent variables of a system and their causal relations with the observed behavior. A paradigmatic instance of this is the task of inferring the hidden Markov model underlying a given stochastic process. This is known as the positive realization problem (PRP) [L. Benvenuti and L. Farina, IEEE Trans. Autom. Control 49(5), 651-664 (2004)] and constitutes a central problem in machine learning. The PRP and its solutions have far-reaching consequences in many areas of systems and control theory, and are nowadays an important piece in the broad field of positive systems theory. We consider the scenario where the latent variables are quantum (i.e., quantum states of a finite-dimensional system) and the system dynamics is constrained only by physical transformations on the quantum system. The observable dynamics is then described by a quantum instrument, and the task is to determine which quantum instrument, if any, yields the process at hand by iterative application. We take as a starting point the theory of quasi-realizations, whence a description of the dynamics of the process is given in terms of linear maps on state vectors, and probabilities are given by linear functionals on the state vectors. This description, despite its remarkable resemblance to the hidden Markov model, or the iterated quantum instrument, is however devoid of any stochastic or quantum mechanical interpretation, as said maps fail to satisfy any positivity conditions. The completely positive realization problem then consists in determining whether an equivalent quantum mechanical description of the same process exists. We generalize some key results of stochastic realization theory, and show that the problem has deep connections with operator systems theory, giving possible insight into the lifting problem in quotient operator systems. Our results have potential applications in quantum machine learning, device-independent characterization and reverse-engineering of stochastic processes and quantum processors, and more generally, of dynamical processes with quantum memory [M. Guţă, Phys. Rev. A 83(6), 062324 (2011); M. Guţă and N. Yamamoto, e-print arXiv:1303.3771 (2013)].

  6. Implementation and characterization of active feed-forward for deterministic linear optics quantum computing

    NASA Astrophysics Data System (ADS)

    Böhi, P.; Prevedel, R.; Jennewein, T.; Stefanov, A.; Tiefenbacher, F.; Zeilinger, A.

    2007-12-01

    In general, quantum computer architectures that are based on the dynamical evolution of quantum states also require the processing of classical information, obtained by measurements of the actual qubits that make up the computer. This classical processing involves fast, active adaptation of subsequent measurements and real-time error correction (feed-forward), so that quantum gates and algorithms can be executed in a deterministic and hence error-free fashion. This is also true in the linear optical regime, where the quantum information is stored in the polarization state of photons. The adaptation of the photon's polarization can be achieved in a very fast manner by employing electro-optical modulators (EOMs), which change the polarization of a passing photon upon application of a high voltage. In this paper we discuss techniques for implementing fast, active feed-forward at the single-photon level and we present their application in the context of photonic quantum computing. This includes the working principles and the characterization of the EOMs as well as a description of the switching logic, both of which allow quantum computation at an unprecedented speed.

  7. Performance analysis of a GPS equipment by general linear models approach

    NASA Astrophysics Data System (ADS)

    Teodoro, M. Filomena; Gonçalves, Fernando M.; Correia, Anacleto

    2017-06-01

    One of the major challenges in processing high-accuracy long baselines is the presence of un-modelled ionospheric and tropospheric delays. There are effective mitigation strategies for ionospheric biases, such as the ionosphere-free linear combination of the L1 and L2 carrier phases, which can remove about 98% of the first-order ionospheric biases. With few exceptions, this was the solution found by LGO for the 11760 baselines processed in this research. Therefore, for successful results, an appropriate approach to the mitigation of biases due to tropospheric delays is vital. The main aim of the investigations presented in this work was to evaluate the improvements, or not, in the rate of baselines successfully produced by adopting an advanced tropospheric bias mitigation strategy as opposed to a simple tropospheric bias mitigation approach. In both cases LGO uses as a priori tropospheric model the simplified Hopfield model, improved in the first case with a zenith tropospheric scale factor per station. Since the 1D and 2D solutions present different behaviors, the two cases are analyzed individually with each strategy.
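
    For reference, the ionosphere-free combination mentioned above has the standard dual-frequency form, where Φ_L1 and Φ_L2 are the carrier-phase observables and f_1, f_2 the L1/L2 carrier frequencies:

      \Phi_{\mathrm{IF}} \;=\; \frac{f_1^{2}\,\Phi_{L1} - f_2^{2}\,\Phi_{L2}}{f_1^{2} - f_2^{2}}

    This cancels the first-order ionospheric delay, which scales as 1/f^2, consistent with the roughly 98% removal figure quoted above.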

  8. Large-amplitude nuclear motion formulated in terms of dissipation of quantum fluctuations

    NASA Astrophysics Data System (ADS)

    Kuzyakin, R. A.; Sargsyan, V. V.; Adamian, G. G.; Antonenko, N. V.

    2017-01-01

    The potential-barrier penetrability and the quasi-stationary thermal decay rate of a metastable state are formulated in terms of microscopic quantum diffusion. Apart from linear coupling in momentum between the collective and internal subsystems, the formalism embraces the more general case of linear couplings in both the momentum and the coordinates. The developed formalism is then used to describe the process of projectile-nucleus capture by a target nucleus at incident energies near and below the Coulomb barrier. The partial capture probability, which determines the cross section for formation of a dinuclear system, is derived in analytical form. The total and partial capture cross sections, mean and root-mean-square angular momenta of the formed dinuclear system, astrophysical S-factors, logarithmic derivatives, and barrier distributions are derived for various reactions. Also investigated are the effects of nuclear static deformation and neutron transfer between the interacting nuclei on the capture cross section and its isotopic dependence, and the entrance-channel effects on the capture process. The results of calculations for reactions involving both spherical and deformed nuclei are in good agreement with available experimental data.

  9. Spatial and temporal distribution of aliphatic hydrocarbons and linear alkylbenzenes in the particulate phase from a subtropical estuary (Guaratuba Bay, SW Atlantic) under seasonal population fluctuation.

    PubMed

    Dauner, Ana Lúcia L; Martins, César C

    2015-12-01

    Guaratuba Bay, a subtropical estuary located in the SW Atlantic, is under variable anthropogenic pressure throughout the year. Samples of surficial suspended particulate matter (SPM) were collected at 22 sites during three different periods to evaluate the temporal and spatial variability of aliphatic hydrocarbons (AHs) and linear alkylbenzenes (LABs). These compounds were determined by gas chromatography with flame ionization detection (GC-FID) and mass spectrometry (GC/MS). The spatial distributions of both compound classes were similar and varied among the sampling campaigns. Generally, the highest concentrations were observed during the austral summer, highlighting the importance of the increased human influence during this season. The compound distributions were also affected by the natural geochemical processes of organic matter accumulation. AHs were associated with petroleum, derived from boat and vehicle traffic, and biogenic sources, related to mangrove forests and autochthonous production. The LAB composition evidenced preferential degradation processes during the austral summer. Copyright © 2015 Elsevier B.V. All rights reserved.

  10. An Application to the Prediction of LOD Change Based on General Regression Neural Network

    NASA Astrophysics Data System (ADS)

    Zhang, X. H.; Wang, Q. J.; Zhu, J. J.; Zhang, H.

    2011-07-01

    Traditional prediction of the LOD (length of day) change was based on linear models, such as the least-squares model and the autoregressive technique. Due to the complex non-linear features of the LOD variation, the performance of the linear model predictors is not fully satisfactory. This paper applies a non-linear neural network, the general regression neural network (GRNN) model, to forecast the LOD change, and the results are analyzed and compared with those obtained with the back-propagation neural network and other models. The comparison shows that the performance of the GRNN model in the prediction of the LOD change is efficient and feasible.
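
    A GRNN is essentially Gaussian-kernel regression: the prediction is a kernel-weighted average of training targets. The hedged sketch below applies it to a toy lagged series standing in for the LOD data; the embedding, bandwidth, and series are illustrative assumptions.

      import numpy as np

      def grnn_predict(x_train, y_train, x_query, sigma=0.5):
          # Gaussian kernel weights from distances to the training inputs.
          d2 = np.sum((x_train - x_query) ** 2, axis=1)
          w = np.exp(-d2 / (2.0 * sigma ** 2))
          return np.sum(w * y_train) / np.sum(w)

      # Toy series embedded as 3-lag input vectors predicting the next value.
      rng = np.random.default_rng(0)
      series = np.sin(np.linspace(0, 20, 300)) + 0.05 * rng.normal(size=300)
      X = np.stack([series[i:i + 3] for i in range(len(series) - 3)])
      y = series[3:]

      pred = grnn_predict(X[:-1], y[:-1], X[-1])
      print(f"predicted {pred:.3f} vs actual {y[-1]:.3f}")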

  11. Short-wavelength plasma turbulence and temperature anisotropy instabilities: Recent computational progress

    DOE PAGES

    Gary, S. Peter

    2015-04-06

    Plasma turbulence consists of an ensemble of enhanced, broadband electromagnetic fluctuations, typically driven by multi-wave interactions which transfer energy in wavevector space via non-linear cascade processes. In addition, temperature anisotropy instabilities in collisionless plasmas are driven by quasi-linear wave–particle interactions which transfer particle kinetic energy to field fluctuation energy; the resulting enhanced fluctuations are typically narrowband in wavevector magnitude and direction. Whatever their sources, short-wavelength fluctuations are those at which charged particle kinetic, that is, velocity-space, properties are important; these are generally wavelengths of the order of or shorter than the ion inertial length or the thermal ion gyroradius. The purpose of this review is to summarize and interpret recent computational results concerning short-wavelength plasma turbulence, short-wavelength temperature anisotropy instabilities and relationships between the two phenomena.

  12. Event detection and localization for small mobile robots using reservoir computing.

    PubMed

    Antonelo, E A; Schrauwen, B; Stroobandt, D

    2008-08-01

    Reservoir Computing (RC) techniques use a fixed (usually randomly created) recurrent neural network, or more generally any dynamic system operating at the edge of stability, in which only a linear static readout layer is trained by standard linear regression methods. In this work, RC is used for detecting complex events in autonomous robot navigation. This can be extended to robot localization tasks that are based solely on low-range, high-noise data from a few sensors. The robot thus builds an implicit map of the environment (after learning) that is used for efficient localization by simply processing the input stream of distance sensors. These techniques are demonstrated in both a simple simulation environment and in the physically realistic Webots simulation of the commercially available e-puck robot, using several complex and even dynamic environments.
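
    A minimal echo-state-network sketch shows the RC recipe described above: a fixed random reservoir scaled to a spectral radius below one, and a linear readout fitted by ridge regression. The reservoir size, scalings, and the delayed-input toy task are illustrative assumptions, not the robot setup.

      import numpy as np

      rng = np.random.default_rng(0)
      n_res, n_in = 200, 1
      W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
      W = rng.normal(size=(n_res, n_res))
      W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius < 1

      def run_reservoir(u):
          x = np.zeros(n_res)
          states = []
          for ut in u:
              x = np.tanh(W @ x + W_in @ np.atleast_1d(ut))
              states.append(x.copy())
          return np.array(states)

      # Toy task: reproduce the input delayed by 5 steps.
      u = rng.uniform(-1, 1, 1000)
      target = np.roll(u, 5)
      S = run_reservoir(u)[100:]            # drop the transient washout
      y = target[100:]
      ridge = 1e-6                           # ridge-regression readout
      W_out = np.linalg.solve(S.T @ S + ridge * np.eye(n_res), S.T @ y)
      print("train MSE:", np.mean((S @ W_out - y) ** 2))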

  13. Modeling of Geometric Error in Linear Guide Way to Improve the Vertical Three-Axis CNC Milling Machine's Accuracy

    NASA Astrophysics Data System (ADS)

    Kwintarini, Widiyanti; Wibowo, Agung; Arthaya, Bagus M.; Yuwana Martawirya, Yatna

    2018-03-01

    The purpose of this study was to improve the accuracy of vertical three-axis CNC milling machines through a general approach based on mathematical modeling of machine-tool geometric errors. Inaccuracy in CNC machines can be caused by geometric errors, which are an important factor during both the manufacturing process and the assembly phase, and which determine whether high-accuracy machines can be built. The accuracy of the three-axis vertical milling machine is improved by identifying the geometric errors and the error position parameters in the machine tool and arranging them in a mathematical model. The geometric error in the machine tool consists of twenty-one error parameters: nine linear error parameters, nine angular error parameters, and three perpendicularity error parameters. The mathematical modeling approach computes the alignment and angular errors in the components supporting the machine motion, namely the linear guide way and linear motion elements. The purpose of this modeling approach is the identification of geometric errors, which can serve as a reference during the design, assembly, and maintenance stages to improve the accuracy of CNC machines. Mathematically modeling geometric errors in CNC machine tools can illustrate the relationship between alignment error, position, and angle on the linear guide way of three-axis vertical milling machines.

  14. Synoptic evaluation of scale-dependent metrics for hydrographic line feature geometry

    USGS Publications Warehouse

    Stanislawski, Larry V.; Buttenfield, Barbara P.; Raposo, Paulo; Cameron, Madeline; Falgout, Jeff T.

    2015-01-01

    Methods of acquisition and feature simplification for vector feature data impact cartographic representations and scientific investigations of these data, and are therefore important considerations for geographic information science (Haunert and Sester 2008). After initial collection, linear features may be simplified to reduce excessive detail or to furnish a reduced-scale version of the features through cartographic generalization (Regnauld and McMaster 2008; Stanislawski et al. 2014). A variety of algorithms exist to simplify linear cartographic features, and all of the methods affect the positional accuracy of the features (Shahriari and Tao 2002; Regnauld and McMaster 2008; Stanislawski et al. 2012). In general, simplification operations are controlled by one or more tolerance parameters that limit the amount of positional change the operation can make to features. Using a single tolerance value can have varying levels of positional change on features, depending on local shape, texture, or geometric characteristics of the original features (McMaster and Shea 1992; Shahriari and Tao 2002; Buttenfield et al. 2010). Consequently, numerous researchers have advocated calibration of simplification parameters to control quantifiable properties of resulting changes to the features (Li and Openshaw 1990; Raposo 2013; Tobler 1988; Veregin 2000; Buttenfield 1986, 1989). This research identifies relations between local topographic conditions and geometric characteristics of linear features that are available in the National Hydrography Dataset (NHD). The NHD is a comprehensive vector dataset of surface water features within the United States that is maintained by the U.S. Geological Survey (USGS). In this paper, geometric characteristics of cartographic representations for natural stream and river features are summarized for subbasin watersheds within entire regions of the conterminous United States and compared to topographic metrics. A concurrent processing workflow is implemented using a Linux high-performance computing cluster to simultaneously process multiple subbasins, and thereby complete the work in a fraction of the time required for a single-process environment. In addition, similar metrics are generated for several levels of simplification of the hydrographic features to quantify the effects of simplification over the various landscape conditions. Objectives of this exploratory investigation are to quantify geometric characteristics of linear hydrographic features over the various terrain conditions within the conterminous United States and thereby illuminate relations between stream geomorphological conditions and cartographic representation. The synoptic view of these characteristics over regional watersheds that is afforded through concurrent processing, in conjunction with terrain conditions, may reveal patterns for classifying cartographic stream features into stream geomorphological classes. Furthermore, the synoptic measurement of the amount of change in geometric characteristics caused by the several levels of simplification can enable estimation of tolerance values that appropriately control simplification-induced geometric change of the cartographic features within the various geomorphological classes in the country. Hence, these empirically derived rules or relations could help generate multiscale representations of features through automated generalization that adequately maintain surface drainage variations and patterns reflective of the natural stream geomorphological conditions across the country.

  15. TI-59 Programs for Multiple Regression.

    DTIC Science & Technology

    1980-05-01

    The general linear hypothesis model of full rank [Graybill, 1961] can be written as Y = Xβ + ε, ε ∼ N(0, σ²I), where Y is the n×1 vector of observations, X the n×k design matrix, β the k×1 vector of coefficients, and ε the n×1 error vector ... a "reduced model" solution, and confidence intervals for linear functions of the coefficients can be obtained using (X′X)⁻¹ and s², based on the t... PROGRAM DESCRIPTION: for the general linear hypothesis model Y = Xβ + ε, calculates
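
    For readers who want the computation these calculator programs perform, here is a hedged sketch of the full-rank least-squares estimates and their standard errors; the synthetic data and dimensions are illustrative.

      import numpy as np

      rng = np.random.default_rng(0)
      n, k = 30, 3
      X = np.column_stack([np.ones(n), rng.normal(size=(n, k - 1))])
      beta_true = np.array([1.0, 2.0, -0.5])
      Y = X @ beta_true + rng.normal(scale=0.3, size=n)

      XtX_inv = np.linalg.inv(X.T @ X)
      beta_hat = XtX_inv @ X.T @ Y                 # (X'X)^{-1} X'Y
      resid = Y - X @ beta_hat
      s2 = resid @ resid / (n - k)                 # unbiased sigma^2 estimate
      se = np.sqrt(s2 * np.diag(XtX_inv))          # coefficient standard errors
      print(np.round(beta_hat, 3), np.round(se, 3))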

  16. Graph-based Data Modeling and Analysis for Data Fusion in Remote Sensing

    NASA Astrophysics Data System (ADS)

    Fan, Lei

    Hyperspectral imaging provides the capability of increased sensitivity and discrimination over traditional imaging methods by combining standard digital imaging with spectroscopic methods. For each individual pixel in a hyperspectral image (HSI), a continuous spectrum is sampled as the spectral reflectance/radiance signature to facilitate identification of ground cover and surface material. The abundant spectrum knowledge allows all available information from the data to be mined. The superior qualities within hyperspectral imaging allow wide applications such as mineral exploration, agriculture monitoring, and ecological surveillance. The processing of massive high-dimensional HSI datasets is a challenge since many data processing techniques have a computational complexity that grows exponentially with the dimension. Moreover, an HSI dataset may contain a limited number of degrees of freedom due to the high correlations between data points and among the spectra. On the other hand, merely taking advantage of the sampled spectrum of an individual HSI data point may produce inaccurate results due to the mixed nature of raw HSI data, such as mixed pixels and optical interference. Fusion strategies are widely adopted in data processing to achieve better performance, especially in the field of classification and clustering. There are mainly three types of fusion strategies, namely low-level data fusion, intermediate-level feature fusion, and high-level decision fusion. Low-level data fusion combines multi-source data that is expected to be complementary or cooperative. Intermediate-level feature fusion aims at selection and combination of features to remove redundant information. Decision-level fusion exploits a set of classifiers to provide more accurate results. The fusion strategies have wide applications including HSI data processing. With the fast development of multiple remote sensing modalities, e.g. Very High Resolution (VHR) optical sensors, LiDAR, etc., fusion of multi-source data can in principle produce more detailed information than each single source. On the other hand, besides the abundant spectral information contained in HSI data, features such as texture and shape may be employed to represent data points from a spatial perspective. Furthermore, feature fusion also includes the strategy of removing redundant and noisy features in the dataset. One of the major problems in machine learning and pattern recognition is to develop appropriate representations for complex nonlinear data. In HSI processing, a particular data point is usually described as a vector with coordinates corresponding to the intensities measured in the spectral bands. This vector representation permits the application of linear and nonlinear transformations with linear algebra to find an alternative representation of the data. More generally, HSI is multi-dimensional in nature and the vector representation may lose the contextual correlations. Tensor representation provides a more sophisticated modeling technique and a higher-order generalization to linear subspace analysis. In graph theory, data points can be generalized as nodes with connectivities measured from the proximity of a local neighborhood. The graph-based framework efficiently characterizes the relationships among the data and allows for convenient mathematical manipulation in many applications, such as data clustering, feature extraction, feature selection and data alignment. 
In this thesis, graph-based approaches applied in the field of multi-source feature and data fusion in the remote sensing area are explored. We mainly investigate the fusion of spatial, spectral and LiDAR information with linear and multilinear algebra under a graph-based framework for data clustering and classification problems.

  17. A General Linear Model Approach to Adjusting the Cumulative GPA.

    ERIC Educational Resources Information Center

    Young, John W.

    A general linear model (GLM), using least-squares techniques, was used to develop a criterion measure to replace freshman year grade point average (GPA) in college admission predictive validity studies. Problems with the use of GPA include those associated with the combination of grades from different courses and disciplines into a single measure,…

  18. A Comparison between Linear IRT Observed-Score Equating and Levine Observed-Score Equating under the Generalized Kernel Equating Framework

    ERIC Educational Resources Information Center

    Chen, Haiwen

    2012-01-01

    In this article, linear item response theory (IRT) observed-score equating is compared under a generalized kernel equating framework with Levine observed-score equating for nonequivalent groups with anchor test design. Interestingly, these two equating methods are closely related despite being based on different methodologies. Specifically, when…

  19. Statistical inference for template aging

    NASA Astrophysics Data System (ADS)

    Schuckers, Michael E.

    2006-04-01

    A change in the classification error rates of a biometric device is often referred to as template aging. Here we offer two methods for determining whether the effect of time is statistically significant. The first is the use of a generalized linear model to determine if these error rates change linearly over time. This approach generalizes previous work assessing the impact of covariates using generalized linear models. The second approach uses likelihood-ratio-test methodology. The focus here is on statistical methods for estimation, not the underlying cause of the change in error rates over time. These methodologies are applied to data from the National Institute of Standards and Technology Biometric Score Set Release 1. The results of these applications are discussed.
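
    A hedged sketch of the first approach above: a binomial generalized linear model testing whether error rates change linearly with time. The monthly trial and error counts are synthetic stand-ins for the NIST score-set data.

      import numpy as np
      import statsmodels.api as sm

      months = np.arange(1, 13)
      trials = np.full_like(months, 500)
      errors = np.array([12, 14, 13, 17, 16, 19, 18, 22, 21, 24, 26, 25])  # made-up

      X = sm.add_constant(months.astype(float))
      glm = sm.GLM(np.column_stack([errors, trials - errors]), X,
                   family=sm.families.Binomial()).fit()
      print(glm.summary().tables[1])  # inspect significance of the time term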

  20. A reduced successive quadratic programming strategy for errors-in-variables estimation.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tjoa, I.-B.; Biegler, L. T.; Carnegie-Mellon Univ.

    Parameter estimation problems in process engineering represent a special class of nonlinear optimization problems, because the maximum likelihood structure of the objective function can be exploited. Within this class, the errors in variables method (EVM) is particularly interesting. Here we seek a weighted least-squares fit to the measurements with an underdetermined process model. Thus, both the number of variables and degrees of freedom available for optimization increase linearly with the number of data sets. Large optimization problems of this type can be particularly challenging and expensive to solve because, for general-purpose nonlinear programming (NLP) algorithms, the computational effort increases at least quadratically with problem size. In this study we develop a tailored NLP strategy for EVM problems. The method is based on a reduced Hessian approach to successive quadratic programming (SQP), but with the decomposition performed separately for each data set. This leads to the elimination of all variables but the model parameters, which are determined by a QP coordination step. In this way the computational effort remains linear in the number of data sets. Moreover, unlike previous approaches to the EVM problem, global and superlinear properties of the SQP algorithm apply naturally. Also, the method directly incorporates inequality constraints on the model parameters (although not on the fitted variables). This approach is demonstrated on five example problems with up to 102 degrees of freedom. Compared to general-purpose NLP algorithms, large improvements in computational performance are observed.
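
    As a hedged sketch of the EVM structure that the per-data-set decomposition exploits (the notation is illustrative, not the authors'): with measured data x_i, fitted variables x̂_i, weight matrices V_i, and model f with parameters θ,

      \min_{\theta,\,\hat{x}_1,\dots,\hat{x}_N}\;
        \sum_{i=1}^{N} (\hat{x}_i - x_i)^{\top} V_i^{-1} (\hat{x}_i - x_i)
      \quad \text{s.t.} \quad f(\hat{x}_i, \theta) = 0,\quad i = 1,\dots,N

    The fitted variables grow linearly with the number of data sets N, which is why eliminating them separately for each data set keeps the overall effort linear in N.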

  1. Changes in beverage consumption among adolescents from public schools in the first decade of the 21st century.

    PubMed

    Monteiro, Luana Silva; Vasconcelos, Thaís Meirelles de; Veiga, Gloria Valéria da; Pereira, Rosângela Alves

    2016-01-01

    To evaluate the changes in beverage consumption among adolescents between 2003 and 2008. Two school-based cross-sectional studies were carried out with public school students (12 to 19 years old) from Niterói, Rio de Janeiro, Brazil. Data from three food records were used to estimate daily, weekday and weekend average consumption (volume and percent contribution to total daily energy intake) of milk and milk-based beverages, sugar-sweetened beverages, fresh-squeezed fruit juices, and caffeinated and alcoholic beverages. Age-adjusted mean beverage consumption on weekdays and weekends was compared using linear regression (generalized linear models, GLM). A total of 433 adolescents were examined in 2003, and 510 in 2008. The prevalence of overweight was 17% in 2003 and 22% in 2008 (p > 0.05). Milk was the most consumed beverage, reported by 89% of adolescents, followed by sodas (75%). In general, over the five-year period, there was an increase in the prevalence of consumption of alcoholic drinks, guarana syrup refreshment, and processed fruit drinks, especially on weekdays. Soft drinks were the largest contributor to total energy consumption, corresponding on average to 4% of daily energy intake. The main changes in beverage consumption among adolescents from Niterói in the first decade of the 21st century were a tendency toward reduced milk consumption and increased consumption of processed and alcoholic beverages.

  2. Reformulating the Schrödinger equation as a Shabat-Zakharov system

    NASA Astrophysics Data System (ADS)

    Boonserm, Petarpa; Visser, Matt

    2010-02-01

    We reformulate the second-order Schrödinger equation as a set of two coupled first-order differential equations, a so-called "Shabat-Zakharov system" (sometimes called a "Zakharov-Shabat" system). There is considerable flexibility in this approach, and we emphasize the utility of introducing an "auxiliary condition" or "gauge condition" that is used to cut down the degrees of freedom. Using this formalism, we derive the explicit (but formal) general solution to the Schrödinger equation. The general solution depends on three arbitrarily chosen functions, and a path-ordered exponential matrix. If one considers path ordering to be an "elementary" process, then this represents complete quadrature, albeit formal, of the second-order linear ordinary differential equation.

  3. Finite-time H∞ filtering for non-linear stochastic systems

    NASA Astrophysics Data System (ADS)

    Hou, Mingzhe; Deng, Zongquan; Duan, Guangren

    2016-09-01

    This paper describes the robust H∞ filtering analysis and the synthesis of general non-linear stochastic systems with finite settling time. We assume that the system dynamic is modelled by Itô-type stochastic differential equations of which the state and the measurement are corrupted by state-dependent noises and exogenous disturbances. A sufficient condition for non-linear stochastic systems to have the finite-time H∞ performance with gain less than or equal to a prescribed positive number is established in terms of a certain Hamilton-Jacobi inequality. Based on this result, the existence of a finite-time H∞ filter is given for the general non-linear stochastic system by a second-order non-linear partial differential inequality, and the filter can be obtained by solving this inequality. The effectiveness of the obtained result is illustrated by a numerical example.

  4. Convex set and linear mixing model

    NASA Technical Reports Server (NTRS)

    Xu, P.; Greeley, R.

    1993-01-01

    A major goal of optical remote sensing is to determine surface compositions of the earth and other planetary objects. For assessment of composition, single pixels in multi-spectral images usually record a mixture of the signals from various materials within the corresponding surface area. In this report, we introduce a closed and bounded convex set as a mathematical model for linear mixing. This model has a clear geometric implication because the closed and bounded convex set is a natural generalization of a triangle in n-space. The endmembers are extreme points of the convex set. Every point in the convex closure of the endmembers is a linear mixture of those endmembers, which is exactly how linear mixing is defined. With this model, some general criteria for selecting endmembers could be described. This model can lead to a better understanding of linear mixing models.
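
    The convex-combination view above translates directly into constrained unmixing: abundances are non-negative and sum to one. The hedged sketch below uses non-negative least squares with a soft sum-to-one row (the endmember spectra, the pixel, and the weight delta are illustrative assumptions).

      import numpy as np
      from scipy.optimize import nnls

      E = np.array([[0.10, 0.80, 0.30],   # endmember spectra as columns
                    [0.20, 0.70, 0.50],   # (4 bands x 3 endmembers, made-up)
                    [0.60, 0.30, 0.40],
                    [0.90, 0.10, 0.20]])
      pixel = 0.5 * E[:, 0] + 0.3 * E[:, 1] + 0.2 * E[:, 2]

      delta = 10.0                         # weight on the sum-to-one constraint
      A = np.vstack([E, delta * np.ones(3)])
      b = np.append(pixel, delta)
      abund, _ = nnls(A, b)
      print("estimated abundances:", np.round(abund, 3))  # ~ [0.5, 0.3, 0.2]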

  5. Automatic Classification of Artifactual ICA-Components for Artifact Removal in EEG Signals

    PubMed Central

    2011-01-01

    Background: Artifacts contained in EEG recordings hamper both the visual interpretation by experts and the algorithmic processing and analysis (e.g., for Brain-Computer Interfaces (BCI) or for Mental State Monitoring). While hand-optimized selection of source components derived from Independent Component Analysis (ICA) to clean EEG data is widespread, the field could greatly profit from automated solutions based on Machine Learning methods. Existing ICA-based removal strategies depend on explicit recordings of an individual's artifacts or have not been shown to reliably identify muscle artifacts. Methods: We propose an automatic method for the classification of general artifactual source components. They are estimated by TDSEP, an ICA method that takes temporal correlations into account. The linear classifier is based on an optimized feature subset determined by a Linear Programming Machine (LPM). The subset is composed of features from the frequency, spatial, and temporal domains. A subject-independent classifier was trained on 640 TDSEP components (reaction time (RT) study, n = 12) that were hand-labeled by experts as artifactual or brain sources, and tested on 1080 new components of RT data from the same study. Generalization was tested on new data from two studies (auditory Event Related Potential (ERP) paradigm, n = 18; motor imagery BCI paradigm, n = 80) that used different channel setups and new subjects. Results: Based on only six features, the optimized linear classifier performed on level with the inter-expert disagreement (<10% Mean Squared Error (MSE)) on the RT data. On data of the auditory ERP study, the same pre-calculated classifier generalized well and achieved 15% MSE. On data of the motor imagery paradigm, we demonstrate that the discriminant information used for BCI is preserved when removing up to 60% of the most artifactual source components. Conclusions: We propose a universal and efficient classifier of ICA components for the subject-independent removal of artifacts from EEG data. Based on linear methods, it is applicable to different electrode placements and supports the introspection of results. Trained on expert ratings of large data sets, it is not restricted to the detection of eye and muscle artifacts. Its performance and generalization ability are demonstrated on data from different EEG studies. PMID:21810266

  6. Linear shaped charge

    DOEpatents

    Peterson, David; Stofleth, Jerome H.; Saul, Venner W.

    2017-07-11

    Linear shaped charges are described herein. In a general embodiment, the linear shaped charge has an explosive with an elongated arrowhead-shaped profile. The linear shaped charge also has an elongated V-shaped liner that is inset into a recess of the explosive. Another linear shaped charge includes an explosive that is shaped as a star-shaped prism. Liners are inset into crevices of the explosive, where the explosive acts as a tamper.

  7. Treatment of systematic errors in land data assimilation systems

    NASA Astrophysics Data System (ADS)

    Crow, W. T.; Yilmaz, M.

    2012-12-01

    Data assimilation systems are generally designed to minimize the influence of random error on the estimation of system states. Yet, experience with land data assimilation systems has also revealed the presence of large systematic differences between model-derived and remotely sensed estimates of land surface states. Such differences are commonly resolved prior to data assimilation through implementation of a pre-processing rescaling step whereby observations are scaled (or non-linearly transformed) to somehow "match" comparable predictions made by an assimilation model. While the rationale for removing systematic differences in means (i.e., bias) between models and observations is well established, relatively little theoretical guidance is currently available to determine the appropriate treatment of higher-order moments during rescaling. This talk presents a simple analytical argument to define an optimal linear rescaling strategy for observations prior to their assimilation into a land surface model. While a technique based on triple collocation theory is shown to replicate this optimal strategy, commonly applied rescaling techniques (e.g., so-called "least-squares regression" and "variance matching" approaches) are shown to represent only sub-optimal approximations to it. Since the triple collocation approach is likely infeasible in many real-world circumstances, general advice for deciding between various feasible (yet sub-optimal) rescaling approaches will be presented, with an emphasis on the implications of this work for the case of directly assimilating satellite radiances. While the bulk of the analysis deals with linear rescaling techniques, its extension to nonlinear cases will also be discussed.
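
    The two commonly applied rescaling techniques named above are simple to state. This hedged sketch contrasts them on synthetic "model" and "observation" series (all series and coefficients are illustrative stand-ins, not the talk's analysis).

      import numpy as np

      rng = np.random.default_rng(0)
      truth = rng.normal(size=2000)
      model = 0.30 + 0.8 * truth + 0.2 * rng.normal(size=2000)
      obs = 0.10 + 0.5 * truth + 0.3 * rng.normal(size=2000)

      # Variance matching: give the observations the model's mean and std. dev.
      obs_vm = model.mean() + model.std() * (obs - obs.mean()) / obs.std()

      # Least-squares regression: scale obs by the model-vs-obs regression slope.
      C = np.cov(model, obs)
      obs_lr = model.mean() + (C[0, 1] / C[1, 1]) * (obs - obs.mean())

      for name, o in (("variance matching", obs_vm), ("regression", obs_lr)):
          rmsd = float(np.sqrt(np.mean((o - model) ** 2)))
          print(name, "RMSD vs model:", round(rmsd, 3))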

  8. High-Dimensional Quantum Information Processing with Linear Optics

    NASA Astrophysics Data System (ADS)

    Fitzpatrick, Casey A.

    Quantum information processing (QIP) is an interdisciplinary field concerned with the development of computers and information processing systems that utilize quantum mechanical properties of nature to carry out their function. QIP systems have become vastly more practical since the turn of the century. Today, QIP applications span imaging, cryptographic security, computation, and simulation (quantum systems that mimic other quantum systems). Many important strategies improve quantum versions of classical information system hardware, such as single photon detectors and quantum repeaters. Another more abstract strategy engineers high-dimensional quantum state spaces, so that each successful event carries more information than traditional two-level systems allow. Photonic states in particular bring the added advantages of weak environmental coupling and data transmission near the speed of light, allowing for simpler control and lower system design complexity. In this dissertation, numerous novel, scalable designs for practical high-dimensional linear-optical QIP systems are presented. First, a correlated photon imaging scheme using orbital angular momentum (OAM) states to detect rotational symmetries in objects using measurements, as well as building images out of those interactions is reported. Then, a statistical detection method using chains of OAM superpositions distributed according to the Fibonacci sequence is established and expanded upon. It is shown that the approach gives rise to schemes for sorting, detecting, and generating the recursively defined high-dimensional states on which some quantum cryptographic protocols depend. Finally, an ongoing study based on a generalization of the standard optical multiport for applications in quantum computation and simulation is reported upon. The architecture allows photons to reverse momentum inside the device. This in turn enables realistic implementation of controllable linear-optical scattering vertices for carrying out quantum walks on arbitrary graph structures, a powerful tool for any quantum computer. It is shown that the novel architecture provides new, efficient capabilities for the optical quantum simulation of Hamiltonians and topologically protected states. Further, these simulations use exponentially fewer resources than feedforward techniques, scale linearly to higher-dimensional systems, and use only linear optics, thus offering a concrete experimentally achievable implementation of graphical models of discrete-time quantum systems.

  9. Genetic parameters for racing records in trotters using linear and generalized linear models.

    PubMed

    Suontama, M; van der Werf, J H J; Juga, J; Ojala, M

    2012-09-01

    Heritability and repeatability and genetic and phenotypic correlations were estimated for trotting race records with linear and generalized linear models using 510,519 records on 17,792 Finnhorses and 513,161 records on 25,536 Standardbred trotters. Heritability and repeatability were estimated for single racing time and earnings traits with linear models; a logarithmic scale was used for racing time and a fourth-root scale for earnings to correct for nonnormality. Generalized linear models with a gamma distribution were applied for single racing time and with a multinomial distribution for single earnings traits. In addition, genetic parameters for annual earnings were estimated with linear models on the observed and fourth-root scales. Racing success traits of single placings, winnings, breaking stride, and disqualifications were analyzed using generalized linear models with a binomial distribution. Estimates of heritability were greatest for racing time, ranging from 0.32 to 0.34. Estimates of heritability were low for single earnings with all distributions, ranging from 0.01 to 0.09. Annual earnings were closer to a normal distribution than single earnings. Heritability estimates were moderate for annual earnings on the fourth-root scale, 0.19 for Finnhorses and 0.27 for Standardbred trotters. Heritability estimates for binomial racing success variables ranged from 0.04 to 0.12, being greatest for winnings and least for breaking stride. Genetic correlations among racing traits were high, whereas phenotypic correlations were mainly low to moderate, except that correlations between racing time and earnings were high. On the basis of a moderate heritability and moderate to high repeatability for racing time and annual earnings, selection of horses for these traits is effective when based on a few repeated records. Because of high genetic correlations, direct selection for racing time and annual earnings would also result in good genetic response in racing success.

  10. Seasonal control skylight glazing panel with passive solar energy switching

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, J.V.

    1983-10-25

    A substantially transparent one-piece glazing panel is provided for generally horizontal mounting in a skylight. The panel is composed of a repeated pattern of two alternating and contiguous linear optical elements: the first optical element is an upstanding, generally right-triangular linear prism, and the second is an upward-facing plano-cylindrical lens in which the planar surface is reflectively opaque and lies generally in the same plane as the base of the triangular prism.

  11. Quantifying the Contribution of Wind-Driven Linear Response to the Seasonal and Interannual Variability of Amoc Volume Transports Across 26.5ºN

    NASA Astrophysics Data System (ADS)

    Shimizu, K.; von Storch, J. S.; Haak, H.; Nakayama, K.; Marotzke, J.

    2014-12-01

    Surface wind stress is considered to be an important forcing of the seasonal and interannual variability of Atlantic Meridional Overturning Circulation (AMOC) volume transports. A recent study showed that even linear response to wind forcing captures observed features of the mean seasonal cycle. However, the study did not assess the contribution of wind-driven linear response in realistic conditions against the RAPID/MOCHA array observation or Ocean General Circulation Model (OGCM) simulations, because it applied a linear two-layer model to the Atlantic assuming constant upper layer thickness and density difference across the interface. Here, we quantify the contribution of wind-driven linear response to the seasonal and interannual variability of AMOC transports by comparing wind-driven linear simulations under realistic continuous stratification against the RAPID observation and OGCM (MPI-OM) simulations with 0.4º resolution (TP04) and 0.1º resolution (STORM). All the linear and MPI-OM simulations capture more than 60% of the variance in the observed mean seasonal cycle of the Upper Mid-Ocean (UMO) and Florida Strait (FS) transports, two components of the upper branch of the AMOC. The linear and TP04 simulations also capture 25-40% of the variance in the observed transport time series between Apr 2004 and Oct 2012; the STORM simulation does not capture the observed variance because of the stochastic signal in both datasets. Comparison of half-overlapping 12-month-long segments reveals some periods when the linear and TP04 simulations capture 40-60% of the observed variance, as well as other periods when the simulations capture only 0-20% of the variance. These results show that wind-driven linear response is a major contributor to the seasonal and interannual variability of the UMO and FS transports, and that its contribution varies on an interannual timescale, probably due to the variability of stochastic processes.

  12. Cerebral processing of auditory stimuli in patients with irritable bowel syndrome

    PubMed Central

    Andresen, Viola; Poellinger, Alexander; Tsrouya, Chedwa; Bach, Dominik; Stroh, Albrecht; Foerschler, Annette; Georgiewa, Petra; Schmidtmann, Marco; van der Voort, Ivo R; Kobelt, Peter; Zimmer, Claus; Wiedenmann, Bertram; Klapp, Burghard F; Monnikes, Hubert

    2006-01-01

    AIM: To determine by brain functional magnetic resonance imaging (fMRI) whether cerebral processing of non-visceral stimuli is altered in irritable bowel syndrome (IBS) patients compared with healthy subjects. To circumvent spinal viscerosomatic convergence mechanisms, we used auditory stimulation, and to identify a possible influence of psychological factors the stimuli differed in their emotional quality. METHODS: In 8 IBS patients and 8 controls, fMRI measurements were performed using a block design of 4 auditory stimuli of different emotional quality (pleasant sounds of chimes, unpleasant peep (2000 Hz), neutral words, and emotional words). A gradient echo T2*-weighted sequence was used for the functional scans. Statistical maps were constructed using the general linear model. RESULTS: To emotional auditory stimuli, IBS patients relative to controls responded with stronger deactivations in a greater variety of emotional processing regions, while the response patterns, unlike in controls, did not differentiate between distressing or pleasant sounds. To neutral auditory stimuli, by contrast, only IBS patients responded with large significant activations. CONCLUSION: Altered cerebral response patterns to auditory stimuli in emotional stimulus-processing regions suggest that altered sensory processing in IBS may not be specific for visceral sensation, but might reflect generalized changes in emotional sensitivity and affective reactivity, possibly associated with the psychological comorbidity often found in IBS patients. PMID:16586541

  13. Lower hybrid to whistler mode conversion on a density striation

    NASA Astrophysics Data System (ADS)

    Camporeale, E.; Delzanno, G. L.; Colestock, P.

    2012-10-01

    When a wave packet composed of short wavelength lower hybrid modes traveling in a homogeneous plasma region encounters an inhomogeneity, it can resonantly excite long wavelength whistler waves via a linear mechanism known as mode conversion. An enhancement of lower hybrid/whistler activity has often been observed by sounding rockets and satellites in the presence of density depletions (striations) in the upper ionosphere. We address here the process of linear mode conversion of lower hybrid to whistler waves, mediated by a density striation, using a scalar-field formalism (in the limit of cold plasma linear theory) which we solve numerically. We show that the mode conversion can effectively transfer a large amount of energy from the short to the long wavelength modes. We also study how the efficiency scales by changing the properties (width and amplitude) of the density striation. We present a general criterion for the width of the striation that, if fulfilled, maximizes the conversion efficiency. Such a criterion could provide an interpretation of recent laboratory experiments carried out on the Large Plasma Device at UCLA.

  14. Linear excitation of the trapped waves by an incident wave

    NASA Astrophysics Data System (ADS)

    Postacioglu, Nazmi; Sinan Özeren, M.

    2016-04-01

    The excitation of trapped waves by coastal events such as landslides has been extensively studied. Events in the open sea generally have larger magnitude. However, if the isobaths are straight lines parallel to the coastline, the incident waves produced by these open-sea events can excite trapped waves only through nonlinearity. We will show that imperfections of the coastline can couple the incident and trapped waves through purely linear processes. The Coriolis force is neglected in this work; accordingly, the trapped waves are a consequence of uneven bathymetry. In the bathymetry we consider, the sea is divided into zones of constant depth, and the boundaries between the zones are a family of hyperbolas. The boundary conditions between the zones lead to an integral equation for the source distribution on the boundaries. The solution contains both radiating and trapped waves. The trapped waves pose a serious threat to coastal communities, as they can travel long distances along the coastline without losing their energy through geometrical spreading.

  15. Regularized learning of linear ordered-statistic constant false alarm rate filters (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Havens, Timothy C.; Cummings, Ian; Botts, Jonathan; Summers, Jason E.

    2017-05-01

    The linear ordered statistic (LOS) is a parameterized ordered statistic (OS) that is a weighted average of a rank-ordered sample. LOS operators are useful generalizations of aggregation as they can represent any linear aggregation, from minimum to maximum, including conventional aggregations, such as mean and median. In the fuzzy logic field, these aggregations are called ordered weighted averages (OWAs). Here, we present a method for learning LOS operators from training data, i.e., data for which the output of the desired LOS is known. We then extend the learning process with regularization, so that a lower-complexity or sparse LOS can be learned, and we discuss what 'lower complexity' means in this context and how to represent it in the optimization procedure. Finally, we apply our learning methods to the well-known constant-false-alarm-rate (CFAR) detection problem, specifically for the case of background levels modeled by long-tailed distributions, such as the K-distribution. These backgrounds arise in several pertinent imaging problems, including the modeling of clutter in synthetic aperture radar and sonar (SAR and SAS) and in wireless communications.
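
    As a concrete illustration of the learning step described above, the sketch below recovers LOS weights from rank-ordered training samples using non-negative least squares with renormalization; this is a hedged stand-in for the paper's regularized optimization, and the "soft median" target weights are hypothetical.

        import numpy as np
        from scipy.optimize import nnls

        rng = np.random.default_rng(1)

        # Training data: each row is a sample window; the target is the output
        # of an unknown LOS (here a hypothetical "soft median" for illustration).
        X = rng.normal(0.0, 1.0, (500, 7))
        X_sorted = np.sort(X, axis=1)[:, ::-1]          # rank-order, descending
        w_true = np.array([0.0, 0.1, 0.2, 0.4, 0.2, 0.1, 0.0])
        y = X_sorted @ w_true

        # Learn nonnegative LOS weights by least squares, then renormalize so
        # they sum to one (a simple stand-in for a constrained/regularized solver).
        w_hat, _ = nnls(X_sorted, y)
        w_hat /= w_hat.sum()
        print(np.round(w_hat, 3))   # should be close to w_true

    A sparsity-promoting penalty (e.g., an L1 term) on the weights would yield the "lower complexity" operators the paper discusses.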

  16. Cocaine Dependence Treatment Data: Methods for Measurement Error Problems With Predictors Derived From Stationary Stochastic Processes

    PubMed Central

    Guan, Yongtao; Li, Yehua; Sinha, Rajita

    2011-01-01

    In a cocaine dependence treatment study, we use linear and nonlinear regression models to model posttreatment cocaine craving scores and first cocaine relapse time. A subset of the covariates are summary statistics derived from baseline daily cocaine use trajectories, such as baseline cocaine use frequency and average daily use amount. These summary statistics are subject to estimation error and can therefore cause biased estimators for the regression coefficients. Unlike classical measurement error problems, the error we encounter here is heteroscedastic with an unknown distribution, and there are no replicates for the error-prone variables or instrumental variables. We propose two robust methods to correct for the bias: a computationally efficient method-of-moments approach for linear regression models, and a subsampling extrapolation method that is generally applicable to both linear and nonlinear regression models. Simulations and an application to the cocaine dependence treatment data are used to illustrate the efficacy of the proposed methods. Asymptotic theory and variance estimation for the proposed subsampling extrapolation method and some additional simulation results are described in the online supplementary material. PMID:21984854
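
    The bias mechanism described above can be illustrated with the classical moment correction for a linear model, shown below under the simplifying assumptions of homoscedastic error and a known error variance; the paper's own estimators are more general and do not require these assumptions.

        import numpy as np

        rng = np.random.default_rng(2)
        n = 5_000
        x = rng.normal(0.0, 1.0, n)             # true covariate (e.g., use frequency)
        y = 2.0 * x + rng.normal(0.0, 1.0, n)   # outcome (e.g., craving score)

        sigma_u2 = 0.5                          # error variance, assumed known here
        w = x + rng.normal(0.0, np.sqrt(sigma_u2), n)   # error-prone covariate

        beta_naive = np.cov(w, y)[0, 1] / np.var(w)               # attenuated
        beta_mom = np.cov(w, y)[0, 1] / (np.var(w) - sigma_u2)    # moment-corrected
        print(round(beta_naive, 3), round(beta_mom, 3))   # ~1.33 vs ~2.0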

  17. A biconjugate gradient type algorithm on massively parallel architectures

    NASA Technical Reports Server (NTRS)

    Freund, Roland W.; Hochbruck, Marlis

    1991-01-01

    The biconjugate gradient (BCG) method is the natural generalization of the classical conjugate gradient algorithm for Hermitian positive definite matrices to general non-Hermitian linear systems. Unfortunately, the original BCG algorithm is susceptible to possible breakdowns and numerical instabilities. Recently, Freund and Nachtigal have proposed a novel BCG-type approach, the quasi-minimal residual method (QMR), which overcomes the problems of BCG. Here, an implementation of QMR is presented, based on an s-step version of the nonsymmetric look-ahead Lanczos algorithm. The main feature of the s-step Lanczos algorithm is that, in general, all inner products, except for one, can be computed in parallel at the end of each block; this is unlike the standard Lanczos process, where inner products are generated sequentially. The resulting implementation of QMR is particularly attractive on massively parallel SIMD architectures, such as the Connection Machine.
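
    For orientation, a textbook BCG iteration (not Freund and Nachtigal's QMR, and without look-ahead) is sketched below; the near-zero inner product flagged in the comments is exactly the breakdown that QMR and the look-ahead Lanczos process are designed to avoid.

        import numpy as np

        def bicg(A, b, tol=1e-10, max_iter=200):
            """Textbook biconjugate gradient for a general square system Ax = b."""
            x = np.zeros(len(b))
            r = b - A @ x
            rt = r.copy()                 # shadow residual
            p, pt = r.copy(), rt.copy()
            rho = rt @ r
            for _ in range(max_iter):
                Ap = A @ p
                alpha = rho / (pt @ Ap)
                x += alpha * p
                r -= alpha * Ap
                rt -= alpha * (A.T @ pt)
                if np.linalg.norm(r) < tol:
                    break
                rho_new = rt @ r          # a near-zero value here is the classic
                beta = rho_new / rho      # BCG breakdown that QMR avoids
                p = r + beta * p
                pt = rt + beta * pt
                rho = rho_new
            return x

        A = np.array([[4.0, 1.0], [-2.0, 3.0]])   # non-symmetric test matrix
        b = np.array([1.0, 2.0])
        print(bicg(A, b), np.linalg.solve(A, b))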

  18. From master slave interferometry to complex master slave interferometry: theoretical work

    NASA Astrophysics Data System (ADS)

    Rivet, Sylvain; Bradu, Adrian; Maria, Michael; Feuchter, Thomas; Leick, Lasse; Podoleanu, Adrian

    2018-03-01

    A general theoretical framework is described to obtain the advantages and drawbacks of two novel Fourier Domain Optical Coherence Tomography (OCT) methods, denoted Master/Slave Interferometry (MSI) and its extension, Complex Master/Slave Interferometry (CMSI). Instead of linearizing the digital data representing the channeled spectrum before a Fourier transform is applied to it (as in standard OCT methods), the channeled spectrum is decomposed on a basis of local oscillations. This removes the need for linearization, which is generally time-consuming, before any calculation of the depth profile in the range of interest. In this model two functions, g and h, are introduced. The function g describes the modulation chirp of the channeled-spectrum signal due to nonlinearities in the decoding process from wavenumber to time. The function h describes the dispersion in the interferometer. The use of these two functions brings two major improvements over previous implementations of the MSI method. The paper details the steps to obtain the functions g and h, and casts the CMSI in a matrix formulation that makes the method easy to implement in LabVIEW using multi-core parallel programming.

  19. Robust biological parametric mapping: an improved technique for multimodal brain image analysis

    NASA Astrophysics Data System (ADS)

    Yang, Xue; Beason-Held, Lori; Resnick, Susan M.; Landman, Bennett A.

    2011-03-01

    Mapping the quantitative relationship between structure and function in the human brain is an important and challenging problem. Numerous volumetric, surface, region of interest and voxelwise image processing techniques have been developed to statistically assess potential correlations between imaging and non-imaging metrics. Recently, biological parametric mapping has extended the widely popular statistical parametric approach to enable application of the general linear model to multiple image modalities (both for regressors and regressands) along with scalar valued observations. This approach offers great promise for direct, voxelwise assessment of structural and functional relationships with multiple imaging modalities. However, as presented, the biological parametric mapping approach is not robust to outliers and may lead to invalid inferences (e.g., artifactual low p-values) due to slight mis-registration or variation in anatomy between subjects. To enable widespread application of this approach, we introduce robust regression and robust inference in the neuroimaging context of application of the general linear model. Through simulation and empirical studies, we demonstrate that our robust approach reduces sensitivity to outliers without substantial degradation in power. The robust approach and associated software package provides a reliable way to quantitatively assess voxelwise correlations between structural and functional neuroimaging modalities.
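
    The core idea, replacing the ordinary GLM fit at each voxel with a robust fit, can be sketched for a single voxel as below. The variable names and data are hypothetical, and statsmodels' Huber M-estimator stands in for the robust regression actually used in the toolbox.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(3)

        # Hypothetical single-voxel data: regress a functional measure on a
        # structural measure plus age, with a few outlying subjects mixed in.
        n = 60
        age = rng.uniform(55, 85, n)
        gray_matter = rng.normal(0.6, 0.05, n)
        bold = 1.5 * gray_matter - 0.01 * age + rng.normal(0.0, 0.1, n)
        bold[:4] += 1.0                  # simulated mis-registration outliers

        X = sm.add_constant(np.column_stack([gray_matter, age]))
        ols = sm.OLS(bold, X).fit()                                # ordinary GLM
        rlm = sm.RLM(bold, X, M=sm.robust.norms.HuberT()).fit()   # robust GLM
        print(ols.params.round(3), rlm.params.round(3))

    The outliers pull the ordinary estimates away from the generating coefficients, while the robust fit down-weights them, which is the behavior the paper demonstrates at whole-brain scale.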

  20. The Bayesian group lasso for confounded spatial data

    USGS Publications Warehouse

    Hefley, Trevor J.; Hooten, Mevin B.; Hanks, Ephraim M.; Russell, Robin E.; Walsh, Daniel P.

    2017-01-01

    Generalized linear mixed models for spatial processes are widely used in applied statistics. In many applications of the spatial generalized linear mixed model (SGLMM), the goal is to obtain inference about regression coefficients while achieving optimal predictive ability. When implementing the SGLMM, multicollinearity among covariates and the spatial random effects can make computation challenging and influence inference. We present a Bayesian group lasso prior with a single tuning parameter that can be chosen to optimize predictive ability of the SGLMM and jointly regularize the regression coefficients and spatial random effect. We implement the group lasso SGLMM using efficient Markov chain Monte Carlo (MCMC) algorithms and demonstrate how multicollinearity among covariates and the spatial random effect can be monitored as a derived quantity. To test our method, we compared several parameterizations of the SGLMM using simulated data and two examples from plant ecology and disease ecology. In all examples, problematic levels of multicollinearity occurred and influenced sampling efficiency and inference. We found that the group lasso prior resulted in roughly twice the effective sample size for MCMC samples of regression coefficients, and can have higher and less variable predictive accuracy based on out-of-sample data when compared to the standard SGLMM.

  1. Microscopic theory of linear light scattering from mesoscopic media and in near-field optics.

    PubMed

    Keller, Ole

    2005-08-01

    On the basis of quantum mechanical response theory a microscopic propagator theory of linear light scattering from mesoscopic systems is presented. The central integral equation problem is transferred to a matrix equation problem by discretization in transitions between pairs of (many-body) energy eigenstates. The local-field calculation which appears from this approach is valid down to the microscopic region. Previous theories based on the (macroscopic) dielectric constant concept make use of spatial (geometrical) discretization and cannot in general be trusted on the mesoscopic length scale. The present theory can be applied to light scattering studies in near-field optics. After a brief discussion of the macroscopic integral equation problem a microscopic potential description of the scattering process is established. In combination with the use of microscopic electromagnetic propagators the formalism allows one to make contact with the macroscopic theory of light scattering and with the spatial photon localization problem. The quantum structure of the microscopic conductivity response tensor enables one to establish a clear physical picture of the origin of local-field phenomena in mesoscopic and near-field optics. The Huygens scalar propagator formalism is revisited and its generality in microscopic physics is pointed out.

  2. Supercomputing with TOUGH2 family codes for coupled multi-physics simulations of geologic carbon sequestration

    NASA Astrophysics Data System (ADS)

    Yamamoto, H.; Nakajima, K.; Zhang, K.; Nanai, S.

    2015-12-01

    Powerful numerical codes that are capable of modeling complex coupled processes of physics and chemistry have been developed for predicting the fate of CO2 in reservoirs as well as its potential impacts on groundwater and subsurface environments. However, they are often computationally demanding for solving highly non-linear models in sufficient spatial and temporal resolution. Geological heterogeneity and uncertainties further increase the challenges in modeling work. Two-phase flow simulations in heterogeneous media usually require much longer computational time than in homogeneous media. Uncertainties in reservoir properties may necessitate stochastic simulations with multiple realizations. Recently, massively parallel supercomputers with thousands of processors have become available in scientific and engineering communities. Such supercomputers may attract attention from geoscientists and reservoir engineers for solving large, non-linear models at higher resolution within a reasonable time. However, to make them a useful tool, several practical obstacles must be tackled so that large numbers of processors can be utilized effectively in general-purpose reservoir simulators. We have implemented massively-parallel versions of two TOUGH2 family codes (the multi-phase flow simulator TOUGH2 and the chemically reactive transport simulator TOUGHREACT) on two different types (vector- and scalar-type) of supercomputers with a thousand to tens of thousands of processors. After completing implementation and extensive tune-up on the supercomputers, the computational performance was measured for three simulations with multi-million grid models, including a simulation of the dissolution-diffusion-convection process that requires high spatial and temporal resolution to simulate the growth of small convective fingers of CO2-dissolved water to larger ones at reservoir scale. The performance measurement confirmed that both simulators exhibit excellent scalability, showing almost linear speedup with the number of processors up to more than ten thousand cores. Generally this allows us to perform coupled multi-physics (THC) simulations on high-resolution geologic models with multi-million-cell grids in practical time (e.g., less than a second per time step).

  3. Mathematical Modeling of Intestinal Iron Absorption Using Genetic Programming

    PubMed Central

    Colins, Andrea; Gerdtzen, Ziomara P.; Nuñez, Marco T.; Salgado, J. Cristian

    2017-01-01

    Iron is a trace metal, key for the development of living organisms. Its absorption process is complex and highly regulated at the transcriptional, translational and systemic levels. Recently, the internalization of the DMT1 transporter has been proposed as an additional regulatory mechanism at the intestinal level, associated to the mucosal block phenomenon. The short-term effect of iron exposure in apical uptake and initial absorption rates was studied in Caco-2 cells at different apical iron concentrations, using both an experimental approach and a mathematical modeling framework. This is the first report of short-term studies for this system. A non-linear behavior in the apical uptake dynamics was observed, which does not follow the classic saturation dynamics of traditional biochemical models. We propose a method for developing mathematical models for complex systems, based on a genetic programming algorithm. The algorithm is aimed at obtaining models with a high predictive capacity, and considers an additional parameter fitting stage and an additional Jackknife stage for estimating the generalization error. We developed a model for the iron uptake system with a higher predictive capacity than classic biochemical models. This was observed both with the apical uptake dataset used for generating the model and with an independent initial rates dataset used to test the predictive capacity of the model. The model obtained is a function of time and the initial apical iron concentration, with a linear component that captures the global tendency of the system, and a non-linear component that can be associated to the movement of DMT1 transporters. The model presented in this paper allows the detailed analysis, interpretation of experimental data, and identification of key relevant components for this complex biological process. This general method holds great potential for application to the elucidation of biological mechanisms and their key components in other complex systems. PMID:28072870
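
    The jackknife stage mentioned above can be illustrated independently of the genetic programming machinery: refit the model with each observation left out and average the squared prediction errors. The surrogate model and data below are hypothetical, not the paper's evolved expressions.

        import numpy as np

        rng = np.random.default_rng(4)

        # Hypothetical uptake-vs-time data and a simple two-parameter model fit.
        t = np.linspace(0.5, 10.0, 20)
        uptake = 3.0 * t / (2.0 + t) + rng.normal(0.0, 0.1, t.size)

        def fit_and_predict(t_train, y_train, t_test):
            # Linear-in-parameters surrogate model: y ~ a*t + b*t/(1+t).
            A = np.column_stack([t_train, t_train / (1 + t_train)])
            coef, *_ = np.linalg.lstsq(A, y_train, rcond=None)
            return np.column_stack([t_test, t_test / (1 + t_test)]) @ coef

        # Jackknife (leave-one-out) estimate of the generalization error.
        errors = []
        for i in range(t.size):
            mask = np.arange(t.size) != i
            pred = fit_and_predict(t[mask], uptake[mask], t[i:i+1])
            errors.append((pred[0] - uptake[i]) ** 2)
        print("jackknife MSE:", np.mean(errors))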

  4. Boundary Conditions for Diffusion-Mediated Processes within Linear Nanopores: Exact Treatment of Coupling to an Equilibrated External Fluid

    DOE PAGES

    Garcia, Andres; Evans, James W.

    2017-04-03

    In this paper, we consider a variety of diffusion-mediated processes occurring within linear nanopores, but which involve coupling to an equilibrated external fluid through adsorption and desorption. By determining adsorption and desorption rates through a set of tailored simulations, and by exploiting a spatial Markov property of the models, we develop a formulation for performing efficient pore-only simulations of these processes. Coupling to the external fluid is described exactly through appropriate nontrivial boundary conditions at the pore openings. This formalism is applied to analyze the following: (i) tracer counter permeation (TCP), where differently labeled particles adsorb into opposite ends of the pore and establish a nonequilibrium steady state; (ii) tracer exchange (TE), with exchange of differently labeled particles within and outside the pore; (iii) catalytic conversion reactions, where a reactant in the external fluid adsorbs into the pore and converts to a product which may desorb. The TCP analysis also generates a position-dependent generalized tracer diffusion coefficient, the form of which controls behavior in the TE and catalytic conversion processes. We focus on the regime of single-file diffusion within the pore, which produces the strongest correlations and largest deviations from mean-field type behavior. Finally, behavior is quantified precisely via kinetic Monte Carlo simulations but is also captured with appropriate analytic treatments.

  5. Log-normal frailty models fitted as Poisson generalized linear mixed models.

    PubMed

    Hirsch, Katharina; Wienke, Andreas; Kuss, Oliver

    2016-12-01

    The equivalence of a survival model with a piecewise constant baseline hazard function and a Poisson regression model has been known for decades. As shown in recent studies, this equivalence carries over to clustered survival data: A frailty model with a log-normal frailty term can be interpreted and estimated as a generalized linear mixed model with a binary response, a Poisson likelihood, and a specific offset. Proceeding this way, statistical theory and software for generalized linear mixed models are readily available for fitting frailty models. This gain in flexibility comes at the small price of (1) having to fix the number of pieces for the baseline hazard in advance and (2) having to "explode" the data set by the number of pieces. In this paper we extend the simulations of former studies by using a more realistic baseline hazard (Gompertz) and by comparing the model under consideration with competing models. Furthermore, the SAS macro %PCFrailty is introduced to apply the Poisson generalized linear mixed approach to frailty models. The simulations show good results for the shared frailty model. Our new %PCFrailty macro provides proper estimates, especially in the case of 4 events per piece. The suggested Poisson generalized linear mixed approach for log-normal frailty models based on the %PCFrailty macro provides several advantages in the analysis of clustered survival data with respect to more flexible modelling of fixed and random effects, exact (in the sense of non-approximate) maximum likelihood estimation, and standard errors and different types of confidence intervals for all variance parameters. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
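
    A minimal sketch of the underlying trick, with hypothetical data: each subject's follow-up is "exploded" over the pieces of a piecewise-constant baseline hazard, and a Poisson GLM with a log-exposure offset is fitted. The paper's implementation is a SAS macro; the Python below only illustrates the data expansion and offset. The log-normal frailty itself would enter as a random intercept in a mixed-model fit, omitted here for brevity.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(5)

        # Hypothetical survival data: exponential times, administrative censoring.
        n = 300
        time = rng.exponential(2.0, n)
        event = (time < 4.0).astype(int)
        time = np.minimum(time, 4.0)
        x = rng.normal(0.0, 1.0, n)               # a single covariate

        # "Explode" each subject over piecewise-constant hazard intervals.
        cuts = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
        rows = []
        for ti, di, xi in zip(time, event, x):
            for j in range(len(cuts) - 1):
                lo, hi = cuts[j], cuts[j + 1]
                if ti <= lo:
                    break
                exposure = min(ti, hi) - lo       # time at risk in this piece
                died = int(di and ti <= hi)       # event indicator for piece
                rows.append((died, exposure, xi, j))
        died, exposure, xs, piece = map(np.array, zip(*rows))

        # Poisson GLM with log-exposure offset; piece dummies give the
        # piecewise-constant baseline hazard.
        X = sm.add_constant(np.column_stack([xs, np.eye(4)[piece][:, 1:]]))
        fit = sm.GLM(died, X, family=sm.families.Poisson(),
                     offset=np.log(exposure)).fit()
        print(fit.params.round(3))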

  6. Grip Strength Is Associated With Cognitive Performance in Schizophrenia and the General Population: A UK Biobank Study of 476559 Participants.

    PubMed

    Firth, Joseph; Stubbs, Brendon; Vancampfort, Davy; Firth, Josh A; Large, Matthew; Rosenbaum, Simon; Hallgren, Mats; Ward, Philip B; Sarris, Jerome; Yung, Alison R

    2018-06-06

    Handgrip strength may provide an easily-administered marker of cognitive functional status. However, further population-scale research examining relationships between grip strength and cognitive performance across multiple domains is needed. Additionally, relationships between grip strength and cognitive functioning in people with schizophrenia, who frequently experience cognitive deficits, have yet to be explored. Baseline data from the UK Biobank (2007-2010) were analyzed, including 475397 individuals from the general population and 1162 individuals with schizophrenia. Linear mixed models and generalized linear mixed models were used to assess the relationship between grip strength and 5 cognitive domains (visual memory, reaction time, reasoning, prospective memory, and number memory), controlling for age, gender, bodyweight, education, and geographical region. In the general population, maximal grip strength was positively and significantly related to visual memory (coefficient [coeff] = -0.1601, standard error [SE] = 0.003), reaction time (coeff = -0.0346, SE = 0.0004), reasoning (coeff = 0.2304, SE = 0.0079), number memory (coeff = 0.1616, SE = 0.0092), and prospective memory (coeff = 0.3486, SE = 0.0092: all P < .001); the negative coefficients for visual memory and reaction time reflect scales on which lower scores indicate better performance. In the schizophrenia sample, grip strength was strongly related to visual memory (coeff = -0.155, SE = 0.042, P < .001) and reaction time (coeff = -0.049, SE = 0.009, P < .001), while prospective memory approached statistical significance (coeff = 0.233, SE = 0.132, P = .078), and no statistically significant association was found with number memory and reasoning (P > .1). Grip strength is significantly associated with cognitive functioning in the general population and individuals with schizophrenia, particularly for working memory and processing speed. Future research should establish directionality, examine if grip strength also predicts functional and physical health outcomes in schizophrenia, and determine whether interventions which improve muscular strength impact on cognitive and real-world functioning.

  7. Limitations of inclusive fitness.

    PubMed

    Allen, Benjamin; Nowak, Martin A; Wilson, Edward O

    2013-12-10

    Until recently, inclusive fitness has been widely accepted as a general method to explain the evolution of social behavior. Affirming and expanding earlier criticism, we demonstrate that inclusive fitness is instead a limited concept, which exists only for a small subset of evolutionary processes. Inclusive fitness assumes that personal fitness is the sum of additive components caused by individual actions. This assumption does not hold for the majority of evolutionary processes or scenarios. To sidestep this limitation, inclusive fitness theorists have proposed a method using linear regression. On the basis of this method, it is claimed that inclusive fitness theory (i) predicts the direction of allele frequency changes, (ii) reveals the reasons for these changes, (iii) is as general as natural selection, and (iv) provides a universal design principle for evolution. In this paper we evaluate these claims, and show that all of them are unfounded. If the objective is to analyze whether mutations that modify social behavior are favored or opposed by natural selection, then no aspect of inclusive fitness theory is needed.

  8. Regression Is a Univariate General Linear Model Subsuming Other Parametric Methods as Special Cases.

    ERIC Educational Resources Information Center

    Vidal, Sherry

    Although the concept of the general linear model (GLM) has existed since the 1960s, other univariate analyses such as the t-test and the analysis of variance models have remained popular. The GLM produces an equation that minimizes the mean differences of independent variables as they are related to a dependent variable. From a computer printout…

  9. Estimation of Standard Error of Regression Effects in Latent Regression Models Using Binder's Linearization. Research Report. ETS RR-07-09

    ERIC Educational Resources Information Center

    Li, Deping; Oranje, Andreas

    2007-01-01

    Two versions of a general method for approximating standard error of regression effect estimates within an IRT-based latent regression model are compared. The general method is based on Binder's (1983) approach, accounting for complex samples and finite populations by Taylor series linearization. In contrast, the current National Assessment of…

  10. Parameter Recovery for the 1-P HGLLM with Non-Normally Distributed Level-3 Residuals

    ERIC Educational Resources Information Center

    Kara, Yusuf; Kamata, Akihito

    2017-01-01

    A multilevel Rasch model using a hierarchical generalized linear model is one approach to multilevel item response theory (IRT) modeling and is referred to as a one-parameter hierarchical generalized linear logistic model (1-P HGLLM). Although it has the flexibility to model nested structure of data with covariates, the model assumes the normality…

  11. Extending local canonical correlation analysis to handle general linear contrasts for FMRI data.

    PubMed

    Jin, Mingwu; Nandy, Rajesh; Curran, Tim; Cordes, Dietmar

    2012-01-01

    Local canonical correlation analysis (CCA) is a multivariate method that has been proposed to more accurately determine activation patterns in fMRI data. In its conventional formulation, CCA has several drawbacks that limit its usefulness in fMRI. A major drawback is that, unlike the general linear model (GLM), a test of general linear contrasts of the temporal regressors has not been incorporated into the CCA formalism. To overcome this drawback, a novel directional test statistic was derived using the equivalence of multivariate multiple regression (MVMR) and CCA. This extension will allow CCA to be used for inference of general linear contrasts in more complicated fMRI designs without reparameterization of the design matrix and without reestimating the CCA solutions for each particular contrast of interest. With the proper constraints on the spatial coefficients of CCA, this test statistic can yield a more powerful test on the inference of evoked brain regional activations from noisy fMRI data than the conventional t-test in the GLM. The quantitative results from simulated and pseudoreal data and activation maps from fMRI data were used to demonstrate the advantage of this novel test statistic.
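
    For reference, the GLM contrast test that the proposed statistic generalizes can be written in a few lines; the sketch below uses a hypothetical single-voxel time series and two task regressors, not the paper's CCA extension.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(6)

        # Hypothetical single-voxel fMRI time series with two task regressors.
        T = 200
        X = np.column_stack([np.ones(T),
                             rng.normal(0, 1, T),     # regressor A
                             rng.normal(0, 1, T)])    # regressor B
        y = X @ np.array([10.0, 0.8, 0.3]) + rng.normal(0, 1, T)

        beta, res, *_ = np.linalg.lstsq(X, y, rcond=None)
        dof = T - X.shape[1]
        sigma2 = res[0] / dof                     # residual variance estimate

        c = np.array([0.0, 1.0, -1.0])            # contrast: A minus B
        se = np.sqrt(sigma2 * c @ np.linalg.inv(X.T @ X) @ c)
        t = (c @ beta) / se
        p = 2 * stats.t.sf(abs(t), dof)
        print(f"t = {t:.2f}, p = {p:.4f}")

    The paper's contribution is a directional statistic that plays this role within CCA, so the design matrix need not be reparameterized for each contrast.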

  12. Extending Local Canonical Correlation Analysis to Handle General Linear Contrasts for fMRI Data

    PubMed Central

    Jin, Mingwu; Nandy, Rajesh; Curran, Tim; Cordes, Dietmar

    2012-01-01

    Local canonical correlation analysis (CCA) is a multivariate method that has been proposed to more accurately determine activation patterns in fMRI data. In its conventional formulation, CCA has several drawbacks that limit its usefulness in fMRI. A major drawback is that, unlike the general linear model (GLM), a test of general linear contrasts of the temporal regressors has not been incorporated into the CCA formalism. To overcome this drawback, a novel directional test statistic was derived using the equivalence of multivariate multiple regression (MVMR) and CCA. This extension will allow CCA to be used for inference of general linear contrasts in more complicated fMRI designs without reparameterization of the design matrix and without reestimating the CCA solutions for each particular contrast of interest. With the proper constraints on the spatial coefficients of CCA, this test statistic can yield a more powerful test on the inference of evoked brain regional activations from noisy fMRI data than the conventional t-test in the GLM. The quantitative results from simulated and pseudoreal data and activation maps from fMRI data were used to demonstrate the advantage of this novel test statistic. PMID:22461786

  13. Application of General Regression Neural Network to the Prediction of LOD Change

    NASA Astrophysics Data System (ADS)

    Zhang, Xiao-Hong; Wang, Qi-Jie; Zhu, Jian-Jun; Zhang, Hao

    2012-01-01

    Traditional methods for predicting the change in length of day (LOD change) are mainly based on linear models, such as the least-squares model and the autoregression model. However, the LOD change comprises complicated non-linear factors, and the prediction performance of the linear models is often unsatisfactory. Thus, a non-linear neural network, the general regression neural network (GRNN), is applied to predict the LOD change, and the result is compared with predictions from the BP (back propagation) neural network model and other models. The comparison shows that the application of the GRNN to the prediction of the LOD change is highly effective and feasible.
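
    A GRNN is essentially Gaussian-kernel (Nadaraya-Watson) regression, so a minimal version is short; the series below is a hypothetical stand-in for the LOD-change data, and the bandwidth value is arbitrary.

        import numpy as np

        def grnn_predict(x_train, y_train, x_query, sigma=0.5):
            """General regression neural network: a Gaussian-kernel weighted
            average of training targets (Nadaraya-Watson form)."""
            d2 = (x_query[:, None] - x_train[None, :]) ** 2
            w = np.exp(-d2 / (2 * sigma ** 2))
            return (w @ y_train) / w.sum(axis=1)

        rng = np.random.default_rng(7)
        t = np.linspace(0, 10, 120)                   # hypothetical epochs
        lod = 0.3 * np.sin(1.7 * t) + 0.05 * t + rng.normal(0, 0.02, t.size)

        # One-step-ahead prediction from the previous value (embedding dim 1).
        x, y = lod[:-1], lod[1:]
        pred = grnn_predict(x[:90], y[:90], x[90:])
        print("RMSE:", np.sqrt(np.mean((pred - y[90:]) ** 2)))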

  14. Modeling turbidity and flow at daily steps in karst using ARIMA/ARFIMA-GARCH error models

    NASA Astrophysics Data System (ADS)

    Massei, N.

    2013-12-01

    Hydrological and physico-chemical variations recorded at karst springs usually reflect highly non-linear processes, and the corresponding time series are then very often also highly non-linear. Among others, turbidity, an important parameter for water quality and management, is a very complex response of karst systems to rain events, involving direct transfer of particles from point-source recharge as well as resuspension of particles previously deposited and stored within the system. For those reasons, turbidity has not been well captured by karst hydrological models so far. Most of the time, the modeling approaches involve stochastic linear models such as ARIMA-type models and their derivatives (ARMA, ARMAX, ARIMAX, ARFIMA...). Yet, linear models usually fail to represent the whole (stochastic) process variability well, and their residuals still contain useful information that can be used either to understand the whole variability or to enhance short-term predictability and forecasting. Model residuals are actually not i.i.d., which can be identified by the fact that squared residuals still present clear and significant serial correlation. Indeed, high (low) amplitudes are followed in time by high (low) amplitudes, which can be seen in residual time series as periods during which amplitudes are higher (lower) than the mean amplitude. This is known as the ARCH effect (AutoRegressive Conditional Heteroskedasticity), and the corresponding non-linear process affecting the residuals of a linear model can be modeled using ARCH or generalized ARCH (GARCH) models, approaches that are well known in econometrics. Here we investigated the capability of ARIMA-GARCH error models to represent a ~20-yr daily turbidity time series recorded at a karst spring used for the water supply of the city of Le Havre (Upper Normandy, France). ARIMA and ARFIMA models were used to represent the mean behavior of the time series, and the residuals clearly appeared to present a pronounced ARCH effect, as confirmed by Ljung-Box and McLeod-Li tests. We then identified and fitted GARCH models to the residuals of the ARIMA and ARFIMA models in order to model the conditional variance and volatility of the turbidity time series. The results eventually showed that serial correlation was successfully removed from the final standardized residuals of the GARCH model, and hence that the ARIMA-GARCH error model is consistent for modeling such time series. The approach finally improved short-term (e.g., a few steps ahead) turbidity forecasting.
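
    The two-stage procedure described above can be sketched with standard Python tooling, assuming the statsmodels and arch packages; the simulated series below merely mimics volatility clustering and is not the Le Havre turbidity record.

        import numpy as np
        from statsmodels.tsa.arima.model import ARIMA
        from statsmodels.stats.diagnostic import acorr_ljungbox
        from arch import arch_model

        rng = np.random.default_rng(8)

        # Hypothetical daily turbidity-like series with volatility clustering.
        n = 2000
        vol = np.empty(n); eps = np.empty(n)
        vol[0], eps[0] = 0.1, 0.0
        for t in range(1, n):                  # simple GARCH(1,1) innovations
            vol[t] = 0.02 + 0.1 * eps[t-1]**2 + 0.85 * vol[t-1]
            eps[t] = np.sqrt(vol[t]) * rng.standard_normal()
        y = np.cumsum(eps) * 0.01 + 5.0 + eps  # persistent mean plus bursts

        # Step 1: linear (ARIMA) model for the conditional mean.
        mean_fit = ARIMA(y, order=(1, 0, 1)).fit()
        resid = mean_fit.resid

        # Step 2: test squared residuals for the ARCH effect (serial correlation).
        print(acorr_ljungbox(resid**2, lags=[20]))

        # Step 3: GARCH(1,1) on the residuals for the conditional variance.
        garch_fit = arch_model(resid, mean="Zero", vol="GARCH", p=1, q=1).fit(disp="off")
        print(garch_fit.params)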

  15. Error-related processing following severe traumatic brain injury: An event-related functional magnetic resonance imaging (fMRI) study

    PubMed Central

    Sozda, Christopher N.; Larson, Michael J.; Kaufman, David A.S.; Schmalfuss, Ilona M.; Perlstein, William M.

    2011-01-01

    Continuous monitoring of one’s performance is invaluable for guiding behavior towards successful goal attainment by identifying deficits and strategically adjusting responses when performance is inadequate. In the present study, we exploited the advantages of event-related functional magnetic resonance imaging (fMRI) to examine brain activity associated with error-related processing after severe traumatic brain injury (sTBI). fMRI and behavioral data were acquired while 10 sTBI participants and 12 neurologically-healthy controls performed a task-switching cued-Stroop task. fMRI data were analyzed using a random-effects whole-brain voxel-wise general linear model and planned linear contrasts. Behaviorally, sTBI patients showed greater error-rate interference than neurologically-normal controls. fMRI data revealed that, compared to controls, sTBI patients showed greater magnitude error-related activation in the anterior cingulate cortex (ACC) and an increase in the overall spatial extent of error-related activation across cortical and subcortical regions. Implications for future research and potential limitations in conducting fMRI research in neurologically-impaired populations are discussed, as well as some potential benefits of employing multimodal imaging (e.g., fMRI and event-related potentials) of cognitive control processes in TBI. PMID:21756946

  16. Improving medium-range ensemble streamflow forecasts through statistical post-processing

    NASA Astrophysics Data System (ADS)

    Mendoza, Pablo; Wood, Andy; Clark, Elizabeth; Nijssen, Bart; Clark, Martyn; Ramos, Maria-Helena; Nowak, Kenneth; Arnold, Jeffrey

    2017-04-01

    Probabilistic hydrologic forecasts are a powerful source of information for decision-making in water resources operations. A common approach is the hydrologic model-based generation of streamflow forecast ensembles, which can be implemented to account for different sources of uncertainties - e.g., from initial hydrologic conditions (IHCs), weather forecasts, and hydrologic model structure and parameters. In practice, hydrologic ensemble forecasts typically have biases and spread errors stemming from errors in the aforementioned elements, resulting in a degradation of probabilistic properties. In this work, we compare several statistical post-processing techniques applied to medium-range ensemble streamflow forecasts obtained with the System for Hydromet Applications, Research and Prediction (SHARP). SHARP is a fully automated prediction system for the assessment and demonstration of short-term to seasonal streamflow forecasting applications, developed by the National Center for Atmospheric Research, University of Washington, U.S. Army Corps of Engineers, and U.S. Bureau of Reclamation. The suite of post-processing techniques includes linear blending, quantile mapping, extended logistic regression, quantile regression, ensemble analogs, and the generalized linear model post-processor (GLMPP). We assess and compare these techniques using multi-year hindcasts in several river basins in the western US. This presentation discusses preliminary findings about the effectiveness of the techniques for improving probabilistic skill, reliability, discrimination, sharpness and resolution.
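
    Of the listed techniques, quantile mapping is the simplest to illustrate: each raw forecast value is replaced by the observed-climatology value at the same climatological quantile. The sketch below uses hypothetical gamma-distributed flows and is not the SHARP implementation.

        import numpy as np

        def quantile_map(fcst, fcst_clim, obs_clim):
            """Empirical quantile mapping: replace each forecast value by the
            observed-climatology value at the same climatological quantile."""
            ranks = np.searchsorted(np.sort(fcst_clim), fcst) / len(fcst_clim)
            ranks = np.clip(ranks, 0.0, 1.0)
            return np.quantile(obs_clim, ranks)

        rng = np.random.default_rng(9)
        obs_clim = rng.gamma(2.0, 50.0, 3000)          # historical observed flows
        fcst_clim = 0.7 * rng.gamma(2.0, 50.0, 3000)   # biased model climatology

        raw = 0.7 * rng.gamma(2.0, 50.0, 500)          # a new raw ensemble
        mapped = quantile_map(raw, fcst_clim, obs_clim)
        print(raw.mean(), mapped.mean(), obs_clim.mean())

    The mapped ensemble inherits the observed climatological distribution, correcting bias in all moments while leaving the forecast's rank information intact.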

  17. Error-related processing following severe traumatic brain injury: an event-related functional magnetic resonance imaging (fMRI) study.

    PubMed

    Sozda, Christopher N; Larson, Michael J; Kaufman, David A S; Schmalfuss, Ilona M; Perlstein, William M

    2011-10-01

    Continuous monitoring of one's performance is invaluable for guiding behavior towards successful goal attainment by identifying deficits and strategically adjusting responses when performance is inadequate. In the present study, we exploited the advantages of event-related functional magnetic resonance imaging (fMRI) to examine brain activity associated with error-related processing after severe traumatic brain injury (sTBI). fMRI and behavioral data were acquired while 10 sTBI participants and 12 neurologically-healthy controls performed a task-switching cued-Stroop task. fMRI data were analyzed using a random-effects whole-brain voxel-wise general linear model and planned linear contrasts. Behaviorally, sTBI patients showed greater error-rate interference than neurologically-normal controls. fMRI data revealed that, compared to controls, sTBI patients showed greater magnitude error-related activation in the anterior cingulate cortex (ACC) and an increase in the overall spatial extent of error-related activation across cortical and subcortical regions. Implications for future research and potential limitations in conducting fMRI research in neurologically-impaired populations are discussed, as well as some potential benefits of employing multimodal imaging (e.g., fMRI and event-related potentials) of cognitive control processes in TBI. Copyright © 2011 Elsevier B.V. All rights reserved.

  18. A flexible Bayesian assessment for the expected impact of data on prediction confidence for optimal sampling designs

    NASA Astrophysics Data System (ADS)

    Leube, Philipp; Geiges, Andreas; Nowak, Wolfgang

    2010-05-01

    Incorporating hydrogeological data, such as head and tracer data, into stochastic models of subsurface flow and transport helps to reduce prediction uncertainty. Considering the limited financial resources available for a data acquisition campaign, information needs towards the prediction goal should be satisfied in an efficient and task-specific manner. To find the best among a set of design candidates, an objective function is commonly evaluated that measures the expected impact of data on prediction confidence, prior to their collection. An appropriate approach to this task should be stochastically rigorous, master non-linear dependencies between data, parameters and model predictions, and allow for a wide variety of different data types. Existing methods fail to fulfill all these requirements simultaneously. For this reason, we introduce a new method, denoted CLUE (Cross-bred Likelihood Uncertainty Estimator), that derives the essential distributions and measures of data utility within a generalized, flexible and accurate framework. The method makes use of Bayesian GLUE (Generalized Likelihood Uncertainty Estimator) and extends it to an optimal design method by marginalizing over the yet unknown data values. Operating in a purely Bayesian Monte-Carlo framework, CLUE is a strictly formal information processing scheme free of linearizations. It provides full flexibility with respect to the type of measurements (linear, non-linear, direct, indirect) and accounts for almost arbitrary sources of uncertainty (e.g., heterogeneity, geostatistical assumptions, boundary conditions, model concepts) via stochastic simulation and Bayesian model averaging. This helps to minimize the strength and impact of subjective prior assumptions that would be hard to defend prior to data collection. Our study focuses on evaluating two different uncertainty measures: (i) the expected conditional variance and (ii) the expected relative entropy of a given prediction goal. The applicability and advantages are shown in a synthetic example, in which we consider a contaminant source posing a threat to a drinking water well in an aquifer. Furthermore, we assume uncertainty in geostatistical parameters, boundary conditions and the hydraulic gradient. The two measures evaluate the sensitivity of (1) general prediction confidence and (2) the exceedance probability of a legal regulatory threshold value to the choice of sampling locations.
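
    A stylized toy version of measure (i), the expected conditional variance, is sketched below: candidate designs are compared by simulating not-yet-collected data from prior realizations, weighting the ensemble GLUE-style, and averaging the resulting posterior variance of the prediction goal. The forward model and numbers are purely illustrative, not the paper's aquifer setup.

        import numpy as np

        rng = np.random.default_rng(10)

        # Toy setting: unknown parameter theta; prediction goal g(theta) = theta**2;
        # a candidate design measures y = h(theta) + noise, with h depending on the
        # sampling location x (a purely illustrative forward model).
        M = 4000
        theta = rng.normal(0.0, 1.0, M)             # prior ensemble
        goal = theta ** 2
        sigma = 0.3                                 # measurement noise std

        def expected_posterior_variance(x):
            h = np.sin(x * theta)                   # forward model at design x
            # Marginalize over the yet-unknown data: simulate data from random
            # prior realizations, weight the ensemble, average the variance.
            evs = []
            for k in rng.integers(0, M, 200):
                y_sim = h[k] + sigma * rng.standard_normal()
                w = np.exp(-0.5 * ((y_sim - h) / sigma) ** 2)
                w /= w.sum()
                mean_g = w @ goal
                evs.append(w @ (goal - mean_g) ** 2)
            return np.mean(evs)

        for x in (0.5, 1.0, 2.0):                   # compare design candidates
            print(x, round(expected_posterior_variance(x), 4))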

  19. A Thermodynamic Theory Of Solid Viscoelasticity. Part 1: Linear Viscoelasticity.

    NASA Technical Reports Server (NTRS)

    Freed, Alan D.; Leonov, Arkady I.

    2002-01-01

    The present series of three consecutive papers develops a general theory for linear and finite solid viscoelasticity. Because the most important objects for nonlinear studies are rubber-like materials, the general approach is specified in a form convenient for solving problems important to the many industries that involve rubber-like materials. General linear and nonlinear theories for non-isothermal deformations of viscoelastic solids are developed based on the quasi-linear approach of non-equilibrium thermodynamics. In this, the first paper of the series, we analyze non-isothermal linear viscoelasticity, which is applicable in a range of small strains not only to all synthetic polymers and bio-polymers but also to some non-polymeric materials. Although the linear case seems to be well developed, there are still reasons to implement a thermodynamic derivation of constitutive equations for solid-like, non-isothermal, linear viscoelasticity. The most important is the thermodynamic modeling of thermo-rheological complexity, i.e., different temperature dependences of relaxation parameters in various parts of the relaxation spectrum. A special structure of interaction matrices is established for the different physical mechanisms contributing to the normal relaxation modes. This structure seems to be in accord with observations, and it creates a simple mathematical framework for both continuum and molecular theories of thermo-rheologically complex relaxation phenomena. Finally, a unified approach is briefly discussed that, in principle, allows combining both the long-time (discrete) and short-time (continuous) descriptions of relaxation behavior for polymers in the rubbery and glassy regions.

  20. General job stress: a unidimensional measure and its non-linear relations with outcome variables.

    PubMed

    Yankelevich, Maya; Broadfoot, Alison; Gillespie, Jennifer Z; Gillespie, Michael A; Guidroz, Ashley

    2012-04-01

    This article aims to examine the non-linear relations between a general measure of job stress [Stress in General (SIG)] and two outcome variables: intentions to quit and job satisfaction. In so doing, we also re-examine the factor structure of the SIG and determine that, as a two-factor scale, it obscures non-linear relations with outcomes. Thus, in this research, we not only test for non-linear relations between stress and outcome variables but also present an updated version of the SIG scale. Using two distinct samples of working adults (sample 1, N = 589; sample 2, N = 4322), results indicate that a more parsimonious eight-item SIG has better model-data fit than the 15-item two-factor SIG and that the eight-item SIG has non-linear relations with job satisfaction and intentions to quit. Specifically, the revised SIG has an inverted curvilinear J-shaped relation with job satisfaction such that job satisfaction drops precipitously after a certain level of stress; the SIG has a J-shaped curvilinear relation with intentions to quit such that turnover intentions increase exponentially after a certain level of stress. Copyright © 2011 John Wiley & Sons, Ltd.

  1. Range and azimuth resolution enhancement for 94 GHz real-beam radar

    NASA Astrophysics Data System (ADS)

    Liu, Guoqing; Yang, Ken; Sykora, Brian; Salha, Imad

    2008-04-01

    In this paper, two-dimensional (2D) (range and azimuth) resolution enhancement is investigated for millimeter-wave (mmW) real-beam radar (RBR) with linear or non-linear antenna scan in the azimuth dimension. We design a new architecture for super-resolution processing, in which a dual-mode approach is used to define the region of interest for 2D resolution enhancement and a combined approach is deployed to obtain accurate location and amplitude estimates of targets within the region of interest. To achieve 2D resolution enhancement, we first adopt the Capon beamformer (CB) approach (also known as the minimum variance method (MVM)) to enhance range resolution. A generalized CB (GCB) approach is then applied to the azimuth dimension for azimuth resolution enhancement. The GCB approach does not rely on whether the azimuth sampling is even and can thus be used in both linear and non-linear antenna scanning modes. The effectiveness of the resolution enhancement is demonstrated using both simulation and test data. Results from 94 GHz real-beam frequency-modulated continuous-wave (FMCW) radar data show that the overall image quality is significantly improved, based on visual evaluation and comparison with the original real-beam radar image.
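
    The Capon (minimum variance) spectrum at the heart of the approach is compact to write down: for each look direction, output power is minimized subject to unit gain in that direction, giving P = 1 / (a^H R^-1 a). The sketch below applies it to a hypothetical uniform linear array rather than the paper's FMCW range processing.

        import numpy as np

        rng = np.random.default_rng(11)

        # Hypothetical uniform linear array snapshots with two closely spaced sources.
        M, N = 12, 400                                   # sensors, snapshots
        angles = np.deg2rad([8.0, 14.0])

        def steering(theta):
            return np.exp(1j * np.pi * np.arange(M) * np.sin(theta))

        A = np.column_stack([steering(a) for a in angles])
        S = rng.standard_normal((2, N)) + 1j * rng.standard_normal((2, N))
        noise = 0.3 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))
        X = A @ S + noise

        R = X @ X.conj().T / N                           # sample covariance
        R_inv = np.linalg.inv(R + 1e-6 * np.eye(M))      # diagonal loading

        # Capon spatial spectrum: P(theta) = 1 / (a^H R^-1 a).
        grid = np.deg2rad(np.linspace(-30, 30, 601))
        P = np.array([1.0 / np.real(steering(g).conj() @ R_inv @ steering(g))
                      for g in grid])

        # Crude peak pick: prominent local maxima of the spectrum.
        loc = (P[1:-1] > P[:-2]) & (P[1:-1] > P[2:]) & (P[1:-1] > 3 * np.median(P))
        print("peaks (deg):", np.rad2deg(grid[1:-1][loc]))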

  2. Nonlinear gyrotropic motion of skyrmion in a magnetic nanodisk

    NASA Astrophysics Data System (ADS)

    Chen, Yi-fu; Li, Zhi-xiong; Zhou, Zhen-wei; Xia, Qing-lin; Nie, Yao-zhuang; Guo, Guang-hua

    2018-07-01

    We study the nonlinear gyrotropic motion of a magnetic skyrmion in a nanodisk by means of micromagnetic simulations. The skyrmion is driven by a linearly polarized harmonic field at the frequency of the counterclockwise gyrotropic mode. It is found that the motion of the skyrmion displays different patterns with increasing field amplitude. In the linear regime of weak driving field, the skyrmion performs a single counterclockwise gyrotropic motion. The guiding center of the skyrmion moves along a helical line from the centre of the nanodisk to a stable circular orbit. The stable orbital radius increases linearly with the field amplitude. When the driving field is larger than a critical value, the skyrmion exhibits complex nonlinear motion. As time advances, the trajectory of the skyrmion goes through a series of stages, from a single circular motion, to a bird-nest-like and then a flower-like trajectory, and finally to a gear-like steady-state motion. The frequency spectra show that, in addition to the counterclockwise gyrotropic mode, the clockwise gyrotropic mode is also nonlinearly excited and its amplitude increases with time. The complex trajectory of the skyrmion is the result of the superposition of the two gyrotropic motions with changing amplitudes. Both the linear and nonlinear gyrotropic motions of the skyrmion can be well described by a generalized Thiele equation of motion.

  3. Evaluating abundance and trends in a Hawaiian avian community using state-space analysis

    USGS Publications Warehouse

    Camp, Richard J.; Brinck, Kevin W.; Gorresen, P.M.; Paxton, Eben H.

    2016-01-01

    Estimating population abundances and patterns of change over time is important in both ecology and conservation. Trend assessment typically entails fitting a regression to a time series of abundances to estimate population trajectory. However, changes in abundance estimates from year to year are due to both true variation in population size (process variation) and variation due to imperfect sampling and model fit. State-space models are a relatively new method that can be used to partition the error components and quantify trends based only on process variation. We compare a state-space modelling approach with a more traditional linear regression approach to assess trends in uncorrected raw counts and detection-corrected abundance estimates of forest birds at Hakalau Forest National Wildlife Refuge, Hawai‘i. Most species demonstrated similar trends using either method. In general, evidence for trends using state-space models was less strong than for linear regression, as measured by estimates of precision. However, while the state-space models may sacrifice precision, the expectation is that these estimates provide a better representation of the real-world biological processes of interest because they partition process variation (environmental and demographic variation) and observation variation (sampling and model variation). The state-space approach also provides annual estimates of abundance which can be used by managers to set conservation strategies, and can be linked to factors that vary by year, such as climate, to better understand processes that drive population trends.
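
    A minimal state-space example of the error partition described above uses a local linear trend model on log counts; the survey data below are simulated and the model specification is a generic stand-in for the authors' formulation.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(12)

        # Hypothetical annual abundance indices: a slowly declining true population
        # (process variation) observed with sampling noise (observation variation).
        T = 25
        true_n = 500 * np.exp(-0.03 * np.arange(T)) * np.exp(rng.normal(0, 0.05, T))
        counts = true_n * np.exp(rng.normal(0, 0.15, T))   # noisy survey counts

        # A local linear trend model on the log scale partitions the two error terms.
        mod = sm.tsa.UnobservedComponents(np.log(counts), level="local linear trend")
        res = mod.fit(disp=False)
        print(res.params)                    # observation vs process variances
        print(res.smoothed_state[1].mean())  # average smoothed log-scale slope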

  4. Design of Linear-Quadratic-Regulator for a CSTR process

    NASA Astrophysics Data System (ADS)

    Meghna, P. R.; Saranya, V.; Jaganatha Pandian, B.

    2017-11-01

    This paper aims at creating a Linear Quadratic Regulator (LQR) for a Continuous Stirred Tank Reactor (CSTR). A CSTR is a common process used in chemical industries and is a highly non-linear system. Therefore, in order to design the gain-feedback controller, the model is linearized. The controller is designed for the linearized model, and the concentration and volume of the liquid in the reactor are kept at constant values as required.
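
    An LQR design for a linearized plant reduces to one Riccati solve. The two-state system below is an illustrative linearization, not the paper's CSTR model; Q and R encode the relative penalties on state error and control effort.

        import numpy as np
        from scipy.linalg import solve_continuous_are

        # Hypothetical linearized CSTR: states are deviations in concentration
        # and volume; A and B are illustrative, not taken from the paper.
        A = np.array([[-2.0, 0.5],
                      [0.0, -1.0]])
        B = np.array([[0.0],
                      [1.0]])
        Q = np.diag([10.0, 1.0])      # penalize concentration error most
        R = np.array([[0.1]])         # control effort penalty

        # Solve the continuous-time algebraic Riccati equation and form the gain.
        P = solve_continuous_are(A, B, Q, R)
        K = np.linalg.solve(R, B.T @ P)          # state feedback u = -K x
        print("LQR gain:", K)
        print("closed-loop eigs:", np.linalg.eigvals(A - B @ K))

    The closed-loop eigenvalues printed at the end should all have negative real parts, confirming that the regulator stabilizes the linearized model.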

  5. Music preferences with hearing aids: effects of signal properties, compression settings, and listener characteristics.

    PubMed

    Croghan, Naomi B H; Arehart, Kathryn H; Kates, James M

    2014-01-01

    Current knowledge of how to design and fit hearing aids to optimize music listening is limited. Many hearing-aid users listen to recorded music, which often undergoes compression limiting (CL) in the music industry. Therefore, hearing-aid users may experience twofold effects of compression when listening to recorded music: music-industry CL and hearing-aid wide dynamic-range compression (WDRC). The goal of this study was to examine the roles of input-signal properties, hearing-aid processing, and individual variability in the perception of recorded music, with a focus on the effects of dynamic-range compression. A group of 18 experienced hearing-aid users made paired-comparison preference judgments for classical and rock music samples using simulated hearing aids. Music samples were either unprocessed before hearing-aid input or had different levels of music-industry CL. Hearing-aid conditions included linear gain and individually fitted WDRC. Combinations of four WDRC parameters were included: fast release time (50 msec), slow release time (1,000 msec), three channels, and 18 channels. Listeners also completed several psychophysical tasks. Acoustic analyses showed that CL and WDRC reduced temporal envelope contrasts, changed amplitude distributions across the acoustic spectrum, and smoothed the peaks of the modulation spectrum. Listener judgments revealed that fast WDRC was least preferred for both genres of music. For classical music, linear processing and slow WDRC were equally preferred, and the main effect of number of channels was not significant. For rock music, linear processing was preferred over slow WDRC, and three channels were preferred to 18 channels. Heavy CL was least preferred for classical music, but the amount of CL did not change the patterns of WDRC preferences for either genre. Auditory filter bandwidth as estimated from psychophysical tuning curves was associated with variability in listeners' preferences for classical music. Fast, multichannel WDRC often leads to poor music quality, whereas linear processing or slow WDRC are generally preferred. Furthermore, the effect of WDRC is more important for music preferences than music-industry CL applied to signals before the hearing-aid input stage. Variability in hearing-aid users' perceptions of music quality may be partially explained by frequency resolution abilities.
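
    To make the role of the release-time parameter concrete, here is a toy single-channel WDRC envelope compressor; the release times mirror the study's fast (50 msec) and slow (1,000 msec) conditions, but the implementation is a generic textbook compressor, not the simulated hearing aids used in the study.

```python
# Toy single-channel WDRC envelope compressor; parameters are illustrative.
import numpy as np

def wdrc(x, fs, ratio=3.0, threshold_db=-40.0, attack_ms=5.0, release_ms=50.0):
    """Feed-forward compression above threshold_db with the given ratio."""
    a_att = np.exp(-1.0 / (fs * attack_ms / 1000.0))
    a_rel = np.exp(-1.0 / (fs * release_ms / 1000.0))
    env_db, out = -120.0, np.empty_like(x)
    for n, s in enumerate(x):
        level_db = 20.0 * np.log10(max(abs(s), 1e-9))
        a = a_att if level_db > env_db else a_rel    # fast attack, chosen release
        env_db = a * env_db + (1.0 - a) * level_db   # one-pole level tracker (dB)
        over = max(env_db - threshold_db, 0.0)
        gain_db = -over * (1.0 - 1.0 / ratio)        # static compression curve
        out[n] = s * 10.0 ** (gain_db / 20.0)
    return out

fs = 16_000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 440 * t) * (0.05 + 0.95 * (t > 0.5))  # step in level
y_fast = wdrc(x, fs, release_ms=50.0)     # "fast" condition
y_slow = wdrc(x, fs, release_ms=1000.0)   # "slow" condition
# Slower release preserves more of the temporal envelope contrast:
print(np.ptp(np.abs(y_fast)), np.ptp(np.abs(y_slow)))
```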

  6. Critical time scales for advection-diffusion-reaction processes

    NASA Astrophysics Data System (ADS)

    Ellery, Adam J.; Simpson, Matthew J.; McCue, Scott W.; Baker, Ruth E.

    2012-04-01

    The concept of local accumulation time (LAT) was introduced by Berezhkovskii and co-workers to give a finite measure of the time required for the transient solution of a reaction-diffusion equation to approach the steady-state solution [A. M. Berezhkovskii, C. Sample, and S. Y. Shvartsman, Biophys. J. 99, L59 (2010); A. M. Berezhkovskii, C. Sample, and S. Y. Shvartsman, Phys. Rev. E 83, 051906 (2011)]. Such a measure is referred to as a critical time. Here, we show that LAT is, in fact, identical to the concept of mean action time (MAT) that was first introduced by McNabb [A. McNabb and G. C. Wake, IMA J. Appl. Math. 47, 193 (1991)]. Although McNabb's initial argument was motivated by considering the mean particle lifetime (MPLT) for a linear death process, he applied the ideas to study diffusion. We extend the work of these authors by deriving expressions for the MAT for a general one-dimensional linear advection-diffusion-reaction problem. Using a combination of continuum and discrete approaches, we show that MAT and MPLT are equivalent for certain uniform-to-uniform transitions; these results provide a practical interpretation for MAT by directly linking the stochastic microscopic processes to a meaningful macroscopic time scale. We find that for more general transitions, the equivalence between MAT and MPLT does not hold. Unlike other critical time definitions, we show that it is possible to evaluate the MAT without solving the underlying partial differential equation (PDE). This makes MAT a simple and attractive quantity for practical situations. Finally, our work explores the accuracy of certain approximations derived using MAT, showing that useful approximations for nonlinear kinetic processes can be obtained, again without treating the governing PDE directly.
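
    The following numerical sketch computes the MAT/LAT for a textbook 1D diffusion problem directly from its definition as the time integral of 1 - C(x,t)/C_steady(x), and compares it with the closed form x(2L - x)/(6D) that this particular boundary/initial condition admits; the grid parameters are illustrative and this is not the paper's general advection-diffusion-reaction calculation.

```python
# Numerical sketch of mean action time (MAT) for 1D diffusion on [0, L] with
# C(0,t) = 1, C(L,t) = 0, C(x,0) = 0, computed from its integral definition.
import numpy as np

D, L, nx = 1.0, 1.0, 101
x = np.linspace(0.0, L, nx)
dx = x[1] - x[0]
dt = 0.4 * dx**2 / D                      # stable explicit time step
C = np.zeros(nx)
C[0] = 1.0
C_inf = 1.0 - x / L                       # steady state of this problem
tau = np.zeros(nx)

for _ in range(200_000):
    lap = (C[2:] - 2.0 * C[1:-1] + C[:-2]) / dx**2
    deficit = 1.0 - C[1:-1] / C_inf[1:-1]
    tau[1:-1] += dt * deficit             # accumulate the MAT integrand
    C[1:-1] += dt * D * lap               # explicit FTCS diffusion update
    if deficit.max() < 1e-8:              # transient effectively finished
        break

exact = x * (2.0 * L - x) / (6.0 * D)     # closed-form MAT for this case
print("max |tau - exact| on interior:", np.abs(tau[1:-1] - exact[1:-1]).max())
```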

  7. What motivates people to participate more in community-based coalitions?

    PubMed

    Wells, Rebecca; Ward, Ann J; Feinberg, Mark; Alexander, Jeffrey A

    2008-09-01

    The purpose of this study was to identify potential opportunities for improving member participation in community-based coalitions. We hypothesized that opportunities for influence and process competence would each foster higher levels of individual member participation. We tested these hypotheses in a sample of 818 members within 79 youth-oriented coalitions. Opportunities for influence were measured as members' perceptions of an inclusive board leadership style and members' reported committee roles. Coalition process competence was measured through member perceptions of strategic board directedness and meeting effectiveness. Members reported three types of participation within meetings as well as how much time they devoted to coalition business beyond meetings. Generalized linear models accommodated clustering of individuals within coalitions. Opportunities for influence were associated with individuals' participation both within and beyond meetings. Coalition process competence was not associated with participation. These results suggest that leadership inclusivity rather than process competence may best facilitate member participation.
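
    A hedged sketch of a generalized-linear-model analysis that accommodates clustering of members within coalitions, here via GEE with an exchangeable working correlation; the variable names and simulated data are placeholders, not the study's codebook, and GEE is one of several ways such clustering can be handled.

```python
# Sketch: GLM for member participation with clustering by coalition (GEE).
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 818  # members, clustered in 79 coalitions, as in the study's sample sizes
df = pd.DataFrame({
    "coalition_id": rng.integers(0, 79, n),
    "inclusive_leadership": rng.normal(size=n),     # hypothetical predictors
    "committee_role": rng.integers(0, 2, n),
    "strategic_directedness": rng.normal(size=n),
    "meeting_effectiveness": rng.normal(size=n),
})
df["participation"] = (0.4 * df.inclusive_leadership
                       + 0.3 * df.committee_role + rng.normal(0, 1, n))

model = smf.gee("participation ~ inclusive_leadership + committee_role + "
                "strategic_directedness + meeting_effectiveness",
                groups="coalition_id", data=df,
                cov_struct=sm.cov_struct.Exchangeable(),
                family=sm.families.Gaussian())
print(model.fit().summary())
```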

  8. Physiological Aldosterone Concentrations Are Associated with Alterations of Lipid Metabolism: Observations from the General Population.

    PubMed

    Hannich, M; Wallaschofski, H; Nauck, M; Reincke, M; Adolf, C; Völzke, H; Rettig, R; Hannemann, A

    2018-01-01

    Aldosterone and high-density lipoprotein cholesterol (HDL-C) are involved in many pathophysiological processes that contribute to the development of cardiovascular diseases. Previous work has suggested associations between aldosterone concentrations and certain components of lipid metabolism in the peripheral circulation, but data from the general population are sparse. We therefore aimed to assess the associations between aldosterone and HDL-C, low-density lipoprotein cholesterol (LDL-C), total cholesterol, triglycerides, and non-HDL-C in the general adult population. Data from 793 men and 938 women aged 25-85 years who participated in the first follow-up of the Study of Health in Pomerania were obtained. The associations of aldosterone with serum lipid concentrations were assessed in multivariable linear regression models adjusted for sex, age, body mass index (BMI), estimated glomerular filtration rate (eGFR), and HbA1c. The linear regression models showed statistically significant positive associations of aldosterone with LDL-C (β = 0.022, standard error = 0.010, p = 0.03) and non-HDL-C (β = 0.023, standard error = 0.009, p = 0.01), as well as an inverse association of aldosterone with HDL-C (β = -0.022, standard error = 0.011, p = 0.04). The present data show that plasma aldosterone is positively associated with LDL-C and non-HDL-C and inversely associated with HDL-C in the general population. Our data thus suggest that aldosterone concentrations within the physiological range may be related to alterations of lipid metabolism.

  9. Discrete integration of continuous Kalman filtering equations for time invariant second-order structural systems

    NASA Technical Reports Server (NTRS)

    Park, K. C.; Belvin, W. Keith

    1990-01-01

    A general form for the first-order representation of the continuous second-order linear structural-dynamics equations is introduced to derive a corresponding form of first-order continuous Kalman filtering equations. Time integration of the resulting equations is carried out via a set of linear multistep integration formulas. It is shown that a judicious combined selection of computational paths and the undetermined matrices introduced in the general form of the first-order linear structural systems leads to a class of second-order discrete Kalman filtering equations involving only symmetric sparse N x N solution matrices.
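
    For context, the sketch below builds the standard companion-form first-order representation of the second-order structural equations M q'' + C q' + K q = f(t); the general form with undetermined matrices discussed in the abstract contains this as one member of the family, and the example matrices are illustrative.

```python
# Standard companion-form first-order embedding of M q'' + C q' + K q = f(t).
import numpy as np

def first_order_form(M, C, K):
    """Return (A, B) with state x = [q, q'] so that x' = A x + B f."""
    n = M.shape[0]
    Minv = np.linalg.inv(M)
    A = np.block([[np.zeros((n, n)), np.eye(n)],
                  [-Minv @ K,        -Minv @ C]])
    B = np.vstack([np.zeros((n, n)), Minv])
    return A, B

M = np.diag([2.0, 1.0])                      # mass matrix (illustrative)
K = np.array([[4.0, -2.0], [-2.0, 2.0]])     # stiffness matrix
C = 0.05 * K                                 # light stiffness-proportional damping
A, B = first_order_form(M, C, K)
print("eigenvalues:", np.linalg.eigvals(A))  # lightly damped structural modes
```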

  10. Second-order discrete Kalman filtering equations for control-structure interaction simulations

    NASA Technical Reports Server (NTRS)

    Park, K. C.; Belvin, W. Keith; Alvin, Kenneth F.

    1991-01-01

    A general form for the first-order representation of the continuous, second-order linear structural dynamics equations is introduced in order to derive a corresponding form of first-order Kalman filtering equations (KFE). Time integration of the resulting first-order KFE is carried out via a set of linear multistep integration formulas. It is shown that a judicious combined selection of computational paths and the undetermined matrices introduced in the general form of the first-order linear structural systems leads to a class of second-order discrete KFE involving only symmetric N x N solution matrices.

  11. Comparison of Linear and Nonlinear Processing with Acoustic Vector Sensors

    DTIC Science & Technology

    2008-09-01

    One can write the general form of the time-invariant vector-sensor plane-wave response as v_m = V_m e^(i k · r_m) (2.21), with analogous expressions for the Cartesian components v_xm, v_ym, and v_zm. Using the vector geometry defined, the response of each component is given by V_xm = V_m cos θ and V_ym = V_m sin θ, and the pressure and velocity values are related to each other by the acoustic impedance ρc according to Equation (2.19), e.g., v_pm = V_pm e^(i k · r_pm) = P_m/(ρc).

  12. The Shock and Vibration Bulletin: Proceedings on the Symposium on ShocK and Vibration (52nd) Held in New Orleans, Louisiana on 26-28 October 1981. Part 3. Environmental Testing and Simulation, Flight Environments.

    DTIC Science & Technology

    1982-05-01

    The excerpt describes environmental test simulation in which the recording processes generally consisted of capturing the flight environment on a multi-channel tape recorder; the test item's response motion is passed through analog signal conditioning to a shock spectrum analyzer, and the electromagnetic exciter drive signal is generated from the recorded data while avoiding excessive actuator stroke.

  13. General Results in Optimal Control of Discrete-Time Nonlinear Stochastic Systems

    DTIC Science & Technology

    1988-01-01

    P. J. McLane, "Optimal Stochastic Control of Linear System. with State- and Control-Dependent Distur- bances," ZEEE Trans. 4uto. Contr., Vol. 16, No...Vol. 45, No. 1, pp. 359-362, 1987 (9] R. R. Mohler and W. J. Kolodziej, "An Overview of Stochastic Bilinear Control Processes," ZEEE Trans. Syst...34 J. of Math. anal. App.:, Vol. 47, pp. 156-161, 1974 [14) E. Yaz, "A Control Scheme for a Class of Discrete Nonlinear Stochastic Systems," ZEEE Trans

  14. Linearization instability for generic gravity in AdS spacetime

    NASA Astrophysics Data System (ADS)

    Altas, Emel; Tekin, Bayram

    2018-01-01

    In general relativity, perturbation theory about a background solution fails if the background spacetime has a Killing symmetry and a compact spacelike Cauchy surface. This failure, dubbed linearization instability, shows itself as the non-integrability of a perturbative infinitesimal deformation to a finite deformation of the background: the linearized field equations have spurious solutions which cannot be obtained from the linearization of exact solutions. In practice, one can demonstrate the failure of linear perturbation theory by showing that a certain quadratic (integral) constraint on the linearized solutions is not satisfied. For non-compact Cauchy surfaces the situation is different; for example, Minkowski space, which has a non-compact Cauchy surface, is linearization stable. Here we study linearization instability in generic metric theories of gravity in which Einstein's theory is modified with additional curvature terms. We show that, unlike the case of general relativity, some modified theories exhibit linearization instability about their anti-de Sitter backgrounds even when the Cauchy surface is non-compact. Recent D-dimensional critical gravity and three-dimensional chiral gravity are two such examples. This observation sheds light on the paradoxical behavior of vanishing conserved charges (mass, angular momenta) for non-vacuum solutions, such as black holes, in these theories.

  15. Dynamics of convulsive seizure termination and postictal generalized EEG suppression

    PubMed Central

    Bauer, Prisca R.; Thijs, Roland D.; Lamberts, Robert J.; Velis, Demetrios N.; Visser, Gerhard H.; Tolner, Else A.; Sander, Josemir W.; Lopes da Silva, Fernando H.; Kalitzin, Stiliyan N.

    2017-01-01

    It is not fully understood how seizures terminate and why some seizures are followed by a period of complete brain activity suppression, postictal generalized EEG suppression. This is clinically relevant, as there is a potential association between postictal generalized EEG suppression, cardiorespiratory arrest, and sudden death following a seizure. We combined human encephalographic seizure data with data from a computational model of seizures to elucidate the neuronal network dynamics underlying seizure termination and the postictal generalized EEG suppression state. A multi-unit computational neural mass model of epileptic seizure termination and postictal recovery was developed. The model provided three predictions that were validated in EEG recordings of 48 convulsive seizures from 48 subjects with refractory focal epilepsy (20 females, age range 15–61 years). The durations of the ictal and postictal generalized EEG suppression periods in human EEG followed gamma probability distributions indicative of a deterministic process (shape parameters 2.6 and 1.5, respectively), as predicted by the model. In the model and in humans, the time between two clonic bursts increased exponentially from the start of the clonic phase of the seizure. The terminal interclonic interval, calculated using the projected terminal value of the log-linear fit of the clonic frequency decrease, was correlated with the presence and duration of postictal suppression. The projected terminal interclonic interval explained 41% of the variation in postictal generalized EEG suppression duration (P < 0.02). Conversely, postictal generalized EEG suppression duration explained 34% of the variation in the last interclonic interval duration. Our findings suggest that postictal generalized EEG suppression is a separate brain state and that seizure termination is a plastic and autonomous process, reflected in the increasing duration of interclonic intervals, which determines the duration of postictal generalized EEG suppression. PMID:28073789

  16. Process Predictors of the Outcome of Group Drug Counseling

    PubMed Central

    Crits-Christoph, Paul; Johnson, Jennifer E.; Gibbons, Mary Beth Connolly; Gallop, Robert

    2012-01-01

    Objective: This study examined the relation of process variables to the outcome of group drug counseling, a commonly used community treatment, for cocaine dependence. Method: Videotaped group drug counseling sessions from 440 adult patients (23% female, 41% minority) were rated for member alliance, group cohesion, participation, self-disclosure, and positive and non-positive feedback and advice during the 6-month treatment of cocaine dependence. Average scores, session-level scores, and slopes of process scores were evaluated. Primary outcomes were monthly cocaine use (days using out of 30), next-session cocaine use, and duration of sustained abstinence from cocaine. Secondary outcomes were endorsement of 12-step philosophy and beliefs about substance abuse. Results: More positive alliances (with the counselor) were associated with reductions in days using cocaine per month and next-session cocaine use, and with increases in endorsement of 12-step philosophy. Patient self-disclosure about the past and degree of participation in the group were generally not predictive of group drug counseling outcomes. More advice from the counselor and other group members was consistently associated with poorer outcomes in all categories. Individual differences in changes in process variables over time (linear slopes) were generally not predictive of treatment outcomes. Conclusions: Some group behaviors widely believed to be associated with outcome, such as self-disclosure and participation, were not generally predictive of outcomes of group drug counseling, but alliance with the group counselor was positively associated, and advice giving negatively associated, with the outcome of treatments for cocaine dependence. PMID:23106760

  17. Nice Guys Finish Fast and Bad Guys Finish Last: Facilitatory vs. Inhibitory Interaction in Parallel Systems

    PubMed Central

    Eidels, Ami; Houpt, Joseph W.; Altieri, Nicholas; Pei, Lei; Townsend, James T.

    2011-01-01

    Systems Factorial Technology is a powerful framework for investigating the fundamental properties of human information processing such as architecture (i.e., serial or parallel processing) and capacity (how processing efficiency is affected by increased workload). The Survivor Interaction Contrast (SIC) and the Capacity Coefficient are effective measures in determining these underlying properties, based on response-time data. Each of the different architectures, under the assumption of independent processing, predicts a specific form of the SIC along with some range of capacity. In this study, we explored SIC predictions of discrete-state (Markov process) and continuous-state (Linear Dynamic) models that allow for certain types of cross-channel interaction. The interaction can be facilitatory or inhibitory: one channel can either facilitate, or slow down processing in its counterpart. Despite the relative generality of these models, the combination of the architecture-oriented plus the capacity oriented analyses provide for precise identification of the underlying system. PMID:21516183

  18. Nice Guys Finish Fast and Bad Guys Finish Last: Facilitatory vs. Inhibitory Interaction in Parallel Systems.

    PubMed

    Eidels, Ami; Houpt, Joseph W; Altieri, Nicholas; Pei, Lei; Townsend, James T

    2011-04-01

    Systems Factorial Technology is a powerful framework for investigating the fundamental properties of human information processing such as architecture (i.e., serial or parallel processing) and capacity (how processing efficiency is affected by increased workload). The Survivor Interaction Contrast (SIC) and the Capacity Coefficient are effective measures in determining these underlying properties, based on response-time data. Each of the different architectures, under the assumption of independent processing, predicts a specific form of the SIC along with some range of capacity. In this study, we explored SIC predictions of discrete-state (Markov process) and continuous-state (Linear Dynamic) models that allow for certain types of cross-channel interaction. The interaction can be facilitatory or inhibitory: one channel can either facilitate, or slow down processing in its counterpart. Despite the relative generality of these models, the combination of the architecture-oriented plus the capacity oriented analyses provide for precise identification of the underlying system.

  19. Recent work on material interface reconstruction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mosso, S.J.; Swartz, B.K.

    1997-12-31

    For the last 15 years, many Eulerian codes have relied on a series of piecewise linear interface reconstruction algorithms developed by David Youngs. In a typical Youngs' method, the material interfaces are reconstructed from nearby cell values of the volume fractions of each material. The interfaces are locally represented by linear segments in two dimensions and by pieces of planes in three dimensions. The first step in such a reconstruction is to approximate the local interface normal. In Youngs' 3D method, a local gradient of a cell-volume-fraction function is estimated and taken to be the local interface normal. A linear interface is then moved perpendicular to the now-known normal until the mass behind it matches the material volume fraction for the cell in question. But for distorted or nonorthogonal meshes, the gradient normal estimate does not accurately match that of linear material interfaces, and curved material interfaces are also poorly represented. The authors present some recent work on the computation of more accurate interface normals, without necessarily increasing the stencil size. Their estimate of the normal is made using an iterative process that, given mass fractions for nearby cells of known but arbitrary variable density, converges in 3 or 4 passes in practice (and quadratically, like Newton's method, in principle). The method reproduces a linear interface in both orthogonal and nonorthogonal meshes. The local linear approximation is generally 2nd-order accurate, with a 1st-order accurate normal for curved interfaces, in both two- and three-dimensional polyhedral meshes. Recent work demonstrating the interface reconstruction for curved surfaces is also discussed.
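
    A minimal 2D sketch of the gradient step of a Youngs-type reconstruction on a uniform orthogonal mesh, where it works well; the iterative, mesh-general normal refinement the authors describe is not reproduced here, and the volume-fraction field is a contrived example.

```python
# Youngs-type interface-normal estimate from cell volume fractions (2D).
import numpy as np

def youngs_normal(vf, i, j):
    """Estimate the unit interface normal in cell (i, j) from the
    central-difference gradient of the volume-fraction field vf."""
    gx = (vf[i + 1, j] - vf[i - 1, j]) / 2.0
    gy = (vf[i, j + 1] - vf[i, j - 1]) / 2.0
    g = np.array([gx, gy])
    return -g / (np.linalg.norm(g) + 1e-30)  # normal points out of the material

# Volume fractions for a straight 45-degree interface on a 4x4 unit-cell mesh
vf = np.array([[1.0,   1.0,   0.875, 0.125],
               [1.0,   0.875, 0.125, 0.0],
               [0.875, 0.125, 0.0,   0.0],
               [0.125, 0.0,   0.0,   0.0]])
print(youngs_normal(vf, 1, 1))  # ~ (0.707, 0.707), the exact normal direction
```

    For this orthogonal mesh the gradient estimate recovers the linear interface's normal exactly; the failure cases the abstract targets arise on distorted or nonorthogonal meshes.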

  20. Application of Design Methodologies for Feedback Compensation Associated with Linear Systems

    NASA Technical Reports Server (NTRS)

    Smith, Monty J.

    1996-01-01

    The work that follows is concerned with the application of design methodologies for feedback compensation associated with linear systems. In general, the intent is to provide a well-behaved closed-loop system in terms of stability and robustness (internal signals remain bounded under a certain amount of uncertainty) while simultaneously achieving an acceptable level of performance. The approach here has been to convert the closed-loop system and control synthesis problem into the interpolation setting. The interpolation formulation then serves as our mathematical representation of the design process. Lifting techniques have been used to solve the corresponding interpolation and control synthesis problems. Several applications using this multiobjective design methodology have been included to show the effectiveness of these techniques. In particular, the mixed H2/H-infinity performance criterion and its algorithm have been applied to several examples, including an F-18 HARV (High Angle of Attack Research Vehicle) for sensitivity performance.

  1. Fusion yield: Guderley model and Tsallis statistics

    NASA Astrophysics Data System (ADS)

    Haubold, H. J.; Kumar, D.

    2011-02-01

    The reaction rate probability integral is extended from the Maxwell-Boltzmann approach to a more general one by using the pathway model introduced by Mathai in 2005 (A pathway to matrix-variate gamma and normal densities. Linear Algebra Appl. 396, 317-328). The extended thermonuclear reaction rate is obtained in closed form via a Meijer G-function, which is represented as the solution of a homogeneous linear differential equation. A physical model for the hydrodynamical process in a fusion plasma, compressed by a laser-driven spherical shock wave, is used for evaluating the fusion energy integral by integrating the extended thermonuclear reaction rate over the temperature. The result obtained is compared with the standard fusion yield obtained by Haubold and John in 1981 (Analytical representation of the thermonuclear reaction rate and fusion energy production in a spherical plasma shock wave. Plasma Phys. 23, 399-411). An interpretation for the pathway parameter is also given.
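
    To make the pathway extension concrete, the sketch below numerically compares a Maxwell-Boltzmann non-resonant reaction-rate integrand with a pathway (Tsallis-type) deformation of its exponential factor. The deformed form [1 + (q-1)ax]^(-1/(q-1)) is an assumption modelled on the pathway density, the parameter values are arbitrary, and this is not the paper's closed-form G-function evaluation.

```python
# Numerical comparison of a Maxwell-Boltzmann reaction-rate integrand with a
# pathway-deformed version; q -> 1 recovers the Maxwell-Boltzmann case.
import numpy as np
from scipy.integrate import quad

a, b = 1.0, 5.0  # temperature and Coulomb-barrier parameters (arbitrary units)

def mb(x):
    """Standard non-resonant integrand exp(-a x - b / sqrt(x))."""
    return np.exp(-a * x - b / np.sqrt(x))

def pathway(x, q=1.1):
    """Pathway deformation of the thermal factor (assumed form)."""
    return (1.0 + (q - 1.0) * a * x) ** (-1.0 / (q - 1.0)) * np.exp(-b / np.sqrt(x))

I_mb, _ = quad(mb, 0, np.inf)
I_q, _ = quad(pathway, 0, np.inf, limit=200)
print("Maxwell-Boltzmann:", I_mb, " pathway (q=1.1):", I_q)
```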

  2. Fluorescent biopsy of biological tissues in differentiation of benign and malignant tumors of prostate

    NASA Astrophysics Data System (ADS)

    Trifoniuk, L. I.; Ushenko, Yu. A.; Sidor, M. I.; Minzer, O. P.; Gritsyuk, M. V.; Novakovskaya, O. Y.

    2014-08-01

    This work presents the results of an investigation into the diagnostic efficiency of a new azimuthally stable Mueller-matrix method for analyzing the coordinate distributions of laser autofluorescence of histological sections of biological tissues. A new model of generalized optical anisotropy of the protein networks of biological tissues is proposed in order to describe the processes of laser autofluorescence. The influence of complex mechanisms of both phase anisotropy (linear birefringence and optical activity) and linear (circular) dichroism is taken into account. The interconnections between the azimuthally stable Mueller-matrix elements characterizing laser autofluorescence and the different mechanisms of optical anisotropy are determined. A statistical analysis of the coordinate distributions of these Mueller-matrix rotation invariants is proposed. On this basis, quantitative criteria (statistical moments of the 1st to 4th order) are estimated for differentiating histological sections of uterus wall tumors - group 1 (dysplasia) and group 2 (adenocarcinoma).

  3. ADART: an adaptive algebraic reconstruction algorithm for discrete tomography.

    PubMed

    Maestre-Deusto, F Javier; Scavello, Giovanni; Pizarro, Joaquín; Galindo, Pedro L

    2011-08-01

    In this paper we suggest an algorithm based on the Discrete Algebraic Reconstruction Technique (DART) which is capable of computing high-quality reconstructions from substantially fewer projections than required for conventional continuous tomography. Adaptive DART (ADART) goes a step further than DART in reducing the number of unknowns of the associated linear system, achieving a significant reduction in the pixel error rate of reconstructed objects. The proposed methodology automatically adapts the border-definition criterion at each iteration, reducing the number of pixels belonging to the border, and consequently the number of unknowns in the general algebraic reconstruction linear system to be solved, this reduction being especially important at the final stage of the iterative process. Experimental results show that reconstruction errors are considerably reduced using ADART when compared to the original DART, both in clean and noisy environments.
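
    For orientation, here is the generic algebraic-reconstruction (Kaczmarz/ART) core on which DART-style discrete methods iterate, followed by a simple thresholding step; the adaptive border-definition logic of ADART itself is not reproduced, and the projection geometry is a random toy.

```python
# Generic Kaczmarz/ART sweep plus a DART-style segmentation step (toy example).
import numpy as np

def art_sweep(A, b, x, relax=0.5):
    """One relaxed Kaczmarz sweep over the rows of the system A x = b."""
    for ai, bi in zip(A, b):
        denom = ai @ ai
        if denom > 0:
            x += relax * (bi - ai @ x) / denom * ai
    return x

rng = np.random.default_rng(1)
x_true = rng.integers(0, 2, 64).astype(float)  # binary object (discrete tomography)
A = rng.random((40, 64))                       # toy projection geometry
b = A @ x_true
x = np.zeros(64)
for _ in range(50):
    x = art_sweep(A, b, x)
x_discrete = (x > 0.5).astype(float)           # segmentation to the known grey levels
print("pixel error rate:", np.mean(x_discrete != x_true))
```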

  4. Hyperentanglement concentration for polarization-spatial-time-bin hyperentangled photon systems with linear optics

    NASA Astrophysics Data System (ADS)

    Wang, Hong; Ren, Bao-Cang; Alzahrani, Faris; Hobiny, Aatef; Deng, Fu-Guo

    2017-10-01

    Hyperentanglement has significant applications in quantum information processing. Here we present an efficient hyperentanglement concentration protocol (hyper-ECP) for partially hyperentangled Bell states simultaneously entangled in polarization, spatial-mode and time-bin degrees of freedom (DOFs) with the parameter-splitting method, where the parameters of the partially hyperentangled Bell states are known to the remote parties. In this hyper-ECP, only one remote party is required to perform some local operations on the three DOFs of a photon, only the linear optical elements are considered, and the success probability can achieve the maximal value. Our hyper-ECP can be easily generalized to concentrate the N-photon partially hyperentangled Greenberger-Horne-Zeilinger states with known parameters, where the multiple DOFs have largely improved the channel capacity of long-distance quantum communication. All of these make our hyper-ECP more practical and useful in high-capacity long-distance quantum communication.

  5. Mathematical simulation of sound propagation in a flow channel with impedance walls

    NASA Astrophysics Data System (ADS)

    Osipov, A. A.; Reent, K. S.

    2012-07-01

    The paper considers the specifics of calculating tonal sound propagation in a flow channel with an installed sound-absorbing device. The calculation is based on numerical integration of the linearized unsteady Euler equations using a code developed by the authors around the so-called discontinuous Galerkin method. Within the linear theory of small perturbations, the effect of the sound-absorbing lining of the channel walls is described by a modified value of the acoustic impedance proposed by the authors, for which, under flow-channel conditions, the traditional classification of active and reactive linings in terms of the real and imaginary impedance values, respectively, remains valid. To stabilize the computation process, a generalized impedance boundary condition is proposed in which, in addition to the impedance value itself, additional parameters are introduced that characterize certain fictitious inertial and elastic properties of the impedance surface.

  6. Supra-Nanoparticle Functional Assemblies through Programmable Stacking

    DOE PAGES

    Tian, Cheng; Cordeiro, Marco Aurelio L.; Lhermitte, Julien; ...

    2017-05-25

    The quest for the by-design assembly of material and devices from nanoscale inorganic components is well recognized. Conventional self-assembly is often limited in its ability to control material morphology and structure simultaneously. We report a general method of assembling nanoparticles in a linear “pillar” morphology with regulated internal configurations. Our approach is inspired by supramolecular systems, where intermolecular stacking guides the assembly process to form diverse linear morphologies. Programmable stacking interactions were realized through incorporation of DNA coded recognition between the designed planar nanoparticle clusters. This resulted in the formation of multilayered pillar architectures with a well-defined internal nanoparticle organization. Furthermore, by controlling the number, position, size, and composition of the nanoparticles in each layer, a broad range of nanoparticle pillars were assembled and characterized in detail. In addition, we demonstrated the utility of this stacking assembly strategy for investigating plasmonic and electrical transport properties.

  7. Supra-Nanoparticle Functional Assemblies through Programmable Stacking

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tian, Cheng; Cordeiro, Marco Aurelio L.; Lhermitte, Julien

    The quest for the by-design assembly of material and devices from nanoscale inorganic components is well recognized. Conventional self-assembly is often limited in its ability to control material morphology and structure simultaneously. We report a general method of assembling nanoparticles in a linear “pillar” morphology with regulated internal configurations. Our approach is inspired by supramolecular systems, where intermolecular stacking guides the assembly process to form diverse linear morphologies. Programmable stacking interactions were realized through incorporation of DNA coded recognition between the designed planar nanoparticle clusters. This resulted in the formation of multilayered pillar architectures with a well-defined internal nanoparticle organization. Furthermore, by controlling the number, position, size, and composition of the nanoparticles in each layer, a broad range of nanoparticle pillars were assembled and characterized in detail. In addition, we demonstrated the utility of this stacking assembly strategy for investigating plasmonic and electrical transport properties.

  8. Supra-Nanoparticle Functional Assemblies through Programmable Stacking.

    PubMed

    Tian, Cheng; Cordeiro, Marco Aurelio L; Lhermitte, Julien; Xin, Huolin L; Shani, Lior; Liu, Mingzhao; Ma, Chunli; Yeshurun, Yosef; DiMarzio, Donald; Gang, Oleg

    2017-07-25

    The quest for the by-design assembly of material and devices from nanoscale inorganic components is well recognized. Conventional self-assembly is often limited in its ability to control material morphology and structure simultaneously. Here, we report a general method of assembling nanoparticles in a linear "pillar" morphology with regulated internal configurations. Our approach is inspired by supramolecular systems, where intermolecular stacking guides the assembly process to form diverse linear morphologies. Programmable stacking interactions were realized through incorporation of DNA coded recognition between the designed planar nanoparticle clusters. This resulted in the formation of multilayered pillar architectures with a well-defined internal nanoparticle organization. By controlling the number, position, size, and composition of the nanoparticles in each layer, a broad range of nanoparticle pillars were assembled and characterized in detail. In addition, we demonstrated the utility of this stacking assembly strategy for investigating plasmonic and electrical transport properties.

  9. Directionality volatility in electroencephalogram time series

    NASA Astrophysics Data System (ADS)

    Mansor, Mahayaudin M.; Green, David A.; Metcalfe, Andrew V.

    2016-06-01

    We compare time series of electroencephalograms (EEGs) from healthy volunteers with EEGs from subjects diagnosed with epilepsy. The EEG time series from the healthy group are recorded during awake state with their eyes open and eyes closed, and the records from subjects with epilepsy are taken from three different recording regions of pre-surgical diagnosis: hippocampal, epileptogenic and seizure zone. The comparisons for these 5 categories are in terms of deviations from linear time series models with constant variance Gaussian white noise error inputs. One feature investigated is directionality, and how this can be modelled by either non-linear threshold autoregressive models or non-Gaussian errors. A second feature is volatility, which is modelled by Generalized AutoRegressive Conditional Heteroskedasticity (GARCH) processes. Other features include the proportion of variability accounted for by time series models, and the skewness and the kurtosis of the residuals. The results suggest these comparisons may have diagnostic potential for epilepsy and provide early warning of seizures.
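
    A hedged sketch of the volatility part of such an analysis: fitting an AR-GARCH(1,1) model with the arch package to a simulated heavy-tailed stand-in for an EEG series. The data, model orders, and the use of the arch package are assumptions for illustration, not the paper's settings.

```python
# Fit an AR(2)-GARCH(1,1) model to a heavy-tailed simulated series.
import numpy as np
from arch import arch_model

rng = np.random.default_rng(2)
eeg = rng.standard_t(df=5, size=2000)  # heavy-tailed stand-in for EEG residuals

am = arch_model(eeg, mean="AR", lags=2, vol="GARCH", p=1, q=1)
res = am.fit(disp="off")
print(res.summary())
# alpha[1] + beta[1] close to 1 indicates highly persistent volatility
print("persistence:", res.params["alpha[1]"] + res.params["beta[1]"])
```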

  10. Propagation and Linear Mode Conversion of Magnetosonic and Electromagnetic Ion Cyclotron Waves in the Radiation Belts

    NASA Astrophysics Data System (ADS)

    Horne, R. B.; Yoshizumi, M.

    2017-12-01

    Magnetosonic waves and electromagnetic ion cyclotron (EMIC) waves are important for electron acceleration and loss from the radiation belts. It is generally understood that these waves are generated by unstable ion distributions that form during geomagnetically disturbed times. Here we show that magnetosonic waves could be a source of EMIC waves as a result of propagation and a process of linear mode conversion; the converse is also possible. We present ray tracing to show how magnetosonic (EMIC) waves launched with large (small) wave normal angles can reach a location where the wave normal angle is zero and the wave frequency equals the so-called cross-over frequency, whereupon energy can be converted from one mode to another without attenuation. While EMIC waves could be a source of magnetosonic waves below the cross-over frequency, magnetosonic waves could be a source of hydrogen band waves but not helium band waves.

  11. System of polarization correlometry of polycrystalline layers of urine in the differentiation stage of diabetes

    NASA Astrophysics Data System (ADS)

    Ushenko, Yu. O.; Pashkovskaya, N. V.; Marchuk, Y. F.; Dubolazov, O. V.; Savich, V. O.

    2015-08-01

    This work presents the results of an investigation into the diagnostic efficiency of a new azimuthally stable Mueller-matrix method for analyzing the coordinate distributions of laser autofluorescence of layers of biological liquids. A new model of generalized optical anisotropy of the protein networks of biological tissues is proposed in order to describe the processes of laser autofluorescence. The influence of complex mechanisms of both phase anisotropy (linear birefringence and optical activity) and linear (circular) dichroism is taken into account. The interconnections between the azimuthally stable Mueller-matrix elements characterizing laser autofluorescence and the different mechanisms of optical anisotropy are determined. A statistical analysis of the coordinate distributions of these Mueller-matrix rotation invariants is proposed. On this basis, quantitative criteria (statistical moments of the 1st to 4th order) are estimated for differentiating polycrystalline layers of human urine for the purpose of diagnosing and differentiating cholelithiasis with underlying chronic cholecystitis (group 1) and diabetes mellitus of degree II (group 2).

  12. Acoustooptic linear algebra processors - Architectures, algorithms, and applications

    NASA Technical Reports Server (NTRS)

    Casasent, D.

    1984-01-01

    Architectures, algorithms, and applications for systolic processors are described with attention to the realization of parallel algorithms on various optical systolic array processors. Systolic processors for matrices with special structure and matrices of general structure, and the realization of matrix-vector, matrix-matrix, and triple-matrix products and such architectures are described. Parallel algorithms for direct and indirect solutions to systems of linear algebraic equations and their implementation on optical systolic processors are detailed with attention to the pipelining and flow of data and operations. Parallel algorithms and their optical realization for LU and QR matrix decomposition are specifically detailed. These represent the fundamental operations necessary in the implementation of least squares, eigenvalue, and SVD solutions. Specific applications (e.g., the solution of partial differential equations, adaptive noise cancellation, and optimal control) are described to typify the use of matrix processors in modern advanced signal processing.

  13. Mueller-matrix of laser-induced autofluorescence of polycrystalline films of dried peritoneal fluid in diagnostics of endometriosis

    NASA Astrophysics Data System (ADS)

    Ushenko, Yuriy A.; Koval, Galina D.; Ushenko, Alexander G.; Dubolazov, Olexander V.; Ushenko, Vladimir A.; Novakovskaia, Olga Yu.

    2016-07-01

    This research presents investigation results of the diagnostic efficiency of an azimuthally stable Mueller-matrix method of analysis of laser autofluorescence of polycrystalline films of dried uterine cavity peritoneal fluid. A model of the generalized optical anisotropy of films of dried peritoneal fluid is proposed in order to define the processes of laser autofluorescence. The influence of complex mechanisms of both phase (linear and circular birefringence) and amplitude (linear and circular dichroism) anisotropies is taken into consideration. The interconnections between the azimuthally stable Mueller-matrix elements characterizing laser autofluorescence and different mechanisms of optical anisotropy are determined. The statistical analysis of coordinate distributions of such Mueller-matrix rotation invariants is proposed. Thereupon the quantitative criteria (statistic moments of the first to the fourth order) of differentiation of polycrystalline films of dried peritoneal fluid, group 1 (healthy donors) and group 2 (uterus endometriosis patients), are determined.

  14. Methods and means of Fourier-Stokes polarimetry and the spatial-frequency filtering of phase anisotropy manifestations in endometriosis diagnostics

    NASA Astrophysics Data System (ADS)

    Ushenko, A. G.; Dubolazov, O. V.; Ushenko, Vladimir A.; Ushenko, Yu. A.; Sakhnovskiy, M. Yu.; Prydiy, O. G.; Lakusta, I. I.; Novakovskaya, O. Yu.; Melenko, S. R.

    2016-12-01

    This research presents the results of an investigation into the diagnostic efficiency of a new azimuthally stable Mueller-matrix method for analyzing the coordinate distributions of laser autofluorescence of dried polycrystalline films of uterine cavity peritoneal fluid. A new model of generalized optical anisotropy of the protein networks of biological tissues is proposed in order to describe the processes of laser autofluorescence. The influence of complex mechanisms of both phase anisotropy (linear birefringence and optical activity) and linear (circular) dichroism is taken into account. The interconnections between the azimuthally stable Mueller-matrix elements characterizing laser autofluorescence and the different mechanisms of optical anisotropy are determined. A statistical analysis of the coordinate distributions of these Mueller-matrix rotation invariants is proposed. On this basis, quantitative criteria (statistical moments of the 1st to 4th order) are estimated for differentiating dried polycrystalline films of peritoneal fluid from group 1 (healthy donors) and group 2 (uterus endometriosis patients).

  15. Archimedes' law explains penetration of solids into granular media.

    PubMed

    Kang, Wenting; Feng, Yajie; Liu, Caishan; Blumenfeld, Raphael

    2018-03-16

    Understanding the response of granular matter to intrusion of solid objects is key to modelling many aspects of the behaviour of granular matter, including plastic flow. Here we report a general model for such a quasistatic process. Using a range of experiments, we first show that the relation between the penetration depth and the force resisting it, transiently nonlinear and then linear, is scalable to a universal form. We show that the gradient of the steady-state part, K_ϕ, depends only on the medium's internal friction angle, ϕ, and that it is nonlinear in μ = tan ϕ, in contrast to an existing conjecture. We further show that the intrusion of any convex solid shape satisfies a modified Archimedes' law and use this to: relate the zero-depth intercept of the linear part to K_ϕ and the intruder's cross-section; explain the curve's nonlinear part in terms of the stagnant zone's development.
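
    As a worked illustration of the force-depth relation described above, the sketch below simulates a transiently nonlinear then linear penetration curve and extracts the steady-state gradient (playing the role of K_ϕ) and the zero-depth intercept by fitting the linear tail; the functional form and all numbers are invented for illustration, not the paper's data.

```python
# Extract the steady-state gradient and zero-depth intercept from a simulated
# penetration force-depth curve by fitting its linear tail.
import numpy as np

rng = np.random.default_rng(5)
z = np.linspace(0.0, 0.1, 200)                 # penetration depth (m)
# Transient (saturating) part plus linear steady-state part plus noise:
F = 50.0 * z + 0.4 * (1.0 - np.exp(-z / 0.01)) + rng.normal(0, 0.02, 200)

tail = z > 0.05                                # past the nonlinear transient
K_phi, F0 = np.polyfit(z[tail], F[tail], 1)    # slope and zero-depth intercept
print(f"gradient ~ {K_phi:.1f} N/m, zero-depth intercept ~ {F0:.2f} N")
```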

  16. Expanding the occupational health methodology: A concatenated artificial neural network approach to model the burnout process in Chinese nurses.

    PubMed

    Ladstätter, Felix; Garrosa, Eva; Moreno-Jiménez, Bernardo; Ponsoda, Vicente; Reales Aviles, José Manuel; Dai, Junming

    2016-01-01

    Artificial neural networks are sophisticated modelling and prediction tools capable of extracting complex, non-linear relationships between predictor (input) and predicted (output) variables. This study explores this capacity by modelling non-linearities in the hardiness-modulated burnout process with a neural network. Specifically, two multi-layer feed-forward artificial neural networks are concatenated in an attempt to model the composite non-linear burnout process. Sensitivity analysis, a Monte Carlo-based global simulation technique, is then utilised to examine the first-order effects of the predictor variables on the burnout sub-dimensions and consequences. Results show that (1) this concatenated artificial neural network approach is feasible for modelling the burnout process, (2) sensitivity analysis is a fruitful method for studying the relative importance of predictor variables, and (3) the relationships among the variables involved in the development of burnout and its consequences are non-linear to different degrees. Many relationships among variables (e.g., stressors and strains) are not linear, yet researchers use linear methods such as Pearson correlation or linear regression to analyse them. Artificial neural network analysis is an innovative method for analysing non-linear relationships and, in combination with sensitivity analysis, is superior to linear methods.
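
    A minimal sketch of the concatenation idea with scikit-learn regressors: one network maps predictors to burnout sub-dimensions, a second maps those to consequences, and a crude perturbation probe stands in for the Monte Carlo sensitivity analysis. All variable names and data are simulated placeholders, not the study's Chinese-nurse dataset or its exact architecture.

```python
# Two concatenated feed-forward networks plus a crude sensitivity probe.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)
X = rng.normal(size=(500, 4))                           # stressors, hardiness (simulated)
burnout = np.tanh(X @ rng.normal(size=(4, 3)))          # 3 burnout sub-dimensions
outcomes = np.tanh(burnout @ rng.normal(size=(3, 2)))   # 2 consequences

net1 = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000).fit(X, burnout)
net2 = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000).fit(net1.predict(X), outcomes)

# First-order effect of predictor 0: perturb it and propagate through both nets
base = net2.predict(net1.predict(X)).mean(axis=0)
Xp = X.copy()
Xp[:, 0] += 1.0
print("effect of predictor 0:", net2.predict(net1.predict(Xp)).mean(axis=0) - base)
```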

  17. Improving the CD linearity and proximity performance of photomasks written on the Sigma7500-II DUV laser writer through embedded OPC

    NASA Astrophysics Data System (ADS)

    Österberg, Anders; Ivansen, Lars; Beyerl, Angela; Newman, Tom; Bowhill, Amanda; Sahouria, Emile; Schulze, Steffen

    2007-10-01

    Optical proximity correction (OPC) is widely used in wafer lithography to produce a printed image that best matches the design intent while optimizing CD control. OPC software applies corrections to the mask pattern data, but in general it does not compensate for the characteristics of the mask writer and mask process. The Sigma7500-II deep-UV laser mask writer projects the image of a programmable spatial light modulator (SLM) using partially coherent optics similar to wafer steppers, and the optical proximity effects of the mask writer are in principle correctable with established OPC methods. To enhance mask patterning, an embedded OPC function, LinearityEqualize™, has been developed for the Sigma7500-II that is transparent to the user and does not degrade mask throughput. It employs a Calibre™ rule-based OPC engine from Mentor Graphics, selected for the computational speed necessary for mask run-time execution. A multinode cluster computer applies optimized table-based CD corrections to polygonized pattern data, which is then fractured into an internal writer format for subsequent data processing. This embedded proximity correction flattens the linearity behavior across linewidths and pitches, with the aim of improving CD uniformity on production photomasks. Printing results show that CD linearity errors are reduced to below 5 nm for linewidths down to 200 nm, for clear and dark as well as isolated and dense features, and that sub-resolution assist features (SRAFs) are reliably printed down to 120 nm. This reduction of proximity effects for main mask features and the extension of the practical resolution for SRAFs expand the application space of DUV laser mask writing.

  18. Discriminative Learning of Receptive Fields from Responses to Non-Gaussian Stimulus Ensembles

    PubMed Central

    Meyer, Arne F.; Diepenbrock, Jan-Philipp; Happel, Max F. K.; Ohl, Frank W.; Anemüller, Jörn

    2014-01-01

    Analysis of sensory neurons' processing characteristics requires simultaneous measurement of presented stimuli and concurrent spike responses. The functional transformation from high-dimensional stimulus space to the binary space of spike and non-spike responses is commonly described with linear-nonlinear models, whose linear filter component describes the neuron's receptive field. From a machine learning perspective, this corresponds to the binary classification problem of discriminating spike-eliciting from non-spike-eliciting stimulus examples. The classification-based receptive field (CbRF) estimation method proposed here adapts a linear large-margin classifier to optimally predict experimental stimulus-response data and subsequently interprets learned classifier weights as the neuron's receptive field filter. Computational learning theory provides a theoretical framework for learning from data and guarantees optimality in the sense that the risk of erroneously assigning a spike-eliciting stimulus example to the non-spike class (and vice versa) is minimized. Efficacy of the CbRF method is validated with simulations and for auditory spectro-temporal receptive field (STRF) estimation from experimental recordings in the auditory midbrain of Mongolian gerbils. Acoustic stimulation is performed with frequency-modulated tone complexes that mimic properties of natural stimuli, specifically non-Gaussian amplitude distribution and higher-order correlations. Results demonstrate that the proposed approach successfully identifies correct underlying STRFs, even in cases where second-order methods based on the spike-triggered average (STA) do not. Applied to small data samples, the method is shown to converge on smaller amounts of experimental recordings and with lower estimation variance than the generalized linear model and recent information theoretic methods. Thus, CbRF estimation may prove useful for investigation of neuronal processes in response to natural stimuli and in settings where rapid adaptation is induced by experimental design. PMID:24699631

  19. Discriminative learning of receptive fields from responses to non-Gaussian stimulus ensembles.

    PubMed

    Meyer, Arne F; Diepenbrock, Jan-Philipp; Happel, Max F K; Ohl, Frank W; Anemüller, Jörn

    2014-01-01

    Analysis of sensory neurons' processing characteristics requires simultaneous measurement of presented stimuli and concurrent spike responses. The functional transformation from high-dimensional stimulus space to the binary space of spike and non-spike responses is commonly described with linear-nonlinear models, whose linear filter component describes the neuron's receptive field. From a machine learning perspective, this corresponds to the binary classification problem of discriminating spike-eliciting from non-spike-eliciting stimulus examples. The classification-based receptive field (CbRF) estimation method proposed here adapts a linear large-margin classifier to optimally predict experimental stimulus-response data and subsequently interprets learned classifier weights as the neuron's receptive field filter. Computational learning theory provides a theoretical framework for learning from data and guarantees optimality in the sense that the risk of erroneously assigning a spike-eliciting stimulus example to the non-spike class (and vice versa) is minimized. Efficacy of the CbRF method is validated with simulations and for auditory spectro-temporal receptive field (STRF) estimation from experimental recordings in the auditory midbrain of Mongolian gerbils. Acoustic stimulation is performed with frequency-modulated tone complexes that mimic properties of natural stimuli, specifically non-Gaussian amplitude distribution and higher-order correlations. Results demonstrate that the proposed approach successfully identifies correct underlying STRFs, even in cases where second-order methods based on the spike-triggered average (STA) do not. Applied to small data samples, the method is shown to converge on smaller amounts of experimental recordings and with lower estimation variance than the generalized linear model and recent information theoretic methods. Thus, CbRF estimation may prove useful for investigation of neuronal processes in response to natural stimuli and in settings where rapid adaptation is induced by experimental design.
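
    The CbRF recipe lends itself to a compact sketch: train a linear large-margin classifier to separate spike-eliciting from non-spike-eliciting stimulus frames and read off the weight vector as the receptive-field estimate. The LN-model neuron and Laplacian (non-Gaussian) stimulus ensemble below are simulated assumptions, not the gerbil recordings.

```python
# Classification-based receptive field estimation with a linear SVM.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(4)
rf_true = np.exp(-np.linspace(-2, 2, 40) ** 2)       # Gaussian-bump "STRF"
stim = rng.laplace(size=(5000, 40))                  # non-Gaussian stimulus ensemble
p_spike = 1.0 / (1.0 + np.exp(-(stim @ rf_true - 2.0)))  # LN-model neuron
spikes = rng.random(5000) < p_spike

clf = LinearSVC(C=0.1, max_iter=10_000).fit(stim, spikes)
rf_est = clf.coef_.ravel()                           # classifier weights = RF filter
print(f"correlation with true filter: {np.corrcoef(rf_est, rf_true)[0, 1]:.2f}")
```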

  20. Study on sampling of continuous linear system based on generalized Fourier transform

    NASA Astrophysics Data System (ADS)

    Li, Huiguang

    2003-09-01

    In the study of signals and systems, a signal's spectrum and a system's frequency characteristic can be discussed through the Fourier Transform (FT) and the Laplace Transform (LT). However, some singular signals, such as the impulse function and the signum signal, are neither Riemann- nor Lebesgue-integrable; in mathematics they are treated as generalized functions. This paper introduces a new definition, the Generalized Fourier Transform (GFT), and discusses generalized functions, the Fourier Transform, and the Laplace Transform within a unified framework. For sampled continuous linear systems, the paper proposes a new method to judge whether the spectrum overlaps after the generalized Fourier transform. Causal and non-causal systems are studied, and a sampling method that maintains the system's dynamic performance is presented. The results apply to both ordinary sampling and non-Nyquist sampling, and they also bear on the discretization of continuous linear systems and on non-Nyquist sampling of signals and systems. In particular, the condition for ensuring controllability and observability of MIMO continuous systems in references 13 and 14 is an application example of the present results.

  1. A methodology for evaluation of parent-mutant competition using a generalized non-linear ecosystem model

    Treesearch

    Raymond L. Czaplewski

    1973-01-01

    A generalized, non-linear population dynamics model of an ecosystem is used to investigate the direction of selective pressures upon a mutant by studying the competition between parent and mutant populations. The model has the advantages of considering selection as operating on the phenotype, of retaining the interaction of the mutant population with the ecosystem as a...

  2. Thermal Density Functional Theory: Time-Dependent Linear Response and Approximate Functionals from the Fluctuation-Dissipation Theorem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pribram-Jones, Aurora; Grabowski, Paul E.; Burke, Kieron

    We show that the van Leeuwen proof of linear-response time-dependent density functional theory (TDDFT) generalizes to thermal ensembles. This allows the generalization to finite temperature of the Gross-Kohn relation, the exchange-correlation kernel of TDDFT, and the fluctuation-dissipation theorem for DFT. It also yields a natural method for generating new thermal exchange-correlation approximations.

  3. Symposium on General Linear Model Approach to the Analysis of Experimental Data in Educational Research (Athens, Georgia, June 29-July 1, 1967). Final Report.

    ERIC Educational Resources Information Center

    Bashaw, W. L., Ed.; Findley, Warren G., Ed.

    This volume contains the five major addresses and subsequent discussion from the Symposium on the General Linear Models Approach to the Analysis of Experimental Data in Educational Research, which was held in 1967 in Athens, Georgia. The symposium was designed to produce systematic information, including new methodology, for dissemination to the…

  4. Developing a Measure of General Academic Ability: An Application of Maximal Reliability and Optimal Linear Combination to High School Students' Scores

    ERIC Educational Resources Information Center

    Dimitrov, Dimiter M.; Raykov, Tenko; AL-Qataee, Abdullah Ali

    2015-01-01

    This article is concerned with developing a measure of general academic ability (GAA) for high school graduates who apply to colleges, as well as with the identification of optimal weights of the GAA indicators in a linear combination that yields a composite score with maximal reliability and maximal predictive validity, employing the framework of…

  5. Thermal Density Functional Theory: Time-Dependent Linear Response and Approximate Functionals from the Fluctuation-Dissipation Theorem

    DOE PAGES

    Pribram-Jones, Aurora; Grabowski, Paul E.; Burke, Kieron

    2016-06-08

    We show that the van Leeuwen proof of linear-response time-dependent density functional theory (TDDFT) generalizes to thermal ensembles. This allows the generalization to finite temperature of the Gross-Kohn relation, the exchange-correlation kernel of TDDFT, and the fluctuation-dissipation theorem for DFT. It also yields a natural method for generating new thermal exchange-correlation approximations.

  6. Artificial Intelligence Methodologies in Flight Related Differential Game, Control and Optimization Problems

    DTIC Science & Technology

    1993-01-31

    Table-of-contents excerpts: Controllability and Observability; Separation of Learning and Control; Linearization via Transformation of Coordinates and Nonlinear Feedback (Main Result; Discussion); Basic Structure of a NLM; General Structure of NNLM; Linear System.

  7. A Constrained Linear Estimator for Multiple Regression

    ERIC Educational Resources Information Center

    Davis-Stober, Clintin P.; Dana, Jason; Budescu, David V.

    2010-01-01

    "Improper linear models" (see Dawes, Am. Psychol. 34:571-582, "1979"), such as equal weighting, have garnered interest as alternatives to standard regression models. We analyze the general circumstances under which these models perform well by recasting a class of "improper" linear models as "proper" statistical models with a single predictor. We…

  8. UCODE_2005 and six other computer codes for universal sensitivity analysis, calibration, and uncertainty evaluation constructed using the JUPITER API

    USGS Publications Warehouse

    Poeter, Eileen E.; Hill, Mary C.; Banta, Edward R.; Mehl, Steffen; Christensen, Steen

    2006-01-01

    This report documents the computer codes UCODE_2005 and six post-processors. Together the codes can be used with existing process models to perform sensitivity analysis, data needs assessment, calibration, prediction, and uncertainty analysis. Any process model or set of models can be used; the only requirements are that models have numerical (ASCII or text only) input and output files, that the numbers in these files have sufficient significant digits, that all required models can be run from a single batch file or script, and that simulated values are continuous functions of the parameter values. Process models can include pre-processors and post-processors as well as one or more models related to the processes of interest (physical, chemical, and so on), making UCODE_2005 extremely powerful. An estimated parameter can be a quantity that appears in the input files of the process model(s), or a quantity used in an equation that produces a value that appears in the input files. In the latter situation, the equation is user-defined. UCODE_2005 can compare observations and simulated equivalents. The simulated equivalents can be any simulated value written in the process-model output files or can be calculated from simulated values with user-defined equations. The quantities can be model results or dependent variables. For example, for ground-water models they can be heads, flows, concentrations, and so on. Prior, or direct, information on estimated parameters can also be considered. Statistics are calculated to quantify the comparison of observations and simulated equivalents, including a weighted least-squares objective function. In addition, data-exchange files are produced that facilitate graphical analysis. UCODE_2005 can be used fruitfully in model calibration through its sensitivity analysis capabilities and its ability to estimate parameter values that result in the best possible fit to the observations. Parameters are estimated using nonlinear regression: a weighted least-squares objective function is minimized with respect to the parameter values using a modified Gauss-Newton method or a double-dogleg technique. Sensitivities needed for the method can be read from files produced by process models that can calculate sensitivities, such as MODFLOW-2000, or can be calculated by UCODE_2005 using a more general, but less accurate, forward- or central-difference perturbation technique. Problems resulting from inaccurate sensitivities, and solutions related to the perturbation techniques, are discussed in the report. Statistics are calculated and printed for use in (1) diagnosing inadequate data and identifying parameters that probably cannot be estimated; (2) evaluating estimated parameter values; and (3) evaluating how well the model represents the simulated processes. Results from UCODE_2005 and the codes RESIDUAL_ANALYSIS and RESIDUAL_ANALYSIS_ADV can be used to evaluate how accurately the model represents the processes it simulates. Results from LINEAR_UNCERTAINTY can be used to quantify the uncertainty of model simulated values if the model is sufficiently linear. Results from MODEL_LINEARITY and MODEL_LINEARITY_ADV can be used to evaluate model linearity and, thereby, the accuracy of the LINEAR_UNCERTAINTY results. UCODE_2005 can also be used to calculate nonlinear confidence and prediction intervals, which quantify the uncertainty of model simulated values when the model is not linear. CORFAC_PLUS can be used to produce factors that allow intervals to account for model intrinsic nonlinearity and small-scale variations in system characteristics that are not explicitly accounted for in the model or the observation weighting. The six post-processing programs are independent of UCODE_2005 and can use the results of other programs that produce the required data-exchange files. UCODE_2005 and the other six codes are intended for use on any computer operating system. The programs con
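
    As an illustrative aside (not part of the original report): a minimal NumPy sketch of the estimation step described above, a weighted Gauss-Newton update with central-difference perturbation sensitivities; the model function and all names here are hypothetical stand-ins for a process-model run.

        import numpy as np

        def gauss_newton(model, p, y_obs, w, n_iter=10, rel_step=1e-6):
            # Minimize the weighted least-squares objective sum(w * (y_obs - model(p))**2).
            for _ in range(n_iter):
                r = y_obs - model(p)                          # residual vector
                J = np.empty((y_obs.size, p.size))
                for j in range(p.size):                       # central-difference sensitivities
                    h = rel_step * max(1.0, abs(p[j]))
                    dp = np.zeros_like(p)
                    dp[j] = h
                    J[:, j] = (model(p + dp) - model(p - dp)) / (2.0 * h)
                A = J.T @ (w[:, None] * J)                    # normal-equation matrix
                g = J.T @ (w * r)
                p = p + np.linalg.solve(A, g)                 # Gauss-Newton step
            return p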

  9. Query construction, entropy, and generalization in neural-network models

    NASA Astrophysics Data System (ADS)

    Sollich, Peter

    1994-05-01

    We study query construction algorithms, which aim at improving the generalization ability of systems that learn from examples by choosing optimal, nonredundant training sets. We set up a general probabilistic framework for deriving such algorithms from the requirement of optimizing a suitable objective function; specifically, we consider the objective functions entropy (or information gain) and generalization error. For two learning scenarios, the high-low game and the linear perceptron, we evaluate the generalization performance obtained by applying the corresponding query construction algorithms and compare it to training on random examples. We find qualitative differences between the two scenarios due to the different structure of the underlying rules (nonlinear and "noninvertible" versus linear); in particular, for the linear perceptron, random examples lead to the same generalization ability as a sequence of queries in the limit of an infinite number of examples. We also investigate learning algorithms which are ill matched to the learning environment and find that, in this case, minimum entropy queries can in fact yield a lower generalization ability than random examples. Finally, we study the efficiency of single queries and its dependence on the learning history, i.e., on whether the previous training examples were generated randomly or by querying, and the difference between globally and locally optimal query construction.
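
    A toy illustration (not from the paper) of why midpoint queries maximize information gain in the high-low game named above: each query halves the version space of consistent thresholds, so the uncertainty shrinks as 2**-n, whereas random examples shrink it only inversely in n. The threshold value here is arbitrary.

        lo, hi, t = 0.0, 1.0, 0.37            # version space [lo, hi]; hidden threshold t
        for _ in range(20):
            q = 0.5 * (lo + hi)               # maximum-entropy (one-bit) query
            if q > t:                         # oracle answers "high"
                hi = q
            else:                             # oracle answers "low"
                lo = q
        print(hi - lo)                        # ~2**-20: remaining uncertainty after 20 queries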

  10. Key-Generation Algorithms for Linear Piece In Hand Matrix Method

    NASA Astrophysics Data System (ADS)

    Tadaki, Kohtaro; Tsujii, Shigeo

    The linear Piece In Hand (PH, for short) matrix method with random variables was proposed in our former work. It is a general prescription that can be applied to any type of multivariate public-key cryptosystem (MPKC) for the purpose of enhancing its security. Actually, we showed, in an experimental manner, that the linear PH matrix method with random variables can certainly enhance the security of HFE against the Gröbner basis attack, where HFE is one of the major variants of multivariate public-key cryptosystems. In 1998 Patarin, Goubin, and Courtois introduced the plus method as a general prescription which aims to enhance the security of any given MPKC, just like the linear PH matrix method with random variables. In this paper we prove the equivalence between the plus method and the primitive linear PH matrix method, which was introduced in our previous work to explain the notion of the PH matrix method in general in an illustrative manner, and not for practical use in enhancing the security of any given MPKC. Based on this equivalence, we show that the linear PH matrix method with random variables has a substantial advantage over the plus method with respect to security enhancement. In the linear PH matrix method with random variables, three matrices, including the PH matrix, play a central role in the secret key and public key. In this paper, we clarify how to generate these matrices and present two probabilistic polynomial-time algorithms for doing so. In particular, the second one has a concise form, and is obtained as a byproduct of the proof of the equivalence between the plus method and the primitive linear PH matrix method.

  11. Signal processing methods for in-situ creep specimen monitoring

    NASA Astrophysics Data System (ADS)

    Guers, Manton J.; Tittmann, Bernhard R.

    2018-04-01

    Previous work investigated using guided waves for monitoring creep deformation during accelerated life testing. The basic objective was to relate observed changes in the time-of-flight to changes in the environmental temperature and specimen gage length. The work presented in this paper investigated several signal processing strategies for possible application in the in-situ monitoring system. Signal processing methods for both group velocity (wave-packet envelope) and phase velocity (peak tracking) time-of-flight were considered. Although the Analytic Envelope found via the Hilbert transform is commonly applied for group velocity measurements, erratic behavior in the indicated time-of-flight was observed when this technique was applied to the in-situ data. The peak tracking strategies tested had generally linear trends, and tracking local minima in the raw waveform ultimately showed the most consistent results.
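
    A sketch (with hypothetical signal arrays and window parameters) contrasting the two time-of-flight measures discussed above: the analytic-envelope peak via SciPy's Hilbert transform, and tracking a local minimum in the raw waveform.

        import numpy as np
        from scipy.signal import hilbert

        def envelope_tof(sig, fs):
            env = np.abs(hilbert(sig))                 # analytic envelope (group velocity)
            return np.argmax(env) / fs

        def peak_tracking_tof(sig, fs, i0, half_win=20):
            lo = max(i0 - half_win, 0)                 # search window around last known peak
            seg = sig[lo:i0 + half_win]
            return (lo + np.argmin(seg)) / fs          # track a local minimum (phase velocity)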

  12. Juvenile Scleroderma

    MedlinePlus

    ... morphea, linear scleroderma, and scleroderma en coup de sabre. Each type can be subdivided further and some ... described for morphea. Linear scleroderma en coup de sabre is the term generally applied when children have ...

  13. How are learning strategies reflected in the eyes? Combining results from self-reports and eye-tracking.

    PubMed

    Catrysse, Leen; Gijbels, David; Donche, Vincent; De Maeyer, Sven; Lesterhuis, Marije; Van den Bossche, Piet

    2018-03-01

    Up until now, empirical studies in the Student Approaches to Learning field have mainly focused on the use of self-report instruments, such as interviews and questionnaires, to uncover differences in students' general preferences for learning strategies, and have focused less on task-specific and online measures. This study aimed at extending current research on students' learning strategies by combining general and task-specific measurements of students' learning strategies using both offline and online measures. We want to clarify how students process learning contents and to what extent this is related to their self-reported learning strategies. Twenty students with different generic learning profiles (according to self-report questionnaires) read an expository text, while their eye movements were registered, to answer questions on the content afterwards. Eye-tracking data were analysed with generalized linear mixed-effects models. The results indicate that students with an all-high profile, combining both deep and surface learning strategies, spend more time on rereading the text than students with an all-low profile, who score low on both learning strategies. This study showed that we can use eye-tracking to distinguish very strategic students, characterized by their use of cognitive processing and regulation strategies, from low-strategic students, characterized by a lack of such strategies. These students processed the expository text in line with how they self-reported. © 2017 The British Psychological Society.
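
    A sketch of the kind of mixed-effects fit described above, using statsmodels as a stand-in (the study used generalized linear mixed-effects models; the file and column names here are hypothetical):

        import pandas as pd
        import statsmodels.formula.api as smf

        df = pd.read_csv("eyetracking.csv")            # one row per student x text segment
        # Fixed effect of learning profile on rereading time; random intercept per student.
        fit = smf.mixedlm("rereading_time ~ profile", df, groups=df["student"]).fit()
        print(fit.summary())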

  14. General Slowing and Education Mediate Task Switching Performance Across the Life-Span

    PubMed Central

    Moretti, Luca; Semenza, Carlo; Vallesi, Antonino

    2018-01-01

    Objective: This study considered the potential role of both protective factors (cognitive reserve, CR) and adverse ones (general slowing) in modulating cognitive flexibility across the adult life-span. Method: Ninety-eight individuals performed a task-switching (TS) paradigm in which we manipulated the timing between the cue and the target. Working memory demands were minimized by using transparent cues. Additionally, indices of cognitive integrity, depression, processing speed and different CR dimensions were collected and used in linear models accounting for TS performance under the different time constraints. Results: The main results showed similar mixing costs and higher switching costs in older adults, with an overall age-dependent effect of general slowing on these costs. The link between processing speed and TS performance was attenuated when participants had more time to prepare. Among the different CR indices, only formal education was associated with reduced switch costs under time pressure. Discussion: Even though CR is often operationalized as a unitary construct, the present research confirms the benefits of using tools designed to distinguish between different CR dimensions. Furthermore, our results provide empirical support for the assumption that the influence of processing speed on executive performance depends on time constraints. Finally, it is suggested that whether age differences appear in terms of switch or mixing costs depends on working memory demands (which were low in our tasks with transparent cues). PMID:29780341

  15. Space Trajectories Error Analysis (STEAP) Programs. Volume 1: Analytic manual, update

    NASA Technical Reports Server (NTRS)

    1971-01-01

    Manual revisions are presented for the modified and expanded STEAP series. The STEAP 2 is composed of three independent but related programs: NOMAL for the generation of n-body nominal trajectories performing a number of deterministic guidance events; ERRAN for the linear error analysis and generalized covariance analysis along specific targeted trajectories; and SIMUL for testing the mathematical models used in the navigation and guidance process. The analytic manual provides general problem description, formulation, and solution and the detailed analysis of subroutines. The programmers' manual gives descriptions of the overall structure of the programs as well as the computational flow and analysis of the individual subroutines. The user's manual provides information on the input and output quantities of the programs. These are updates to N69-36472 and N69-36473.

  16. Controllers, observers, and applications thereof

    NASA Technical Reports Server (NTRS)

    Gao, Zhiqiang (Inventor); Zhou, Wankun (Inventor); Miklosovic, Robert (Inventor); Radke, Aaron (Inventor); Zheng, Qing (Inventor)

    2011-01-01

    Controller scaling and parameterization are described. Techniques that can be improved by employing the scaling and parameterization include, but are not limited to, controller design, tuning and optimization. The scaling and parameterization methods described here apply to transfer-function-based controllers, including PID controllers. The parameterization methods also apply to state-feedback and state-observer-based controllers, as well as to controllers based on linear active disturbance rejection control (ADRC). Parameterization simplifies the use of ADRC. A discrete extended state observer (DESO) and a generalized extended state observer (GESO) are described. They improve the performance of the ESO and therefore of ADRC. A tracking control algorithm is also described that improves the performance of the ADRC controller. A general algorithm is described for applying ADRC to multi-input multi-output systems. Several specific applications of the control systems and processes are disclosed.
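
    An illustrative discrete linear extended state observer for a second-order plant, using the bandwidth parameterization commonly associated with ADRC (a sketch under assumed gains and names, not the patented method):

        import numpy as np

        def eso_step(z, y, u, dt, b0, wo):
            # z = [output estimate, derivative estimate, total-disturbance estimate]
            l1, l2, l3 = 3 * wo, 3 * wo**2, wo**3      # gains from observer bandwidth wo
            e = y - z[0]                               # estimation error
            dz = np.array([z[1] + l1 * e,
                           z[2] + b0 * u + l2 * e,
                           l3 * e])
            return z + dt * dz                         # forward-Euler discretization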

  17. A vine copula mixed effect model for trivariate meta-analysis of diagnostic test accuracy studies accounting for disease prevalence.

    PubMed

    Nikoloulopoulos, Aristidis K

    2017-10-01

    A bivariate copula mixed model has been recently proposed to synthesize diagnostic test accuracy studies and it has been shown that it is superior to the standard generalized linear mixed model in this context. Here, we call trivariate vine copulas to extend the bivariate meta-analysis of diagnostic test accuracy studies by accounting for disease prevalence. Our vine copula mixed model includes the trivariate generalized linear mixed model as a special case and can also operate on the original scale of sensitivity, specificity, and disease prevalence. Our general methodology is illustrated by re-analyzing the data of two published meta-analyses. Our study suggests that there can be an improvement on trivariate generalized linear mixed model in fit to data and makes the argument for moving to vine copula random effects models especially because of their richness, including reflection asymmetric tail dependence, and computational feasibility despite their three dimensionality.

  18. Generalized Clifford Algebras as Algebras in Suitable Symmetric Linear Gr-Categories

    NASA Astrophysics Data System (ADS)

    Cheng, Tao; Huang, Hua-Lin; Yang, Yuping

    2016-01-01

    By viewing Clifford algebras as algebras in some suitable symmetric Gr-categories, Albuquerque and Majid were able to give a new derivation of some well known results about Clifford algebras and to generalize them. Along the same line, Bulacu observed that Clifford algebras are weak Hopf algebras in the aforementioned categories and obtained other interesting properties. The aim of this paper is to study generalized Clifford algebras in a similar manner and extend the results of Albuquerque, Majid and Bulacu to the generalized setting. In particular, by taking full advantage of the gauge transformations in symmetric linear Gr-categories, we derive the decomposition theorem and provide categorical weak Hopf structures for generalized Clifford algebras in a conceptual and simpler manner.

  19. Scaling of postinjection-induced seismicity: An approach to assess hydraulic fracturing related processes

    NASA Astrophysics Data System (ADS)

    Johann, Lisa; Dinske, Carsten; Shapiro, Serge

    2017-04-01

    Fluid injections into unconventional reservoirs have become a standard for the enhancement of fluid-mobility parameters. Microseismic activity during and after the injection can frequently be directly associated with subsurface fluid injections. Previous studies demonstrate that postinjection-induced seismicity has two important characteristics: the triggering front, which corresponds to early and distant events and envelops the farthest induced events, and the back front, which describes the lower boundary of the seismic cloud and envelops the aseismic domain evolving around the source after the injection stops. Much research has been conducted in recent years to understand seismicity-related processes. For this work, we follow the assumption that the diffusion of pore-fluid pressure is the dominant triggering mechanism. Based on Terzaghi's concept of an effective normal stress, the injection of fluids leads to increasing pressures, which in turn reduce the effective normal stress and lead to sliding along pre-existing critically stressed and favourably oriented fractures and cracks. However, in many situations, the spatio-temporal signatures of induced events are governed by a rather non-linear process of pore-fluid pressure diffusion, in which the hydraulic diffusivity becomes pressure-dependent. This is, for example, the case during hydraulic fracturing, where hydraulic transport properties are significantly enhanced. For a better understanding of processes related to postinjection-induced seismicity, we analytically describe the temporal behaviour of triggering and back fronts. We introduce a scaling law which shows that postinjection-induced events are sensitive to the degree of non-linearity and to the Euclidean dimension of the seismic cloud (see Johann et al., 2016, JGR). To validate the theory, we implement comprehensive modelling of non-linear pore-fluid pressure diffusion in 3D. We solve numerically the non-linear diffusion equation with a hydraulic diffusivity that depends on pressure as a power law, and generate catalogues of synthetic seismicity. We study spatio-temporal features of the seismic clouds and compare the results to theoretical values predicted by the new scaling law. Subsequently, we apply the scaling relation to real hydraulic fracturing and Enhanced Geothermal System data. Our results show that the derived scaling relations describe synthetic and real data well. Thus, the methodology can be used to obtain hydraulic reservoir properties and can contribute significantly to a general understanding of injection-related processes as well as to hazard assessment.
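
    A minimal one-dimensional sketch (not the authors' 3-D code) of pore-pressure diffusion with a power-law pressure-dependent diffusivity, the non-linearity discussed above; all parameters are arbitrary:

        import numpy as np

        def diffuse(p, dx, dt, d0, n, steps):
            # Explicit solver for dp/dt = d/dx( D(p) * dp/dx ), with D(p) = d0 * p**n.
            # Injection can be mimicked by holding p[0] at an elevated value.
            for _ in range(steps):
                D = d0 * p ** n                        # pressure-dependent diffusivity
                Di = 0.5 * (D[1:] + D[:-1])            # diffusivity at cell interfaces
                flux = Di * np.diff(p) / dx
                p[1:-1] += dt / dx * np.diff(flux)     # update interior; boundaries fixed
            return p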

  20. A Block Preconditioned Conjugate Gradient-type Iterative Solver for Linear Systems in Thermal Reservoir Simulation

    NASA Astrophysics Data System (ADS)

    Betté, Srinivas; Diaz, Julio C.; Jines, William R.; Steihaug, Trond

    1986-11-01

    A preconditioned residual-norm-reducing iterative solver is described. Based on a truncated form of the generalized-conjugate-gradient method for nonsymmetric systems of linear equations, the iterative scheme is very effective for linear systems generated in reservoir simulation of thermal oil recovery processes. As a consequence of employing an adaptive implicit finite-difference scheme to solve the model equations, the number of variables per cell-block varies dynamically over the grid. The data structure allows for 5- and 9-point operators in the areal model, 5-point in the cross-sectional model, and 7- and 11-point operators in the three-dimensional model. Block-diagonal-scaling of the linear system, done prior to iteration, is found to have a significant effect on the rate of convergence. Block-incomplete-LU-decomposition (BILU) and block-symmetric-Gauss-Seidel (BSGS) methods, which result in no fill-in, are used as preconditioning procedures. A full factorization is done on the well terms, and the cells are ordered in a manner which minimizes the fill-in in the well-column due to this factorization. The convergence criterion for the linear (inner) iteration is linked to that of the nonlinear (Newton) iteration, thereby enhancing the efficiency of the computation. The algorithm, with both BILU and BSGS preconditioners, is evaluated in the context of a variety of thermal simulation problems. The solver is robust and can be used with little or no user intervention.
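
    A generic SciPy illustration of the preconditioned Krylov idea (GMRES with an incomplete-LU preconditioner as a stand-in for the paper's block preconditioners; the matrix here is a random placeholder, not a reservoir system):

        import numpy as np
        import scipy.sparse as sp
        import scipy.sparse.linalg as spla

        n = 500
        A = sp.random(n, n, density=0.01, format="csc") + 10.0 * sp.eye(n, format="csc")
        b = np.ones(n)

        ilu = spla.spilu(A)                                    # incomplete LU factorization
        M = spla.LinearOperator(A.shape, matvec=ilu.solve)     # preconditioner as operator
        x, info = spla.gmres(A, b, M=M)                        # info == 0 means converged
        print(info, np.linalg.norm(A @ x - b))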

  1. fMRI paradigm designing and post-processing tools

    PubMed Central

    James, Jija S; Rajesh, PG; Chandran, Anuvitha VS; Kesavadas, Chandrasekharan

    2014-01-01

    In this article, we first review some aspects of functional magnetic resonance imaging (fMRI) paradigm design for major cognitive functions using stimulus delivery systems like Cogent, E-Prime, Presentation, etc., along with their technical aspects. We also review the stimulus presentation possibilities (block, event-related) for visual or auditory paradigms and their advantages in both clinical and research settings. The second part focuses mainly on various fMRI data post-processing tools, such as Statistical Parametric Mapping (SPM) and Brain Voyager, and discusses the particulars of the various preprocessing steps involved (realignment, co-registration, normalization, smoothing) in this software, as well as the statistical analysis principles of General Linear Modeling for the final interpretation of a functional activation result. PMID:24851001
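
    A bare-bones sketch of the General Linear Model step mentioned above (design matrix, data, and contrast are hypothetical; SPM and Brain Voyager implement much more, e.g. autocorrelation modelling):

        import numpy as np

        def glm(y, X, c):
            # y: voxel time series (T,); X: design matrix (T, k); c: contrast vector (k,)
            beta, *_ = np.linalg.lstsq(X, y, rcond=None)       # parameter estimates
            resid = y - X @ beta
            dof = X.shape[0] - np.linalg.matrix_rank(X)
            sigma2 = resid @ resid / dof                       # residual variance
            var_c = sigma2 * c @ np.linalg.pinv(X.T @ X) @ c
            return beta, (c @ beta) / np.sqrt(var_c)           # betas and contrast t-value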

  2. Monitoring temperatures in coal conversion and combustion processes via ultrasound

    NASA Astrophysics Data System (ADS)

    Gopalsami, N.; Raptis, A. C.; Mulcahey, T. P.

    1980-02-01

    The state of the art of instrumentation for monitoring temperatures in coal conversion and combustion systems is examined. The instrumentation types studied include thermocouples, radiation pyrometers, and acoustical thermometers. The capabilities and limitations of each type are reviewed. A feasibility study of ultrasonic thermometry is described. A mathematical model of a pulse-echo ultrasonic temperature measurement system is developed using linear system theory. The mathematical model lends itself to the adaptation of generalized correlation techniques for the estimation of propagation delays. Computer simulations are made to test the efficacy of the signal processing techniques for noise-free as well as noisy signals. Based on the theoretical study, acoustic techniques to measure temperature in reactors and combustors are feasible.
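
    In its simplest form, the generalized-correlation idea reduces to locating the cross-correlation peak between the transmitted and echoed pulse; a sketch with hypothetical arrays (temperature then follows from the delay through the known sound-speed relation of the waveguide):

        import numpy as np

        def delay_seconds(tx, rx, fs):
            xc = np.correlate(rx - rx.mean(), tx - tx.mean(), mode="full")
            lag = np.argmax(xc) - (len(tx) - 1)        # peak position in samples
            return lag / fs                            # propagation-delay estimate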

  3. Flow assignment model for quantitative analysis of diverting bulk freight from road to railway

    PubMed Central

    Liu, Chang; Wang, Jiaxi; Xiao, Jie; Liu, Siqi; Wu, Jianping; Li, Jian

    2017-01-01

    Since railway transport possesses the advantages of high volume and low carbon emissions, diverting some freight from road to railway will help reduce the negative environmental impacts associated with transport. This paper develops a flow assignment model for quantitative analysis of diverting truck freight to railway. First, a general network which considers road transportation, railway transportation, handling and transferring is established according to all the steps in the whole transportation process. Then general cost functions are formulated to embody the factors that shippers consider when choosing a mode and path. The general functions incorporate the congestion cost on roads and the capacity constraints of railways and freight stations. Based on the general network and general cost function, a user equilibrium flow assignment model is developed to simulate the flow distribution on the general network under the condition that all shippers choose their transportation mode and path independently. Since the model is nonlinear and challenging, we adopt a method that uses tangent lines to construct an envelope curve, thereby linearizing it. Finally, a numerical example is presented to test the model and show the method of making a quantitative analysis of bulk freight modal shift between road and railway. PMID:28771536
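
    An illustrative sketch (with an assumed convex congestion function, not the paper's) of the linearization device mentioned above: a convex cost is replaced by the upper envelope of its tangent lines, which is piecewise linear and under-approximates the cost:

        import numpy as np

        f  = lambda x: x + 0.15 * x**4                 # hypothetical convex congestion cost
        df = lambda x: 1.0 + 0.6 * x**3                # its derivative

        xs = np.linspace(0.0, 2.0, 5)                  # tangency points
        g = lambda x: max(f(x0) + df(x0) * (x - x0) for x0 in xs)   # tangent envelope
        print(f(1.3), g(1.3))                          # envelope tracks f from below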

  4. The Six Fundamental Characteristics of Chaos and Their Clinical Relevance to Psychiatry: a New Hypothesis for the Origin of Psychosis

    NASA Astrophysics Data System (ADS)

    Schmid, Gary Bruno

    Underlying idea: A new hypothesis about how the mental state of psychosis may arise in the brain as a "linear" information processing pathology is briefly introduced. This hypothesis is proposed in the context of a complementary approach to psychiatry founded in the logical paradigm of chaos theory. To best understand the relation between chaos theory and psychiatry, the semantic structure of chaos theory is analyzed with the help of six general, and six specific, fundamental characteristics which can be directly inferred from empirical observations on chaotic systems. This enables a mathematically and physically stringent perspective on psychological phenomena which until now could only be grasped intuitively: chaotic systems are in a general sense dynamic, intrinsically coherent, deterministic, recursive, reactive and structured; in a specific sense, self-organizing, unpredictable, nonreproducible, triadic, unstable and self-similar. To a great extent, certain concepts of chaos theory can be associated with corresponding concepts in psychiatry, psychology and psychotherapy, thus enabling an understanding of the human psyche in general as a (fractal) chaotic system and an explanation of certain mental developments, such as the course of schizophrenia, the course of psychosis and psychotherapy, as chaotic processes. General overview: A short comparison and contrast of classical and chaotic physical theory leads to four postulates and one hypothesis motivating a new, dynamic, nonlinear approach to classical, causal psychiatry: Process-Oriented PSYchiatry or "POPSY", for short. Four aspects of the relationship between chaos theory and POPSY are discussed: (1) The first of these, namely Identification of Chaos / Picture of Illness, involves a definition of Chaos / Psychosis and a discussion of the 6 logical characteristics of each. This leads to the concept of dynamical disease (definition, characteristics and examples) and to the idea of "psychological disturbance as dynamical illness". On the one hand, it is argued that the developmental course of psychosis is chaotic. On the other hand, we propose the hypothesis that the mental state of psychosis may be a linear information processing pathology. (2) The second aspect under discussion is the Assessment of Chaos / Diagnosis of Illness. In order to better understand how POPSY research treats this aspect, we take a look at the 3 different classes of (non-quantum) motion as models of 3 different possible courses of illness and outline present-day methods available for the quantitative assessment of chaotic (fractal) motion. (3) The third aspect, namely Prediction of Chaos / Prognosis of Illness, considers how each of these 3 classes of motion implies a different way of looking into the future: linear-causal, statistical and nonlinear-fractal, respectively. (4) The fourth aspect of the relationship between chaos theory and POPSY, Control of Chaos / Treatment of Illness, is shown to have certain implications for complementary medicine. The paper closes with a short summary, conclusion and closing remark.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grenon, Cedric; Lake, Kayll

    We generalize the Swiss-cheese cosmologies so as to include nonzero linear momenta of the associated boundary surfaces. The evolution of mass scales in these generalized cosmologies is studied for a variety of models for the background without having to specify any details within the local inhomogeneities. We find that the final effective gravitational mass and size of the evolving inhomogeneities depends on their linear momenta but these properties are essentially unaffected by the details of the background model.

  6. Computer-aided linear-circuit design.

    NASA Technical Reports Server (NTRS)

    Penfield, P.

    1971-01-01

    Usually computer-aided design (CAD) refers to programs that analyze circuits conceived by the circuit designer. Among the services such programs should perform are direct network synthesis, analysis, optimization of network parameters, formatting, storage of miscellaneous data, and related calculations. The program should be embedded in a general-purpose conversational language such as BASIC, JOSS, or APL. Such a program is MARTHA, a general-purpose linear-circuit analyzer embedded in APL.

  7. Do non-targeted effects increase or decrease low dose risk in relation to the linear-non-threshold (LNT) model?☆

    PubMed Central

    Little, M.P.

    2011-01-01

    In this paper we review the evidence for departure from linearity for malignant and non-malignant disease and in the light of this assess likely mechanisms, and in particular the potential role for non-targeted effects. Excess cancer risks observed in the Japanese atomic bomb survivors and in many medically and occupationally exposed groups exposed at low or moderate doses are generally statistically compatible. For most cancer sites the dose–response in these groups is compatible with linearity over the range observed. The available data on biological mechanisms do not provide general support for the idea of a low dose threshold or hormesis. This large body of evidence does not suggest, indeed is not statistically compatible with, any very large threshold in dose for cancer, or with possible hormetic effects, and there is little evidence of the sorts of non-linearity in response implied by non-DNA-targeted effects. There are also excess risks of various types of non-malignant disease in the Japanese atomic bomb survivors and in other groups. In particular, elevated risks of cardiovascular disease, respiratory disease and digestive disease are observed in the A-bomb data. In contrast with cancer, there is much less consistency in the patterns of risk between the various exposed groups; for example, radiation-associated respiratory and digestive diseases have not been seen in these other (non-A-bomb) groups. Cardiovascular risks have been seen in many exposed populations, particularly in medically exposed groups, but in contrast with cancer there is much less consistency in risk between studies: risks per unit dose in epidemiological studies vary over at least two orders of magnitude, possibly a result of confounding and effect modification by well known (but unobserved) risk factors. In the absence of a convincing mechanistic explanation of epidemiological evidence that is, at present, less than persuasive, a cause-and-effect interpretation of the reported statistical associations for cardiovascular disease is unreliable but cannot be excluded. Inflammatory processes are the most likely mechanism by which radiation could modify the atherosclerotic disease process. If there is to be modification by low doses of ionizing radiation of cardiovascular disease through this mechanism, a role for non-DNA-targeted effects cannot be excluded. PMID:20105434

  8. Real-time video signal processing by generalized DDA and control memories: three-dimensional rotation and mapping

    NASA Astrophysics Data System (ADS)

    Hama, Hiromitsu; Yamashita, Kazumi

    1991-11-01

    A new method for video signal processing is described in this paper. The purpose is real-time image transformation with low-cost, low-power, small-size hardware, which is impossible without special hardware. Here a generalized digital differential analyzer (DDA) and control memories (CM) play a very important role. The processing causes indentation, called jaggies, on the boundary between the background and the foreground. Jaggies do not occur inside the transformed image because linear interpolation is adopted, but they inherently occur on the boundary between the background and the transformed image. This deteriorates image quality and must be avoided. There are two well-known ways to improve image quality, blurring and supersampling. The former does not have much effect, and the latter has a much higher computing cost. To settle this problem, a method is proposed which searches for positions where jaggies may arise and smooths those points. Computer simulations based on real data from a VTR, one scene of a movie, are presented to demonstrate the proposed scheme using the DDA and CMs and to confirm its effectiveness on various transformations.
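
    The classic line-drawing DDA below (a textbook sketch, not the paper's generalized hardware DDA) shows both the incremental stepping principle and where jaggies come from:

        def dda_line(x0, y0, x1, y1):
            # Assumes integer endpoints that differ; one pixel step along the major axis.
            steps = max(abs(x1 - x0), abs(y1 - y0))
            dx, dy = (x1 - x0) / steps, (y1 - y0) / steps
            x, y, pts = float(x0), float(y0), []
            for _ in range(steps + 1):
                pts.append((round(x), round(y)))       # rounding produces the stair-step jaggies
                x, y = x + dx, y + dy
            return pts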

  9. Analysis of fracture healing in osteopenic bone caused by disuse: experimental study.

    PubMed

    Paiva, A G; Yanagihara, G R; Macedo, A P; Ramos, J; Issa, J P M; Shimano, A C

    2016-03-01

    Osteoporosis has become a serious global public health issue. Hence, osteoporotic fracture healing has been investigated in several previous studies because there is still controversy over the effect osteoporosis has on the healing process. The current study aimed to analyze two different periods of bone healing in normal and osteopenic rats. Sixty 7-week-old female Wistar rats were randomly divided into four groups: unrestricted and immobilized for 2 weeks after osteotomy (OU2), suspended and immobilized for 2 weeks after osteotomy (OS2), unrestricted and immobilized for 6 weeks after osteotomy (OU6), and suspended and immobilized for 6 weeks after osteotomy (OS6). Osteotomy was performed in the middle third of the right tibia 21 days after tail suspension, when the osteopenic condition was already established. The fractured limb was then immobilized by orthosis. Tibias were collected 2 and 6 weeks after osteotomy, and were analyzed by bone densitometry, mechanical testing, and histomorphometry. Bone mineral density values from bony calluses were significantly lower in the 2-week post-osteotomy groups compared with the 6-week post-osteotomy groups (multivariate general linear model analysis, P<0.000). Similarly, the mechanical properties showed that animals had stronger bones 6 weeks after osteotomy compared with 2 weeks after osteotomy (multivariate general linear model analysis, P<0.000). Histomorphometry indicated gradual bone healing. Results showed that osteopenia did not influence the bone healing process, and that time was an independent determinant factor regardless of whether the fracture was osteopenic. This suggests that the body is able to compensate for the negative effects of suspension.

  10. GVE-Based Dynamics and Control for Formation Flying Spacecraft

    NASA Technical Reports Server (NTRS)

    Breger, Louis; How, Jonathan P.

    2004-01-01

    Formation flying is an enabling technology for many future space missions. This paper presents extensions to the equations of relative motion expressed in Keplerian orbital elements, including new initialization techniques for general formation configurations. A new linear time-varying form of the equations of relative motion is developed from Gauss Variational Equations and used in a model predictive controller. The linearizing assumptions for these equations are shown to be consistent with typical formation flying scenarios. Several linear, convex initialization techniques are presented, as well as a general, decentralized method for coordinating a tetrahedral formation using differential orbital elements. Control methods are validated using a commercial numerical propagator.

  11. A generalized interval fuzzy mixed integer programming model for a multimodal transportation problem under uncertainty

    NASA Astrophysics Data System (ADS)

    Tian, Wenli; Cao, Chengxuan

    2017-03-01

    A generalized interval fuzzy mixed integer programming model is proposed for the multimodal freight transportation problem under uncertainty, in which the optimal mode of transport and the optimal amount of each type of freight transported through each path need to be decided. For practical purposes, three mathematical methods, i.e. the interval ranking method, fuzzy linear programming method and linear weighted summation method, are applied to obtain equivalents of constraints and parameters, and then a fuzzy expected value model is presented. A heuristic algorithm based on a greedy criterion and the linear relaxation algorithm are designed to solve the model.

  12. Feature extraction with deep neural networks by a generalized discriminant analysis.

    PubMed

    Stuhlsatz, André; Lippel, Jens; Zielke, Thomas

    2012-04-01

    We present an approach to feature extraction that is a generalization of the classical linear discriminant analysis (LDA) on the basis of deep neural networks (DNNs). As for LDA, discriminative features generated from independent Gaussian class conditionals are assumed. This modeling has the advantages that the intrinsic dimensionality of the feature space is bounded by the number of classes and that the optimal discriminant function is linear. Unfortunately, linear transformations are insufficient to extract optimal discriminative features from arbitrarily distributed raw measurements. The generalized discriminant analysis (GerDA) proposed in this paper uses nonlinear transformations that are learnt by DNNs in a semisupervised fashion. We show that the feature extraction based on our approach displays excellent performance on real-world recognition and detection tasks, such as handwritten digit recognition and face detection. In a series of experiments, we evaluate GerDA features with respect to dimensionality reduction, visualization, classification, and detection. Moreover, we show that GerDA DNNs can preprocess truly high-dimensional input data to low-dimensional representations that facilitate accurate predictions even if simple linear predictors or measures of similarity are used.

  13. Teachers' Evaluations and Students' Achievement: A "Deviation from the Reference" Analysis

    ERIC Educational Resources Information Center

    Iacus, Stefano M.; Porro, Giuseppe

    2011-01-01

    Several studies show that teachers make use of grading practices to affect students' effort and achievement. Generally linearity is assumed in the grading equation, while it is everyone's experience that grading practices are frequently non-linear. Representing grading practices as linear can be misleading both from a descriptive and a…

  14. Linear Logistic Test Modeling with R

    ERIC Educational Resources Information Center

    Baghaei, Purya; Kubinger, Klaus D.

    2015-01-01

    The present paper gives a general introduction to the linear logistic test model (Fischer, 1973), an extension of the Rasch model with linear constraints on item parameters, along with eRm (an R package to estimate different types of Rasch models; Mair, Hatzinger, & Mair, 2014) functions to estimate the model and interpret its parameters. The…

  15. Operator Factorization and the Solution of Second-Order Linear Ordinary Differential Equations

    ERIC Educational Resources Information Center

    Robin, W.

    2007-01-01

    The theory and application of second-order linear ordinary differential equations is reviewed from the standpoint of the operator factorization approach to the solution of ordinary differential equations (ODE). Using the operator factorization approach, the general second-order linear ODE is solved, exactly, in quadratures and the resulting…
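
    For the constant-coefficient case, the factorization idea reads as follows (a standard worked instance in LaTeX, not quoted from the article):

        \[
          y'' + (a+b)\,y' + ab\,y \;=\; (D+a)(D+b)\,y, \qquad D \equiv \tfrac{d}{dx},
        \]

    so with \(u = (D+b)\,y\) the second-order equation \((D+a)(D+b)\,y = f\) splits into two first-order linear equations, \(u' + a\,u = f\) and then \(y' + b\,y = u\), each solvable in quadratures by an integrating factor.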

  16. Derivation and definition of a linear aircraft model

    NASA Technical Reports Server (NTRS)

    Duke, Eugene L.; Antoniewicz, Robert F.; Krambeer, Keith D.

    1988-01-01

    A linear aircraft model for a rigid aircraft of constant mass flying over a flat, nonrotating earth is derived and defined. The derivation makes no assumptions of reference trajectory or vehicle symmetry. The linear system equations are derived and evaluated along a general trajectory and include both aircraft dynamics and observation variables.

  17. Solving large mixed linear models using preconditioned conjugate gradient iteration.

    PubMed

    Strandén, I; Lidauer, M

    1999-12-01

    Continuous evaluation of dairy cattle with a random regression test-day model requires a fast solving method and algorithm. A new computing technique feasible in Jacobi and conjugate gradient based iterative methods using iteration on data is presented. In the new computing technique, the calculations in the multiplication of a vector by a matrix were reorganized into three steps instead of the commonly used two steps. The three-step method was implemented in a general mixed linear model program that used preconditioned conjugate gradient iteration. Performance of this program in comparison to other general solving programs was assessed via estimation of breeding values using univariate, multivariate, and random regression test-day models. Central processing unit time per iteration with the new three-step technique was, at best, one-third of that needed with the old technique. Performance was best with the test-day model, which was the largest and most complex model used. The new program did well in comparison to other general software. Programs keeping the mixed model equations in random access memory required at least 20% and 435% more time to solve the univariate and multivariate animal models, respectively. The second-best iteration-on-data program took approximately three and five times longer for the animal and test-day models, respectively, than did the new program. Good performance was due to fast computing time per iteration and quick convergence to the final solutions. Use of preconditioned conjugate gradient based methods in solving large breeding value problems is supported by our findings.

  18. Fuzzy C-mean clustering on kinetic parameter estimation with generalized linear least square algorithm in SPECT

    NASA Astrophysics Data System (ADS)

    Choi, Hon-Chit; Wen, Lingfeng; Eberl, Stefan; Feng, Dagan

    2006-03-01

    Dynamic Single Photon Emission Computed Tomography (SPECT) has the potential to quantitatively estimate physiological parameters by fitting compartment models to the tracer kinetics. The generalized linear least square method (GLLS) is an efficient method to estimate unbiased kinetic parameters and parametric images. However, due to the low sensitivity of SPECT, noisy data can cause voxel-wise parameter estimation by GLLS to fail. Fuzzy C-Means (FCM) clustering and modified FCM, which also utilizes information from the immediate neighboring voxels, are proposed to improve the voxel-wise parameter estimation of GLLS. Monte Carlo simulations were performed to generate dynamic SPECT data with different noise levels, which were then processed by general and modified FCM clustering. Parametric images were estimated by Logan and Yokoi graphical analysis and GLLS. The influx rate (Ki) and volume of distribution (Vd) were estimated for the cerebellum, thalamus and frontal cortex. Our results show that (1) FCM reduces the bias and improves the reliability of parameter estimates for noisy data, (2) GLLS provides estimates of micro parameters (K1-k4) as well as macro parameters, such as the volume of distribution (Vd) and binding potential (BP1 & BP2), and (3) FCM clustering incorporating neighboring voxel information does not improve the parameter estimates, but reduces noise in the parametric images. These findings indicate that pre-segmentation with traditional FCM clustering is desirable for generating voxel-wise parametric images with GLLS from dynamic SPECT data.
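
    A compact sketch of standard fuzzy C-means (the unmodified variant; the neighborhood-aware modification is not shown), with X an assumed (n, d) data array:

        import numpy as np

        def fcm(X, k, m=2.0, iters=100, seed=0):
            rng = np.random.default_rng(seed)
            U = rng.dirichlet(np.ones(k), size=len(X))         # soft memberships, rows sum to 1
            for _ in range(iters):
                W = U ** m
                C = (W.T @ X) / W.sum(axis=0)[:, None]         # fuzzy-weighted centroids
                d = np.linalg.norm(X[:, None, :] - C[None], axis=2) + 1e-12
                U = d ** (-2.0 / (m - 1.0))                    # standard membership update
                U /= U.sum(axis=1, keepdims=True)              # renormalize memberships
            return U, C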

  19. Assessing the Tangent Linear Behaviour of Common Tracer Transport Schemes and Their Use in a Linearised Atmospheric General Circulation Model

    NASA Technical Reports Server (NTRS)

    Holdaway, Daniel; Kent, James

    2015-01-01

    The linearity of a selection of common advection schemes is tested and examined with a view to their use in the tangent linear and adjoint versions of an atmospheric general circulation model. The schemes are tested within a simple offline one-dimensional periodic domain as well as using a simplified and complete configuration of the linearised version of NASA's Goddard Earth Observing System version 5 (GEOS-5). All schemes which prevent the development of negative values and preserve the shape of the solution are confirmed to have nonlinear behaviour. The piecewise parabolic method (PPM) with certain flux limiters, including that used by default in GEOS-5, is found to support linear growth near the shocks. This property can cause the rapid development of unrealistically large perturbations within the tangent linear and adjoint models. It is shown that these schemes with flux limiters should not be used within the linearised version of a transport scheme. The results from tests using GEOS-5 show that the current default scheme (a version of PPM) is not suitable for the tangent linear and adjoint model, and that using a linear third-order scheme for the linearised model produces better behaviour. Using the third-order scheme for the linearised model improves the correlations between the linear and non-linear perturbation trajectories for cloud liquid water and cloud liquid ice in GEOS-5.
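
    A one-dimensional sketch of the linearity test implied above: for a genuinely linear scheme M, the perturbation response M(u+du) - M(u) equals M(du) exactly, while shape-preserving flux limiters break this. The scheme here is first-order upwind, not GEOS-5's PPM:

        import numpy as np

        def upwind(u, c=0.5):                          # linear first-order upwind step
            return u - c * (u - np.roll(u, 1))

        u  = np.where(np.arange(64) < 32, 1.0, 0.0)    # step profile (a 'shock')
        du = 1e-3 * np.random.default_rng(0).standard_normal(64)
        diff = upwind(u + du) - upwind(u) - upwind(du) # zero iff the scheme is linear
        print(np.max(np.abs(diff)))                    # ~1e-19 here; nonzero with a limiter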

  20. Emotion unfolded by motion: a role for parietal lobe in decoding dynamic facial expressions.

    PubMed

    Sarkheil, Pegah; Goebel, Rainer; Schneider, Frank; Mathiak, Klaus

    2013-12-01

    Facial expressions convey important emotional and social information and are frequently applied in investigations of human affective processing. Dynamic faces may provide higher ecological validity to examine perceptual and cognitive processing of facial expressions. Higher order processing of emotional faces was addressed by varying the task and virtual face models systematically. Blood oxygenation level-dependent activation was assessed using functional magnetic resonance imaging in 20 healthy volunteers while viewing and evaluating either emotion or gender intensity of dynamic face stimuli. A general linear model analysis revealed that high valence activated a network of motion-responsive areas, indicating that visual motion areas support perceptual coding for the motion-based intensity of facial expressions. The comparison of emotion with gender discrimination task revealed increased activation of inferior parietal lobule, which highlights the involvement of parietal areas in processing of high level features of faces. Dynamic emotional stimuli may help to emphasize functions of the hypothesized 'extended' over the 'core' system for face processing.

  1. Heralded high-efficiency quantum repeater with atomic ensembles assisted by faithful single-photon transmission

    NASA Astrophysics Data System (ADS)

    Li, Tao; Deng, Fu-Guo

    2015-10-01

    Quantum repeater is one of the important building blocks for long distance quantum communication network. The previous quantum repeaters based on atomic ensembles and linear optical elements can only be performed with a maximal success probability of 1/2 during the entanglement creation and entanglement swapping procedures. Meanwhile, the polarization noise during the entanglement distribution process is harmful to the entangled channel created. Here we introduce a general interface between a polarized photon and an atomic ensemble trapped in a single-sided optical cavity, and with which we propose a high-efficiency quantum repeater protocol in which the robust entanglement distribution is accomplished by the stable spatial-temporal entanglement and it can in principle create the deterministic entanglement between neighboring atomic ensembles in a heralded way as a result of cavity quantum electrodynamics. Meanwhile, the simplified parity-check gate makes the entanglement swapping be completed with unity efficiency, other than 1/2 with linear optics. We detail the performance of our protocol with current experimental parameters and show its robustness to the imperfections, i.e., detuning and coupling variation, involved in the reflection process. These good features make it a useful building block in long distance quantum communication.

  2. An Optimized Multicolor Point-Implicit Solver for Unstructured Grid Applications on Graphics Processing Units

    NASA Technical Reports Server (NTRS)

    Zubair, Mohammad; Nielsen, Eric; Luitjens, Justin; Hammond, Dana

    2016-01-01

    In the field of computational fluid dynamics, the Navier-Stokes equations are often solved using an unstructured-grid approach to accommodate geometric complexity. Implicit solution methodologies for such spatial discretizations generally require frequent solution of large tightly-coupled systems of block-sparse linear equations. The multicolor point-implicit solver used in the current work typically requires a significant fraction of the overall application run time. In this work, an efficient implementation of the solver for graphics processing units is proposed. Several factors present unique challenges to achieving an efficient implementation in this environment. These include the variable amount of parallelism available in different kernel calls, indirect memory access patterns, low arithmetic intensity, and the requirement to support variable block sizes. In this work, the solver is reformulated to use standard sparse and dense Basic Linear Algebra Subprograms (BLAS) functions. However, numerical experiments show that the performance of the BLAS functions available in existing CUDA libraries is suboptimal for matrices representative of those encountered in actual simulations. Instead, optimized versions of these functions are developed. Depending on block size, the new implementations show performance gains of up to 7x over the existing CUDA library functions.
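
    A NumPy sketch of the multicolor point-implicit idea (shapes and names are assumptions; the paper's version runs batched CUDA kernels instead): cells sharing a color have no mutual coupling, so their block solves proceed in parallel with pre-inverted diagonal blocks.

        import numpy as np

        def multicolor_sweep(Dinv, Aoff, nbr, b, x, colors):
            # Dinv: (n, bs, bs) inverted diagonal blocks; Aoff: (n, k, bs, bs) blocks
            # coupling each cell to k neighbours; nbr: (n, k) indices; x, b: (n, bs).
            for c in np.unique(colors):
                i = np.where(colors == c)[0]           # all cells of this color at once
                coup = np.einsum("nkij,nkj->ni", Aoff[i], x[nbr[i]])
                x[i] = np.einsum("nij,nj->ni", Dinv[i], b[i] - coup)   # batched block solve
            return x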

  3. Reservoir Computing Beyond Memory-Nonlinearity Trade-off.

    PubMed

    Inubushi, Masanobu; Yoshimura, Kazuyuki

    2017-08-31

    Reservoir computing is a brain-inspired machine learning framework that employs a signal-driven dynamical system, in particular harnessing common-signal-induced synchronization, a widely observed nonlinear phenomenon. A basic understanding of the working principle of reservoir computing can be expected to shed light on how information is stored and processed in nonlinear dynamical systems, potentially leading to progress in a broad range of nonlinear sciences. As a first step toward this goal, from the viewpoint of nonlinear physics and information theory, we study the memory-nonlinearity trade-off uncovered by Dambre et al. (2012). Focusing on a variational equation, we clarify a dynamical mechanism behind the trade-off, which illustrates why nonlinear dynamics degrades the memory stored in a dynamical system in general. Moreover, based on the trade-off, we propose a mixture reservoir endowed with both linear and nonlinear dynamics and show that it improves the performance of information processing. Interestingly, for some tasks, significant improvements are observed by adding a few linear dynamics to the nonlinear dynamical system. By employing the echo state network model, the effect of the mixture reservoir is numerically verified for a simple function approximation task and for more complex tasks.
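
    A minimal echo-state-network sketch of the "mixture reservoir" idea above: a chosen fraction of units skips the tanh nonlinearity and stays linear (sizes and parameters are arbitrary assumptions):

        import numpy as np

        def mixture_reservoir_states(u, n=100, rho=0.9, linear_frac=0.3, seed=0):
            rng = np.random.default_rng(seed)
            W = rng.standard_normal((n, n))
            W *= rho / np.max(np.abs(np.linalg.eigvals(W)))    # fix spectral radius
            w_in = rng.standard_normal(n)
            lin = rng.random(n) < linear_frac                  # the linear subset of units
            x, X = np.zeros(n), []
            for ut in u:                                       # drive with input signal u
                pre = W @ x + w_in * ut
                x = np.where(lin, pre, np.tanh(pre))           # mixed linear/nonlinear update
                X.append(x.copy())
            return np.asarray(X)                               # readout: linear regression on X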

  4. Heralded high-efficiency quantum repeater with atomic ensembles assisted by faithful single-photon transmission.

    PubMed

    Li, Tao; Deng, Fu-Guo

    2015-10-27

    Quantum repeater is one of the important building blocks for long distance quantum communication network. The previous quantum repeaters based on atomic ensembles and linear optical elements can only be performed with a maximal success probability of 1/2 during the entanglement creation and entanglement swapping procedures. Meanwhile, the polarization noise during the entanglement distribution process is harmful to the entangled channel created. Here we introduce a general interface between a polarized photon and an atomic ensemble trapped in a single-sided optical cavity, and with which we propose a high-efficiency quantum repeater protocol in which the robust entanglement distribution is accomplished by the stable spatial-temporal entanglement and it can in principle create the deterministic entanglement between neighboring atomic ensembles in a heralded way as a result of cavity quantum electrodynamics. Meanwhile, the simplified parity-check gate makes the entanglement swapping be completed with unity efficiency, other than 1/2 with linear optics. We detail the performance of our protocol with current experimental parameters and show its robustness to the imperfections, i.e., detuning and coupling variation, involved in the reflection process. These good features make it a useful building block in long distance quantum communication.

  5. Heralded high-efficiency quantum repeater with atomic ensembles assisted by faithful single-photon transmission

    PubMed Central

    Li, Tao; Deng, Fu-Guo

    2015-01-01

    Quantum repeater is one of the important building blocks for long distance quantum communication network. The previous quantum repeaters based on atomic ensembles and linear optical elements can only be performed with a maximal success probability of 1/2 during the entanglement creation and entanglement swapping procedures. Meanwhile, the polarization noise during the entanglement distribution process is harmful to the entangled channel created. Here we introduce a general interface between a polarized photon and an atomic ensemble trapped in a single-sided optical cavity, and with which we propose a high-efficiency quantum repeater protocol in which the robust entanglement distribution is accomplished by the stable spatial-temporal entanglement and it can in principle create the deterministic entanglement between neighboring atomic ensembles in a heralded way as a result of cavity quantum electrodynamics. Meanwhile, the simplified parity-check gate makes the entanglement swapping be completed with unity efficiency, other than 1/2 with linear optics. We detail the performance of our protocol with current experimental parameters and show its robustness to the imperfections, i.e., detuning and coupling variation, involved in the reflection process. These good features make it a useful building block in long distance quantum communication. PMID:26502993

  6. Efficient Implementation of an Optimal Interpolator for Large Spatial Data Sets

    NASA Technical Reports Server (NTRS)

    Memarsadeghi, Nargess; Mount, David M.

    2007-01-01

    Interpolating scattered data points is a problem of wide-ranging interest. A number of approaches for interpolation have been proposed both in theoretical domains such as computational geometry and in applied fields such as geostatistics. Our motivation arises from geological and mining applications. In many instances data can be costly to compute and are available only at nonuniformly scattered positions. Because of the high cost of collecting measurements, high accuracy is required in the interpolants. One of the most popular interpolation methods in this field is called ordinary kriging. It is popular because it is a best linear unbiased estimator. The price for its statistical optimality is that the estimator is computationally very expensive. This is because the value of each interpolant is given by the solution of a large dense linear system. In practice, kriging problems have been solved approximately by restricting the domain to a small local neighborhood of points that lie near the query point. The proper size for this neighborhood is determined by ad hoc methods, and it has been shown that this approach leads to undesirable discontinuities in the interpolant. Recently a more principled approach to approximating kriging has been proposed based on a technique called covariance tapering. This process achieves its efficiency by replacing the large dense kriging system with a much sparser linear system. This technique has been applied to a restriction of our problem, called simple kriging, which is not unbiased for general data sets. In this paper we generalize these results by showing how to apply covariance tapering to the more general problem of ordinary kriging. Through experimentation we demonstrate the space and time efficiency and accuracy of approximating ordinary kriging through the use of covariance tapering combined with iterative methods for solving large sparse systems. We demonstrate our approach on large data sets arising both from synthetic sources and from real applications.
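
    A dense ordinary-kriging sketch showing the linear system whose cost motivates tapering (the covariance function is a hypothetical exponential; tapering would multiply it by a compactly supported taper to sparsify the matrix):

        import numpy as np

        def ordinary_kriging(coords, z, q, cov=lambda h: np.exp(-h)):
            n = len(z)
            H = np.linalg.norm(coords[:, None] - coords[None], axis=2)
            A = np.empty((n + 1, n + 1))
            A[:n, :n] = cov(H)                         # data-data covariances
            A[n, :n] = A[:n, n] = 1.0                  # unbiasedness (Lagrange) row/column
            A[n, n] = 0.0
            b = np.append(cov(np.linalg.norm(coords - q, axis=1)), 1.0)
            w = np.linalg.solve(A, b)                  # the large dense solve tapering sparsifies
            return w[:n] @ z                           # BLUE estimate at query point q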

  7. Physics and control of wall turbulence for drag reduction.

    PubMed

    Kim, John

    2011-04-13

    Turbulence physics responsible for high skin-friction drag in turbulent boundary layers is first reviewed. A self-sustaining process of near-wall turbulence structures is then discussed from the perspective of controlling this process for the purpose of skin-friction drag reduction. After recognizing that key parts of this self-sustaining process are linear, a linear systems approach to boundary-layer control is discussed. It is shown that singular-value decomposition analysis of the linear system allows us to examine different approaches to boundary-layer control without carrying out the expensive nonlinear simulations. Results from the linear analysis are consistent with those observed in full nonlinear simulations, thus demonstrating the validity of the linear analysis. Finally, fundamental performance limit expected of optimal control input is discussed.
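
    In a finite-dimensional sketch, the singular-value analysis mentioned above amounts to reading off the most-amplified disturbance direction of the linearized operator (a random placeholder matrix here, not an actual flow operator):

        import numpy as np

        A = np.random.default_rng(1).standard_normal((50, 50))   # stand-in linearized operator
        U, s, Vt = np.linalg.svd(A)
        print(s[0])          # largest amplification any disturbance can achieve
        v_worst = Vt[0]      # the input direction achieving it, the mode control targets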

  8. Stability and performance analysis of a jump linear control system subject to digital upsets

    NASA Astrophysics Data System (ADS)

    Wang, Rui; Sun, Hui; Ma, Zhen-Yang

    2015-04-01

    This paper focuses on the methodology for analyzing the stability and the corresponding tracking performance of a closed-loop digital jump linear control system with a stochastic switching signal. The method is applied to a flight control system. A distributed recoverable platform is implemented on the flight control system and subjected to independent digital upsets. The upset processes are used to simulate the effects of electromagnetic environments. Specifically, the paper presents scenarios in which the upset process is directly injected into the distributed flight control system, modeled by independent Markov upset processes and by independent and identically distributed (IID) processes. A theoretical performance analysis and simulation modelling are both presented in detail for a more complete independent digital upset injection. Specific examples are proposed to verify the methodology of tracking performance analysis. General analyses for different configurations are also proposed. Comparisons among different configurations are conducted to demonstrate the availability and the characteristics of the design. Project supported by the Young Scientists Fund of the National Natural Science Foundation of China (Grant No. 61403395), the Natural Science Foundation of Tianjin, China (Grant No. 13JCYBJC39000), the Scientific Research Foundation for the Returned Overseas Chinese Scholars, State Education Ministry, China, the Tianjin Key Laboratory of Civil Aircraft Airworthiness and Maintenance in Civil Aviation of China (Grant No. 104003020106), and the Fund for Scholars of Civil Aviation University of China (Grant No. 2012QD21x).

  9. A simple and exploratory way to determine the mean-variance relationship in generalized linear models.

    PubMed

    Tsou, Tsung-Shan

    2007-03-30

    This paper introduces an exploratory way to determine how the variance relates to the mean in generalized linear models. This novel method employs the robust likelihood technique introduced by Royall and Tsou. A urinary data set collected by Ginsberg et al. and the fabric data set analysed by Lee and Nelder are considered to demonstrate the applicability and simplicity of the proposed technique. Application of the proposed method can easily reveal a mean-variance relationship that would generally go unnoticed, or that would require more complex modelling to detect. Copyright (c) 2006 John Wiley & Sons, Ltd.
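
    As a simpler, moment-based stand-in for such an exploratory check (not the robust-likelihood method of the paper), one can regress log sample variance on log sample mean across groups of replicates; a slope near 1 suggests Poisson-like variance, a slope near 2 gamma-like variance.

```python
# Hedged sketch: empirical mean-variance power estimated from grouped data.
import numpy as np

rng = np.random.default_rng(3)
groups = [rng.poisson(mu, size=50) for mu in (2, 5, 10, 20, 40)]
means = np.array([g.mean() for g in groups])
variances = np.array([g.var(ddof=1) for g in groups])

slope, intercept = np.polyfit(np.log(means), np.log(variances), 1)
print(f"estimated variance ~ mean^{slope:.2f}")   # ~1 for Poisson data
```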

  10. Quantum learning of classical stochastic processes: The completely positive realization problem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Monràs, Alex; Centre for Quantum Technologies, National University of Singapore, 3 Science Drive 2, Singapore 117543; Winter, Andreas

    2016-01-15

    Among several tasks in Machine Learning, a specially important one is the problem of inferring the latent variables of a system and their causal relations with the observed behavior. A paradigmatic instance of this is the task of inferring the hidden Markov model underlying a given stochastic process. This is known as the positive realization problem (PRP) [L. Benvenuti and L. Farina, IEEE Trans. Autom. Control 49(5), 651–664 (2004)] and constitutes a central problem in machine learning. The PRP and its solutions have far-reaching consequences in many areas of systems and control theory, and it is nowadays an important piece in the broad field of positive systems theory. We consider the scenario where the latent variables are quantum (i.e., quantum states of a finite-dimensional system) and the system dynamics is constrained only by physical transformations on the quantum system. The observable dynamics is then described by a quantum instrument, and the task is to determine which quantum instrument, if any, yields the process at hand by iterative application. We take as a starting point the theory of quasi-realizations, whence a description of the dynamics of the process is given in terms of linear maps on state vectors and probabilities are given by linear functionals on the state vectors. This description, despite its remarkable resemblance to the hidden Markov model, or the iterated quantum instrument, is however devoid of any stochastic or quantum mechanical interpretation, as said maps fail to satisfy any positivity conditions. The completely positive realization problem then consists in determining whether an equivalent quantum mechanical description of the same process exists. We generalize some key results of stochastic realization theory, and show that the problem has deep connections with operator systems theory, giving possible insight into the lifting problem in quotient operator systems. Our results have potential applications in quantum machine learning, device-independent characterization and reverse-engineering of stochastic processes and quantum processors, and more generally, of dynamical processes with quantum memory [M. Guţă, Phys. Rev. A 83(6), 062324 (2011); M. Guţă and N. Yamamoto, e-print http://arxiv.org/abs/1303.3771 (2013)].

  11. Progress in linear optics, non-linear optics and surface alignment of liquid crystals

    NASA Astrophysics Data System (ADS)

    Ong, H. L.; Meyer, R. B.; Hurd, A. J.; Karn, A. J.; Arakelian, S. M.; Shen, Y. R.; Sanda, P. N.; Dove, D. B.; Jansen, S. A.; Hoffmann, R.

    We first discuss the progress in linear optics, in particular, the formulation and application of geometrical-optics approximation and its generalization. We then discuss the progress in non-linear optics, in particular, the enhancement of a first-order Freedericksz transition and intrinsic optical bistability in homeotropic and parallel oriented nematic liquid crystal cells. Finally, we discuss the liquid crystal alignment and surface effects on field-induced Freedericksz transition.

  12. Electromagnetic axial anomaly in a generalized linear sigma model

    NASA Astrophysics Data System (ADS)

    Fariborz, Amir H.; Jora, Renata

    2017-06-01

    We construct the electromagnetic anomaly effective term for a generalized linear sigma model with two chiral nonets, one with a quark-antiquark structure, the other one with a four-quark content. We compute in the leading order of this framework the decays into two photons of six pseudoscalars: π0(137), π0(1300), η(547), η(958), η(1295) and η(1760). Our results agree well with the available experimental data.

  13. Aerodynamic design and analysis system for supersonic aircraft. Part 1: General description and theoretical development

    NASA Technical Reports Server (NTRS)

    Middleton, W. D.; Lundry, J. L.

    1975-01-01

    An integrated system of computer programs has been developed for the design and analysis of supersonic configurations. The system uses linearized theory methods for the calculation of surface pressures and supersonic area rule concepts in combination with linearized theory for calculation of aerodynamic force coefficients. Interactive graphics are optional at the user's request. This part presents a general description of the system and describes the theoretical methods used.

  14. The Development of Web-based Graphical User Interface for Unified Modeling Data with Multi (Correlated) Responses

    NASA Astrophysics Data System (ADS)

    Made Tirta, I.; Anggraeni, Dian

    2018-04-01

    Statistical models have developed rapidly in various directions to accommodate various types of data. Data collected from longitudinal, repeated-measures, or clustered designs (whether continuous, binary, count, or ordinal) are likely to be correlated, so statistical models for independent responses, such as the Generalized Linear Model (GLM) and the Generalized Additive Model (GAM), are not appropriate. Several models are available for correlated responses, including GEEs (Generalized Estimating Equations) for marginal models, and various mixed-effect models such as GLMM (Generalized Linear Mixed Models) and HGLM (Hierarchical Generalized Linear Models) for subject-specific models. These models are available in the free open-source software R, but only through a command-line interface (using scripts). On the other hand, most practical researchers rely heavily on menu-based graphical user interfaces (GUIs). Using the Shiny framework, we developed a standard pull-down-menu Web-GUI that unifies most models for correlated responses. The Web-GUI accommodates almost all needed features: it enables users to run and compare various models for repeated-measures data (GEE, GLMM, HGLM, GEE for nominal responses) much more easily through online menus. This paper discusses the features of the Web-GUI and illustrates their use. In general we find that GEE, GLMM and HGLM give very similar results.
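
    The GUI wraps script-level fits of exactly this kind. For comparison, a marginal GEE model for clustered count responses can be fit in a few lines, shown here in Python's statsmodels rather than the R backend the Web-GUI uses; the data are simulated for illustration.

```python
# Hedged sketch: a marginal GEE fit for correlated Poisson responses.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(4)
n_subjects, n_obs = 40, 5
df = pd.DataFrame({
    "subject": np.repeat(np.arange(n_subjects), n_obs),
    "x": rng.uniform(size=n_subjects * n_obs),
})
frailty = np.repeat(rng.normal(scale=0.3, size=n_subjects), n_obs)  # within-cluster correlation
df["y"] = rng.poisson(np.exp(0.5 + 1.0 * df["x"] + frailty))

model = sm.GEE.from_formula(
    "y ~ x", groups="subject", data=df,
    family=sm.families.Poisson(),
    cov_struct=sm.cov_struct.Exchangeable(),
)
print(model.fit().summary())
```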

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kalligiannaki, Evangelia, E-mail: ekalligian@tem.uoc.gr; Harmandaris, Vagelis, E-mail: harman@uoc.gr; Institute of Applied and Computational Mathematics

    Using the probabilistic language of conditional expectations, we reformulate the force matching method for coarse-graining of molecular systems as a projection onto spaces of coarse observables. A practical outcome of this probabilistic description is the link of the force matching method with thermodynamic integration. This connection provides a way to systematically construct a local mean force and to optimally approximate the potential of mean force through force matching. We introduce a generalized force matching condition for the local mean force, in the sense that it allows the approximation of the potential of mean force under both linear and non-linear coarse graining mappings (e.g., reaction coordinates, end-to-end length of chains). Furthermore, we study the equivalence of force matching with relative entropy minimization, which we derive for general non-linear coarse graining maps. We present the generalized force matching condition in detail through applications to specific examples in molecular systems.
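
    Schematically, force matching reduces to a least-squares projection: coarse-grained force-field parameters are chosen so that the CG forces best reproduce the mapped atomistic forces. A sketch with an invented basis and synthetic reference forces follows.

```python
# Hedged sketch of force matching as linear least squares.
import numpy as np

rng = np.random.default_rng(5)
r = rng.uniform(0.8, 2.5, size=400)               # CG pair distances
# Synthetic "mapped atomistic" forces: Lennard-Jones-like plus noise.
f_ref = 24 * (2 / r**13 - 1 / r**7) + 0.05 * rng.standard_normal(400)

# CG force model: a linear combination of basis functions of the distance.
basis = np.column_stack([r**-13, r**-7, r**-2])
coef, *_ = np.linalg.lstsq(basis, f_ref, rcond=None)
print("fitted CG force coefficients:", coef)
```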

  16. A statistical analysis of the daily streamflow hydrograph

    NASA Astrophysics Data System (ADS)

    Kavvas, M. L.; Delleur, J. W.

    1984-03-01

    In this study a periodic statistical analysis of daily streamflow data in Indiana, U.S.A., was performed to gain new insight into the stochastic structure that describes the daily streamflow process. The analysis used the periodic mean and covariance functions of the daily streamflows, the time- and peak-discharge-dependent recession limb of the daily streamflow hydrograph, the time- and discharge exceedance level (DEL)-dependent probability distribution of the hydrograph peak interarrival time, and the time-dependent probability distribution of the time to peak discharge. Some new statistical estimators were developed and used in this study. In general, this study has shown that: (a) the persistence properties of daily flows depend on the storage state of the basin at the specified time origin of the flow process; (b) the daily streamflow process is time irreversible; (c) the probability distribution of the daily hydrograph peak interarrival time depends both on the occurrence time of the peak from which the interarrival time originates and on the discharge exceedance level; and (d) if the daily streamflow process is modeled as the release from a linear watershed storage, this release should depend on the state of the storage and on the time of the release, as the persistence properties and the recession limb decay rates were observed to change with the state of the watershed storage and time. Therefore, a time-varying reservoir system needs to be considered if the daily streamflow process is to be modeled as the release from a linear watershed storage.

  17. Decision-Related Activity in Macaque V2 for Fine Disparity Discrimination Is Not Compatible with Optimal Linear Readout

    PubMed Central

    Clery, Stephane; Cumming, Bruce G.

    2017-01-01

    Fine judgments of stereoscopic depth rely mainly on relative judgments of depth (relative binocular disparity) between objects, rather than judgments of the distance to where the eyes are fixating (absolute disparity). In macaques, visual area V2 is the earliest site in the visual processing hierarchy for which neurons selective for relative disparity have been observed (Thomas et al., 2002). Here, we found that, in macaques trained to perform a fine disparity discrimination task, disparity-selective neurons in V2 were highly selective for the task, and their activity correlated with the animals' perceptual decisions (unexplained by the stimulus). This may partially explain similar correlations reported in downstream areas. Although compatible with a perceptual role of these neurons for the task, the interpretation of such decision-related activity is complicated by the effects of interneuronal “noise” correlations between sensory neurons. Recent work has developed simple predictions to differentiate decoding schemes (Pitkow et al., 2015) without needing measures of noise correlations, and found that data from early sensory areas were compatible with optimal linear readout of populations with information-limiting correlations. In contrast, our data here deviated significantly from these predictions. We additionally tested this prediction for previously reported results of decision-related activity in V2 for a related task, coarse disparity discrimination (Nienborg and Cumming, 2006), thought to rely on absolute disparity. Although these data followed the predicted pattern, they violated the prediction quantitatively. This suggests that optimal linear decoding of sensory signals is not generally a good predictor of behavior in simple perceptual tasks. SIGNIFICANCE STATEMENT Activity in sensory neurons that correlates with an animal's decision is widely believed to provide insights into how the brain uses information from sensory neurons. Recent theoretical work developed simple predictions to differentiate decoding schemes, and found support for optimal linear readout of early sensory populations with information-limiting correlations. Here, we observed decision-related activity for neurons in visual area V2 of macaques performing fine disparity discrimination, as yet the earliest site for this task. These findings, and previously reported results from V2 in a different task, deviated from the predictions for optimal linear readout of a population with information-limiting correlations. Our results suggest that optimal linear decoding of early sensory information is not a general decoding strategy used by the brain. PMID:28100751

  18. Evolutionary pulsational mode dynamics in nonthermal turbulent viscous astrofluids

    NASA Astrophysics Data System (ADS)

    Karmakar, Pralay Kumar; Dutta, Pranamika

    2017-11-01

    The pulsational mode of gravitational collapse in a partially ionized self-gravitating inhomogeneous viscous nonthermal nonextensive astrofluid in the presence of turbulence pressure is illustratively analyzed. The constitutive thermal species, the lighter electrons and ions, are treated thermostatistically with nonthermal κ-distribution laws. The inertial species, such as identical heavier neutral and charged dust microspheres, are modelled in the turbulent fluid framework. All the possible linear processes responsible for dust-dust collisions are accounted for. The Larson logatropic equations of state relating the dust thermal (linear) and turbulence (nonlinear) pressures to the dust densities are included. A regular linear normal (local) perturbation analysis over the complex astrocloud yields a generalized quartic dispersion relation with plasma-dependent multi-parametric coefficients of a unique nature. A numerical standpoint is provided to showcase the basic mode features in a judicious astronomical paradigm. It is shown that both the kinematic viscosity of the dust fluids and the nonthermality parameter (kappa, the power-law tail index) of the thermal species act as stabilizing (damping) agents against gravity. The underlying evolutionary microphysics is explored. The significance of redistributing astrofluid material via wave-induced accretion in dynamic nonhomologic structureless cloud collapse, leading to hierarchical astrostructure formation, is actualized.

  19. Models for attenuation in marine sediments that incorporate structural relaxation processes

    NASA Astrophysics Data System (ADS)

    Pierce, Allan D.; Carey, William M.; Lynch, James F.

    2005-04-01

    Biot's model leads to an attenuation coefficient at low frequencies that is proportional to ω², which is consistent with physical models of viscous attenuation of fluid flows through narrow constrictions driven by pressure differences between larger fluid pockets within the granular configuration. Much data suggest, however, that the attenuation coefficient is linear in ω for some sediments over a wide range of frequencies. A common model that predicts such a dependence stems from theoretical work by Stoll and Bryan [J. Acoust. Soc. Am. 47, 1440 (1970)], in which the elastic constants of the solid frame are taken to be complex numbers with small constant imaginary parts. This invariably leads to a linear ω dependence at sufficiently low frequencies, which conflicts with common intuitive notions. The present paper incorporates structural relaxation, generalizing the formulations of Hall [Phys. Rev. 73, 775 (1948)] and Nachman, Smith, and Waag [J. Acoust. Soc. Am. 88, 1584 (1990)]. The mathematical form and plausibility of the model are established, and it is shown that the dependence is ω² at low frequencies, with a likely realization being one where the dependence is linear in ω at intermediate frequency ranges.
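
    The mechanism is easy to reproduce numerically: a single relaxation gives attenuation proportional to ω² at low frequency, while a broad (1/τ) distribution of relaxation times produces an intermediate band where attenuation is nearly linear in ω. A sketch with arbitrary constants:

```python
# Hedged sketch: superposed relaxations yield a near-linear attenuation band.
import numpy as np

omega = np.logspace(-2, 4, 400)
taus = np.logspace(-3, 1, 200)                    # broad spread of relaxation times
weights = 1.0 / taus
weights /= weights.sum()

# Each relaxation contributes ~ w^2 tau / (1 + w^2 tau^2).
alpha = (weights * (omega[:, None] ** 2 * taus) /
         (1 + omega[:, None] ** 2 * taus ** 2)).sum(axis=1)

slope = np.gradient(np.log(alpha), np.log(omega))  # local log-log slope
print("slope at low ω:", slope[10].round(2))       # ≈ 2
print("slope in mid-band:", slope[200].round(2))   # ≈ 1
```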

  20. Test-particle simulations in increasingly strong turbulence

    NASA Technical Reports Server (NTRS)

    Pontius, D. H., Jr.; Gray, P. C.; Matthaeus, W. H.

    1995-01-01

    Quasi-linear theory supposes that the energy in resonant fluctuations is small compared to that in the mean magnetic field. This is evident in the fact that the zeroth-order particle trajectories are helices about a mean field B(sub o) that is spatially uniform over many correlation lengths. However, in the solar wind it is often the case that the fluctuating part of the field is comparable in magnitude to the mean part. It is generally expected that quasi-linear theory remains viable for particles that are in resonance with a region of the fluctuation spectrum having only small energy density, but even so, care must be taken when comparing simulations to theoretical predictions. We have performed a series of test-particle simulations to explore the evolution of ion distributions in turbulent situations with varying levels of magnetic fluctuations. As delta-B/B(sub o) is increased, the distinctions among absolute pitch angle (defined relative to B(sub o)), local pitch angle (defined relative to B(x)), and magnetic moment become important, some of them exhibiting periodic sloshing unrelated to the nonadiabatic processes of interest. Comparing and contrasting the various runs illustrates the phenomena that must be considered when the premises underlying quasi-linear theory are relaxed.

  1. Generalized concurrence in boson sampling.

    PubMed

    Chin, Seungbeom; Huh, Joonsuk

    2018-04-17

    A fundamental question in linear optical quantum computing is to understand the origin of the quantum supremacy in the physical system. It is found that the multimode linear optical transition amplitudes are calculated through the permanents of transition operator matrices, which is a hard problem for classical simulations (the boson sampling problem). We can understand this problem by considering a quantum measure that directly determines the runtime for computing the transition amplitudes. In this paper, we suggest a quantum measure named the "Fock state concurrence sum" C_S, which is the summation over all the members of the "generalized Fock state concurrence" (a measure analogous to the generalized concurrences of entanglement and coherence). By introducing generalized algorithms for computing the transition amplitudes of Fock state boson sampling with an arbitrary number of photons per mode, we show that the minimal classical runtime for all the known algorithms depends directly on C_S. Therefore, we can state that the Fock state concurrence sum C_S behaves as a collective measure that controls the computational complexity of Fock state boson sampling. We expect that our observation on the role of the Fock state concurrence in the generalized algorithm for permanents will provide a unified viewpoint for interpreting the quantum computing power of linear optics.
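
    The classical cost enters through matrix permanents. As a compact reference point (our implementation, not the paper's generalized algorithm), Ryser's exponential-time formula computes the permanent as follows:

```python
# Hedged sketch: permanent via Ryser's inclusion-exclusion formula.
import numpy as np
from itertools import combinations

def permanent_ryser(A):
    n = A.shape[0]
    total = 0.0
    for k in range(1, n + 1):
        for cols in combinations(range(n), k):
            # Product over rows of the sums over the chosen column subset.
            total += (-1) ** k * np.prod(A[:, cols].sum(axis=1))
    return (-1) ** n * total

A = np.array([[1.0, 2.0], [3.0, 4.0]])
print(permanent_ryser(A))   # 1*4 + 2*3 = 10
```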

  2. Nonlinear Excitation of Inviscid Stationary Vortex in a Boundary-Layer Flow

    NASA Technical Reports Server (NTRS)

    Choudhari, Meelan; Duck, Peter W.

    1996-01-01

    We examine the excitation of inviscid stationary crossflow instabilities near an isolated surface hump (or indentation) underneath a three-dimensional boundary layer. As the hump height (or indentation depth) is increased from zero, the receptivity process becomes nonlinear even before the stability characteristics of the boundary layer are modified to a significant extent. This behavior contrasts sharply with earlier findings on the excitation of the lower branch Tollmien-Schlichting modes and is attributed to the inviscid nature of the crossflow modes, which leads to a decoupling between the regions of receptivity and stability. As a result of this decoupling, similarity transformations exist that allow the nonlinear receptivity of a general three-dimensional boundary layer to be studied with a set of canonical solutions to the viscous sublayer equations. The parametric study suggests that the receptivity is likely to become nonlinear even before the hump height becomes large enough for flow reversal to occur in the canonical solution. We also find that the receptivity to surface humps increases more rapidly as the hump height increases than is predicted by linear theory. On the other hand, receptivity near surface indentations is generally smaller in comparison with the linear approximation. Extension of the work to crossflow receptivity in compressible boundary layers and to Gortler vortex excitation is also discussed.

  3. Natural recovery of biological soil crusts after disturbance

    USGS Publications Warehouse

    Weber, Bettina; Bowker, Matthew A.; Zhang, Yuanming; Belnap, Jayne

    2016-01-01

    Natural recovery of biological soil crusts (biocrusts) is influenced by a number of different parameters, such as climate, soil conditions, the severity of disturbance, and the timing of disturbance relative to the climatic conditions. Recent studies have shown that recovery is often not linear, but a highly dynamic process directly influenced by non-linear external parameters such as extraordinary climatic conditions (e.g., a particularly dry or wet year). Natural recovery often follows a general succession pattern, starting out with cyanobacteria and algae, followed by lichens and bryophytes at a later stage. However, this general sequence can be altered by parameters like dust deposition, fire effects, and special climatic conditions such as those in fog deserts and under mesic climates. Recent studies have proposed that under favorable, stable soil conditions, the initial soil-stabilizing cyanobacteria-dominated succession stages may be omitted and moss-dominated biocrusts can develop in the initial phases of biocrust development. During natural recovery of biocrusts, soil properties change; e.g., soil nutrient and organic matter contents increase. Also, silt and clay contents of encrusted soils increase with biocrust maturity, which may be caused by two mechanisms: entrapment of fine soil particles by biocrusts and the formation of smaller particles by weathering of the existing substrate.

  4. A Comparison of Two-Stage Approaches for Fitting Nonlinear Ordinary Differential Equation (ODE) Models with Mixed Effects

    PubMed Central

    Chow, Sy-Miin; Bendezú, Jason J.; Cole, Pamela M.; Ram, Nilam

    2016-01-01

    Several approaches currently exist for estimating the derivatives of observed data for model exploration purposes, including functional data analysis (FDA), generalized local linear approximation (GLLA), and generalized orthogonal local derivative approximation (GOLD). These derivative estimation procedures can be used in a two-stage process to fit mixed effects ordinary differential equation (ODE) models. While the performance and utility of these routines for estimating linear ODEs have been established, they have not yet been evaluated in the context of nonlinear ODEs with mixed effects. We compared properties of the GLLA and GOLD to an FDA-based two-stage approach denoted herein as functional ordinary differential equation with mixed effects (FODEmixed) in a Monte Carlo study using a nonlinear coupled oscillators model with mixed effects. Simulation results showed that overall, the FODEmixed outperformed both the GLLA and GOLD across all the embedding dimensions considered, but a novel use of a fourth-order GLLA approach combined with very high embedding dimensions yielded estimation results that almost paralleled those from the FODEmixed. We discuss the strengths and limitations of each approach and demonstrate how output from each stage of FODEmixed may be used to inform empirical modeling of young children’s self-regulation. PMID:27391255

  5. A Comparison of Two-Stage Approaches for Fitting Nonlinear Ordinary Differential Equation Models with Mixed Effects.

    PubMed

    Chow, Sy-Miin; Bendezú, Jason J; Cole, Pamela M; Ram, Nilam

    2016-01-01

    Several approaches exist for estimating the derivatives of observed data for model exploration purposes, including functional data analysis (FDA; Ramsay & Silverman, 2005 ), generalized local linear approximation (GLLA; Boker, Deboeck, Edler, & Peel, 2010 ), and generalized orthogonal local derivative approximation (GOLD; Deboeck, 2010 ). These derivative estimation procedures can be used in a two-stage process to fit mixed effects ordinary differential equation (ODE) models. While the performance and utility of these routines for estimating linear ODEs have been established, they have not yet been evaluated in the context of nonlinear ODEs with mixed effects. We compared properties of the GLLA and GOLD to an FDA-based two-stage approach denoted herein as functional ordinary differential equation with mixed effects (FODEmixed) in a Monte Carlo (MC) study using a nonlinear coupled oscillators model with mixed effects. Simulation results showed that overall, the FODEmixed outperformed both the GLLA and GOLD across all the embedding dimensions considered, but a novel use of a fourth-order GLLA approach combined with very high embedding dimensions yielded estimation results that almost paralleled those from the FODEmixed. We discuss the strengths and limitations of each approach and demonstrate how output from each stage of FODEmixed may be used to inform empirical modeling of young children's self-regulation.
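
    In the spirit of the first stage of such two-stage approaches, derivatives can be estimated by regressing a time-delay embedded signal on a local polynomial basis. The sketch below (our own choices of embedding dimension and basis, loosely following the GLLA idea rather than any of the cited implementations) recovers the derivative of a sine wave:

```python
# Hedged sketch: local-polynomial derivative estimation on embedded data.
import numpy as np

dt, d = 0.1, 5                                    # sampling interval, embedding dimension
t = np.arange(0, 20, dt)
x = np.sin(t)                                     # observed signal (true derivative: cos t)

offsets = (np.arange(d) - (d - 1) / 2) * dt
L = np.column_stack([np.ones(d), offsets, offsets**2 / 2])  # value, slope, curvature basis
W = L @ np.linalg.inv(L.T @ L)                    # maps each embedded row to estimates

emb = np.column_stack([x[i:len(x) - d + i + 1] for i in range(d)])
est = emb @ W                                     # columns: smoothed x, dx/dt, d2x/dt2

centers = t[(d - 1) // 2 : len(x) - d // 2]
print("max abs error of dx/dt:", np.abs(est[:, 1] - np.cos(centers)).max())
```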

  6. Comparison Of Eigenvector-Based Statistical Pattern Recognition Algorithms For Hybrid Processing

    NASA Astrophysics Data System (ADS)

    Tian, Q.; Fainman, Y.; Lee, Sing H.

    1989-02-01

    The pattern recognition algorithms based on eigenvector analysis (group 2) are theoretically and experimentally compared in this part of the paper. Group 2 consists of the Foley-Sammon (F-S) transform, the Hotelling trace criterion (HTC), the Fukunaga-Koontz (F-K) transform, the linear discriminant function (LDF) and the generalized matched filter (GMF). It is shown that all eigenvector-based algorithms can be represented in a generalized eigenvector form. However, the calculations of the discriminant vectors differ between algorithms. Summaries of how to calculate the discriminant functions for the F-S, HTC and F-K transforms are provided. Especially for the more practical, underdetermined case, where the number of training images is less than the number of pixels in each image, the calculations usually require the inversion of a large, singular pixel correlation (or covariance) matrix. We suggest solving this problem by finding its pseudo-inverse, which requires inverting only the smaller, non-singular image correlation (or covariance) matrix plus multiplying several non-singular matrices. We also compare theoretically the classification effectiveness of the discriminant functions from F-S, HTC and F-K with those of LDF and GMF, and the linear-mapping-based algorithms with the eigenvector-based algorithms. Experimentally, we compare the eigenvector-based algorithms using a set of image databases, with each image consisting of 64 × 64 pixels.
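
    The pseudo-inverse shortcut for the underdetermined case can be stated concretely: with image matrix X (pixels by images), the pseudo-inverse of the singular pixel correlation matrix R = X Xᵀ follows from the small, non-singular image correlation matrix G = XᵀX via R⁺ = X (G G)⁻¹ Xᵀ. A numerical spot-check with random stand-in data:

```python
# Hedged sketch: pseudo-inverse of a singular p x p correlation matrix
# obtained from the small m x m image correlation matrix.
import numpy as np

rng = np.random.default_rng(6)
p, m = 1024, 20                                   # pixels per image, training images
X = rng.standard_normal((p, m))                   # columns are (centered) images

G = X.T @ X                                       # m x m, non-singular w.p. 1
R_pinv = X @ np.linalg.inv(G @ G) @ X.T           # equals pinv(X @ X.T)

# Spot-check against direct pseudo-inversion.
print(np.allclose(R_pinv, np.linalg.pinv(X @ X.T)))
```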

  7. Bayesian block-diagonal variable selection and model averaging

    PubMed Central

    Papaspiliopoulos, O.; Rossell, D.

    2018-01-01

    Summary We propose a scalable algorithmic framework for exact Bayesian variable selection and model averaging in linear models under the assumption that the Gram matrix is block-diagonal, and as a heuristic for exploring the model space for general designs. In block-diagonal designs our approach returns the most probable model of any given size without resorting to numerical integration. The algorithm also provides a novel and efficient solution to the frequentist best subset selection problem for block-diagonal designs. Posterior probabilities for any number of models are obtained by evaluating a single one-dimensional integral, and other quantities of interest, such as variable inclusion probabilities and model-averaged regression estimates, are obtained by an adaptive, deterministic one-dimensional numerical integration. The overall computational cost scales linearly with the number of blocks, which can be processed in parallel, and exponentially with the block size, rendering the method most adequate in situations where predictors are organized in many moderately sized blocks. For general designs, we approximate the Gram matrix by a block-diagonal matrix using spectral clustering and propose an iterative algorithm that capitalizes on the block-diagonal algorithms to explore the model space efficiently. All methods proposed in this paper are implemented in the R library mombf. PMID:29861501
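
    The heuristic for general designs can be sketched as follows (the reference implementation is the R library mombf; this Python fragment is only illustrative): cluster predictors by the magnitude of their correlations so that the Gram matrix is approximately block-diagonal.

```python
# Hedged sketch: block-diagonal approximation of a Gram matrix by
# spectral clustering of the predictor correlation magnitudes.
import numpy as np
from sklearn.cluster import SpectralClustering

rng = np.random.default_rng(7)
n, p, k = 200, 30, 3
base = rng.standard_normal((n, k))
X = np.repeat(base, p // k, axis=1) + 0.5 * rng.standard_normal((n, p))

affinity = np.abs(np.corrcoef(X, rowvar=False))   # predictor-by-predictor affinities
labels = SpectralClustering(n_clusters=k, affinity="precomputed",
                            random_state=0).fit_predict(affinity)
print("recovered predictor blocks:", labels)
```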

  8. Aspects of general higher-order gravities

    NASA Astrophysics Data System (ADS)

    Bueno, Pablo; Cano, Pablo A.; Min, Vincent S.; Visser, Manus R.

    2017-02-01

    We study several aspects of higher-order gravities constructed from general contractions of the Riemann tensor and the metric in arbitrary dimensions. First, we use the fast-linearization procedure presented in [P. Bueno and P. A. Cano, arXiv:1607.06463] to obtain the equations satisfied by the metric perturbation modes on a maximally symmetric background in the presence of matter and to classify L (Riemann ) theories according to their spectrum. Then, we linearize all theories up to quartic order in curvature and use this result to construct quartic versions of Einsteinian cubic gravity. In addition, we show that the most general cubic gravity constructed in a dimension-independent way and which does not propagate the ghostlike spin-2 mode (but can propagate the scalar) is a linear combination of f (Lovelock ) invariants, plus the Einsteinian cubic gravity term, plus a new ghost-free gravity term. Next, we construct the generalized Newton potential and the post-Newtonian parameter γ for general L (Riemann ) gravities in arbitrary dimensions, unveiling some interesting differences with respect to the four-dimensional case. We also study the emission and propagation of gravitational radiation from sources for these theories in four dimensions, providing a generalized formula for the power emitted. Finally, we review Wald's formalism for general L (Riemann ) theories and construct new explicit expressions for the relevant quantities involved. Many examples illustrate our calculations.

  9. Linear Equating for the NEAT Design: Parameter Substitution Models and Chained Linear Relationship Models

    ERIC Educational Resources Information Center

    Kane, Michael T.; Mroch, Andrew A.; Suh, Youngsuk; Ripkey, Douglas R.

    2009-01-01

    This paper analyzes five linear equating models for the "nonequivalent groups with anchor test" (NEAT) design with internal anchors (i.e., the anchor test is part of the full test). The analysis employs a two-dimensional framework. The first dimension contrasts two general approaches to developing the equating relationship. Under a "parameter…

  10. Validation of drift and diffusion coefficients from experimental data

    NASA Astrophysics Data System (ADS)

    Riera, R.; Anteneodo, C.

    2010-04-01

    Many fluctuation phenomena, in physics and other fields, can be modeled by Fokker-Planck or stochastic differential equations whose coefficients, associated with the drift and diffusion components, may be estimated directly from the observed time series. Their correct characterization is crucial for determining the system quantifiers. However, due to the finite sampling rates of real data, the empirical estimates may differ significantly from their true functional forms. In the literature, low-order corrections, or even no corrections, have been applied to the finite-time estimates. A frequent outcome consists of linear drift and quadratic diffusion coefficients. For this case, exact corrections have recently been found from Itô-Taylor expansions. Nevertheless, model validation constitutes a necessary step before determining and applying the appropriate corrections. Here, we exploit the consequences of the exact theoretical results obtained for the linear-quadratic model. In particular, we discuss whether the observed finite-time estimates are actually a manifestation of that model. The relevance of this analysis is put into evidence by its application to two contrasting real data examples in which finite-time linear drift and quadratic diffusion coefficients are observed. In one case the linear-quadratic model is readily rejected, while in the other, although the model constitutes a very good approximation, low-order corrections are inappropriate. These examples give warning signs about the proper interpretation of finite-time analysis, even in more general diffusion processes.
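
    A minimal version of such finite-time estimation, for a simulated Ornstein-Uhlenbeck process (linear drift, constant diffusion); binning and conventions are our own, and these raw estimates carry exactly the finite-sampling bias the paper discusses:

```python
# Hedged sketch: binned conditional-moment estimates of drift and diffusion.
import numpy as np

rng = np.random.default_rng(8)
dt, n = 0.01, 200_000
x = np.empty(n); x[0] = 0.0
for i in range(n - 1):                            # dX = -X dt + 0.5 dW
    x[i + 1] = x[i] - x[i] * dt + 0.5 * np.sqrt(dt) * rng.standard_normal()

dx = np.diff(x)
bins = np.linspace(-1, 1, 21)
idx = np.digitize(x[:-1], bins)
for b in range(5, 16, 5):                         # a few interior bins
    sel = idx == b
    drift = dx[sel].mean() / dt                   # <ΔX | X=x> / Δt  ≈ -x
    diff2 = (dx[sel] ** 2).mean() / dt            # <ΔX² | X=x> / Δt ≈ 0.25
    print(f"x≈{bins[b - 1]:+.1f}: drift {drift:+.2f}, diffusion {diff2:.3f}")
```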

  11. Modelling leaf photosynthetic and transpiration temperature-dependent responses in Vitis vinifera cv. Semillon grapevines growing in hot, irrigated vineyard conditions

    PubMed Central

    Greer, Dennis H.

    2012-01-01

    Background and aims Grapevines growing in Australia are often exposed to very high temperatures and the question of how the gas exchange processes adjust to these conditions is not well understood. The aim was to develop a model of photosynthesis and transpiration in relation to temperature to quantify the impact of the growing conditions on vine performance. Methodology Leaf gas exchange was measured along the grapevine shoots in accordance with their growth and development over several growing seasons. Using a general linear statistical modelling approach, photosynthesis and transpiration were modelled against leaf temperature separated into bands and the model parameters and coefficients applied to independent datasets to validate the model. Principal results Photosynthesis, transpiration and stomatal conductance varied along the shoot, with early emerging leaves having the highest rates, but these declined as later emerging leaves increased their gas exchange capacities in accordance with development. The general linear modelling approach applied to these data revealed that photosynthesis at each temperature was additively dependent on stomatal conductance, internal CO2 concentration and photon flux density. The temperature-dependent coefficients for these parameters applied to other datasets gave a predicted rate of photosynthesis that was linearly related to the measured rates, with a 1 : 1 slope. Temperature-dependent transpiration was multiplicatively related to stomatal conductance and the leaf to air vapour pressure deficit and applying the coefficients also showed a highly linear relationship, with a 1 : 1 slope between measured and modelled rates, when applied to independent datasets. Conclusions The models developed for the grapevines were relatively simple but accounted for much of the seasonal variation in photosynthesis and transpiration. The goodness of fit in each case demonstrated that explicitly selecting leaf temperature as a model parameter, rather than including temperature intrinsically as is usually done in more complex models, was warranted. PMID:22567220

  12. Steady-state global optimization of metabolic non-linear dynamic models through recasting into power-law canonical models

    PubMed Central

    2011-01-01

    Background Design of newly engineered microbial strains for biotechnological purposes would greatly benefit from the development of realistic mathematical models for the processes to be optimized. Such models can then be analyzed and, with the development and application of appropriate optimization techniques, one could identify the modifications that need to be made to the organism in order to achieve the desired biotechnological goal. As appropriate models to perform such an analysis are necessarily non-linear and typically non-convex, finding their global optimum is a challenging task. Canonical modeling techniques, such as Generalized Mass Action (GMA) models based on the power-law formalism, offer a possible solution to this problem because they have a mathematical structure that enables the development of specific algorithms for global optimization. Results Based on the GMA canonical representation, we have developed in previous works a highly efficient optimization algorithm and a set of related strategies for understanding the evolution of adaptive responses in cellular metabolism. Here, we explore the possibility of recasting kinetic non-linear models into an equivalent GMA model, so that global optimization on the recast GMA model can be performed. With this technique, optimization is greatly facilitated and the results are transposable to the original non-linear problem. This procedure is straightforward for a particular class of non-linear models known as Saturable and Cooperative (SC) models that extend the power-law formalism to deal with saturation and cooperativity. Conclusions Our results show that recasting non-linear kinetic models into GMA models is indeed an appropriate strategy that helps overcoming some of the numerical difficulties that arise during the global optimization task. PMID:21867520
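
    The recasting step itself can be shown on a textbook example (not one of the paper's case studies): a saturable Michaelis-Menten rate becomes a Generalized Mass Action system once the denominator is promoted to an auxiliary variable.

```latex
% Worked recasting example (our illustration): Michaelis-Menten -> GMA form.
\begin{align*}
\dot{S} &= -\frac{V_{\max}\, S}{K_M + S}
  && \text{(original non-linear model)} \\[4pt]
Z &:= K_M + S
  \;\Longrightarrow\;
  \dot{S} = -V_{\max}\, S\, Z^{-1}, \qquad
  \dot{Z} = \dot{S} = -V_{\max}\, S\, Z^{-1}
  && \text{(equivalent GMA form)}
\end{align*}
% Every right-hand side is now a product of power laws (here S^{1} Z^{-1}),
% so GMA-specific global optimization machinery applies to the recast model.
```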

  13. Pilots Rate Augmented Generalized Predictive Control for Reconfiguration

    NASA Technical Reports Server (NTRS)

    Soloway, Don; Haley, Pam

    2004-01-01

    The objective of this paper is to report the results from the research being conducted in reconfigurable flight controls at NASA Ames. A study was conducted with three NASA Dryden test pilots to evaluate two approaches to reconfiguring an aircraft's control system when failures occur in the control surfaces and engine. NASA Ames is investigating both a Neural Generalized Predictive Control scheme and a Neural Network based Dynamic Inverse controller. This paper highlights the Predictive Control scheme, where a simple augmentation to achieve zero steady-state error made the neural network predictor model redundant for the task. Instead of a neural network predictor model, a nominal single-point linear model was used, augmented with an error corrector. This paper shows that the Generalized Predictive Controller and the Dynamic Inverse Neural Network controller perform equally well at reconfiguration, but the former demands lower rates from the actuators. Also presented are the pilot ratings for each controller for various failure scenarios and two samples of the required control actuation during reconfiguration. Finally, the paper concludes by stepping through the Generalized Predictive Control's reconfiguration process for an elevator failure.
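
    A one-step-ahead caricature of that augmentation (all numbers invented, and much simpler than the NASA scheme): a fixed nominal linear model plus a low-pass error corrector removes steady-state tracking error despite model-plant mismatch, with no neural predictor in the loop.

```python
# Hedged sketch: predictive control with an additive error corrector.
a_m, b_m = 0.9, 0.5          # nominal single-point linear model
a_p, b_p = 0.8, 0.4          # true plant (mismatched, e.g., after a failure)
ref, lam = 1.0, 0.5          # setpoint, corrector gain

y, d_hat = 0.0, 0.0
for k in range(80):
    u = (ref - d_hat - a_m * y) / b_m     # steer the corrected model onto ref
    y = a_p * y + b_p * u                 # plant response
    d_hat += lam * (y - ref)              # corrector absorbs the residual offset
print("steady-state output:", round(y, 4))  # -> 1.0 despite model mismatch
```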

  14. Generalized statistical mechanics of cosmic rays: Application to positron-electron spectral indices.

    PubMed

    Yalcin, G Cigdem; Beck, Christian

    2018-01-29

    Cosmic ray energy spectra exhibit power law distributions over many orders of magnitude that are very well described by the predictions of q-generalized statistical mechanics, based on a q-generalized Hagedorn theory for transverse momentum spectra and hard QCD scattering processes. QCD at the largest center of mass energies predicts the entropic index to be [Formula: see text]. Here we show that the escort duality of the nonextensive thermodynamic formalism predicts an energy split of effective temperature given by Δ [Formula: see text] MeV, where T_H is the Hagedorn temperature. We carefully analyse the measured data of the AMS-02 collaboration and provide evidence that the predicted temperature split is indeed observed, leading to a different energy dependence of the e+ and e- spectral indices. We also observe a distinguished energy scale E* ≈ 50 GeV where the e+ and e- spectral indices differ the most. Linear combinations of the escort and non-escort q-generalized canonical distributions yield excellent agreement with the measured AMS-02 data in the entire energy range.

  15. Linear Algebra and Image Processing

    ERIC Educational Resources Information Center

    Allali, Mohamed

    2010-01-01

    We use digital image processing (DIP) technology to enhance the teaching of linear algebra so as to make the course more visual and interesting. This visual approach of using technology to link linear algebra to DIP is interesting and unexpected to students as well as to many faculty. (Contains 2 tables and 11 figures.)
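
    A classic classroom instance of this linear-algebra/DIP link is low-rank image approximation via the SVD (sketch with a random stand-in image):

```python
# Hedged sketch: rank-k SVD approximation of an image matrix.
import numpy as np

rng = np.random.default_rng(9)
img = rng.uniform(size=(64, 64))                  # stand-in grayscale image

U, s, Vt = np.linalg.svd(img, full_matrices=False)
k = 8
approx = U[:, :k] * s[:k] @ Vt[:k]                # rank-8 approximation

rel_err = np.linalg.norm(img - approx) / np.linalg.norm(img)
print(f"rank-{k} relative error: {rel_err:.3f}")
```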

  16. Modelling female fertility traits in beef cattle using linear and non-linear models.

    PubMed

    Naya, H; Peñagaricano, F; Urioste, J I

    2017-06-01

    Female fertility traits are key components of the profitability of beef cattle production. However, these traits are difficult and expensive to measure, particularly under extensive pastoral conditions, and consequently fertility records are in general scarce and somewhat incomplete. Moreover, fertility traits are usually dominated by the effects of the herd-year environment, and it is generally assumed that relatively small margins are left for genetic improvement. New ways of modelling genetic variation in these traits are needed. Inspired by the methodological developments made by Prof. Daniel Gianola and co-workers, we applied linear (Gaussian), Poisson, probit (threshold), censored Poisson and censored Gaussian models to three different kinds of endpoints, namely calving success (CS), number of days from first calving (CD) and number of failed oestrus (FE). For models involving FE and CS, non-linear models outperformed their linear counterparts. For models derived from CD, the linear versions displayed better adjustment than their non-linear counterparts. Non-linear models showed consistently higher estimates of heritability and repeatability in all cases (h² < 0.08 and r < 0.13 for linear models; h² > 0.23 and r > 0.24 for non-linear models). While additive and permanent environment effects showed highly favourable correlations between all models (>0.789), consistency in selecting the 10% best sires showed important differences, mainly among the considered endpoints (FE, CS and CD). In consequence, the endpoints should be considered as modelling different underlying genetic effects, with linear models more appropriate for describing CD and non-linear models better for FE and CS. © 2017 Blackwell Verlag GmbH.

  17. A decentralized square root information filter/smoother

    NASA Technical Reports Server (NTRS)

    Bierman, G. J.; Belzer, M. R.

    1985-01-01

    A number of developments have recently led to considerable interest in the decentralization of linear least squares estimators. These developments are partly related to the impending emergence of VLSI technology, the realization of parallel processing, and the need for algorithmic ways to speed the solution of dynamically decoupled, high-dimensional estimation problems. A new method is presented for combining Square Root Information Filter (SRIF) estimates obtained from independent data sets. The new method involves an orthogonal transformation, and it generalizes an information matrix filter 'homework' problem discussed by Schweppe (1973). The SRIF orthogonal transformation methodology employed here has been described by Bierman (1977).
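
    The combination step can be illustrated directly (random stand-in matrices, not Bierman and Belzer's notation): stacking two local square-root information pairs (R_i, z_i), each with information matrix R_iᵀR_i and local state solving R_i x = z_i, and re-triangularizing with one QR factorization yields the fused estimate.

```python
# Hedged sketch: fusing two SRIF estimates with an orthogonal transformation.
import numpy as np

rng = np.random.default_rng(10)
n = 4
x_true = rng.standard_normal(n)

def local_estimate(noise):
    R = np.triu(rng.standard_normal((n, n))) + 3 * np.eye(n)
    z = R @ x_true + noise * rng.standard_normal(n)
    return R, z

R1, z1 = local_estimate(0.05)
R2, z2 = local_estimate(0.05)

# Stack the data equations and re-triangularize via QR; Q is never formed.
stacked = np.vstack([np.column_stack([R1, z1]), np.column_stack([R2, z2])])
R_comb = np.linalg.qr(stacked, mode="r")          # upper-triangular (n+1 x n+1)
x_fused = np.linalg.solve(R_comb[:n, :n], R_comb[:n, n])
print("fused estimate error:", np.linalg.norm(x_fused - x_true))
```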

  18. Zonal flow as pattern formation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parker, Jeffrey B.; Krommes, John A.

    2013-10-15

    Zonal flows are well known to arise spontaneously out of turbulence. We show that for statistically averaged equations of the stochastically forced generalized Hasegawa-Mima model, steady-state zonal flows, and inhomogeneous turbulence fit into the framework of pattern formation. There are many implications. First, the wavelength of the zonal flows is not unique. Indeed, in an idealized, infinite system, any wavelength within a certain continuous band corresponds to a solution. Second, of these wavelengths, only those within a smaller subband are linearly stable. Unstable wavelengths must evolve to reach a stable wavelength; this process manifests as merging jets.

  19. 32 bit digital optical computer - A hardware update

    NASA Technical Reports Server (NTRS)

    Guilfoyle, Peter S.; Carter, James A., III; Stone, Richard V.; Pape, Dennis R.

    1990-01-01

    Such state-of-the-art devices as multielement linear laser diode arrays, multichannel acoustooptic modulators, optical relays, and avalanche photodiode arrays, are presently applied to the implementation of a 32-bit supercomputer's general-purpose optical central processing architecture. Shannon's theorem, Morozov's control operator method (in conjunction with combinatorial arithmetic), and DeMorgan's law have been used to design an architecture whose 100 MHz clock renders it fully competitive with emerging planar-semiconductor technology. Attention is given to the architecture's multichannel Bragg cells, thermal design and RF crosstalk considerations, and the first and second anamorphic relay legs.

  20. A Constant Percentage Bandwidth Transform for Acoustic Signal Processing

    DTIC Science & Technology

    1980-01-01

    … F(ω,t)e^{jωt} dω/2π. (2.9) Equation 2.9 is not, however, the most general form for short-time Fourier synthesis, but is in fact a particular case of the … form of the analysis integral: F(ω_k, t)e^{jω_k t} = f(t) * h(t)e^{jω_k t}. (3.4) Fourier transforming both sides of this equation (and invoking the convolution …) properties. In what follows, define f(t) ↔ F(ω,t) (3.24) to be an equivalent statement to equation 3.1. 3.6.1 Linearity Property: If F_1(ω,t) and F_2(ω,t) …
