Sample records for linear estimation theory

  1. Estimating cosmic velocity fields from density fields and tidal tensors

    NASA Astrophysics Data System (ADS)

    Kitaura, Francisco-Shu; Angulo, Raul E.; Hoffman, Yehuda; Gottlöber, Stefan

    2012-10-01

    In this work we investigate the non-linear and non-local relation between cosmological density and peculiar velocity fields. Our goal is to provide an algorithm for the reconstruction of the non-linear velocity field from the fully non-linear density. We find that including the gravitational tidal field tensor via second-order Lagrangian perturbation theory, based upon an estimate of the linear component of the non-linear density field, significantly improves the estimate of the cosmic flow in comparison to linear theory, not only in the low-density but also, and more dramatically, in the high-density regions. In particular, we test two estimates of the linear component: the lognormal model and iterative Lagrangian linearization. The present approach relies on a rigorous higher-order Lagrangian perturbation theory analysis which incorporates a non-local relation. It does not require additional fitting from simulations, being in this sense parameter-free; it is independent of statistical-geometrical optimization; and it is straightforward and efficient to compute. The method is demonstrated to yield an unbiased estimator of the velocity field on scales ≳5 h⁻¹ Mpc with closely Gaussian distributed errors. Moreover, the statistics of the divergence of the peculiar velocity field are extremely well recovered, showing good agreement with the true one from N-body simulations. The typical errors of about 10 km s⁻¹ (1σ confidence intervals) are reduced by more than 80 per cent with respect to linear theory in the scale range between 5 and 10 h⁻¹ Mpc in high-density regions (δ > 2). We also find that iterative Lagrangian linearization is significantly superior to the lognormal model in the low-density regime.

  2. STRONG ORACLE OPTIMALITY OF FOLDED CONCAVE PENALIZED ESTIMATION.

    PubMed

    Fan, Jianqing; Xue, Lingzhou; Zou, Hui

    2014-06-01

    Folded concave penalization methods have been shown to enjoy the strong oracle property for high-dimensional sparse estimation. However, a folded concave penalization problem usually has multiple local solutions, and the oracle property is established only for one of the unknown local solutions. A fundamental issue therefore remains open: it is not clear whether the local optimum computed by a given optimization algorithm possesses these nice theoretical properties. To close this theoretical gap, which has stood for over a decade, we provide a unified theory that shows explicitly how to obtain the oracle solution via the local linear approximation algorithm. For a folded concave penalized estimation problem, we show that as long as the problem is localizable and the oracle estimator is well behaved, the oracle estimator can be obtained by using the one-step local linear approximation. In addition, once the oracle estimator is obtained, the local linear approximation algorithm converges, namely, it produces the same estimator in the next iteration. The general theory is demonstrated on four classical sparse estimation problems: sparse linear regression, sparse logistic regression, sparse precision matrix estimation, and sparse quantile regression.
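
    The local linear approximation (LLA) idea is concrete enough to sketch: the folded concave penalty is replaced by its tangent line at the current estimate, so each step is just a weighted lasso. Below is a minimal numpy illustration with a SCAD penalty (the usual a = 3.7) and a hand-rolled coordinate-descent lasso; all problem sizes and tuning values are illustrative, a sketch of the idea rather than the paper's exact procedure.

```python
import numpy as np

def scad_deriv(beta, lam, a=3.7):
    """Derivative of the SCAD penalty; used as the weights of the next lasso step."""
    b = np.abs(beta)
    return np.where(b <= lam, lam, np.maximum(a * lam - b, 0.0) / (a - 1.0))

def weighted_lasso(X, y, w, n_iter=200):
    """Coordinate descent for 0.5/n * ||y - X b||^2 + sum_j w_j |b_j|."""
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n
    r = y.copy()                              # residual y - X beta
    for _ in range(n_iter):
        for j in range(p):
            r += X[:, j] * beta[j]            # remove coordinate j's contribution
            rho = X[:, j] @ r / n
            beta[j] = np.sign(rho) * max(abs(rho) - w[j], 0.0) / col_sq[j]
            r -= X[:, j] * beta[j]
    return beta

def lla_scad(X, y, lam, n_steps=2):
    """LLA: lasso initializer, then reweighted-lasso steps with SCAD-derivative weights."""
    beta = weighted_lasso(X, y, np.full(X.shape[1], lam))
    for _ in range(n_steps):
        beta = weighted_lasso(X, y, scad_deriv(beta, lam))
    return beta

rng = np.random.default_rng(0)
n, p = 200, 10
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[:3] = [3.0, -2.0, 1.5]              # sparse truth
y = X @ beta_true + 0.5 * rng.standard_normal(n)
beta_hat = lla_scad(X, y, lam=0.2)
```

    Because the SCAD derivative vanishes for large coefficients, the reweighted step leaves strong signals essentially unpenalized while still thresholding the noise coordinates, which is the mechanism behind the oracle behavior described above.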

  4. The Mapping Model: A Cognitive Theory of Quantitative Estimation

    ERIC Educational Resources Information Center

    von Helversen, Bettina; Rieskamp, Jorg

    2008-01-01

    How do people make quantitative estimations, such as estimating a car's selling price? Traditionally, linear-regression-type models have been used to answer this question. These models assume that people weight and integrate all information available to estimate a criterion. The authors propose an alternative cognitive theory for quantitative…

  5. Relationships between digital signal processing and control and estimation theory

    NASA Technical Reports Server (NTRS)

    Willsky, A. S.

    1978-01-01

    Research directions in the fields of digital signal processing and modern control and estimation theory are discussed. Stability theory, linear prediction and parameter identification, system synthesis and implementation, two-dimensional filtering, decentralized control and estimation, and image processing are considered in order to uncover some of the basic similarities and differences in the goals, techniques, and philosophy of the disciplines.

  6. Quasi-Newton methods for parameter estimation in functional differential equations

    NASA Technical Reports Server (NTRS)

    Brewer, Dennis W.

    1988-01-01

    A state-space approach to parameter estimation in linear functional differential equations is developed using the theory of linear evolution equations. A locally convergent quasi-Newton type algorithm is applied to distributed systems with particular emphasis on parameters that induce unbounded perturbations of the state. The algorithm is computationally implemented on several functional differential equations, including coefficient and delay estimation in linear delay-differential equations.

  7. Probability theory, not the very guide of life.

    PubMed

    Juslin, Peter; Nilsson, Håkan; Winman, Anders

    2009-10-01

    Probability theory has long been taken as the self-evident norm against which to evaluate inductive reasoning, and classical demonstrations of violations of this norm include the conjunction error and base-rate neglect. Many of these phenomena require multiplicative probability integration, whereas people seem more inclined to linear additive integration, in part, at least, because of well-known capacity constraints on controlled thought. In this article, the authors show with computer simulations that when based on approximate knowledge of probabilities, as is routinely the case in natural environments, linear additive integration can yield as accurate estimates, and as good average decision returns, as estimates based on probability theory. It is proposed that in natural environments people have little opportunity or incentive to induce the normative rules of probability theory and, given their cognitive constraints, linear additive integration may often offer superior bounded rationality.
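
    The authors' simulation argument can be reproduced in miniature: when component probabilities are known only up to noise, a shifted additive rule estimates a conjunction about as accurately as the normative product. The sketch below is a toy version with an arbitrary noise level and equal weights, not the authors' actual simulation settings.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
p_a = rng.uniform(0.1, 0.9, n)            # true component probabilities
p_b = rng.uniform(0.1, 0.9, n)
true_conj = p_a * p_b                     # normative conjunction p(A and B)

noise = 0.15                              # error in the subjective estimates (assumption)
a_hat = np.clip(p_a + noise * rng.standard_normal(n), 0.0, 1.0)
b_hat = np.clip(p_b + noise * rng.standard_normal(n), 0.0, 1.0)

mult = a_hat * b_hat                      # probability-theory (multiplicative) integration
# linear additive integration; the -0.25 shift centers the rule on the
# product's typical value for this toy distribution (an assumption of the sketch)
add = 0.5 * a_hat + 0.5 * b_hat - 0.25

err_mult = np.mean(np.abs(mult - true_conj))
err_add = np.mean(np.abs(add - true_conj))
```

    With noisy inputs the two mean absolute errors come out close, illustrating the paper's point that the additive rule loses little accuracy once knowledge of the probabilities is approximate.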

  8. Relationships between digital signal processing and control and estimation theory

    NASA Technical Reports Server (NTRS)

    Willsky, A. S.

    1978-01-01

    Research areas associated with digital signal processing and control and estimation theory are identified. Particular attention is given to image processing, system identification problems (parameter identification, linear prediction, least squares, Kalman filtering), stability analyses (the use of the Liapunov theory, frequency domain criteria, passivity), and multiparameter systems, distributed processes, and random fields.
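
    One similarity the survey draws out, that linear prediction in signal processing and least-squares parameter identification in control are the same computation, is easy to exhibit: the Yule-Walker estimate of an AR(1) coefficient and the least-squares regression of x[t] on x[t-1] agree. A small numpy sketch; all settings are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
a_true = 0.8                       # AR(1) pole
n = 50_000
x = np.zeros(n)
for t in range(1, n):              # x[t] = a * x[t-1] + white noise
    x[t] = a_true * x[t - 1] + rng.standard_normal()

# Signal-processing view: Yule-Walker, a = r(1)/r(0) (one-step linear predictor)
r0 = x @ x / n
r1 = x[1:] @ x[:-1] / n
a_yw = r1 / r0

# Control/identification view: least-squares regression of x[t] on x[t-1]
a_ls = (x[1:] @ x[:-1]) / (x[:-1] @ x[:-1])
```

    Up to edge terms of order 1/n the two estimates coincide, which is the "point of tangency" between the disciplines in its simplest form.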

  9. Digital signal processing and control and estimation theory -- Points of tangency, area of intersection, and parallel directions

    NASA Technical Reports Server (NTRS)

    Willsky, A. S.

    1976-01-01

    A number of current research directions in the fields of digital signal processing and modern control and estimation theory were studied. Topics such as stability theory, linear prediction and parameter identification, system analysis and implementation, two-dimensional filtering, decentralized control and estimation, image processing, and nonlinear system theory were examined in order to uncover some of the basic similarities and differences in the goals, techniques, and philosophy of the two disciplines. An extensive bibliography is included.

  10. Linear system theory

    NASA Technical Reports Server (NTRS)

    Callier, Frank M.; Desoer, Charles A.

    1991-01-01

    The aim of this book is to provide systematic and rigorous access to the main topics of linear state-space system theory, in both the continuous-time and the discrete-time case, and to the I/O description of linear systems. The main thrusts of the work are the analysis of system descriptions and derivation of their properties, LQ-optimal control, state feedback and state estimation, and MIMO unity-feedback systems.
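
    The state-estimation thread, in the discrete-time case, can be illustrated with a minimal Luenberger observer: copy the model, correct with the measured output error. The matrices below are a textbook double-integrator toy (not from the book); the gain L is chosen deadbeat, so (A - LC) is nilpotent and the estimation error is exactly zero after two steps.

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [0.0, 1.0]])          # discrete-time double integrator
C = np.array([[1.0, 0.0]])          # only position is measured
L = np.array([[2.0],
              [1.0]])               # deadbeat gain: (A - L C)^2 = 0

x = np.array([[1.0], [-0.5]])       # true state (unknown to the observer)
x_hat = np.zeros((2, 1))            # observer starts from zero

for _ in range(3):
    y = C @ x                                   # measurement
    x_hat = A @ x_hat + L @ (y - C @ x_hat)     # model copy + output-error correction
    x = A @ x                                   # true plant update

err = np.linalg.norm(x - x_hat)     # error obeys e+ = (A - L C) e, hence 0 here
```

    The same loop with a Kalman gain in place of the fixed L is exactly the state-estimation design the book treats alongside LQ-optimal state feedback.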

  11. Turbulent Reconnection Rates from Cluster Observations in the Magnetosheath

    NASA Technical Reports Server (NTRS)

    Wendel, Deirdre

    2011-01-01

    The role of turbulence in producing fast reconnection rates is an important unresolved question. Scant in situ analyses exist. We apply multiple spacecraft techniques to a case of nonlinear turbulent reconnection in the magnetosheath to test various theoretical results for turbulent reconnection rates. To date, in situ estimates of the contribution of turbulence to reconnection rates have been calculated from an effective electric field derived through linear wave theory. However, estimates of reconnection rates based on fully nonlinear turbulence theories and simulations exist that are amenable to multiple spacecraft analyses. Here we present the linear and nonlinear theories and apply some of the nonlinear rates to Cluster observations of reconnecting, turbulent current sheets in the magnetosheath. We compare the results to the net reconnection rate found from the inflow speed. Ultimately, we intend to test and compare linear and nonlinear estimates of the turbulent contribution to reconnection rates and to measure the relative contributions of turbulence and the Hall effect.

  13. Linear and nonlinear 2D finite element analysis of sloshing modes and pressures in rectangular tanks subject to horizontal harmonic motions

    NASA Astrophysics Data System (ADS)

    Virella, Juan C.; Prato, Carlos A.; Godoy, Luis A.

    2008-05-01

    The influence of nonlinear wave theory on the sloshing natural periods and their modal pressure distributions is investigated for rectangular tanks under the assumption of two-dimensional behavior. Natural periods and mode shapes are computed and compared for both linear wave theory (LWT) and nonlinear wave theory (NLWT) models, using the finite element package ABAQUS. Linear wave theory is implemented in an acoustic model, whereas a plane-strain problem with large displacements is used for NLWT. Pressure distributions acting on the tank walls are obtained for the first three sloshing modes using both theories. It is found that the nonlinearity does not have significant effects on the natural sloshing periods. For the sloshing pressures on the tank walls, different distributions were found using the linear and nonlinear wave theory models. However, in all cases studied, linear wave theory conservatively estimated the magnitude of the pressure distribution, whereas larger pressure-resultant heights were obtained using the nonlinear theory. It is concluded that the nonlinearity of the surface wave does not have major effects on the pressure distribution on the walls for rectangular tanks.
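
    For context, the linear-theory sloshing frequencies the study compares against follow the classical gravity-wave dispersion relation for a rectangular tank (tank length L, liquid depth h, mode number n; this standard result is added here for reference and is not quoted from the paper):

```latex
\omega_n^2 = \frac{n\pi g}{L}\,\tanh\!\left(\frac{n\pi h}{L}\right),
\qquad T_n = \frac{2\pi}{\omega_n}, \qquad n = 1, 2, 3, \ldots
```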

  14. Attitude estimation of earth orbiting satellites by decomposed linear recursive filters

    NASA Technical Reports Server (NTRS)

    Kou, S. R.

    1975-01-01

    Attitude estimation of earth-orbiting satellites (including the Large Space Telescope) subjected to environmental disturbances and noises was investigated. Modern control and estimation theory is used as a tool to design an efficient estimator for attitude estimation. Decomposed linear recursive filters for both continuous-time and discrete-time systems are derived. Using this accurate estimate of spacecraft attitude, a state-variable feedback controller may be designed to satisfy stringent system-performance requirements.

  15. Survey and analysis of research on supersonic drag-due-to-lift minimization with recommendations for wing design

    NASA Technical Reports Server (NTRS)

    Carlson, Harry W.; Mann, Michael J.

    1992-01-01

    A survey of research on drag-due-to-lift minimization at supersonic speeds, including a study of the effectiveness of current design and analysis methods, was conducted. The results show that a linearized theory analysis with estimated attainable thrust and vortex force effects can predict with reasonable accuracy the lifting efficiency of flat wings. Significantly better wing performance can be achieved through the use of twist and camber. Although linearized theory methods tend to overestimate the amount of twist and camber required for a given application and provide an overly optimistic performance prediction, these deficiencies can be overcome by implementation of recently developed empirical corrections. Numerous examples of the correlation of experiment and theory are presented to demonstrate the applicability and limitations of linearized theory methods with and without empirical corrections. The use of an Euler code for the estimation of aerodynamic characteristics of a twisted and cambered wing and its application to design by iteration are discussed.

  16. The Hagen-Poiseuille, Plane Couette and Poiseuille Flows Linear Instability and Rogue Waves Excitation Mechanism

    NASA Astrophysics Data System (ADS)

    Chefranov, Sergey; Chefranov, Alexander

    2016-04-01

    Linear hydrodynamic stability theory for the Hagen-Poiseuille (HP) flow yields the conclusion of an infinitely large threshold Reynolds number, Re. This contradiction with the observation data is usually bypassed by assuming that the HP flow instability is of hard type, possible only for disturbances of sufficiently high amplitude, so that disturbance evolution must be treated by nonlinear hydrodynamic stability theory. The case of the plane Couette (PC) flow is similar. For the plane Poiseuille (PP) flow, linear theory disagrees with experiment quantitatively: it gives a threshold Reynolds number Re = 5772 (S. A. Orszag, 1971), more than five times the observed value Re = 1080 (S. J. Davies, C. M. White, 1928). In the present work we show that these linear-theory conclusions, stability of the HP and PC flows at any Reynolds number and the evidently too high threshold estimate for the PP flow, stem from the traditional representation of disturbances, which assumes that the longitudinal variable (along the flow direction) can be separated from the other spatial variables. We show that if this traditional form is abandoned, linear instability of the HP and PC flows is obtained at finite Reynolds numbers (Re > 704 for the HP flow and Re > 139 for the PC flow). We also reconcile the linear stability theory with the experimental data for the PP flow, obtaining a minimal threshold Reynolds number estimate of Re = 1040, and the minimal threshold estimate for the PC flow agrees with the experimental data of S. Bottin et al., 1997, where the laminar PC flow stability threshold is Re = 150. A rogue-wave excitation mechanism in oppositely directed currents due to the PC flow linear instability is discussed. Results of the new linear hydrodynamic stability theory for the HP, PP, and PC flows are published in: 1. S. G. Chefranov, A. G. Chefranov, JETP, v. 119, No. 2, 331, 2014; 2. S. G. Chefranov, A. G. Chefranov, Doklady Physics, vol. 60, No. 7, 327-332, 2015; 3. S. G. Chefranov, A. G. Chefranov, arXiv:1509.08910v1 [physics.flu-dyn], 29 Sep 2015 (accepted to JETP).

  17. Semigroup theory and numerical approximation for equations in linear viscoelasticity

    NASA Technical Reports Server (NTRS)

    Fabiano, R. H.; Ito, K.

    1990-01-01

    A class of abstract integrodifferential equations used to model linear viscoelastic beams is investigated analytically, applying a Hilbert-space approach. The basic equation is rewritten as a Cauchy problem, and its well-posedness is demonstrated. Finite-dimensional subspaces of the state space and an estimate of the state operator are obtained; approximation schemes for the equations are constructed; and the convergence is proved using the Trotter-Kato theorem of linear semigroup theory. The actual convergence behavior of different approximations is demonstrated in numerical computations, and the results are presented in tables.

  18. Doubly robust estimation of generalized partial linear models for longitudinal data with dropouts.

    PubMed

    Lin, Huiming; Fu, Bo; Qin, Guoyou; Zhu, Zhongyi

    2017-12-01

    We develop a doubly robust estimation of generalized partial linear models for longitudinal data with dropouts. Our method extends the highly efficient aggregate unbiased estimating function approach proposed in Qu et al. (2010) to a doubly robust one in the sense that under missing at random (MAR), our estimator is consistent when either the linear conditional mean condition is satisfied or a model for the dropout process is correctly specified. We begin with a generalized linear model for the marginal mean, and then move forward to a generalized partial linear model, allowing for nonparametric covariate effect by using the regression spline smoothing approximation. We establish the asymptotic theory for the proposed method and use simulation studies to compare its finite sample performance with that of Qu's method, the complete-case generalized estimating equation (GEE) and the inverse-probability weighted GEE. The proposed method is finally illustrated using data from a longitudinal cohort study. © 2017, The International Biometric Society.
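
    The doubly robust principle can be seen in miniature with an augmented inverse-probability-weighted (AIPW) mean under missing at random: if the dropout model is correct, the estimate stays consistent even when the outcome model is deliberately wrong (and symmetrically with the roles reversed). The simulation below is a toy illustration of that principle, not the authors' aggregate estimating-function method.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200_000
x = rng.standard_normal(n)
y = 2.0 * x + rng.standard_normal(n)      # true mean of y is 0

# MAR dropout: observation probability depends only on x, bounded away from 0;
# here the dropout model is taken as known (i.e., correctly specified)
pi = 0.2 + 0.6 / (1.0 + np.exp(-x))
r = rng.uniform(size=n) < pi              # r = True: y observed

m_wrong = np.zeros(n)                     # deliberately misspecified outcome model
# AIPW: outcome-model prediction plus an IPW-corrected residual on observed cases
aipw = np.mean(m_wrong + r * (y - m_wrong) / pi)

naive = y[r].mean()                       # complete-case mean: biased under MAR
```

    The complete-case mean is pulled upward because dropout depends on x, while the AIPW estimate recovers the true mean despite the zero outcome model, which is the "either model suffices" guarantee described above.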

  19. Nonlinear Large Deflection Theory with Modified Aeroelastic Lifting Line Aerodynamics for a High Aspect Ratio Flexible Wing

    NASA Technical Reports Server (NTRS)

    Nguyen, Nhan; Ting, Eric; Chaparro, Daniel

    2017-01-01

    This paper investigates the effect of nonlinear large deflection bending on the aerodynamic performance of a high aspect ratio flexible wing. A set of nonlinear static aeroelastic equations is derived for the large bending deflection of a high aspect ratio wing structure. An analysis is conducted to compare the nonlinear bending theory with the linear bending theory. The results show that the nonlinear bending theory is length-preserving, whereas the linear bending theory causes a non-physical lengthening of the wing structure under the no-axial-load condition. A modified lifting line theory is developed to compute the lift and drag coefficients of a wing structure undergoing a large bending deflection. The lift and drag coefficients are more accurately estimated by the nonlinear bending theory due to its length-preserving property. The nonlinear bending theory yields lower lift and span efficiency than the linear bending theory. A coupled aerodynamic-nonlinear finite element model is developed to implement the nonlinear bending theory for a Common Research Model (CRM) flexible wing wind tunnel model to be tested in the University of Washington Aeronautical Laboratory (UWAL). The structural stiffness of the model is designed to give about 10% wing tip deflection, which is large enough that the nonlinear deflection effect could become significant. The computational results show that the nonlinear bending theory yields slightly less lift than the linear bending theory for this wind tunnel model. As a result, the linear bending theory is deemed adequate for the CRM wind tunnel model.

  20. Asymptotic stability estimates near an equilibrium point

    NASA Astrophysics Data System (ADS)

    Dumas, H. Scott; Meyer, Kenneth R.; Palacián, Jesús F.; Yanguas, Patricia

    2017-07-01

    We use the error bounds for adiabatic invariants found in the work of Chartier, Murua and Sanz-Serna [3] to bound the solutions of a Hamiltonian system near an equilibrium over exponentially long times. Our estimates depend only on the linearized system and not on the higher order terms as in KAM theory, nor do we require any steepness or convexity conditions as in Nekhoroshev theory. We require that the equilibrium point where our estimate applies satisfy a type of formal stability called Lie stability.

  1. A Reduced Dimension Static, Linearized Kalman Filter and Smoother

    NASA Technical Reports Server (NTRS)

    Fukumori, I.

    1995-01-01

    An approximate Kalman filter and smoother, based on approximations of the state estimation error covariance matrix, is described. Approximations include a reduction of the effective state dimension, use of a static asymptotic error limit, and a time-invariant linearization of the dynamic model for error integration. The approximations lead to dramatic computational savings in applying estimation theory to large complex systems. Examples of use come from TOPEX/POSEIDON.
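
    The "static asymptotic error limit" idea, freeze the error covariance at its steady state and reuse one time-invariant gain, can be sketched by iterating the discrete Riccati recursion to a fixed point. A small numpy illustration; the 2-state model is a toy, not the TOPEX/POSEIDON ocean model.

```python
import numpy as np

A = np.array([[1.0, 0.1],
              [0.0, 0.95]])        # dynamics (illustrative)
C = np.array([[1.0, 0.0]])         # observation operator
Q = 0.01 * np.eye(2)               # process noise covariance
R = np.array([[0.1]])              # measurement noise covariance

# Iterate the Riccati recursion offline until the prediction covariance converges
P = np.eye(2)
for _ in range(500):
    S = C @ P @ C.T + R
    K = P @ C.T @ np.linalg.inv(S)             # Kalman gain at this iterate
    P_new = A @ (P - K @ C @ P) @ A.T + Q      # predict-update covariance step
    if np.max(np.abs(P_new - P)) < 1e-12:
        P = P_new
        break
    P = P_new

# Time-invariant (static) gain: apply at every step, skipping the online Riccati update
K_static = P @ C.T @ np.linalg.inv(C @ P @ C.T + R)
```

    Replacing the time-varying gain with K_static trades a small, bounded loss of optimality for the dramatic computational savings the abstract describes.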

  2. Linear instability of plane Couette and Poiseuille flows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chefranov, S. G., E-mail: schefranov@mail.ru; Chefranov, A. G., E-mail: Alexander.chefranov@emu.edu.tr

    2016-05-15

    It is shown that linear instability of plane Couette flow can take place even at finite Reynolds numbers Re > Re_th ≈ 139, which agrees with the experimental value of Re_th ≈ 150 ± 5 [16, 17]. This new result of the linear theory of hydrodynamic stability is obtained by abandoning the traditional assumption of the longitudinal periodicity of disturbances in the flow direction. It is established that previous notions about linear stability of this flow at arbitrarily large Reynolds numbers relied directly upon the assumed separation of spatial variables of the field of disturbances and their longitudinal periodicity in the linear theory. By also abandoning these assumptions for plane Poiseuille flow, a new threshold Reynolds number Re_th ≈ 1035 is obtained, which agrees to within 4% with experiment, in contrast to the 500% discrepancy for the previous estimate of Re_th ≈ 5772 obtained in the framework of the linear theory under the assumption of the "normal" shape of disturbances [2].

  3. Inverse Theory for Petroleum Reservoir Characterization and History Matching

    NASA Astrophysics Data System (ADS)

    Oliver, Dean S.; Reynolds, Albert C.; Liu, Ning

    This book is a guide to the use of inverse theory for estimation and conditional simulation of flow and transport parameters in porous media. It describes the theory and practice of estimating properties of underground petroleum reservoirs from measurements of flow in wells, and it explains how to characterize the uncertainty in such estimates. Early chapters present the reader with the necessary background in inverse theory, probability and spatial statistics. The book demonstrates how to calculate sensitivity coefficients and the linearized relationship between models and production data. It also shows how to develop iterative methods for generating estimates and conditional realizations. The text is written for researchers and graduates in petroleum engineering and groundwater hydrology and can be used as a textbook for advanced courses on inverse theory in petroleum engineering. It includes many worked examples to demonstrate the methodologies and a selection of exercises.
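
    The "linearized relationship between models and production data" the book develops culminates in the standard Gauss-Newton update for the maximum a posteriori model estimate under Gaussian priors (the notation below is the common inverse-theory convention, stated here for reference rather than quoted from the book): with forward model g, sensitivity matrix G_k = ∂g/∂m at the current iterate, prior mean m_pr, prior covariance C_M, and data covariance C_D,

```latex
m_{k+1} = m_{\mathrm{pr}} + C_M G_k^{\mathrm{T}}
          \left( G_k C_M G_k^{\mathrm{T}} + C_D \right)^{-1}
          \left[ d_{\mathrm{obs}} - g(m_k) + G_k \left( m_k - m_{\mathrm{pr}} \right) \right]
```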

  4. A theoretical signal processing framework for linear diffusion MRI: Implications for parameter estimation and experiment design.

    PubMed

    Varadarajan, Divya; Haldar, Justin P

    2017-11-01

    The data measured in diffusion MRI can be modeled as the Fourier transform of the Ensemble Average Propagator (EAP), a probability distribution that summarizes the molecular diffusion behavior of the spins within each voxel. This Fourier relationship is potentially advantageous because of the extensive theory that has been developed to characterize the sampling requirements, accuracy, and stability of linear Fourier reconstruction methods. However, existing diffusion MRI data sampling and signal estimation methods have largely been developed and tuned without the benefit of such theory, instead relying on approximations, intuition, and extensive empirical evaluation. This paper aims to address this discrepancy by introducing a novel theoretical signal processing framework for diffusion MRI. The new framework can be used to characterize arbitrary linear diffusion estimation methods with arbitrary q-space sampling, and can be used to theoretically evaluate and compare the accuracy, resolution, and noise-resilience of different data acquisition and parameter estimation techniques. The framework is based on the EAP, and makes very limited modeling assumptions. As a result, the approach can even provide new insight into the behavior of model-based linear diffusion estimation methods in contexts where the modeling assumptions are inaccurate. The practical usefulness of the proposed framework is illustrated using both simulated and real diffusion MRI data in applications such as choosing between different parameter estimation methods and choosing between different q-space sampling schemes. Copyright © 2017 Elsevier Inc. All rights reserved.

  5. Comparing Consider-Covariance Analysis with Sigma-Point Consider Filter and Linear-Theory Consider Filter Formulations

    NASA Technical Reports Server (NTRS)

    Lisano, Michael E.

    2007-01-01

    Recent literature in applied estimation theory reflects growing interest in the sigma-point (also called "unscented") formulation for optimal sequential state estimation, often describing performance comparisons with extended Kalman filters as applied to specific dynamical problems [cf. 1, 2, 3]. Favorable attributes of sigma-point filters include a lower expected error for nonlinear, even non-differentiable, dynamical systems, and a straightforward formulation that requires no derivation or implementation of partial-derivative Jacobian matrices. These attributes are particularly attractive, e.g., in terms of enabling simplified code architecture and streamlined testing, in the formulation of estimators for nonlinear spaceflight mechanics systems, such as filter software onboard deep-space robotic spacecraft. As presented in [4], the Sigma-Point Consider Filter (SPCF) algorithm extends the sigma-point filter algorithm to the problem of consider covariance analysis. Considering parameters in a dynamical system, while estimating its state, provides an upper bound on the estimated state covariance, which is viewed as a conservative approach to designing estimators for problems of general guidance, navigation and control. This is because, whether a parameter in the system model is observable or not, error in the knowledge of the value of a non-estimated parameter will increase the actual uncertainty of the estimated state of the system beyond the level formally indicated by the covariance of an estimator that neglects errors or uncertainty in that parameter. The equations for SPCF covariance evolution are obtained in a fashion similar to the derivation approach taken with standard (i.e., linearized or extended) consider-parameterized Kalman filters (cf. [5]).
    Whereas in [4] the SPCF and the linear-theory consider filter (LTCF) were applied to an illustrative linear-dynamics/linear-measurement problem, the present work examines the SPCF as applied to nonlinear sequential consider covariance analysis, i.e., in the presence of nonlinear dynamics and nonlinear measurements. A simple SPCF for orbit determination, exemplifying an algorithm hosted in the guidance, navigation and control (GN&C) computer processor of a hypothetical robotic spacecraft, was implemented and compared with an identically parameterized (standard) extended, consider-parameterized Kalman filter. The onboard filtering scenario examined is a hypothetical spacecraft orbit about a small natural body with imperfectly known mass. The formulations, relative complexities, and performances of the filters are compared and discussed.
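
    The sigma-point machinery referenced above is compact enough to sketch: deterministic points matching the mean and covariance are pushed through the (possibly nonlinear) map and re-averaged, with no Jacobians anywhere. For a linear map the transform is exact, which gives a convenient check. This is a generic unscented transform with the common κ-weighting, not the SPCF of [4].

```python
import numpy as np

def unscented_transform(mu, P, f, kappa=1.0):
    """Propagate mean mu and covariance P through f via 2n+1 sigma points."""
    n = mu.size
    S = np.linalg.cholesky((n + kappa) * P)       # matrix square root of (n+k)P
    sigma = np.vstack([mu, mu + S.T, mu - S.T])   # center point plus +/- spreads
    w = np.full(2 * n + 1, 1.0 / (2.0 * (n + kappa)))
    w[0] = kappa / (n + kappa)                    # weights sum to 1
    y = np.array([f(s) for s in sigma])           # push each point through f
    mu_y = w @ y                                  # recovered mean
    d = y - mu_y
    P_y = (w[:, None] * d).T @ d                  # recovered covariance
    return mu_y, P_y

mu = np.array([1.0, 2.0])
P = np.array([[0.5, 0.1],
              [0.1, 0.3]])
A = np.array([[2.0, 0.0],
              [1.0, 1.0]])
mu_y, P_y = unscented_transform(mu, P, lambda x: A @ x)   # linear map: exact
```

    Because no derivatives of f appear, the same few lines handle a nonlinear f unchanged, which is the "no Jacobian matrices" attribute praised in the abstract.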

  6. Applicability of linearized-theory attached-flow methods to design and analysis of flap systems at low speeds for thin swept wings with sharp leading edges

    NASA Technical Reports Server (NTRS)

    Carlson, Harry W.; Darden, Christine M.

    1987-01-01

    Low-speed experimental force and moment data on a series of thin swept wings with sharp leading edges and leading- and trailing-edge flaps are compared with predictions made using a linearized-theory method which includes estimates of vortex forces. These comparisons were made to assess the effectiveness of linearized-theory methods for use in the design and analysis of flap systems in subsonic flow. Results demonstrate that linearized-theory, attached-flow methods (with approximate representation of vortex forces) can form the basis of a rational system for flap design and analysis. Even attached-flow methods that do not take vortex forces into account can be used for the selection of optimized flap-system geometry, but design-point performance levels tend to be underestimated unless vortex forces are included. Illustrative examples of the use of these methods in the design of efficient low-speed flap systems are included.

  7. Nonlinear Statistical Estimation with Numerical Maximum Likelihood

    DTIC Science & Technology

    1974-10-01

    probably most directly attributable to the speed, precision and compactness of the linear programming algorithm exercised; the mutual primal-dual... discriminant analysis is to classify the individual as a member of π1 or π2 according to the relative... Introduction to the Dissertation; Introduction to Statistical Estimation Theory; Choice of Estimator...Density Functions; Choice of Estimator

  8. Using IRT Trait Estimates versus Summated Scores in Predicting Outcomes

    ERIC Educational Resources Information Center

    Xu, Ting; Stone, Clement A.

    2012-01-01

    It has been argued that item response theory trait estimates should be used in analyses rather than number right (NR) or summated scale (SS) scores. Thissen and Orlando postulated that IRT scaling tends to produce trait estimates that are linearly related to the underlying trait being measured. Therefore, IRT trait estimates can be more useful…

  9. Computation of acoustic pressure fields produced in feline brain by high-intensity focused ultrasound

    NASA Astrophysics Data System (ADS)

    Omidi, Nazanin

    In 1975, Dunn et al. (JASA 58:512-514) showed that a simple relation describes the ultrasonic threshold for cavitation-induced changes in the mammalian brain. The thresholds for tissue damage were estimated for a variety of acoustic parameters in exposed feline brain. The goal of this study was to improve the estimates for acoustic pressures and intensities present in vivo during those experimental exposures by estimating them using nonlinear rather than linear theory. In our current project, the acoustic pressure waveforms produced in the brains of anesthetized felines were numerically simulated for a spherically focused, nominally f1-transducer (focal length = 13 cm) at increasing values of the source pressure at frequencies of 1, 3, and 9 MHz. The corresponding focal intensities were correlated with the experimental data of Dunn et al. The focal pressure waveforms were also computed at the location of the true maximum. For low source pressures, the computed waveforms were the same as those determined using linear theory, and the focal intensities matched experimentally determined values. For higher source pressures, the focal pressure waveforms became increasingly distorted, with the compressional amplitude of the wave becoming greater, and the rarefactional amplitude becoming lower than the values calculated using linear theory. The implications of these results for clinical exposures are discussed.

  10. Phase estimation of coherent states with a noiseless linear amplifier

    NASA Astrophysics Data System (ADS)

    Assad, Syed M.; Bradshaw, Mark; Lam, Ping Koy

    Amplification of quantum states is inevitably accompanied by the introduction of noise at the output. For protocols that are probabilistic with heralded success, noiseless linear amplification may in theory still be possible. When the protocol is successful, it can lead to an output that is a noiselessly amplified copy of the input. When the protocol is unsuccessful, the output state is degraded and is usually discarded. Probabilistic protocols may improve the performance of some quantum information protocols, but not for metrology when the full statistics are taken into consideration. We calculate the precision limits on estimating the phase of coherent states using a noiseless linear amplifier by computing its quantum Fisher information, and we show that on average the noiseless linear amplifier does not improve the phase estimate. We also discuss the case where abstention from measurement can reduce the cost of estimation.
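
The central claim can be checked with textbook numbers: the quantum Fisher information for the phase of a coherent state |α⟩ is F_Q = 4|α|², and under the common heuristic bound p ≤ 1/g² on the heralded success probability of a gain-g noiseless amplifier, the average information is unchanged. The sketch below is simple arithmetic under those assumptions, not the paper's full calculation.

```python
import numpy as np

# Hedged sketch: QFI for phase estimation with a coherent state |alpha> is
# F_Q = 4|alpha|^2 (standard result). A noiseless linear amplifier of gain g
# maps alpha -> g*alpha, but we assume the heuristic heralded-success bound
# p <= 1/g^2; failures carry no phase information and are discarded.
alpha = 1.5
g = 2.0

qfi = lambda a: 4.0 * abs(a) ** 2

f_direct = qfi(alpha)                         # measure the input directly
p_success = 1.0 / g**2                        # assumed success-probability bound
f_amplified_avg = p_success * qfi(g * alpha)  # average over heralded outcomes

# On average the amplifier does not help: p * 4 g^2 |alpha|^2 = 4 |alpha|^2.
```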

  11. An approximation theory for the identification of linear thermoelastic systems

    NASA Technical Reports Server (NTRS)

    Rosen, I. G.; Su, Chien-Hua Frank

    1990-01-01

    An abstract approximation framework and convergence theory for the identification of thermoelastic systems is developed. Starting from an abstract operator formulation consisting of a coupled second-order hyperbolic equation of elasticity and first-order parabolic equation for heat conduction, well-posedness is established using linear semigroup theory in Hilbert space, and a class of parameter estimation problems is then defined involving mild solutions. The approximation framework is based upon generic Galerkin approximation of the mild solutions, and convergence of solutions of the resulting sequence of approximating finite dimensional parameter identification problems to a solution of the original infinite dimensional inverse problem is established using approximation results for operator semigroups. An example involving the basic equations of one-dimensional linear thermoelasticity and a linear spline-based scheme is discussed. Numerical results indicate how the approach might be used in a study of damping mechanisms in flexible structures.

  12. Approximation theory for LQG (Linear-Quadratic-Gaussian) optimal control of flexible structures

    NASA Technical Reports Server (NTRS)

    Gibson, J. S.; Adamian, A.

    1988-01-01

    An approximation theory is presented for the LQG (Linear-Quadratic-Gaussian) optimal control problem for flexible structures whose distributed models have bounded input and output operators. The main purpose of the theory is to guide the design of finite dimensional compensators that approximate closely the optimal compensator. The optimal LQG problem separates into an optimal linear-quadratic regulator problem and an optimal state estimation problem. The solution of the former problem lies in the solution to an infinite dimensional Riccati operator equation. The approximation scheme approximates the infinite dimensional LQG problem with a sequence of finite dimensional LQG problems defined for a sequence of finite dimensional, usually finite element or modal, approximations of the distributed model of the structure. Two Riccati matrix equations determine the solution to each approximating problem. The finite dimensional equations for numerical approximation are developed, including formulas for converting matrix control and estimator gains to their functional representation to allow comparison of gains based on different orders of approximation. Convergence of the approximating control and estimator gains and of the corresponding finite dimensional compensators is studied. Also, convergence and stability of the closed-loop systems produced with the finite dimensional compensators are discussed. The convergence theory is based on the convergence of the solutions of the finite dimensional Riccati equations to the solutions of the infinite dimensional Riccati equations. A numerical example with a flexible beam, a rotating rigid body, and a lumped mass is given.
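
The two-Riccati structure described above has a compact finite-dimensional, discrete-time analogue that can be sketched directly; the system matrices, weights, and the plain fixed-point iteration below are illustrative stand-ins (the paper itself concerns continuous-time, infinite-dimensional operators).

```python
import numpy as np

# Hedged sketch: discrete-time LQG separation on a hypothetical 2-state model.
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])                  # double-integrator-like dynamics
B = np.array([[0.0],
              [0.1]])                       # control input
C = np.array([[1.0, 0.0]])                 # position measurement
Q, R = np.eye(2), np.array([[1.0]])        # state / control weights
W, V = 0.01 * np.eye(2), np.array([[0.1]]) # process / measurement noise

def dare(A, B, Q, R, iters=500):
    """Solve the discrete algebraic Riccati equation by fixed-point iteration."""
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return P, K

P_reg, K = dare(A, B, Q, R)       # regulator Riccati equation -> control gain K
P_fil, Lt = dare(A.T, C.T, W, V)  # filter Riccati equation, by duality
L = Lt.T                          # estimator (Kalman) gain
# Separation principle: apply u = -K @ x_hat, with x_hat from an observer
# using gain L; both closed loops should be stable.
```

The two calls share one routine because the filter Riccati equation is the dual of the regulator one (swap A for A^T, B for C^T, and the weights for the noise covariances).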

  13. Theory, Guidance, and Flight Control for High Maneuverability Projectiles

    DTIC Science & Technology

    2014-01-01

    2.8 Linear System Modeling with Time Delay ... 2.9 Linear System Modeling without Time Delay ... 3. Guidance and Flight Control ... 3.1 Proportional Navigation Guidance Law

  14. Monte Carlo Perturbation Theory Estimates of Sensitivities to System Dimensions

    DOE PAGES

    Burke, Timothy P.; Kiedrowski, Brian C.

    2017-12-11

    Here, Monte Carlo methods are developed using adjoint-based perturbation theory and the differential operator method to compute the sensitivities of the k-eigenvalue, linear functions of the flux (reaction rates), and bilinear functions of the forward and adjoint flux (kinetics parameters) to system dimensions for uniform expansions or contractions. The calculation of sensitivities to system dimensions requires computing scattering and fission sources at material interfaces using collisions occurring at the interface—which is a set of events with infinitesimal probability. Kernel density estimators are used to estimate the source at interfaces using collisions occurring near the interface. The methods for computing sensitivities of linear and bilinear ratios are derived using the differential operator method and adjoint-based perturbation theory and are shown to be equivalent to methods previously developed using a collision history–based approach. The methods for determining sensitivities to system dimensions are tested on a series of fast, intermediate, and thermal critical benchmarks as well as a pressurized water reactor benchmark problem with iterated fission probability used for adjoint-weighting. The estimators are shown to agree within 5% and 3σ of reference solutions obtained using direct perturbations with central differences for the majority of test problems.
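
The kernel-density step can be illustrated in one dimension: estimate a density at a point from samples falling near it, standing in for estimating an interface source from nearby collisions. The sample distribution and bandwidth below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hedged 1-D illustration of a Gaussian kernel density estimator. The N(0,1)
# "collision" coordinates and the bandwidth h are hypothetical choices.
samples = rng.normal(0.0, 1.0, 50_000)   # sample coordinates
h = 0.2                                  # kernel bandwidth

def kde(x0, samples, h):
    """Gaussian-kernel density estimate at the point x0."""
    u = (x0 - samples) / h
    return np.exp(-0.5 * u**2).sum() / (len(samples) * h * np.sqrt(2.0 * np.pi))

est = kde(0.0, samples, h)   # true N(0,1) density at 0 is about 0.3989
```

Smaller bandwidths reduce the smoothing bias visible here but increase variance, which is the usual KDE trade-off when few collisions fall near the interface.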

  15. An analysis of approach navigation accuracy and guidance requirements for the grand tour mission to the outer planets

    NASA Technical Reports Server (NTRS)

    Jones, D. W.

    1971-01-01

    The navigation and guidance process for the Jupiter, Saturn and Uranus planetary encounter phases of the 1977 Grand Tour interior mission was simulated. Reference approach navigation accuracies were defined and the relative information content of the various observation types was evaluated. Reference encounter guidance requirements were defined, sensitivities to assumed simulation model parameters were determined and the adequacy of the linear estimation theory was assessed. A linear sequential estimator was used to provide an estimate of the augmented state vector, consisting of the six state variables of position and velocity plus the three components of a planet position bias. The guidance process was simulated using a nonspherical model of the execution errors. Computation algorithms which simulate the navigation and guidance process were derived from theory and implemented into two research-oriented computer programs, written in FORTRAN.
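
The augmented-state idea (position and velocity plus a constant bias) can be sketched in one dimension with a sequential Kalman filter; here a second, biased observation type makes the bias observable. All dynamics, noise levels, and the bias value are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hedged 1-D analogue of an augmented-state sequential estimator:
# state = [position, velocity, sensor bias], with two alternating observation
# types (one unbiased, one biased) so that the bias can be separated.
dt = 1.0
F = np.array([[1.0, dt, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])           # constant velocity + constant bias
H_clean  = np.array([[1.0, 0.0, 0.0]])    # unbiased position sensor
H_biased = np.array([[1.0, 0.0, 1.0]])    # position sensor with additive bias
sigma = 0.5
Rm = np.array([[sigma**2]])

x_true = np.array([0.0, 1.0, 2.0])        # true velocity 1.0, true bias 2.0
x_est = np.zeros(3)
P = np.diag([10.0, 10.0, 10.0])
for k in range(400):
    x_true = F @ x_true
    H = H_clean if k % 2 == 0 else H_biased
    z = H @ x_true + rng.normal(0.0, sigma, 1)
    # predict
    x_est, P = F @ x_est, F @ P @ F.T
    # sequential (scalar) measurement update
    S = H @ P @ H.T + Rm
    K = P @ H.T @ np.linalg.inv(S)
    x_est = x_est + (K @ (z - H @ x_est)).ravel()
    P = (np.eye(3) - K @ H) @ P
# x_est[1] should approach 1.0 and x_est[2] the true bias of 2.0
```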

  16. Identification of dynamic systems, theory and formulation

    NASA Technical Reports Server (NTRS)

    Maine, R. E.; Iliff, K. W.

    1985-01-01

    The problem of estimating parameters of dynamic systems is addressed in order to present the theoretical basis of system identification and parameter estimation in a manner that is complete and rigorous, yet understandable with minimal prerequisites. Maximum likelihood and related estimators are highlighted. The approach used requires familiarity with calculus, linear algebra, and probability, but does not require knowledge of stochastic processes or functional analysis. The treatment emphasizes unification of the various areas of estimation; estimation in dynamic systems is treated as a direct outgrowth of static system theory. Topics covered include basic concepts and definitions; numerical optimization methods; probability; statistical estimators; estimation in static systems; stochastic processes; state estimation in dynamic systems; output error, filter error, and equation error methods of parameter estimation in dynamic systems; and the accuracy of the estimates.

  17. Simple estimation of linear 1+1 D tsunami run-up

    NASA Astrophysics Data System (ADS)

    Fuentes, M.; Campos, J. A.; Riquelme, S.

    2016-12-01

    An analytical expression is derived for the linear run-up of any given initial wave generated over a sloping bathymetry. Due to the simplicity of the linear formulation, complex transformations are unnecessary, because the shoreline motion is obtained directly in terms of the initial wave. This analytical result not only supports the invariance of the maximum run-up between linear and non-linear theories, but also yields the time evolution of the shoreline motion and velocity. The results exhibit good agreement with the non-linear theory. The present formulation also allows computing the shoreline motion numerically from a customised initial waveform, including non-smooth functions. This is useful for numerical tests, laboratory experiments or realistic cases in which the initial disturbance might be retrieved from seismic data rather than using a theoretical model. It is also shown that the real case studied is consistent with the field observations.

  18. Boundary Korn Inequality and Neumann Problems in Homogenization of Systems of Elasticity

    NASA Astrophysics Data System (ADS)

    Geng, Jun; Shen, Zhongwei; Song, Liang

    2017-06-01

    This paper is concerned with a family of elliptic systems of linear elasticity with rapidly oscillating periodic coefficients, arising in the theory of homogenization. We establish uniform optimal regularity estimates for solutions of Neumann problems in a bounded Lipschitz domain with L 2 boundary data. The proof relies on a boundary Korn inequality for solutions of systems of linear elasticity and uses a large-scale Rellich estimate obtained in Shen (Anal PDE, arXiv:1505.00694v2).

  19. Guaiacol hydrodeoxygenation mechanism on Pt(111): Insights from density functional theory and linear free energy relations

    USDA-ARS?s Scientific Manuscript database

    In this study density functional theory (DFT) was used to study the adsorption of guaiacol and its initial hydrodeoxygenation (HDO) reactions on Pt(111). Previously reported Brønsted–Evans–Polanyi (BEP) correlations for small open chain molecules are found to be inadequate in estimating the reaction...

  20. Non-linear feedback control of the p53 protein-mdm2 inhibitor system using the derivative-free non-linear Kalman filter.

    PubMed

    Rigatos, Gerasimos G

    2016-06-01

    It is proven that the model of the p53-mdm2 protein synthesis loop is a differentially flat one, and using a diffeomorphism (change of state variables) proposed by differential flatness theory it is shown that the protein synthesis model can be transformed into the canonical (Brunovsky) form. This enables the design of a feedback control law that maintains the concentration of the p53 protein at the desirable levels. To estimate the non-measurable elements of the state vector describing the p53-mdm2 system dynamics, the derivative-free non-linear Kalman filter is used. Moreover, to compensate for modelling uncertainties and external disturbances that affect the p53-mdm2 system, the derivative-free non-linear Kalman filter is re-designed as a disturbance observer. The derivative-free non-linear Kalman filter consists of the Kalman filter recursion applied to the linearised equivalent of the protein synthesis model, together with an inverse transformation based on differential flatness theory that makes it possible to retrieve estimates for the state variables of the initial non-linear model. The proposed non-linear feedback control and perturbation compensation method for the p53-mdm2 system can result in more efficient chemotherapy schemes in which the infusion of medication is better administered.

  1. On estimation of linear transformation models with nested case–control sampling

    PubMed Central

    Liu, Mengling

    2011-01-01

    Nested case–control (NCC) sampling is widely used in large epidemiological cohort studies for its cost effectiveness, but its data analysis primarily relies on the Cox proportional hazards model. In this paper, we consider a family of linear transformation models for analyzing NCC data and propose an inverse selection probability weighted estimating equation method for inference. Consistency and asymptotic normality of our estimators for regression coefficients are established. We show that the asymptotic variance has a closed analytic form and can be easily estimated. Numerical studies are conducted to support the theory and an application to the Wilms’ Tumor Study is also given to illustrate the methodology. PMID:21912975
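
The inverse-selection-probability weighting at the heart of the method can be shown in miniature with a Hajek-style weighted mean: with known, outcome-dependent inclusion probabilities, weighting by 1/p removes the selection bias that a naive sample mean exhibits. The cohort and selection rule below are invented and far simpler than NCC sampling or the paper's estimating equations.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hedged miniature of inverse-selection-probability weighting. Outcomes are
# sampled with known, outcome-dependent inclusion probabilities; all numbers
# are invented for illustration.
N = 200_000
y = rng.normal(5.0, 2.0, N)              # outcome in the full cohort
p = np.where(y > 5.0, 0.8, 0.2)          # known selection probabilities
sel = rng.random(N) < p                  # realized sample

naive = y[sel].mean()                                      # biased upward
weighted = np.sum(y[sel] / p[sel]) / np.sum(1.0 / p[sel])  # approx. unbiased
```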

  2. On the pth moment estimates of solutions to stochastic functional differential equations in the G-framework.

    PubMed

    Faizullah, Faiz

    2016-01-01

    The aim of the current paper is to present the path-wise and moment estimates for solutions to stochastic functional differential equations with non-linear growth condition in the framework of G-expectation and G-Brownian motion. Under the nonlinear growth condition, the pth moment estimates for solutions to SFDEs driven by G-Brownian motion are proved. The properties of G-expectations, Hölder's inequality, Bihari's inequality, Gronwall's inequality and Burkholder-Davis-Gundy inequalities are used to develop the above mentioned theory. In addition, the path-wise asymptotic estimates and continuity of pth moment for the solutions to SFDEs in the G-framework, with non-linear growth condition are shown.

  3. Koopman operator theory: Past, present, and future

    NASA Astrophysics Data System (ADS)

    Brunton, Steven; Kaiser, Eurika; Kutz, Nathan

    2017-11-01

    Koopman operator theory has emerged as a dominant method to represent nonlinear dynamics in terms of an infinite-dimensional linear operator. The Koopman operator acts on the space of all possible measurement functions of the system state, advancing these measurements with the flow of the dynamics. A linear representation of nonlinear dynamics has tremendous potential to enable the prediction, estimation, and control of nonlinear systems with standard textbook methods developed for linear systems. Dynamic mode decomposition has become the leading data-driven method to approximate the Koopman operator, although there are still open questions and challenges around how to obtain accurate approximations for strongly nonlinear systems. This talk will provide an introductory overview of modern Koopman operator theory, reviewing the basics and describing recent theoretical and algorithmic developments. Particular emphasis will be placed on the use of data-driven Koopman theory to characterize and control high-dimensional fluid dynamic systems. This talk will also address key advances in the rapidly growing fields of machine learning and data science that are likely to drive future developments.
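
Exact dynamic mode decomposition, the data-driven Koopman approximation mentioned above, fits in a few lines of linear algebra; for a linear system the DMD eigenvalues recover the system eigenvalues. The decaying-rotation matrix below is an invented example, not from the talk.

```python
import numpy as np

# Hedged sketch of exact DMD on snapshots of a known linear system.
theta = 0.3
A = 0.95 * np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])  # decaying rotation

X = np.empty((2, 51))                 # snapshot matrix, one column per step
X[:, 0] = [1.0, 0.0]
for k in range(50):
    X[:, k + 1] = A @ X[:, k]

X1, X2 = X[:, :-1], X[:, 1:]          # time-shifted snapshot pairs
U, s, Vh = np.linalg.svd(X1, full_matrices=False)
r = 2                                 # truncation rank
Ur, sr, Vr = U[:, :r], s[:r], Vh[:r].T
Atilde = Ur.T @ X2 @ Vr / sr          # low-rank DMD (Koopman) operator
dmd_eigs = np.linalg.eigvals(Atilde)  # should match eigvals(A)
```

For nonlinear systems the same recipe is applied to measurement functions of the state, which is where the open questions mentioned in the abstract arise.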

  4. The infimum principle

    NASA Technical Reports Server (NTRS)

    Geering, H. P.; Athans, M.

    1973-01-01

    A complete theory of necessary and sufficient conditions is discussed for a control to be superior with respect to a nonscalar-valued performance criterion. The latter maps into a finite-dimensional, integrally closed, directed, partially ordered linear space. The applicability of the theory to the analysis of dynamic vector estimation problems and to a class of uncertain optimal control problems is demonstrated.

  5. Linear theory for filtering nonlinear multiscale systems with model error

    PubMed Central

    Berry, Tyrus; Harlim, John

    2014-01-01

    In this paper, we study filtering of multiscale dynamical systems with model error arising from limitations in resolving the smaller scale processes. In particular, the analysis assumes the availability of continuous-time noisy observations of all components of the slow variables. Mathematically, this paper presents new results on higher order asymptotic expansion of the first two moments of a conditional measure. In particular, we are interested in the application of filtering multiscale problems in which the conditional distribution is defined over the slow variables, given noisy observation of the slow variables alone. From the mathematical analysis, we learn that for a continuous time linear model with Gaussian noise, there exists a unique choice of parameters in a linear reduced model for the slow variables which gives the optimal filtering when only the slow variables are observed. Moreover, these parameters simultaneously give the optimal equilibrium statistical estimates of the underlying system, and as a consequence they can be estimated offline from the equilibrium statistics of the true signal. By examining a nonlinear test model, we show that the linear theory extends in this non-Gaussian, nonlinear configuration as long as we know the optimal stochastic parametrization and the correct observation model. However, when the stochastic parametrization model is inappropriate, parameters chosen for good filter performance may give poor equilibrium statistical estimates and vice versa; this finding is based on analytical and numerical results on our nonlinear test model and the two-layer Lorenz-96 model. Finally, even when the correct stochastic ansatz is given, it is imperative to estimate the parameters simultaneously and to account for the nonlinear feedback of the stochastic parameters into the reduced filter estimates. 
In numerical experiments on the two-layer Lorenz-96 model, we find that the parameters estimated online, as part of a filtering procedure, simultaneously produce accurate filtering and equilibrium statistical prediction. In contrast, an offline estimation technique based on a linear regression, which fits the parameters to a training dataset without using the filter, yields filter estimates which are worse than the observations or even divergent when the slow variables are not fully observed. This finding does not imply that all offline methods are inherently inferior to the online method for nonlinear estimation problems; it only suggests that an ideal estimation technique should estimate all parameters simultaneously, whether it is online or offline. PMID:25002829

  6. Application of a stochastic inverse to the geophysical inverse problem

    NASA Technical Reports Server (NTRS)

    Jordan, T. H.; Minster, J. B.

    1972-01-01

    The inverse problem for gross earth data can be reduced to an underdetermined linear system of integral equations of the first kind. A theory is discussed for computing particular solutions to this linear system based on the stochastic inverse theory presented by Franklin. The stochastic inverse is derived and related to the generalized inverse of Penrose and Moore. A Backus-Gilbert type tradeoff curve is constructed for the problem of estimating the solution to the linear system in the presence of noise. It is shown that the stochastic inverse represents an optimal point on this tradeoff curve. A useful form of the solution autocorrelation operator as a member of a one-parameter family of smoothing operators is derived.
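
The construction described above can be sketched for a small underdetermined system: the stochastic inverse m_est = Cm G^T (G Cm G^T + Cn)^(-1) d, which with an identity prior and vanishing noise covariance reduces to the Moore-Penrose solution (full row rank assumed). The random matrices below are illustrative stand-ins for the integral-equation kernels.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hedged sketch of a Franklin-style stochastic inverse for G m = d with more
# model parameters than data. G and m_true are random illustrations.
G = rng.normal(size=(3, 8))          # 3 data, 8 model parameters
m_true = rng.normal(size=8)
d = G @ m_true

Cm = np.eye(8)                       # prior model covariance

def stochastic_inverse(G, d, Cm, Cn):
    """m_est = Cm G^T (G Cm G^T + Cn)^(-1) d."""
    return Cm @ G.T @ np.linalg.solve(G @ Cm @ G.T + Cn, d)

m0 = stochastic_inverse(G, d, Cm, 0.0 * np.eye(3))   # noise-free limit
m1 = stochastic_inverse(G, d, Cm, 0.1 * np.eye(3))   # damped (noisy) solution
```

Raising Cn moves the solution along the tradeoff curve: the fit to d loosens while the model norm shrinks.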

  7. Structural Properties and Estimation of Delay Systems. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Kwong, R. H. S.

    1975-01-01

    Two areas in the theory of delay systems were studied: structural properties and their applications to feedback control, and optimal linear and nonlinear estimation. The concepts of controllability, stabilizability, observability, and detectability were investigated. The property of pointwise degeneracy of linear time-invariant delay systems is considered. Necessary and sufficient conditions for three dimensional linear systems to be made pointwise degenerate by delay feedback were obtained, while sufficient conditions for this to be possible are given for higher dimensional linear systems. These results were applied to obtain solvability conditions for the minimum time output zeroing control problem by delay feedback. A representation theorem is given for conditional moment functionals of general nonlinear stochastic delay systems, and stochastic differential equations are derived for conditional moment functionals satisfying certain smoothness properties.

  8. Exact hierarchical clustering in one dimension. [in universe

    NASA Technical Reports Server (NTRS)

    Williams, B. G.; Heavens, A. F.; Peacock, J. A.; Shandarin, S. F.

    1991-01-01

    The present adhesion model-based one-dimensional simulations of gravitational clustering have yielded bound-object catalogs applicable in tests of analytical approaches to cosmological structure formation. Attention is given to Press-Schechter (1974) type functions, as well as to their density peak-theory modifications and the two-point correlation function estimated from peak theory. The extent to which individual collapsed-object locations can be predicted by linear theory is significant only for objects of near-characteristic nonlinear mass.

  9. Improving Range Estimation of a 3-Dimensional Flash Ladar via Blind Deconvolution

    DTIC Science & Technology

    2010-09-01

    2.1.4 Optical Imaging as a Linear and Nonlinear System ... 2.1.5 Coherence Theory and Laser Light Statistics ... 2.2 Deconvolution ... rather than deconvolution. 2.1.5 Coherence Theory and Laser Light Statistics. Using [24] and [25], this section serves as background on coherence theory ... the laser light incident on the detector surface. The image intensity related to different types of coherence is governed by the laser light's spatial

  10. Normal Theory Two-Stage ML Estimator When Data Are Missing at the Item Level

    ERIC Educational Resources Information Center

    Savalei, Victoria; Rhemtulla, Mijke

    2017-01-01

    In many modeling contexts, the variables in the model are linear composites of the raw items measured for each participant; for instance, regression and path analysis models rely on scale scores, and structural equation models often use parcels as indicators of latent constructs. Currently, no analytic estimation method exists to appropriately…

  11. Causality Analysis of fMRI Data Based on the Directed Information Theory Framework.

    PubMed

    Wang, Zhe; Alahmadi, Ahmed; Zhu, David C; Li, Tongtong

    2016-05-01

    This paper aims to conduct fMRI-based causality analysis of brain connectivity by exploiting the directed information (DI) theory framework. Unlike the well-known Granger causality (GC) analysis, which relies on the linear prediction technique, the DI theory framework does not impose any modeling constraints on the sequences to be evaluated and ensures estimation convergence. Moreover, it can be used to generate the GC graphs. In this paper, first, we introduce the core concepts in the DI framework. Second, we present how to conduct causality analysis using DI measures between two time series. We provide the detailed procedure on how to calculate the DI for two finite-time series. The two major steps involved here are optimal bin size selection for data digitization and probability estimation. Finally, we demonstrate the applicability of DI-based causality analysis using both simulated data and experimental fMRI data, and compare the results with those of the GC analysis. Our analysis indicates that GC analysis is effective in detecting linear or nearly linear causal relationships, but may have difficulty in capturing nonlinear causal relationships. On the other hand, DI-based causality analysis is more effective in capturing both linear and nonlinear causal relationships. Moreover, it is observed that brain connectivity among different regions generally involves dynamic two-way information transmissions between them. Our results show that when bidirectional information flow is present, DI is more effective than GC at quantifying the overall causal relationship.
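
The Granger-causality baseline against which DI is compared can be written down directly: compare lag-1 prediction residuals of each series with and without the other series' past. The coupled pair below is synthetic, with x driving y at one step of delay; coupling and noise levels are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hedged sketch: lag-1 linear Granger causality on a synthetic pair where x
# drives y but not vice versa.
n = 2000
x = rng.normal(size=n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.8 * x[t - 1] + 0.1 * rng.normal()

def resid_ms(target, regressors):
    """Mean squared residual of a least-squares fit of target on regressors."""
    A = np.column_stack(regressors)
    coef, *_ = np.linalg.lstsq(A, target, rcond=None)
    return np.mean((target - A @ coef) ** 2)

# GC measure: log ratio of restricted to full residual power (> 0 => causal).
gc_x_to_y = np.log(resid_ms(y[1:], [y[:-1]]) / resid_ms(y[1:], [y[:-1], x[:-1]]))
gc_y_to_x = np.log(resid_ms(x[1:], [x[:-1]]) / resid_ms(x[1:], [x[:-1], y[:-1]]))
# Expect gc_x_to_y to be large and gc_y_to_x to be near zero.
```

Because this test is built on linear prediction, it is exactly the kind of detector that the abstract notes can miss nonlinear couplings, which motivates the DI framework.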

  12. Control of AUVs using differential flatness theory and the derivative-free nonlinear Kalman Filter

    NASA Astrophysics Data System (ADS)

    Rigatos, Gerasimos; Raffo, Guilerme

    2015-12-01

    The paper proposes nonlinear control and filtering for Autonomous Underwater Vessels (AUVs) based on differential flatness theory and on the use of the Derivative-free nonlinear Kalman Filter. First, it is shown that the 6-DOF dynamic model of the AUV is a differentially flat one. This enables its transformation into the linear canonical (Brunovsky) form and facilitates the design of a state feedback controller. A problem that has to be dealt with is the uncertainty about the parameters of the AUV's dynamic model, as well as the external perturbations which affect its motion. To cope with this, it is proposed to use a disturbance observer which is based on the Derivative-free nonlinear Kalman Filter. The considered filtering method consists of the standard Kalman Filter recursion applied to the linearized model of the vessel and of an inverse transformation based on differential flatness theory, which makes it possible to obtain estimates of the state variables of the initial nonlinear model of the vessel. The Kalman Filter-based disturbance observer performs simultaneous estimation of the non-measurable state variables of the AUV and of the perturbation terms that affect its dynamics. By estimating such disturbances, their compensation is also achieved through suitable modification of the feedback control input. The efficiency of the proposed AUV control and estimation scheme is confirmed through simulation experiments.

  13. Polynomial elimination theory and non-linear stability analysis for the Euler equations

    NASA Technical Reports Server (NTRS)

    Kennon, S. R.; Dulikravich, G. S.; Jespersen, D. C.

    1986-01-01

    Numerical methods are presented that exploit the polynomial properties of discretizations of the Euler equations. It is noted that most finite difference or finite volume discretizations of the steady-state Euler equations produce a polynomial system of equations to be solved. These equations are solved using classical polynomial elimination theory, with some innovative modifications. This paper also presents some preliminary results of a new non-linear stability analysis technique. This technique is applicable to determining the stability of polynomial iterative schemes. Results are presented for applying the elimination technique to a one-dimensional test case. For this test case, the exact solution is computed in three iterations. The non-linear stability analysis is applied to determine the optimal time step for solving Burgers' equation using the MacCormack scheme. The estimated optimal time step is very close to the time step that arises from a linear stability analysis.
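
The closing remark about a linear-stability time-step estimate can be reproduced in frozen-coefficient form: linearize Burgers' equation about the local velocity and apply the advective CFL limit. The grid, the velocity field, and the unit CFL number are hypothetical choices, not the paper's actual analysis.

```python
import numpy as np

# Hedged sketch: frozen-coefficient linear-stability time-step estimate for an
# advection-dominated problem. Grid, velocity field and CFL number are invented.
nx = 101
x = np.linspace(0.0, 1.0, nx)
dx = x[1] - x[0]
u = 1.0 + 0.5 * np.sin(2.0 * np.pi * x)  # hypothetical frozen velocity field
cfl = 1.0                                # assumed stability limit of the scheme
dt_est = cfl * dx / np.abs(u).max()      # largest stable explicit time step
```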

  14. Blind Deconvolution for Distributed Parameter Systems with Unbounded Input and Output and Determining Blood Alcohol Concentration from Transdermal Biosensor Data.

    PubMed

    Rosen, I G; Luczak, Susan E; Weiss, Jordan

    2014-03-15

    We develop a blind deconvolution scheme for input-output systems described by distributed parameter systems with boundary input and output. An abstract functional analytic theory based on results for the linear quadratic control of infinite dimensional systems with unbounded input and output operators is presented. The blind deconvolution problem is then reformulated as a series of constrained linear and nonlinear optimization problems involving infinite dimensional dynamical systems. A finite dimensional approximation and convergence theory is developed. The theory is applied to the problem of estimating blood or breath alcohol concentration (respectively, BAC or BrAC) from biosensor-measured transdermal alcohol concentration (TAC) in the field. A distributed parameter model with boundary input and output is proposed for the transdermal transport of ethanol from the blood through the skin to the sensor. The problem of estimating BAC or BrAC from the TAC data is formulated as a blind deconvolution problem. A scheme to identify distinct drinking episodes in TAC data based on a Hodrick Prescott filter is discussed. Numerical results involving actual patient data are presented.

  15. On-line estimation of nonlinear physical systems

    USGS Publications Warehouse

    Christakos, G.

    1988-01-01

    Recursive algorithms for estimating states of nonlinear physical systems are presented. Orthogonality properties are rediscovered and the associated polynomials are used to linearize state and observation models of the underlying random processes. This requires some key hypotheses regarding the structure of these processes, which may then take account of a wide range of applications. The latter include streamflow forecasting, flood estimation, environmental protection, earthquake engineering, and mine planning. The proposed estimation algorithm may be compared favorably to Taylor series-type filters, nonlinear filters which approximate the probability density by Edgeworth or Gram-Charlier series, as well as to conventional statistical linearization-type estimators. Moreover, the method has several advantages over nonrecursive estimators like disjunctive kriging. To link theory with practice, some numerical results for a simulated system are presented, in which responses from the proposed and extended Kalman algorithms are compared. ?? 1988 International Association for Mathematical Geology.

  16. Travel Demand Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Southworth, Frank; Garrow, Dr. Laurie

    This chapter describes the principal types of both passenger and freight demand models in use today, providing a brief history of model development supported by references to a number of popular texts on the subject, and directing the reader to papers covering some of the more recent technical developments in the area. Over the past half century a variety of methods have been used to estimate and forecast travel demands, drawing concepts from economic/utility maximization theory, transportation system optimization and spatial interaction theory, using and often combining solution techniques as varied as Box-Jenkins methods, non-linear multivariate regression, non-linear mathematical programming, and agent-based microsimulation.

  17. Economic policy optimization based on both one stochastic model and the parametric control theory

    NASA Astrophysics Data System (ADS)

    Ashimov, Abdykappar; Borovskiy, Yuriy; Onalbekov, Mukhit

    2016-06-01

    A nonlinear dynamic stochastic general equilibrium model with financial frictions is developed to describe two interacting national economies in the environment of the rest of the world. Parameters of the nonlinear model are estimated based on its log-linearization by the Bayesian approach. The nonlinear model is verified by retroprognosis, estimation of stability indicators of mappings specified by the model, and estimation of the degree of coincidence for results of internal and external shocks' effects on macroeconomic indicators on the basis of the estimated nonlinear model and its log-linearization. On the basis of the nonlinear model, the parametric control problems of economic growth and volatility of macroeconomic indicators of Kazakhstan are formulated and solved for two exchange rate regimes (free floating and managed floating exchange rates).

  18. Experimental investigation of gravity wave turbulence and of non-linear four-wave interactions

    NASA Astrophysics Data System (ADS)

    Berhanu, Michael

    2017-04-01

    Using the large basins of the Ecole Centrale de Nantes (France), non-linear interactions of gravity surface waves are experimentally investigated. In a first part we study statistical properties of a random wave field regarding the insights from the Wave Turbulence Theory. In particular, freely decaying gravity wave turbulence is generated in a closed basin. No self-similar decay of the spectrum is observed, whereas its Fourier modes decay first as a time power law due to non-linear mechanisms, and then exponentially due to linear viscous damping. We estimate the linear, non-linear and dissipative time scales to test the time scale separation. By estimation of the mean energy flux from the initial decay of wave energy, the Kolmogorov-Zakharov constant of the weak turbulence theory is evaluated. In a second part, resonant interactions of oblique surface gravity waves in a large basin are studied. We generate two oblique waves crossing at an acute angle. These mother waves mutually interact and give birth to a resonant wave whose properties (growth rate, resonant response curve and phase locking) are fully characterized. All our experimental results are found in good quantitative agreement with four-wave interaction theory. L. Deike, B. Miquel, P. Gutiérrez, T. Jamin, B. Semin, M. Berhanu, E. Falcon and F. Bonnefoy, Role of the basin boundary conditions in gravity wave turbulence, Journal of Fluid Mechanics 781, 196 (2015) F. Bonnefoy, F. Haudin, G. Michel, B. Semin, T. Humbert, S. Aumaître, M. Berhanu and E. Falcon, Observation of resonant interactions among surface gravity waves, Journal of Fluid Mechanics (Rapids) 805, R3 (2016)

  19. Estimation of wing nonlinear aerodynamic characteristics at supersonic speeds

    NASA Technical Reports Server (NTRS)

    Carlson, H. W.; Mack, R. J.

    1980-01-01

    A computational system for estimation of nonlinear aerodynamic characteristics of wings at supersonic speeds was developed and was incorporated in a computer program. This corrected linearized theory method accounts for nonlinearities in the variation of basic pressure loadings with local surface slopes, predicts the degree of attainment of theoretical leading edge thrust, and provides an estimate of detached leading edge vortex loadings that result when the theoretical thrust forces are not fully realized.

  20. Applications of Cosmological Perturbation Theory

    NASA Astrophysics Data System (ADS)

    Christopherson, Adam J.

    2011-06-01

    Cosmological perturbation theory is crucial for our understanding of the universe. The linear theory has been well understood for some time, however developing and applying the theory beyond linear order is currently at the forefront of research in theoretical cosmology. This thesis studies the applications of perturbation theory to cosmology and, specifically, to the early universe. Starting with some background material introducing the well-tested 'standard model' of cosmology, we move on to develop the formalism for perturbation theory up to second order giving evolution equations for all types of scalar, vector and tensor perturbations, both in gauge dependent and gauge invariant form. We then move on to the main result of the thesis, showing that, at second order in perturbation theory, vorticity is sourced by a coupling term quadratic in energy density and entropy perturbations. This source term implies a qualitative difference to linear order. Thus, while at linear order vorticity decays with the expansion of the universe, the same is not true at higher orders. This will have important implications on future measurements of the polarisation of the Cosmic Microwave Background, and could give rise to the generation of a primordial seed magnetic field. Having derived this qualitative result, we then estimate the scale dependence and magnitude of the vorticity power spectrum, finding, for simple power law inputs a small, blue spectrum. The final part of this thesis concerns higher order perturbation theory, deriving, for the first time, the metric tensor, gauge transformation rules and governing equations for fully general third order perturbations. We close with a discussion of natural extensions to this work and other possible ideas for off-shooting projects in this continually growing field.

  1. Formulation of the linear model from the nonlinear simulation for the F18 HARV

    NASA Technical Reports Server (NTRS)

    Hall, Charles E., Jr.

    1991-01-01

    The F-18 HARV is a modified F-18 aircraft which is capable of flying in the post-stall regime in order to achieve superagility. The onset of aerodynamic stall, continuing into the post-stall region, is characterized by nonlinearities in the aerodynamic coefficients. These aerodynamic coefficients are not expressed as analytic functions, but rather in the form of tabular data. The nonlinearities in the aerodynamic coefficients yield a nonlinear model of the aircraft's dynamics. Nonlinear system theory has made many advances, but this area is not sufficiently developed to allow its application to this problem, since many of the theorems are existence theorems and assume that the systems are composed of analytic functions. Thus, the feedback matrices and the state estimators are obtained from linear system theory techniques. It is important, in order to obtain the correct feedback matrices and state estimators, that the linear description of the nonlinear flight dynamics be as accurate as possible. A nonlinear simulation is run under the Advanced Continuous Simulation Language (ACSL). The ACSL simulation uses FORTRAN subroutines to interface to the look-up tables for the aerodynamic data. ACSL has commands to form the linear representation for the system. Other aspects of this investigation are discussed.

  2. Validation of a pair of computer codes for estimation and optimization of subsonic aerodynamic performance of simple hinged-flap systems for thin swept wings

    NASA Technical Reports Server (NTRS)

    Carlson, Harry W.; Darden, Christine M.

    1988-01-01

    Extensive correlations of computer code results with experimental data are employed to illustrate the use of linearized theory attached flow methods for the estimation and optimization of the aerodynamic performance of simple hinged flap systems. Use of attached flow methods is based on the premise that high levels of aerodynamic efficiency require a flow that is as nearly attached as circumstances permit. A variety of swept wing configurations are considered ranging from fighters to supersonic transports, all with leading- and trailing-edge flaps for enhancement of subsonic aerodynamic efficiency. The results indicate that linearized theory attached flow computer code methods provide a rational basis for the estimation and optimization of flap system aerodynamic performance at subsonic speeds. The analysis also indicates that vortex flap design is not an opposing approach but is closely related to attached flow design concepts. The successful vortex flap design actually suppresses the formation of detached vortices to produce a small vortex which is restricted almost entirely to the leading edge flap itself.

  3. Validation of a computer code for analysis of subsonic aerodynamic performance of wings with flaps in combination with a canard or horizontal tail and an application to optimization

    NASA Technical Reports Server (NTRS)

    Carlson, Harry W.; Darden, Christine M.; Mann, Michael J.

    1990-01-01

    Extensive correlations of computer code results with experimental data are employed to illustrate the use of a linearized theory, attached flow method for the estimation and optimization of the longitudinal aerodynamic performance of wing-canard and wing-horizontal tail configurations which may employ simple hinged flap systems. Use of an attached flow method is based on the premise that high levels of aerodynamic efficiency require a flow that is as nearly attached as circumstances permit. The results indicate that linearized theory, attached flow, computer code methods (modified to include estimated attainable leading-edge thrust and an approximate representation of vortex forces) provide a rational basis for the estimation and optimization of aerodynamic performance at subsonic speeds below the drag rise Mach number. Generally, good prediction of aerodynamic performance, as measured by the suction parameter, can be expected for near optimum combinations of canard or horizontal tail incidence and leading- and trailing-edge flap deflections at a given lift coefficient (conditions which tend to produce a predominantly attached flow).

  4. Simple robust control laws for robot manipulators. Part 2: Adaptive case

    NASA Technical Reports Server (NTRS)

    Bayard, D. S.; Wen, J. T.

    1987-01-01

    A new class of asymptotically stable adaptive control laws is introduced for application to the robotic manipulator. Unlike most applications of adaptive control theory to robotic manipulators, this analysis addresses the nonlinear dynamics directly without approximation, linearization, or ad hoc assumptions, and utilizes a parameterization based on physical (time-invariant) quantities. This approach is made possible by using energy-like Lyapunov functions which retain the nonlinear character and structure of the dynamics, rather than simple quadratic forms which are ubiquitous to the adaptive control literature, and which have bound the theory tightly to linear systems with unknown parameters. It is a unique feature of these results that the adaptive forms arise by straightforward certainty equivalence adaptation of their nonadaptive counterparts found in the companion to this paper (i.e., by replacing unknown quantities by their estimates) and that this simple approach leads to asymptotically stable closed-loop adaptive systems. Furthermore, it is emphasized that this approach does not require convergence of the parameter estimates (i.e., via persistent excitation), invertibility of the mass matrix estimate, or measurement of the joint accelerations.
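The certainty-equivalence idea described above can be illustrated on a one-state analogue. The paper treats full nonlinear manipulator dynamics; the scalar plant, gains, and adaptation rate below are illustrative assumptions chosen only to show the Lyapunov mechanism:

```python
def adaptive_regulate(a=2.0, k=1.0, gamma=1.0, x0=1.0, dt=1e-3, steps=20000):
    """Certainty-equivalence adaptive regulation of xdot = a*x + u with
    unknown a.  The control u = -(theta + k)*x uses the estimate theta;
    the Lyapunov-based update thetadot = gamma*x**2 gives
    V = x**2/2 + (a - theta)**2/(2*gamma) with Vdot = -k*x**2, so x -> 0
    without requiring theta -> a (no persistent excitation needed)."""
    x, theta = x0, 0.0
    for _ in range(steps):
        u = -(theta + k) * x
        x += dt * (a * x + u)          # forward-Euler plant step
        theta += dt * gamma * x * x    # parameter adaptation law
    return x, theta
```

Note that the state is driven to zero even though the parameter estimate settles at a value different from the true a, mirroring the paper's point that parameter convergence is not required for stability.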

  5. Item Response Theory and Health Outcomes Measurement in the 21st Century

    PubMed Central

    Hays, Ron D.; Morales, Leo S.; Reise, Steve P.

    2006-01-01

    Item response theory (IRT) has a number of potential advantages over classical test theory in assessing self-reported health outcomes. IRT models yield invariant item and latent trait estimates (within a linear transformation), standard errors conditional on trait level, and trait estimates anchored to item content. IRT also facilitates evaluation of differential item functioning, inclusion of items with different response formats in the same scale, and assessment of person fit and is ideally suited for implementing computer adaptive testing. Finally, IRT methods can be helpful in developing better health outcome measures and in assessing change over time. These issues are reviewed, along with a discussion of some of the methodological and practical challenges in applying IRT methods. PMID:10982088
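As a concrete sketch of the trait-estimation machinery, a minimal two-parameter logistic (2PL) IRT model with a grid-search maximum-likelihood trait estimate might look as follows. The item parameters and grid range are illustrative assumptions, not from the article:

```python
import math

def p_correct(theta, a, b):
    """Two-parameter logistic IRT model: probability that a person with
    latent trait theta answers an item with discrimination a and
    difficulty b correctly."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def estimate_theta(responses, items, grid=None):
    """Maximum-likelihood trait estimate by grid search over theta.
    responses: list of 0/1 scores; items: list of (a, b) pairs."""
    if grid is None:
        grid = [i / 100.0 for i in range(-400, 401)]   # theta in [-4, 4]
    def loglik(theta):
        ll = 0.0
        for u, (a, b) in zip(responses, items):
            p = p_correct(theta, a, b)
            ll += math.log(p) if u == 1 else math.log(1.0 - p)
        return ll
    return max(grid, key=loglik)
```

By construction, an item is answered correctly with probability one half exactly when the trait equals the item difficulty, which is the sense in which trait estimates are anchored to item content.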

  6. Information's role in the estimation of chaotic signals

    NASA Astrophysics Data System (ADS)

    Drake, Daniel Fred

    1998-11-01

    Researchers have proposed several methods designed to recover chaotic signals from noise-corrupted observations. While the methods vary, their qualitative performance does not: in low levels of noise all methods effectively recover the underlying signal; in high levels of noise no method can recover the underlying signal to any meaningful degree of accuracy. Of the methods proposed to date, all represent sub-optimal estimators. So: Is the inability to recover the signal in high noise levels simply a consequence of estimator sub-optimality? Or is estimator failure actually a manifestation of some intrinsic property of chaos itself? These questions are answered by deriving an optimal estimator for a class of chaotic systems and noting that it, too, fails in high levels of noise. An exact, closed-form expression for the estimator is obtained for a class of chaotic systems whose signals are solutions to a set of linear (but noncausal) difference equations. The existence of this linear description circumvents the difficulties normally encountered when manipulating the nonlinear (but causal) expressions that govern chaotic behavior. The reason why even the optimal estimator fails to recover underlying chaotic signals in high levels of noise has its roots in information theory. At such noise levels, the mutual information linking the corrupted observations to the underlying signal is essentially nil, reducing the estimator to a simple guessing strategy based solely on a priori statistics. Entropy, long the common bond between information theory and dynamical systems, is actually one aspect of a far more complete characterization of information sources: the rate distortion function. Determining the rate distortion function associated with the class of chaotic systems considered in this work provides bounds on estimator performance in high levels of noise.
Finally, a slight modification of the linear description leads to a method of synthesizing, on limited-precision platforms, "pseudo-chaotic" sequences that mimic true chaotic behavior to any finite degree of precision and duration. The use of such a technique in spread-spectrum communications is considered.

  7. An approximation theory for nonlinear partial differential equations with applications to identification and control

    NASA Technical Reports Server (NTRS)

    Banks, H. T.; Kunisch, K.

    1982-01-01

    Approximation results from linear semigroup theory are used to develop a general framework for convergence of approximation schemes in parameter estimation and optimal control problems for nonlinear partial differential equations. These ideas are used to establish theoretical convergence results for parameter identification using modal (eigenfunction) approximation techniques. Results from numerical investigations of these schemes for both hyperbolic and parabolic systems are given.

  8. Reduced state feedback gain computation. [optimization and control theory for aircraft control

    NASA Technical Reports Server (NTRS)

    Kaufman, H.

    1976-01-01

    Because application of conventional optimal linear regulator theory to flight controller design requires the capability of measuring and/or estimating the entire state vector, it is of interest to consider procedures for computing controls which are restricted to be linear feedback functions of a lower dimensional output vector and which take into account the presence of measurement noise and process uncertainty. Therefore, a stochastic linear model that was developed is presented which accounts for aircraft parameter and initial uncertainty, measurement noise, turbulence, pilot command and a restricted number of measurable outputs. Optimization with respect to the corresponding output feedback gains was performed for both finite and infinite time performance indices without gradient computation by using Zangwill's modification of a procedure originally proposed by Powell. Results using a seventh order process show the proposed procedures to be very effective.

  9. Coherent multiscale image processing using dual-tree quaternion wavelets.

    PubMed

    Chan, Wai Lam; Choi, Hyeokho; Baraniuk, Richard G

    2008-07-01

    The dual-tree quaternion wavelet transform (QWT) is a new multiscale analysis tool for geometric image features. The QWT is a near shift-invariant tight frame representation whose coefficients sport a magnitude and three phases: two phases encode local image shifts while the third contains image texture information. The QWT is based on an alternative theory for the 2-D Hilbert transform and can be computed using a dual-tree filter bank with linear computational complexity. To demonstrate the properties of the QWT's coherent magnitude/phase representation, we develop an efficient and accurate procedure for estimating the local geometrical structure of an image. We also develop a new multiscale algorithm for estimating the disparity between a pair of images that is promising for image registration and flow estimation applications. The algorithm features multiscale phase unwrapping, linear complexity, and sub-pixel estimation accuracy.

  10. A methodology for designing robust multivariable nonlinear control systems. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Grunberg, D. B.

    1986-01-01

    A new methodology is described for the design of nonlinear dynamic controllers for nonlinear multivariable systems providing guarantees of closed-loop stability, performance, and robustness. The methodology is an extension of the Linear-Quadratic-Gaussian with Loop-Transfer-Recovery (LQG/LTR) methodology for linear systems, thus hinging upon the idea of constructing an approximate inverse operator for the plant. A major feature of the methodology is a unification of both the state-space and input-output formulations. In addition, new results on stability theory, nonlinear state estimation, and optimal nonlinear regulator theory are presented, including the guaranteed global properties of the extended Kalman filter and optimal nonlinear regulators.

  11. Asteroid orbital error analysis: Theory and application

    NASA Technical Reports Server (NTRS)

    Muinonen, K.; Bowell, Edward

    1992-01-01

    We present a rigorous Bayesian theory for asteroid orbital error estimation in which the probability density of the orbital elements is derived from the noise statistics of the observations. For Gaussian noise in a linearized approximation the probability density is also Gaussian, and the errors of the orbital elements at a given epoch are fully described by the covariance matrix. The law of error propagation can then be applied to calculate past and future positional uncertainty ellipsoids (Cappellari et al. 1976, Yeomans et al. 1987, Whipple et al. 1991). To our knowledge, this is the first time a Bayesian approach has been formulated for orbital element estimation. In contrast to the classical Fisherian school of statistics, the Bayesian school allows a priori information to be formally present in the final estimation. However, Bayesian estimation does give the same results as Fisherian estimation when no a priori information is assumed (Lehtinen 1988, and references therein).
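The law of error propagation invoked above reduces, in the linearized Gaussian case, to mapping the covariance matrix through the Jacobian of the propagation; a minimal sketch with hand-rolled matrix helpers (the Jacobian and covariance values in the test are illustrative, not orbital quantities):

```python
def mat_mul(A, B):
    """Multiply two dense matrices given as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(row) for row in zip(*A)]

def propagate_covariance(J, Sigma):
    """Linearized (Gaussian) law of error propagation: if y ~ f(x) with
    Jacobian J at the estimate, covariance Sigma maps to J Sigma J^T."""
    return mat_mul(mat_mul(J, Sigma), transpose(J))
```

Applied repeatedly with the state-transition Jacobian of the orbit propagation, this is how past and future uncertainty ellipsoids are obtained from the epoch covariance.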

  12. Functional Effects of Parasites on Food Web Properties during the Spring Diatom Bloom in Lake Pavin: A Linear Inverse Modeling Analysis

    PubMed Central

    Niquil, Nathalie; Jobard, Marlène; Saint-Béat, Blanche; Sime-Ngando, Télesphore

    2011-01-01

    This study is the first assessment of the quantitative impact of parasitic chytrids on a planktonic food web. We used a carbon-based food web model of Lake Pavin (Massif Central, France) to investigate the effects of chytrids during the spring diatom bloom by developing models with and without chytrids. Linear inverse modelling procedures were employed to estimate undetermined flows in the lake. The Monte Carlo Markov chain linear inverse modelling procedure provided estimates of the ranges of model-derived fluxes. Model results support recent theories on the probable impact of parasites on food web function. In the lake, during spring, when ‘inedible’ algae (unexploited by planktonic herbivores) were the dominant primary producers, the epidemic growth of chytrids significantly reduced the sedimentation loss of algal carbon to the detritus pool through the production of grazer-exploitable zoospores. We also review some theories about the potential influence of parasites on ecological network properties and argue that parasitism contributes to longer carbon path lengths, higher levels of activity and specialization, and lower recycling. Considering the “structural asymmetry” hypothesis as a stabilizing pattern, chytrids should contribute to the stability of aquatic food webs. PMID:21887240

  13. Efficient Algorithms for Estimating the Absorption Spectrum within Linear Response TDDFT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brabec, Jiri; Lin, Lin; Shao, Meiyue

    We present two iterative algorithms for approximating the absorption spectrum of molecules within the linear response framework of time-dependent density functional theory (TDDFT). These methods do not attempt to compute eigenvalues or eigenvectors of the linear response matrix. They are designed to approximate the absorption spectrum as a function directly. They take advantage of the special structure of the linear response matrix. Neither method requires the linear response matrix to be constructed explicitly. They only require a procedure that performs the multiplication of the linear response matrix with a vector. These methods can also be easily modified to efficiently estimate the density of states (DOS) of the linear response matrix without computing the eigenvalues of this matrix. We show by computational experiments that the methods proposed in this paper can be much more efficient than methods that are based on the exact diagonalization of the linear response matrix. We show that they can also be more efficient than real-time TDDFT simulations. We compare the pros and cons of these methods in terms of their accuracy as well as their computational and storage cost.
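The key structural point, that only matrix-vector products with the response matrix are needed, can be sketched generically. The following is a kernel-polynomial-style moment computation, not the authors' algorithms, and it assumes the spectrum has been scaled into [-1, 1]:

```python
def chebyshev_trace_moments(matvec, n, kmax):
    """Compute mu_k = trace(T_k(A)) for k = 0..kmax using only the
    matrix-vector product `matvec`, via the Chebyshev recurrence
    T_0(A)v = v, T_1(A)v = Av, T_k(A)v = 2*A*T_{k-1}(A)v - T_{k-2}(A)v.
    Summing e_i^T T_k(A) e_i over all n basis vectors gives the exact
    trace; in practice a few random vectors give a stochastic estimate.
    From these moments a smoothed DOS can be reconstructed without ever
    forming or diagonalizing A."""
    mu = [0.0] * (kmax + 1)
    for i in range(n):
        v0 = [1.0 if j == i else 0.0 for j in range(n)]  # basis vector e_i
        v1 = matvec(v0)
        mu[0] += v0[i]
        if kmax >= 1:
            mu[1] += v1[i]
        for k in range(2, kmax + 1):
            Av1 = matvec(v1)
            v2 = [2.0 * Av1[j] - v0[j] for j in range(n)]
            mu[k] += v2[i]
            v0, v1 = v1, v2
    return mu
```

Note that `matvec` can be any black-box routine, which is exactly what makes such schemes attractive when the response matrix is too large to store.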

  14. The variance of the locally measured Hubble parameter explained with different estimators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Odderskov, Io; Hannestad, Steen; Brandbyge, Jacob

    We study the expected variance of measurements of the Hubble constant, H_0, as calculated in either linear perturbation theory or using non-linear velocity power spectra derived from N-body simulations. We compare the variance with that obtained by carrying out mock observations in the N-body simulations, and show that the estimator typically used for the local Hubble constant in studies based on perturbation theory is different from the one used in studies based on N-body simulations. The latter gives larger weight to distant sources, which explains why studies based on N-body simulations tend to obtain a smaller variance than that found from studies based on the power spectrum. Although both approaches result in a variance too small to explain the discrepancy between the value of H_0 from CMB measurements and the value measured in the local universe, these considerations are important in light of the percent determination of the Hubble constant in the local universe.
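The difference between the two estimator conventions can be made concrete with a toy example. The sources, distances, and peculiar-velocity perturbation below are illustrative assumptions, not from the paper's simulations:

```python
def h0_ratio_of_means(velocities, distances):
    """Estimator H0 = sum(v_i) / sum(d_i): effectively weights each
    source's v/d ratio by its distance, so distant sources dominate
    (the convention common in N-body mock analyses)."""
    return sum(velocities) / sum(distances)

def h0_mean_of_ratios(velocities, distances):
    """Estimator H0 = mean(v_i / d_i): equal weight per source,
    closer to the perturbation-theory convention."""
    n = len(distances)
    return sum(v / d for v, d in zip(velocities, distances)) / n
```

On a pure Hubble flow both estimators agree exactly; once a nearby source carries a peculiar velocity, the equal-weight estimator is perturbed more, which is the weighting effect the abstract describes.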

  15. A study of hypersonic small-disturbance theory

    NASA Technical Reports Server (NTRS)

    Van Dyke, Milton D

    1954-01-01

    A systematic study is made of the approximate inviscid theory of thin bodies moving at such high supersonic speeds that nonlinearity is an essential feature of the equations of flow. The first-order small-disturbance equations are derived for three-dimensional motions involving shock waves, and estimates are obtained for the order of error involved in the approximation. The hypersonic similarity rule of Tsien and Hayes, and Hayes' unsteady analogy appear in the course of the development. It is shown that the hypersonic theory can be interpreted so that it applies also in the range of linearized supersonic flow theory. Several examples are solved according to the small-disturbance theory, and compared with the full solutions when available.

  16. A numerical method for the prediction of high-speed boundary-layer transition using linear theory

    NASA Technical Reports Server (NTRS)

    Mack, L. M.

    1975-01-01

    A method is described of estimating the location of transition in an arbitrary laminar boundary layer on the basis of linear stability theory. After an examination of experimental evidence for the relation between linear stability theory and transition, a discussion is given of the three essential elements of a transition calculation: (1) the interaction of the external disturbances with the boundary layer; (2) the growth of the disturbances in the boundary layer; and (3) a transition criterion. The computer program which carried out these three calculations is described. The program is first tested by calculating the effect of free-stream turbulence on the transition of the Blasius boundary layer, and is then applied to the problem of transition in a supersonic wind tunnel. The effects of unit Reynolds number and Mach number on the transition of an insulated flat-plate boundary layer are calculated on the basis of experimental data on the intensity and spectrum of free-stream disturbances. Reasonable agreement with experiment is obtained in the Mach number range from 2 to 4.5.
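Transition-prediction methods of this kind typically reduce to an integrated-amplification (e^N) criterion: the local growth rate from linear stability theory is integrated downstream until the amplification factor N reaches a critical value. The sketch below is schematic, not Mack's actual program; the critical N-factor and growth-rate profile are assumptions:

```python
def transition_location(x, growth_rate, n_crit=9.0):
    """e^N transition criterion: integrate the local spatial
    amplification rate (-alpha_i) along the surface with the
    trapezoidal rule and report the first station where
    N = ln(A/A0) reaches n_crit.  Returns None if the disturbance
    never amplifies enough for transition by this criterion."""
    n_factor = 0.0
    for i in range(1, len(x)):
        n_factor += 0.5 * (growth_rate[i] + growth_rate[i - 1]) * (x[i] - x[i - 1])
        if n_factor >= n_crit:
            return x[i]
    return None
```

In a real calculation the growth rates come from solving the stability equations for the computed boundary layer, and n_crit is calibrated against experiment, which is where the free-stream disturbance environment enters.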

  17. Annual Review of Research Under the Joint Services Electronics Program.

    DTIC Science & Technology

    1983-12-01

    Total Number of Professionals: PI 2, RA 2 (1/2 time). 6. Summary: Our research into the theory of nonlinear control systems and applications to... known that all linear time-invariant controllable systems can be transformed to Brunovsky canonical form by a transformation consisting only of... estimating the impulse response (= transfer matrix) of a discrete-time linear system x(t+1) = Fx(t) + Gu(t), y(t) = Hx(t) from a finite set of finite

  18. [Application of ordinary Kriging method in entomologic ecology].

    PubMed

    Zhang, Runjie; Zhou, Qiang; Chen, Cuixian; Wang, Shousong

    2003-01-01

    Geostatistics is a statistical method based on regionalized variables that uses the variogram as its tool to analyze the spatial structure and patterns of organisms. In simulating the variogram over a large range, an optimal fit cannot always be obtained directly, but an interactive (human-computer dialogue) simulation procedure can be used to optimize the parameters of the spherical models. In this paper, the method mentioned above and weighted polynomial regression were used to fit the one-step spherical model, the two-step spherical model and a linear function model, and the available nearby samples were used in the ordinary Kriging procedure, which provides the best linear unbiased estimate under the unbiasedness constraint. The sum of squared deviations between the estimated and measured values for the various theoretical models was computed, and the corresponding graphs are shown. The simulation based on the two-step spherical model was the best, and the one-step spherical model was better than the linear function model.
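A minimal ordinary Kriging computation in one dimension, with a zero-nugget spherical variogram, can illustrate the unbiasedness constraint mentioned above. The sample locations, values, sill, and range below are illustrative assumptions:

```python
def spherical_gamma(h, sill=1.0, a=4.0):
    """Spherical variogram model with zero nugget, given sill and range a."""
    h = abs(h)
    if h >= a:
        return sill
    return sill * (1.5 * h / a - 0.5 * (h / a) ** 3)

def solve(A, b):
    """Gauss-Jordan elimination with partial pivoting (small dense systems)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c and M[r][c] != 0.0:
                f = M[r][c] / M[c][c]
                M[r] = [mr - f * mc for mr, mc in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def ordinary_kriging(xs, zs, x0):
    """Ordinary Kriging prediction at x0 from samples (xs, zs): solve the
    variogram system with a Lagrange multiplier enforcing that the weights
    sum to one (the unbiasedness constraint), then combine the samples."""
    n = len(xs)
    A = [[spherical_gamma(xs[i] - xs[j]) for j in range(n)] + [1.0]
         for i in range(n)]
    A.append([1.0] * n + [0.0])
    b = [spherical_gamma(x0 - xs[i]) for i in range(n)] + [1.0]
    w = solve(A, b)[:n]
    return sum(wi * zi for wi, zi in zip(w, zs)), w
```

With a zero nugget the predictor is an exact interpolator: at a sampled location it returns the sample value, and the weights always sum to one.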

  19. Four-Component Damped Density Functional Response Theory Study of UV/Vis Absorption Spectra and Phosphorescence Parameters of Group 12 Metal-Substituted Porphyrins.

    PubMed

    Fransson, Thomas; Saue, Trond; Norman, Patrick

    2016-05-10

    The influence of group 12 (Zn, Cd, Hg) metal substitution on the valence spectra and phosphorescence parameters of porphyrins (P) has been investigated in a relativistic setting. In order to obtain valence spectra, this study reports the first application of the damped linear response function, or complex polarization propagator, in the four-component density functional theory framework [as formulated in Villaume et al. J. Chem. Phys. 2010, 133, 064105]. It is shown that the steep increase in the density of states due to the inclusion of spin-orbit coupling yields only minor changes in overall computational costs involved with the solution of the set of linear response equations. Comparing single-frequency to multifrequency spectral calculations, it is noted that the number of iterations in the iterative linear equation solver per frequency grid-point decreases monotonously from 30 to 0.74 as the number of frequency points goes from one to 19. The main heavy-atom effect on the UV/vis-absorption spectra is indirect and attributed to the change of point group symmetry due to metal substitution, and it is noted that substitutions using heavier atoms yield small red-shifts of the intense Soret band. Concerning phosphorescence parameters, the adoption of a four-component relativistic setting enables the calculation of such properties at linear order of response theory, and higher-order response functions need not be considered; a real, conventional form of linear response theory has been used for the calculation of these parameters. For the substituted porphyrins, electronic coupling between the lowest triplet states is strong and results in theoretical estimates of lifetimes that are sensitive to the wave function and electron density parametrization.
With this in mind, we report our best estimates of the phosphorescence lifetimes to be 460, 13.8, 11.2, and 0.00155 s for H2P, ZnP, CdP, and HgP, respectively, with the corresponding transition energies being equal to 1.46, 1.50, 1.38, and 0.89 eV.

  20. Highway traffic estimation of improved precision using the derivative-free nonlinear Kalman Filter

    NASA Astrophysics Data System (ADS)

    Rigatos, Gerasimos; Siano, Pierluigi; Zervos, Nikolaos; Melkikh, Alexey

    2015-12-01

    The paper proves that the PDE dynamic model of highway traffic is differentially flat, and by applying spatial discretization it shows that the model can be transformed into an equivalent linear canonical state-space form. For the latter representation of the traffic's dynamics, state estimation is performed with the use of the Derivative-free nonlinear Kalman Filter. The proposed filter consists of the Kalman Filter recursion applied on the transformed state-space model of the highway traffic. Moreover, it makes use of an inverse transformation, based again on differential flatness theory, which enables estimates of the state variables of the initial nonlinear PDE model to be obtained. By avoiding approximate linearizations and the truncation of nonlinear terms from the PDE model of the traffic's dynamics, the proposed filtering method outperforms, in terms of accuracy, other nonlinear estimators such as the Extended Kalman Filter. The article's theoretical findings are confirmed through simulation experiments.
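After the flatness-based transformation, the filter itself is the standard Kalman recursion on a linear canonical (chain-of-integrators) model; a minimal two-state sketch in that spirit follows. The model and noise values are illustrative assumptions, not the paper's traffic model:

```python
def kalman_filter(ys, dt=0.1, q=1e-3, r=0.25):
    """Standard Kalman recursion on a canonical chain-of-integrators
    model, as obtained after a flatness-based transformation:
    x1[k+1] = x1[k] + dt*x2[k],  x2[k+1] = x2[k] + w,  y[k] = x1[k] + v.
    Returns the sequence of filtered [x1, x2] estimates."""
    x = [0.0, 0.0]
    P = [[1.0, 0.0], [0.0, 1.0]]
    out = []
    for y in ys:
        # Predict with F = [[1, dt], [0, 1]], process noise q on x2
        x = [x[0] + dt * x[1], x[1]]
        P = [[P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1],
              P[0][1] + dt * P[1][1]],
             [P[1][0] + dt * P[1][1], P[1][1] + q]]
        # Update with H = [1, 0] (only x1 is measured)
        s = P[0][0] + r
        k0, k1 = P[0][0] / s, P[1][0] / s
        innov = y - x[0]
        x = [x[0] + k0 * innov, x[1] + k1 * innov]
        P = [[(1 - k0) * P[0][0], (1 - k0) * P[0][1]],
             [P[1][0] - k1 * P[0][0], P[1][1] - k1 * P[0][1]]]
        out.append(x[:])
    return out
```

Because the recursion runs on an exactly linear model, no Jacobians are computed anywhere, which is the sense in which such a filter is "derivative-free" compared with the Extended Kalman Filter.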

  1. Angular scale expansion theory and the misperception of egocentric distance in locomotor space.

    PubMed

    Durgin, Frank H

    Perception is crucial for the control of action, but perception need not be scaled accurately to produce accurate actions. This paper reviews evidence for an elegant new theory of locomotor space perception that is based on the dense coding of angular declination so that action control may be guided by richer feedback. The theory accounts for why so much direct-estimation data suggests that egocentric distance is underestimated despite the fact that action measures have been interpreted as indicating accurate perception. Actions are calibrated to the perceived scale of space and thus action measures are typically unable to distinguish systematic (e.g., linearly scaled) misperception from accurate perception. Whereas subjective reports of the scaling of linear extent are difficult to evaluate in absolute terms, study of the scaling of perceived angles (which exist in a known scale, delimited by vertical and horizontal) provides new evidence regarding the perceptual scaling of locomotor space.

  2. The computation of induced drag with nonplanar and deformed wakes

    NASA Technical Reports Server (NTRS)

    Kroo, Ilan; Smith, Stephen

    1991-01-01

    The classical calculation of inviscid drag, based on far field flow properties, is reexamined with particular attention to the nonlinear effects of wake roll-up. Based on a detailed look at nonlinear, inviscid flow theory, it is concluded that many of the classical, linear results are more general than might have been expected. Departures from the linear theory are identified and design implications are discussed. Results include the following: Wake deformation has little effect on the induced drag of a single element wing, but introduces first order corrections to the induced drag of a multi-element lifting system. Far field Trefftz-plane analysis may be used to estimate the induced drag of lifting systems, even when wake roll-up is considered, but numerical difficulties arise. The implications of several other approximations made in lifting line theory are evaluated by comparison with more refined analyses.

  3. Computing Maximum Likelihood Estimates of Loglinear Models from Marginal Sums with Special Attention to Loglinear Item Response Theory. [Project Psychometric Aspects of Item Banking No. 53.] Research Report 91-1.

    ERIC Educational Resources Information Center

    Kelderman, Henk

    In this paper, algorithms are described for obtaining the maximum likelihood estimates of the parameters in log-linear models. Modified versions of the iterative proportional fitting and Newton-Raphson algorithms are described that work on the minimal sufficient statistics rather than on the usual counts in the full contingency table. This is…
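For intuition, here is a minimal sketch of plain iterative proportional fitting on a full two-way table; the paper's modified versions instead operate on the minimal sufficient statistics, so this is background rather than the authors' algorithm:

```python
import numpy as np

def ipf(table, row_margins, col_margins, iters=50):
    """Iterative proportional fitting: rescale a table until its
    row and column sums match the target margins."""
    t = table.astype(float).copy()
    for _ in range(iters):
        t *= (row_margins / t.sum(axis=1))[:, None]   # match row sums
        t *= (col_margins / t.sum(axis=0))[None, :]   # match column sums
    return t

# Independence model: starting from a uniform table, IPF converges to
# the product of the margins divided by the total.
fitted = ipf(np.ones((2, 2)), np.array([30.0, 70.0]), np.array([40.0, 60.0]))
```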

  4. Maximum Likelihood Estimation of Nonlinear Structural Equation Models with Ignorable Missing Data

    ERIC Educational Resources Information Center

    Lee, Sik-Yum; Song, Xin-Yuan; Lee, John C. K.

    2003-01-01

    The existing maximum likelihood theory and its computer software in structural equation modeling are established on the basis of linear relationships among latent variables with fully observed data. However, in social and behavioral sciences, nonlinear relationships among the latent variables are important for establishing more meaningful models…

  5. The cross-over to magnetostrophic convection in planetary dynamo systems

    PubMed Central

    King, E. M.

    2017-01-01

    Global scale magnetostrophic balance, in which Lorentz and Coriolis forces comprise the leading-order force balance, has long been thought to describe the natural state of planetary dynamo systems. This argument arises from consideration of the linear theory of rotating magnetoconvection. Here we test this long-held tenet by directly comparing linear predictions against dynamo modelling results. This comparison shows that dynamo modelling results are not typically in the global magnetostrophic state predicted by linear theory. Then, in order to estimate at what scale (if any) magnetostrophic balance will arise in nonlinear dynamo systems, we carry out a simple scaling analysis of the Elsasser number Λ, yielding an improved estimate of the ratio of Lorentz and Coriolis forces. From this, we deduce that there is a magnetostrophic cross-over length scale, L_X ≈ (Λ_o^2/Rm_o)D, where Λ_o is the linear (or traditional) Elsasser number, Rm_o is the system scale magnetic Reynolds number and D is the length scale of the system. On scales well above L_X, magnetostrophic convection dynamics should not be possible. Only on scales smaller than L_X should it be possible for the convective behaviours to follow the predictions for the magnetostrophic branch of convection. Because L_X is significantly smaller than the system scale in most dynamo models, their large-scale flows should be quasi-geostrophic, as is confirmed in many dynamo simulations. Estimating Λ_o ≃ 1 and Rm_o ≃ 10^3 in Earth’s core, the cross-over scale is approximately 1/1000 that of the system scale, suggesting that magnetostrophic convection dynamics exists in the core only on small scales below those that can be characterized by geomagnetic observations. PMID:28413338
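Plugging the quoted core estimates into the cross-over scale gives the 1/1000 figure directly; a one-line check in Python (the numerical value of D is an illustrative assumption for the outer-core shell depth, not given in the abstract):

```python
# Cross-over scale L_X = (Lambda_o**2 / Rm_o) * D with the quoted core estimates.
Lambda_o = 1.0      # linear (traditional) Elsasser number
Rm_o = 1.0e3        # system-scale magnetic Reynolds number
D = 2.26e6          # metres; outer-core shell depth (illustrative assumption)

L_X = (Lambda_o**2 / Rm_o) * D
ratio = L_X / D     # 0.001, i.e. ~1/1000 of the system scale
```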

  6. The cross-over to magnetostrophic convection in planetary dynamo systems.

    PubMed

    Aurnou, J M; King, E M

    2017-03-01

    Global scale magnetostrophic balance, in which Lorentz and Coriolis forces comprise the leading-order force balance, has long been thought to describe the natural state of planetary dynamo systems. This argument arises from consideration of the linear theory of rotating magnetoconvection. Here we test this long-held tenet by directly comparing linear predictions against dynamo modelling results. This comparison shows that dynamo modelling results are not typically in the global magnetostrophic state predicted by linear theory. Then, in order to estimate at what scale (if any) magnetostrophic balance will arise in nonlinear dynamo systems, we carry out a simple scaling analysis of the Elsasser number Λ, yielding an improved estimate of the ratio of Lorentz and Coriolis forces. From this, we deduce that there is a magnetostrophic cross-over length scale, L_X ≈ (Λ_o^2/Rm_o)D, where Λ_o is the linear (or traditional) Elsasser number, Rm_o is the system scale magnetic Reynolds number and D is the length scale of the system. On scales well above L_X, magnetostrophic convection dynamics should not be possible. Only on scales smaller than L_X should it be possible for the convective behaviours to follow the predictions for the magnetostrophic branch of convection. Because L_X is significantly smaller than the system scale in most dynamo models, their large-scale flows should be quasi-geostrophic, as is confirmed in many dynamo simulations. Estimating Λ_o ≃ 1 and Rm_o ≃ 10^3 in Earth's core, the cross-over scale is approximately 1/1000 that of the system scale, suggesting that magnetostrophic convection dynamics exists in the core only on small scales below those that can be characterized by geomagnetic observations.

  7. Neural Decoding and "Inner" Psychophysics: A Distance-to-Bound Approach for Linking Mind, Brain, and Behavior.

    PubMed

    Ritchie, J Brendan; Carlson, Thomas A

    2016-01-01

    A fundamental challenge for cognitive neuroscience is characterizing how the primitives of psychological theory are neurally implemented. Attempts to meet this challenge are a manifestation of what Fechner called "inner" psychophysics: the theory of the precise mapping between mental quantities and the brain. In his own time, inner psychophysics remained an unrealized ambition for Fechner. We suggest that, today, multivariate pattern analysis (MVPA), or neural "decoding," methods provide a promising starting point for developing an inner psychophysics. A cornerstone of these methods is the simple linear classifier applied to neural activity in high-dimensional activation spaces. We describe an approach to inner psychophysics based on the shared architecture of linear classifiers and observers under decision boundary models such as signal detection theory. Under this approach, distance from a decision boundary through activation space, as estimated by linear classifiers, can be used to predict reaction time in accordance with signal detection theory, and distance-to-bound models of reaction time. Our "neural distance-to-bound" approach is potentially quite general, and simple to implement. Furthermore, our recent work on visual object recognition suggests it is empirically viable. We believe the approach constitutes an important step along the path to an inner psychophysics that links mind, brain, and behavior.
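The quantity at the heart of this approach is the signed distance of an activation pattern from the classifier's decision hyperplane. A minimal sketch; the weights and data below are hypothetical, not from the paper:

```python
import numpy as np

def signed_distance(X, w, b):
    """Signed distance of each activation pattern (row of X) from the
    decision hyperplane w . x + b = 0 of a linear classifier."""
    return (X @ w + b) / np.linalg.norm(w)

# Under a distance-to-bound model, predicted reaction time falls off
# with distance from the boundary, e.g. RT = a - c * |distance|.
```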

  8. Confirmation of linear system theory prediction: Rate of change of Herrnstein's kappa as a function of response-force requirement.

    PubMed

    McDowell, J J; Wood, H M

    1985-01-01

    Four human subjects worked on all combinations of five variable-interval schedules and five reinforcer magnitudes (¢/reinforcer) in each of two phases of the experiment. In one phase the force requirement on the operandum was low (1 or 11 N) and in the other it was high (25 or 146 N). Estimates of Herrnstein's kappa were obtained at each reinforcer magnitude. The results were: (1) response rate was more sensitive to changes in reinforcement rate at the high than at the low force requirement, (2) kappa increased from the beginning to the end of the magnitude range for all subjects at both force requirements, (3) the reciprocal of kappa was a linear function of the reciprocal of reinforcer magnitude for seven of the eight data sets, and (4) the rate of change of kappa was greater at the high than at the low force requirement by an order of magnitude or more. The second and third findings confirm predictions made by linear system theory, and replicate the results of an earlier experiment (McDowell & Wood, 1984). The fourth finding confirms a further prediction of the theory and supports the theory's interpretation of conflicting data on the constancy of Herrnstein's kappa.
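Finding (3) amounts to a straight-line fit of 1/kappa against 1/magnitude. The sketch below uses synthetic values generated exactly from that predicted form; all numbers are illustrative, not the experiment's data:

```python
import numpy as np

# Reinforcer magnitudes (cents/reinforcer); values are illustrative only.
m = np.array([0.25, 0.5, 1.0, 2.5, 5.0])

# Synthetic kappa values generated exactly from the predicted form
# 1/kappa = a + b/m, with assumed constants a and b.
a, b = 0.005, 0.006
kappa = 1.0 / (a + b / m)

# Regressing the reciprocals recovers the linear relation.
slope, intercept = np.polyfit(1.0 / m, 1.0 / kappa, 1)
```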

  9. Confirmation of linear system theory prediction: Rate of change of Herrnstein's κ as a function of response-force requirement

    PubMed Central

    McDowell, J. J; Wood, Helena M.

    1985-01-01

    Four human subjects worked on all combinations of five variable-interval schedules and five reinforcer magnitudes (¢/reinforcer) in each of two phases of the experiment. In one phase the force requirement on the operandum was low (1 or 11 N) and in the other it was high (25 or 146 N). Estimates of Herrnstein's κ were obtained at each reinforcer magnitude. The results were: (1) response rate was more sensitive to changes in reinforcement rate at the high than at the low force requirement, (2) κ increased from the beginning to the end of the magnitude range for all subjects at both force requirements, (3) the reciprocal of κ was a linear function of the reciprocal of reinforcer magnitude for seven of the eight data sets, and (4) the rate of change of κ was greater at the high than at the low force requirement by an order of magnitude or more. The second and third findings confirm predictions made by linear system theory, and replicate the results of an earlier experiment (McDowell & Wood, 1984). The fourth finding confirms a further prediction of the theory and supports the theory's interpretation of conflicting data on the constancy of Herrnstein's κ. PMID:16812408

  10. Transport coefficients of hard-sphere mixtures. II. Diameter ratio 0. 4 and mass ratio 0. 03 at low density

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Erpenbeck, J.J.

    1992-02-15

    The transport coefficients of shear viscosity, thermal conductivity, thermal diffusion, and mutual diffusion are estimated for a binary, equimolar mixture of hard spheres having a diameter ratio of 0.4 and a mass ratio of 0.03 at volumes of 5V_0, 10V_0, and 20V_0 (where V_0 = (1/2)√2 N Σ_a x_a σ_a^3, x_a are mole fractions, σ_a are diameters, and N is the number of particles) through Monte Carlo, molecular-dynamics calculations using the Green-Kubo formulas. Calculations are reported for as few as 108 and as many as 4000 particles, but not for each value of the volume. Both finite-system and long-time-tail corrections are applied to obtain estimates of the transport coefficients in the thermodynamic limit; corrections of both types are found to be small. The results are compared with the predictions of the revised Enskog theory and the linear density corrections to that theory are reported. The mean free time is also computed as a function of density and the linear and quadratic corrections to the Boltzmann theory are estimated. The mean free time is also compared with the expression from the Mansoori-Carnahan-Starling-Leland equation of state.
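The reference volume V_0 defined above is straightforward to evaluate; a quick sketch for the smallest reported system size (taking the larger species' diameter as the unit of length is an assumption):

```python
import math

# V0 = (1/2) * sqrt(2) * N * sum_a x_a * sigma_a**3 for the equimolar
# mixture with diameter ratio 0.4 (larger diameter taken as the unit).
N = 108                  # smallest particle number reported
x = [0.5, 0.5]           # mole fractions
sigma = [1.0, 0.4]       # diameters
V0 = 0.5 * math.sqrt(2) * N * sum(xa * sa**3 for xa, sa in zip(x, sigma))
```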

  11. Dynamic Stability Analysis of Linear Time-varying Systems via an Extended Modal Identification Approach

    NASA Astrophysics Data System (ADS)

    Ma, Zhisai; Liu, Li; Zhou, Sida; Naets, Frank; Heylen, Ward; Desmet, Wim

    2017-03-01

    The problem of linear time-varying (LTV) system modal analysis is considered based on time-dependent state space representations, as classical modal analysis of linear time-invariant systems and current LTV system modal analysis under the "frozen-time" assumption are not able to determine the dynamic stability of LTV systems. Time-dependent state space representations of LTV systems are first introduced, and the corresponding modal analysis theories are subsequently presented via a stability-preserving state transformation. The time-varying modes of LTV systems are extended in terms of uniqueness, and are further interpreted to determine the system's stability. An extended modal identification is proposed to estimate the time-varying modes, consisting of the estimation of the state transition matrix via a subspace-based method and the extraction of the time-varying modes by the QR decomposition. The proposed approach is numerically validated by three numerical cases, and is experimentally validated by a coupled moving-mass simply supported beam experimental case. The proposed approach is capable of accurately estimating the time-varying modes, and provides a new way to determine the dynamic stability of LTV systems by using the estimated time-varying modes.

  12. Objective assessment of image quality. IV. Application to adaptive optics

    PubMed Central

    Barrett, Harrison H.; Myers, Kyle J.; Devaney, Nicholas; Dainty, Christopher

    2008-01-01

    The methodology of objective assessment, which defines image quality in terms of the performance of specific observers on specific tasks of interest, is extended to temporal sequences of images with random point spread functions and applied to adaptive imaging in astronomy. The tasks considered include both detection and estimation, and the observers are the optimal linear discriminant (Hotelling observer) and the optimal linear estimator (Wiener). A general theory of first- and second-order spatiotemporal statistics in adaptive optics is developed. It is shown that the covariance matrix can be rigorously decomposed into three terms representing the effect of measurement noise, random point spread function, and random nature of the astronomical scene. Figures of merit are developed, and computational methods are discussed. PMID:17106464
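The optimal linear discriminant named here, the Hotelling observer, has the closed form w = S^{-1}(mu1 - mu0), with task performance summarized by SNR^2 = dmu^T S^{-1} dmu. A minimal sketch of these two formulas:

```python
import numpy as np

def hotelling_template(mu0, mu1, S):
    """Template of the optimal linear discriminant: w = S^{-1} (mu1 - mu0)."""
    return np.linalg.solve(S, mu1 - mu0)

def hotelling_snr2(mu0, mu1, S):
    """Hotelling observer detectability, SNR^2 = dmu^T S^{-1} dmu."""
    d = mu1 - mu0
    return float(d @ np.linalg.solve(S, d))
```

In the adaptive-optics setting of the paper, S would be the full spatiotemporal covariance matrix, combining measurement noise, random point spread function, and scene variability.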

  13. Functional differentiability in time-dependent quantum mechanics.

    PubMed

    Penz, Markus; Ruggenthaler, Michael

    2015-03-28

    In this work, we investigate the functional differentiability of the time-dependent many-body wave function and of derived quantities with respect to time-dependent potentials. For properly chosen Banach spaces of potentials and wave functions, Fréchet differentiability is proven. From this follows an estimate for the difference of two solutions to the time-dependent Schrödinger equation that evolve under the influence of different potentials. Such results can be applied directly to the one-particle density and to bounded operators, and present a rigorous formulation of non-equilibrium linear-response theory where the usual Lehmann representation of the linear-response kernel is not valid. Further, the Fréchet differentiability of the wave function provides a new route towards proving basic properties of time-dependent density-functional theory.

  14. Extended Mixed-Effects Item Response Models with the MH-RM Algorithm

    ERIC Educational Resources Information Center

    Chalmers, R. Philip

    2015-01-01

    A mixed-effects item response theory (IRT) model is presented as a logical extension of the generalized linear mixed-effects modeling approach to formulating explanatory IRT models. Fixed and random coefficients in the extended model are estimated using a Metropolis-Hastings Robbins-Monro (MH-RM) stochastic imputation algorithm to accommodate for…

  15. Minimizing bias in biomass allometry: Model selection and log transformation of data

    Treesearch

    Joseph Mascaro; Flint Hughes; Amanda Uowolo; Stefan A. Schnitzer

    2011-01-01

    Nonlinear regression is increasingly used to develop allometric equations for forest biomass estimation (i.e., as opposed to the traditional approach of log-transformation followed by linear regression). Most statistical software packages, however, assume additive errors by default, violating a key assumption of allometric theory and possibly producing spurious models....

  16. Collisionless kinetic theory of oblique tearing instabilities

    DOE PAGES

    Baalrud, S. D.; Bhattacharjee, A.; Daughton, W.

    2018-02-15

    The linear dispersion relation for collisionless kinetic tearing instabilities is calculated for the Harris equilibrium. In contrast to the conventional 2D geometry, which considers only modes at the center of the current sheet, modes can span the current sheet in 3D. Modes at each resonant surface have a unique angle with respect to the guide field direction. Both kinetic simulations and numerical eigenmode solutions of the linearized Vlasov-Maxwell equations have recently revealed that standard analytic theories vastly overestimate the growth rate of oblique modes. In this paper, we find that this stabilization is associated with the density-gradient-driven diamagnetic drift. The analytic theories miss this drift stabilization because the inner tearing layer broadens at oblique angles sufficiently far that the assumption of scale separation between the inner and outer regions of boundary-layer theory breaks down. The dispersion relation obtained by numerically solving a single second order differential equation is found to approximately capture the drift stabilization predicted by solutions of the full integro-differential eigenvalue problem. Finally, a simple analytic estimate for the stability criterion is provided.

  17. Collisionless kinetic theory of oblique tearing instabilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baalrud, S. D.; Bhattacharjee, A.; Daughton, W.

    The linear dispersion relation for collisionless kinetic tearing instabilities is calculated for the Harris equilibrium. In contrast to the conventional 2D geometry, which considers only modes at the center of the current sheet, modes can span the current sheet in 3D. Modes at each resonant surface have a unique angle with respect to the guide field direction. Both kinetic simulations and numerical eigenmode solutions of the linearized Vlasov-Maxwell equations have recently revealed that standard analytic theories vastly overestimate the growth rate of oblique modes. In this paper, we find that this stabilization is associated with the density-gradient-driven diamagnetic drift. The analytic theories miss this drift stabilization because the inner tearing layer broadens at oblique angles sufficiently far that the assumption of scale separation between the inner and outer regions of boundary-layer theory breaks down. The dispersion relation obtained by numerically solving a single second order differential equation is found to approximately capture the drift stabilization predicted by solutions of the full integro-differential eigenvalue problem. Finally, a simple analytic estimate for the stability criterion is provided.

  18. Collisionless kinetic theory of oblique tearing instabilities

    NASA Astrophysics Data System (ADS)

    Baalrud, S. D.; Bhattacharjee, A.; Daughton, W.

    2018-02-01

    The linear dispersion relation for collisionless kinetic tearing instabilities is calculated for the Harris equilibrium. In contrast to the conventional 2D geometry, which considers only modes at the center of the current sheet, modes can span the current sheet in 3D. Modes at each resonant surface have a unique angle with respect to the guide field direction. Both kinetic simulations and numerical eigenmode solutions of the linearized Vlasov-Maxwell equations have recently revealed that standard analytic theories vastly overestimate the growth rate of oblique modes. We find that this stabilization is associated with the density-gradient-driven diamagnetic drift. The analytic theories miss this drift stabilization because the inner tearing layer broadens at oblique angles sufficiently far that the assumption of scale separation between the inner and outer regions of boundary-layer theory breaks down. The dispersion relation obtained by numerically solving a single second order differential equation is found to approximately capture the drift stabilization predicted by solutions of the full integro-differential eigenvalue problem. A simple analytic estimate for the stability criterion is provided.

  19. Scaling of Perceptual Errors Can Predict the Shape of Neural Tuning Curves

    NASA Astrophysics Data System (ADS)

    Shouval, Harel Z.; Agarwal, Animesh; Gavornik, Jeffrey P.

    2013-04-01

    Weber’s law, first characterized in the 19th century, states that errors estimating the magnitude of perceptual stimuli scale linearly with stimulus intensity. This linear relationship is found in most sensory modalities, generalizes to temporal interval estimation, and even applies to some abstract variables. Despite its generality and long experimental history, the neural basis of Weber’s law remains unknown. This work presents a simple theory explaining the conditions under which Weber’s law can result from neural variability and predicts that the tuning curves of neural populations which adhere to Weber’s law will have a log-power form with parameters that depend on spike-count statistics. The prevalence of Weber’s law suggests that it might be optimal in some sense. We examine this possibility, using variational calculus, and show that Weber’s law is optimal only when observed real-world variables exhibit power-law statistics with a specific exponent. Our theory explains how physiology gives rise to the behaviorally characterized Weber’s law and may represent a general governing principle relating perception to neural activity.
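The law's signature, a constant ratio of estimation error to stimulus intensity, is easy to reproduce with multiplicative (scalar) noise, one of the standard variability models consistent with Weber's law. A small simulation sketch; the Weber fraction of 0.15 is an arbitrary choice:

```python
import numpy as np

rng = np.random.default_rng(0)
intensities = np.array([1.0, 10.0, 100.0])
cv = 0.15   # assumed constant Weber fraction

# Scalar (multiplicative) noise: each estimate is I * (1 + cv * noise),
# so the standard deviation of the estimates grows linearly with I.
errors = np.array([np.std(I * (1 + cv * rng.standard_normal(100_000)))
                   for I in intensities])
weber_fractions = errors / intensities   # roughly cv at every intensity
```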

  20. Computation of output feedback gains for linear stochastic systems using the Zangwill-Powell method

    NASA Technical Reports Server (NTRS)

    Kaufman, H.

    1977-01-01

    Because conventional optimal linear regulator theory results in a controller which requires the capability of measuring and/or estimating the entire state vector, it is of interest to consider procedures for computing controls which are restricted to be linear feedback functions of a lower dimensional output vector and which take into account the presence of measurement noise and process uncertainty. To this effect a stochastic linear model has been developed that accounts for process parameter and initial uncertainty, measurement noise, and a restricted number of measurable outputs. Optimization with respect to the corresponding output feedback gains was then performed for both finite and infinite time performance indices without gradient computation by using Zangwill's modification of a procedure originally proposed by Powell.

  1. Cocaine Dependence Treatment Data: Methods for Measurement Error Problems With Predictors Derived From Stationary Stochastic Processes

    PubMed Central

    Guan, Yongtao; Li, Yehua; Sinha, Rajita

    2011-01-01

    In a cocaine dependence treatment study, we use linear and nonlinear regression models to model posttreatment cocaine craving scores and first cocaine relapse time. A subset of the covariates are summary statistics derived from baseline daily cocaine use trajectories, such as baseline cocaine use frequency and average daily use amount. These summary statistics are subject to estimation error and can therefore cause biased estimators for the regression coefficients. Unlike classical measurement error problems, the error we encounter here is heteroscedastic with an unknown distribution, and there are no replicates for the error-prone variables or instrumental variables. We propose two robust methods to correct for the bias: a computationally efficient method-of-moments-based method for linear regression models and a subsampling extrapolation method that is generally applicable to both linear and nonlinear regression models. Simulations and an application to the cocaine dependence treatment data are used to illustrate the efficacy of the proposed methods. Asymptotic theory and variance estimation for the proposed subsampling extrapolation method and some additional simulation results are described in the online supplementary material. PMID:21984854
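For intuition about why estimation error in a covariate biases regression coefficients, here is the classical homoscedastic method-of-moments attenuation correction in a small simulation. Note that the paper's setting is harder (heteroscedastic error of unknown distribution, no replicates), so this is background for the bias mechanism, not the authors' estimator:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000
x = rng.standard_normal(n)              # true covariate
w = x + 0.5 * rng.standard_normal(n)    # error-prone version, error variance 0.25
y = 2.0 * x + rng.standard_normal(n)    # outcome; true slope is 2

beta_naive = np.cov(w, y)[0, 1] / np.var(w)   # attenuated toward zero (~1.6)
lam = (np.var(w) - 0.25) / np.var(w)          # estimated reliability ratio
beta_corrected = beta_naive / lam             # close to the true slope 2
```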

  2. Mathematical Techniques for Nonlinear System Theory.

    DTIC Science & Technology

    1981-09-01

    This report deals with research results obtained in the following areas: (1) Finite-dimensional linear system theory by algebraic methods--linear...; (2) Infinite-dimensional linear systems--realization theory of infinite-dimensional linear systems; (3) Nonlinear system theory--basic properties of

  3. Blind identification of nonlinear models with non-Gaussian inputs

    NASA Astrophysics Data System (ADS)

    Prakriya, Shankar; Pasupathy, Subbarayan; Hatzinakos, Dimitrios

    1995-12-01

    Some methods are proposed for the blind identification of finite-order discrete-time nonlinear models with non-Gaussian circular inputs. The nonlinear models consist of two finite memory linear time invariant (LTI) filters separated by a zero-memory nonlinearity (ZMNL) of the polynomial type (the LTI-ZMNL-LTI models). The linear subsystems are allowed to be of non-minimum phase (NMP). The methods base their estimates of the impulse responses on slices of the (N+1)th-order polyspectra of the output sequence. It is shown that the identification of LTI-ZMNL systems requires only a 1-D moment or polyspectral slice. The coefficients of the ZMNL are not estimated, and need not be known. The order of the nonlinearity can, in theory, be estimated from the received signal. These methods possess several noise and interference suppression characteristics, and have applications in modeling nonlinearly amplified QAM/QPSK signals in digital satellite and microwave communications.

  4. Within crown variation in the relationship between foliage biomass and sapwood area in jack pine.

    PubMed

    Schneider, Robert; Berninger, Frank; Ung, Chhun-Huor; Mäkelä, Annikki; Swift, D Edwin; Zhang, S Y

    2011-01-01

    The relationship between sapwood area and foliage biomass is the basis for much research in ecophysiology. In this paper, foliage biomass change between two consecutive whorls is studied, using different variations of the pipe model theory. Linear and non-linear mixed-effect models relating foliage differences to sapwood area increments were tested to take into account whorl location, with the best fit statistics supporting the non-linear formulation. The estimated value of the exponent is 0.5130, which is significantly different from 1, the expected value given by the pipe model theory. When applied to crown stem sapwood taper, the model indicates that foliage biomass distribution influences the foliage biomass to sapwood area at crown base ratio. This result is interpreted as being the consequence of differences in the turnover rates of sapwood and foliage. More importantly, the model explains previously reported trends in jack pine sapwood area at crown base to tree foliage biomass ratio.
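An exponent of this kind can be recovered from data of the stated power-law form by log-log linearization; the sketch below uses noiseless synthetic values built around the reported estimate of 0.513. The paper itself fits nonlinear mixed-effects models, which treat the error structure differently, so this is only an illustration of the functional form:

```python
import numpy as np

sapwood_inc = np.linspace(0.5, 5.0, 40)        # sapwood area increments (hypothetical units)
exponent = 0.513                               # exponent close to the reported estimate
foliage_diff = 1.2 * sapwood_inc**exponent     # noiseless synthetic foliage differences

# Taking logs turns the power law into a straight line whose slope is the exponent.
slope, log_coef = np.polyfit(np.log(sapwood_inc), np.log(foliage_diff), 1)
```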

  5. Adaptive Importance Sampling for Control and Inference

    NASA Astrophysics Data System (ADS)

    Kappen, H. J.; Ruiz, H. C.

    2016-03-01

    Path integral (PI) control problems are a restricted class of non-linear control problems that can be solved formally as a Feynman-Kac PI and can be estimated using Monte Carlo sampling. In this contribution we review PI control theory in the finite horizon case. We subsequently focus on the problem of how to compute and represent control solutions. We review the most commonly used methods in robotics and control. Within the PI theory, the question of how to compute becomes the question of importance sampling. Efficient importance samplers are state feedback controllers and the use of these requires an efficient representation. Learning and representing effective state-feedback controllers for non-linear stochastic control problems is a very challenging, and largely unsolved, problem. We show how to learn and represent such controllers using ideas from the cross entropy method. We derive a gradient descent method that allows feedback controllers to be learned using an arbitrary parametrisation. We refer to this method as the path integral cross entropy method or PICE. We illustrate this method for some simple examples. The PI control methods can be used to estimate the posterior distribution in latent state models. In neuroscience these problems arise when estimating connectivity from neural recording data using EM. We demonstrate the PI control method as an accurate alternative to particle filtering.

  6. Joint Bearing and Range Estimation of Multiple Objects from Time-Frequency Analysis.

    PubMed

    Liu, Jeng-Cheng; Cheng, Yuang-Tung; Hung, Hsien-Sen

    2018-01-19

    Direction-of-arrival (DOA) and range estimation is an important issue of sonar signal processing. In this paper, a novel approach using Hilbert-Huang transform (HHT) is proposed for joint bearing and range estimation of multiple targets based on a uniform linear array (ULA) of hydrophones. The ULA is based on micro-electro-mechanical systems (MEMS) technology, and thus has the attractive features of small size, high sensitivity and low cost, making it suitable for Autonomous Underwater Vehicle (AUV) operations. This proposed target localization method has the following advantages: only a single snapshot of data is needed and real-time processing is feasible. The proposed algorithm transforms a very complicated nonlinear estimation problem to a simple nearly linear one via time-frequency distribution (TFD) theory and is verified with HHT. Theoretical discussions of the resolution issue are also provided to facilitate the design of a MEMS sensor with high sensitivity. Simulation results are shown to verify the effectiveness of the proposed method.

  7. A general theory of intertemporal decision-making and the perception of time.

    PubMed

    Namboodiri, Vijay M K; Mihalas, Stefan; Marton, Tanya M; Hussain Shuler, Marshall G

    2014-01-01

    Animals and humans make decisions based on their expected outcomes. Since relevant outcomes are often delayed, perceiving delays and choosing between earlier vs. later rewards (intertemporal decision-making) is an essential component of animal behavior. The myriad observations made in experiments studying intertemporal decision-making and time perception have not yet been rationalized within a single theory. Here we present a theory, Training-Integrated Maximized Estimation of Reinforcement Rate (TIMERR), that explains a wide variety of behavioral observations made in intertemporal decision-making and the perception of time. Our theory postulates that animals make intertemporal choices to optimize expected reward rates over a limited temporal window which includes a past integration interval, over which experienced reward rate is estimated, as well as the expected delay to future reward. Using this theory, we derive mathematical expressions for both the subjective value of a delayed reward and the subjective representation of the delay. A unique contribution of our work is in finding that the past integration interval directly determines the steepness of temporal discounting and the non-linearity of time perception. In so doing, our theory provides a single framework to understand both intertemporal decision-making and time perception.

  8. Control of the maneuvering SCOLE structure

    NASA Technical Reports Server (NTRS)

    Lim, S.; Meirovitch, L.

    1992-01-01

    This paper is concerned with the vibration control of the SCOLE structure while it undergoes a slewing maneuver. The control law is designed according to the linear quadratic regulator theory. In view of saturation limits on the actuators, the actual implementation is modified so as to observe these limits, resulting in suboptimal control. State estimation is carried out by means of a Kalman filter. The control and state estimation are carried out in discrete time. Numerical simulations for several cases of interest are presented.
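    The control scheme described, LQR design with actuator limits imposed afterwards, can be sketched in discrete time as follows. This is an illustrative reconstruction, not the authors' SCOLE code: the gain comes from iterating the discrete-time Riccati equation, and the commanded input is clamped to the actuator limits, giving the suboptimal control mentioned above.

```python
import numpy as np

def dlqr_gain(A, B, Q, R, iters=1000):
    """Iterate the discrete-time Riccati equation to (near) convergence
    and return the steady-state LQR feedback gain K (u = -K x)."""
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K

def saturated_control(K, x, u_max):
    """Optimal LQR input clipped to actuator limits (suboptimal control)."""
    return np.clip(-K @ x, -u_max, u_max)
```

    For the scalar system A = B = Q = R = 1 the iteration converges to the known gain K = (sqrt(5) - 1)/2, about 0.618.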

  9. Overarching framework for data-based modelling

    NASA Astrophysics Data System (ADS)

    Schelter, Björn; Mader, Malenka; Mader, Wolfgang; Sommerlade, Linda; Platt, Bettina; Lai, Ying-Cheng; Grebogi, Celso; Thiel, Marco

    2014-02-01

    Networks are one of the main modelling paradigms for complex physical systems. When estimating the network structure from measured signals, several assumptions, such as stationarity, are typically made in the estimation process. Violating these assumptions renders standard analysis techniques fruitless. We here propose a framework to estimate the network structure from measurements of arbitrary non-linear, non-stationary, stochastic processes, and we develop a rigorous mathematical theory that underlies this framework. Based on this theory, we present a highly efficient algorithm and the corresponding statistics, which are directly applicable to measured signals. We demonstrate its performance in a simulation study. In experiments on transitions between vigilance stages in rodents, we infer small network structures with complex, time-dependent interactions; this suggests biomarkers for such transitions, key to understanding and diagnosing numerous diseases such as dementia. We argue that the suggested framework combines features that other approaches so far lack.

  10. Ridge Regression Signal Processing

    NASA Technical Reports Server (NTRS)

    Kuhl, Mark R.

    1990-01-01

    The introduction of the Global Positioning System (GPS) into the National Airspace System (NAS) necessitates the development of Receiver Autonomous Integrity Monitoring (RAIM) techniques. In order to guarantee a certain level of integrity, a thorough understanding of modern estimation techniques applied to navigational problems is required. The extended Kalman filter (EKF) is derived and analyzed under poor geometry conditions. It was found that the performance of the EKF is difficult to predict, since the EKF is designed for a Gaussian environment. A novel approach is implemented which incorporates ridge regression to explain the behavior of an EKF in the presence of dynamics under poor geometry conditions. The basic principles of ridge regression theory are presented, followed by the derivation of a linearized recursive ridge estimator. Computer simulations are performed to confirm the underlying theory and to provide a comparative analysis of the EKF and the recursive ridge estimator.
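    The ridge estimator underlying the recursive version derived in the report is the standard one. As a hedged batch-form illustration (not the report's recursive derivation): adding k*I to the normal equations stabilizes the solution when the geometry matrix is ill-conditioned, trading a small bias for a large variance reduction.

```python
import numpy as np

def ridge(X, y, k):
    """Ridge estimate (X'X + kI)^(-1) X'y; k = 0 recovers least squares."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + k * np.eye(p), X.T @ y)
```

    As k grows the estimate is shrunk toward zero, which is why ridge theory helps explain EKF behavior under poor (near-singular) geometry conditions.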

  11. The Inverse Problem for Confined Aquifer Flow: Identification and Estimation With Extensions

    NASA Astrophysics Data System (ADS)

    Loaiciga, Hugo A.; MariñO, Miguel A.

    1987-01-01

    The contributions of this work are twofold. First, a methodology for estimating the elements of parameter matrices in the governing equation of flow in a confined aquifer is developed. The estimation techniques for the distributed-parameter inverse problem pertain to linear least squares and generalized least squares methods. The linear relationship among the known heads and unknown parameters of the flow equation provides the background for developing criteria for determining the identifiability status of unknown parameters. Under conditions of exact or overidentification it is possible to develop statistically consistent parameter estimators and their asymptotic distributions. The estimation techniques, namely two-stage least squares and three-stage least squares, are applied to a specific groundwater inverse problem and compared with each other and with an ordinary least squares estimator. The three-stage estimator provides the closest approximation to the actual parameter values, but it also shows relatively large standard errors compared to the ordinary and two-stage estimators. The estimation techniques provide the parameter matrices required to simulate the unsteady groundwater flow equation. Second, a nonlinear maximum likelihood estimation approach to the inverse problem is presented. The statistical properties of maximum likelihood estimators are derived, and a procedure to construct confidence intervals and perform hypothesis testing is given. The relative merits of the linear and maximum likelihood estimators are analyzed. Other topics relevant to the identification and estimation methodologies, i.e., a continuous-time solution to the flow equation, coping with noise-corrupted head measurements, and extension of the developed theory to nonlinear cases, are also discussed. A simulation study is used to evaluate the methods developed in this study.
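    For a minimal sense of the two-stage least squares estimator mentioned above (a generic sketch, not the paper's aquifer-specific implementation): the endogenous regressors are first projected onto the instruments, and the second stage then regresses the response on those fitted values.

```python
import numpy as np

def two_stage_least_squares(X, Z, y):
    """2SLS: project endogenous regressors X onto instruments Z (stage 1),
    then regress y on the fitted values X_hat (stage 2)."""
    X_hat = Z @ np.linalg.lstsq(Z, X, rcond=None)[0]   # first stage
    return np.linalg.lstsq(X_hat, y, rcond=None)[0]    # second stage
```

    On synthetic data where the regressor is correlated with the error, ordinary least squares is biased while 2SLS with a valid instrument recovers the true coefficient.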

  12. Controlling Flexible Manipulators, an Experimental Investigation. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Hastings, Gordon Greene

    1986-01-01

    Lightweight, slender manipulators offer faster response and/or greater workspace range for the same size actuators than traditional manipulators. Lightweight construction of manipulator links results in increased structural flexibility. This increased flexibility must be considered in the design of control systems to properly account for dynamic flexible vibrations and static deflections. Real-time control of flexible manipulator vibrations is experimentally investigated. Models intended for real-time control of distributed-parameter systems such as flexible manipulators rely on model approximation schemes. A linear model based on the application of Lagrangian dynamics to a rigid-body mode and a series of separable flexible modes is examined with respect to model order requirements and modal candidate selection. Balanced realizations are applied to the linear flexible model to obtain an estimate of the appropriate order for a selected model. Describing the flexible deflections as a linear combination of modes results in measurements of beam state which yield information about several modes. To realize the potential of linear systems theory, knowledge of each state must be available. State estimation is accomplished by implementation of a Kalman filter. State feedback control laws are implemented based upon linear quadratic regulator design.

  13. Refined Zigzag Theory for Laminated Composite and Sandwich Plates

    NASA Technical Reports Server (NTRS)

    Tessler, Alexander; DiSciuva, Marco; Gherlone, Marco

    2009-01-01

    A refined zigzag theory is presented for laminated-composite and sandwich plates that includes the kinematics of first-order shear deformation theory as its baseline. The theory is variationally consistent and is derived from the virtual work principle. Novel piecewise-linear zigzag functions that provide a more realistic representation of the deformation states of transverse-shear-flexible plates than other similar theories are used. The formulation does not enforce full continuity of the transverse shear stresses across the plate's thickness, yet is robust. Transverse-shear correction factors are not required to yield accurate results. The theory is devoid of the shortcomings inherent in the previous zigzag theories, including shear-force inconsistency and difficulties in simulating clamped boundary conditions, which have greatly limited the accuracy of those theories. This new theory requires only C(sup 0)-continuous kinematic approximations and is perfectly suited for developing computationally efficient finite elements. The theory should be useful for obtaining relatively efficient, accurate estimates of structural response needed to design high-performance load-bearing aerospace structures.

  14. Using a Linear Regression Method to Detect Outliers in IRT Common Item Equating

    ERIC Educational Resources Information Center

    He, Yong; Cui, Zhongmin; Fang, Yu; Chen, Hanwei

    2013-01-01

    Common test items play an important role in equating alternate test forms under the common item nonequivalent groups design. When the item response theory (IRT) method is applied in equating, inconsistent item parameter estimates among common items can lead to large bias in equated scores. It is prudent to evaluate inconsistency in parameter…
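    The truncated abstract describes flagging inconsistent common-item parameter estimates with a linear regression method. A hedged sketch of one such check (an illustration of the general approach, not necessarily the authors' exact procedure): regress new-form item difficulty estimates on old-form estimates and flag items whose standardized residuals are large.

```python
import numpy as np

def flag_outliers(b_old, b_new, z_crit=2.0):
    """Regress new-form item difficulties on old-form ones and return
    indices of items whose standardized residual exceeds z_crit."""
    A = np.column_stack([np.ones_like(b_old), b_old])  # intercept + slope
    coef, *_ = np.linalg.lstsq(A, b_new, rcond=None)
    resid = b_new - A @ coef
    z = resid / resid.std(ddof=2)                      # standardized residuals
    return [i for i, zi in enumerate(z) if abs(zi) > z_crit]
```

    An item whose difficulty estimate drifts between forms stands out against the regression line fit by the remaining common items.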

  15. A comparison of the two approaches of the theory of critical distances based on linear-elastic and elasto-plastic analyses

    NASA Astrophysics Data System (ADS)

    Terekhina, A. I.; Plekhov, O. A.; Kostina, A. A.; Susmel, L.

    2017-06-01

    The problem of determining the strength of engineering structures while considering the effects of non-local fracture in the area of stress concentrators is of great scientific and industrial interest. This work aims to modify the classical theory of critical distances (TCD), known as a failure prediction method based on linear-elastic analysis, for the case of elasto-plastic material behaviour, in order to improve the accuracy of lifetime estimation for notched components. Plasticity is accounted for using the simplified Johnson-Cook model. Mechanical tests were carried out on a 300 kN electromechanical testing machine (Shimadzu AG-X Plus). Cylindrical un-notched specimens and specimens with stress concentrators made of titanium alloy Grade 2 were tested under tensile loading at different gripper travel speeds, covering several orders of magnitude of strain rate. The results of elasto-plastic analyses of stress distributions near a wide variety of notches are presented. The results show that the modification of the TCD based on elasto-plastic analysis gives estimates falling within an error interval of ±5-10%, more accurate predictions than the linear-elastic TCD solution. The improved description of the stress-strain state at the notch tip allows the critical distance to be introduced as a material parameter.

  16. Investigation, development, and application of optimal output feedback theory. Volume 3: The relationship between dynamic compensators and observers and Kalman filters

    NASA Technical Reports Server (NTRS)

    Broussard, John R.

    1987-01-01

    Relationships between observers, Kalman filters, and dynamic compensators using feedforward control theory are investigated. In particular, the relationship, if any, between the dynamic compensator state and linear functions of the discrete plant state is investigated. It is shown that, in steady state, a dynamic compensator driven by the plant output can be expressed as the sum of two terms. The first term is a linear combination of the plant state. The second term depends on plant and measurement noise, and on the plant control. Thus, the state of the dynamic compensator can be expressed as an estimator of the first term with additive error given by the second term. Conditions under which a dynamic compensator is a Kalman filter are presented, and reduced-order optimal estimators are investigated.

  17. A modular approach for item response theory modeling with the R package flirt.

    PubMed

    Jeon, Minjeong; Rijmen, Frank

    2016-06-01

    The new R package flirt is introduced for flexible item response theory (IRT) modeling of psychological, educational, and behavior assessment data. flirt integrates a generalized linear and nonlinear mixed modeling framework with graphical model theory. The graphical model framework allows for efficient maximum likelihood estimation. The key feature of flirt is its modular approach to facilitate convenient and flexible model specifications. Researchers can construct customized IRT models by simply selecting various modeling modules, such as parametric forms, number of dimensions, item and person covariates, person groups, link functions, etc. In this paper, we describe major features of flirt and provide examples to illustrate how flirt works in practice.

  18. Computation of linear acceleration through an internal model in the macaque cerebellum

    PubMed Central

    Laurens, Jean; Meng, Hui; Angelaki, Dora E.

    2013-01-01

    A combination of theory and behavioral findings has supported a role for internal models in the resolution of sensory ambiguities and sensorimotor processing. Although the cerebellum has been proposed as a candidate for implementation of internal models, concrete evidence from neural responses is lacking. Here we exploit unnatural motion stimuli, which induce incorrect self-motion perception and eye movements, to explore the neural correlates of an internal model proposed to compensate for Einstein's equivalence principle and generate neural estimates of linear acceleration and gravity. We show that caudal cerebellar vermis Purkinje cells and cerebellar nuclei neurons selective for actual linear acceleration also encode erroneous linear acceleration, as expected from the internal model hypothesis, even when no actual linear acceleration occurs. These findings provide strong evidence that the cerebellum might be involved in the implementation of internal models that mimic physical principles to interpret sensory signals, as previously hypothesized by theorists. PMID:24077562

  19. Thermoelectric DC conductivities in hyperscaling violating Lifshitz theories

    NASA Astrophysics Data System (ADS)

    Cremonini, Sera; Cvetič, Mirjam; Papadimitriou, Ioannis

    2018-04-01

    We analytically compute the thermoelectric conductivities at zero frequency (DC) in the holographic dual of a four dimensional Einstein-Maxwell-Axion-Dilaton theory that admits a class of asymptotically hyperscaling violating Lifshitz backgrounds with a dynamical exponent z and hyperscaling violating parameter θ. We show that the heat current in the dual Lifshitz theory involves the energy flux, which is an irrelevant operator for z > 1. The linearized fluctuations relevant for computing the thermoelectric conductivities turn on a source for this irrelevant operator, leading to several novel and non-trivial aspects in the holographic renormalization procedure and the identification of the physical observables in the dual theory. Moreover, imposing Dirichlet or Neumann boundary conditions on the spatial components of one of the two Maxwell fields present leads to different thermoelectric conductivities. Dirichlet boundary conditions reproduce the thermoelectric DC conductivities obtained from the near horizon analysis of Donos and Gauntlett, while Neumann boundary conditions result in a new set of DC conductivities. We make preliminary analytical estimates for the temperature behavior of the thermoelectric matrix in appropriate regions of parameter space. In particular, at large temperatures we find that the only case which could lead to a linear resistivity ρ ∼ T corresponds to z = 4/3.

  20. Control by model error estimation

    NASA Technical Reports Server (NTRS)

    Likins, P. W.; Skelton, R. E.

    1976-01-01

    Modern control theory relies upon the fidelity of the mathematical model of the system. Truncated modes, external disturbances, and parameter errors in linear system models are corrected by augmenting to the original system of equations an 'error system' which is designed to approximate the effects of such model errors. A Chebyshev error system is developed for application to the Large Space Telescope (LST).

  1. Sensor Selection for Aircraft Engine Performance Estimation and Gas Path Fault Diagnostics

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Rinehart, Aidan W.

    2015-01-01

    This paper presents analytical techniques for aiding system designers in making aircraft engine health management sensor selection decisions. The presented techniques, which are based on linear estimation and probability theory, are tailored for gas turbine engine performance estimation and gas path fault diagnostics applications. They enable quantification of the performance estimation and diagnostic accuracy offered by different candidate sensor suites. For performance estimation, sensor selection metrics are presented for two types of estimators including a Kalman filter and a maximum a posteriori estimator. For each type of performance estimator, sensor selection is based on minimizing the theoretical sum of squared estimation errors in health parameters representing performance deterioration in the major rotating modules of the engine. For gas path fault diagnostics, the sensor selection metric is set up to maximize correct classification rate for a diagnostic strategy that performs fault classification by identifying the fault type that most closely matches the observed measurement signature in a weighted least squares sense. Results from the application of the sensor selection metrics to a linear engine model are presented and discussed. Given a baseline sensor suite and a candidate list of optional sensors, an exhaustive search is performed to determine the optimal sensor suites for performance estimation and fault diagnostics. For any given sensor suite, Monte Carlo simulation results are found to exhibit good agreement with theoretical predictions of estimation and diagnostic accuracies.
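    The exhaustive search over candidate sensor suites can be sketched generically. The following is an illustrative stand-in for the paper's metric, not the NASA code: for each suite it forms the maximum a posteriori estimation-error covariance P = (P0^(-1) + H' R^(-1) H)^(-1) of the health parameters and keeps the suite minimizing its trace, i.e. the theoretical sum of squared estimation errors.

```python
import itertools
import numpy as np

def best_suite(H_rows, R_diag, P0, n_select):
    """Exhaustively search sensor subsets of size n_select, minimizing
    the trace of the MAP estimation-error covariance of the health
    parameters (rows of H map parameters to sensor readings)."""
    best, best_cost = None, np.inf
    for suite in itertools.combinations(range(len(H_rows)), n_select):
        H = np.array([H_rows[i] for i in suite])
        Rinv = np.diag([1.0 / R_diag[i] for i in suite])
        P = np.linalg.inv(np.linalg.inv(P0) + H.T @ Rinv @ H)
        if np.trace(P) < best_cost:
            best, best_cost = suite, np.trace(P)
    return best, best_cost
```

    With a small hypothetical candidate list, the search naturally keeps the sensors whose rows of H carry the most information about the deterioration parameters.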

  2. Sensor Selection for Aircraft Engine Performance Estimation and Gas Path Fault Diagnostics

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Rinehart, Aidan W.

    2016-01-01

    This paper presents analytical techniques for aiding system designers in making aircraft engine health management sensor selection decisions. The presented techniques, which are based on linear estimation and probability theory, are tailored for gas turbine engine performance estimation and gas path fault diagnostics applications. They enable quantification of the performance estimation and diagnostic accuracy offered by different candidate sensor suites. For performance estimation, sensor selection metrics are presented for two types of estimators including a Kalman filter and a maximum a posteriori estimator. For each type of performance estimator, sensor selection is based on minimizing the theoretical sum of squared estimation errors in health parameters representing performance deterioration in the major rotating modules of the engine. For gas path fault diagnostics, the sensor selection metric is set up to maximize correct classification rate for a diagnostic strategy that performs fault classification by identifying the fault type that most closely matches the observed measurement signature in a weighted least squares sense. Results from the application of the sensor selection metrics to a linear engine model are presented and discussed. Given a baseline sensor suite and a candidate list of optional sensors, an exhaustive search is performed to determine the optimal sensor suites for performance estimation and fault diagnostics. For any given sensor suite, Monte Carlo simulation results are found to exhibit good agreement with theoretical predictions of estimation and diagnostic accuracies.

  3. Formation factor in Bentheimer and Fontainebleau sandstones: Theory compared with pore-scale numerical simulations

    NASA Astrophysics Data System (ADS)

    Ghanbarian, Behzad; Berg, Carl F.

    2017-09-01

    Accurate quantification of formation resistivity factor F (also called formation factor) provides useful insight into connectivity and pore space topology in fully saturated porous media. In particular the formation factor has been extensively used to estimate permeability in reservoir rocks. One of the widely applied models to estimate F is Archie's law (F = ϕ^(-m), in which ϕ is total porosity and m is cementation exponent) that is known to be valid in rocks with negligible clay content, such as clean sandstones. In this study we compare formation factors determined by percolation and effective-medium theories as well as Archie's law with numerical simulations of electrical resistivity on digital rock models. These digital models represent Bentheimer and Fontainebleau sandstones and are derived either by reconstruction or directly from micro-tomographic images. Results show that the universal quadratic power law from percolation theory accurately estimates the calculated formation factor values in network models over the entire range of porosity. However, it crosses over to the linear scaling from the effective-medium approximation at the porosity of 0.75 in grid models. We also show that the effect of critical porosity, disregarded in Archie's law, is nontrivial, and the Archie model inaccurately estimates the formation factor in low-porosity homogeneous sandstones.
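    As a numerical illustration of the two scalings compared above (hedged: the prefactor and critical porosity below are placeholders, not values fitted in the study): Archie's law gives F = ϕ^(-m), while the universal quadratic percolation scaling behaves as F ∝ (ϕ - ϕc)^(-2), diverging at a critical porosity ϕc that Archie's law ignores.

```python
def archie(phi, m=2.0):
    """Archie's law: formation factor F = phi**(-m)."""
    return phi ** (-m)

def percolation(phi, phi_c=0.02, a=1.0):
    """Universal quadratic percolation scaling F = a*(phi - phi_c)**(-2);
    a and phi_c here are illustrative placeholders, not fitted values."""
    return a * (phi - phi_c) ** (-2)
```

    Near ϕc the two models diverge sharply, which is the regime where the study finds Archie's law inaccurate for low-porosity sandstones.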

  4. Effect of upstream ULF waves on the energetic ion diffusion at the earth's foreshock: Theory, Simulation, and Observations

    NASA Astrophysics Data System (ADS)

    Otsuka, F.; Matsukiyo, S.; Kis, A.; Hada, T.

    2017-12-01

    Spatial diffusion of energetic particles is an important problem not only from a fundamental physics point of view but also for its application to particle acceleration processes at astrophysical shocks. Quasi-linear theory can provide the spatial diffusion coefficient as a function of the wave turbulence spectrum. By assuming a simple power-law spectrum for the turbulence, the theory has been successfully applied to diffusion and acceleration of cosmic rays in the interplanetary and interstellar medium. Near the earth's foreshock, however, the wave spectrum often has an intense peak, presumably corresponding to the upstream ULF waves generated by the field-aligned beam (FAB). In this presentation, we numerically and theoretically discuss how the intense ULF peak in the wave spectrum modifies the spatial parallel diffusion of energetic ions. The turbulence is given as a superposition of non-propagating transverse MHD waves in the solar wind rest frame, and its spectrum is composed of a piecewise power-law spectrum with different power-law indices. The diffusion coefficients are then estimated using quasi-linear theory and test particle simulations. We find that the presence of the ULF peak produces a concave shape of the diffusion coefficient when it is plotted versus the ion energy. These results are used to discuss the Cluster observations of the diffuse ions at the Earth's foreshock. Using the density gradients of the energetic ions detected by the Cluster spacecraft, we determine the e-folding distances, equivalently the spatial diffusion coefficients, of ions with energies from 10 to 32 keV. The observed e-folding distances are significantly smaller than those estimated in past statistical studies. This suggests that particle acceleration at the foreshock can be more efficient than previously considered. Our test particle simulation, using the observed wave turbulence spectrum near the shock, reproduces the small estimated e-folding distances well.

  5. Combined linear theory/impact theory method for analysis and design of high speed configurations

    NASA Technical Reports Server (NTRS)

    Brooke, D.; Vondrasek, D. V.

    1980-01-01

    Pressure distributions on a wing body at Mach 4.63 are calculated. The combined theory is shown to give improved predictions over either linear theory or impact theory alone. The combined theory is also applied in the inverse design mode to calculate optimum camber slopes at Mach 4.63. Comparisons with optimum camber slopes obtained from unmodified linear theory show large differences. Analysis of the results indicates that the combined theory correctly predicts the effect of thickness on the loading distributions at high Mach numbers, and that finite thickness wings optimized at high Mach numbers using unmodified linear theory will not achieve the minimum drag characteristics for which they are designed.

  6. Acoustic pressures emanating from a turbomachine stage

    NASA Technical Reports Server (NTRS)

    Ramachandra, S. M.

    1984-01-01

    A knowledge of the acoustic energy emission of each blade row of a turbomachine is useful for estimating the overall noise level of the machine and for determining its discrete frequency noise content. Because of the close spacing between the rotor and stator of a compressor stage, the strong aerodynamic interactions between them have to be included in obtaining the resultant flow field. A three dimensional theory for determining the discrete frequency noise content of an axial compressor consisting of a rotor and a stator, each with a finite number of blades, is outlined. Lifting surface theory and the linearized equations of ideal, nonsteady compressible fluid motion are used for thin blades of arbitrary cross section. The combined pressure field at a point of the fluid is constructed by linear addition of the rotor and stator solutions together with an interference factor obtained by matching them for net zero vorticity behind the stage.

  7. Application of the variational-asymptotical method to composite plates

    NASA Technical Reports Server (NTRS)

    Hodges, Dewey H.; Lee, Bok W.; Atilgan, Ali R.

    1992-01-01

    A method is developed for the 3D analysis of laminated plate deformation which is an extension of a variational-asymptotical method by Atilgan and Hodges (1991). Both methods are based on the treatment of plate deformation by splitting the 3D analysis into linear through-the-thickness analysis and 2D plate analysis. Whereas the first technique tackles transverse shear deformation in the second asymptotical approximation, the present method simplifies its treatment and restricts it to the first approximation. Both analytical techniques are applied to the linear cylindrical bending problem, and the strain and stress distributions are derived and compared with those of the exact solution. The present theory provides more accurate results than those of the classical laminated-plate theory for the transverse displacement of 2-, 3-, and 4-layer cross-ply laminated plates. The method can give reliable estimates of the in-plane strain and displacement distributions.

  8. Stokes paradox in electronic Fermi liquids

    NASA Astrophysics Data System (ADS)

    Lucas, Andrew

    2017-03-01

    The Stokes paradox is the statement that in a viscous two-dimensional fluid, the "linear response" problem of fluid flow around an obstacle is ill posed. We present a simple consequence of this paradox in the hydrodynamic regime of a Fermi liquid of electrons in two-dimensional metals. Using hydrodynamics and kinetic theory, we estimate the contribution of a single cylindrical obstacle to the global electrical resistance of a material, within linear response. Momentum relaxation, present in any realistic electron liquid, resolves the classical paradox. Nonetheless, this paradox imprints itself in the resistance, which can be parametrically larger than predicted by Ohmic transport theory. We find a remarkably rich set of behaviors, depending on whether or not the quasiparticle dynamics in the Fermi liquid should be treated as diffusive, hydrodynamic, or ballistic on the length scale of the obstacle. We argue that all three types of behavior are observable in present day experiments.

  9. Thermal stresses due to cooling of a viscoelastic oceanic lithosphere

    USGS Publications Warehouse

    Denlinger, R.P.; Savage, W.Z.

    1989-01-01

    Instant-freezing methods inaccurately predict transient thermal stresses in rapidly cooling silicate glass plates because of the temperature dependent rheology of the material. The temperature dependent rheology of the lithosphere may affect the transient thermal stress distribution in a similar way, and for this reason we use a thermoviscoelastic model to estimate thermal stresses in young oceanic lithosphere. This theory is formulated here for linear creep processes that have an Arrhenius rate dependence on temperature. Our results show that the stress differences between instant freezing and linear thermoviscoelastic theory are most pronounced at early times (0-20 m.y.), when the instant-freezing stresses may be twice as large. The solutions for the two methods asymptotically approach the same solution with time. A comparison with intraplate seismicity shows that both methods underestimate the depth of compressional stresses inferred from the seismicity in a systematic way. -from Authors
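    The contrast between instant freezing and linear thermoviscoelasticity comes down to the temperature dependence of the relaxation time. A minimal sketch of the Arrhenius-type Maxwell element implied above (illustrative constants, not the paper's lithosphere parameters):

```python
import math

def relaxation_time(T, tau0=1.0, Q=2.0e5, R_gas=8.314):
    """Arrhenius-type relaxation time tau = tau0 * exp(Q / (R * T));
    tau0 and the activation energy Q are illustrative placeholders."""
    return tau0 * math.exp(Q / (R_gas * T))

def relaxed_stress(sigma0, t, T):
    """Maxwell element: elastic stress decays as exp(-t / tau(T)), so hot
    (young) material relaxes thermal stress that an instant-freezing
    calculation would retain in full."""
    return sigma0 * math.exp(-t / relaxation_time(T))
```

    Because tau grows steeply as the plate cools, stresses locked in early relax while the material is still hot, which is why the two methods differ most at early times and converge later.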

  10. Linearization instability for generic gravity in AdS spacetime

    NASA Astrophysics Data System (ADS)

    Altas, Emel; Tekin, Bayram

    2018-01-01

    In general relativity, perturbation theory about a background solution fails if the background spacetime has a Killing symmetry and a compact spacelike Cauchy surface. This failure, dubbed linearization instability, shows itself as non-integrability of the perturbative infinitesimal deformation to a finite deformation of the background. Namely, the linearized field equations have spurious solutions which cannot be obtained from the linearization of exact solutions. In practice, one can show the failure of the linear perturbation theory by showing that a certain quadratic (integral) constraint on the linearized solutions is not satisfied. For non-compact Cauchy surfaces the situation is different: for example, Minkowski space, which has a non-compact Cauchy surface, is linearization stable. Here we study linearization instability in generic metric theories of gravity where Einstein's theory is modified with additional curvature terms. We show that, unlike the case of general relativity, for modified theories even in the non-compact Cauchy surface cases, there are some theories which show linearization instability about their anti-de Sitter backgrounds. Recent D dimensional critical and three dimensional chiral gravity theories are two such examples. This observation sheds light on the paradoxical behavior of vanishing conserved charges (mass, angular momenta) for non-vacuum solutions, such as black holes, in these theories.

  11. Monte Carlo simulations of dipolar and quadrupolar linear Kihara fluids. A test of thermodynamic perturbation theory

    NASA Astrophysics Data System (ADS)

    Garzon, B.

    Several simulations of dipolar and quadrupolar linear Kihara fluids using the Monte Carlo method in the canonical ensemble have been performed. Pressure and internal energy have been directly determined from simulations, and Helmholtz free energy using thermodynamic integration. Simulations were carried out for fluids of fixed elongation at two different densities and several values of temperature and dipolar or quadrupolar moment for each density. Results are compared with the perturbation theory developed by Boublik for this same type of fluid, and good agreement between simulated and theoretical values was obtained, especially for quadrupole fluids. Simulations are also used to obtain the liquid structure, giving the first few coefficients of the expansion of pair correlation functions in terms of spherical harmonics. Estimates of the triple point temperature to critical temperature ratio are given for some dipole and quadrupole linear fluids. The stability range of the liquid phase of these substances is briefly discussed, and an analysis of the opposite roles of the dipole moment and the molecular elongation on this stability is also given.

  12. Tackling non-linearities with the effective field theory of dark energy and modified gravity

    NASA Astrophysics Data System (ADS)

    Frusciante, Noemi; Papadomanolakis, Georgios

    2017-12-01

    We present the extension of the effective field theory framework to mildly non-linear scales. The effective field theory approach has been successfully applied to the late-time cosmic acceleration phenomenon, and it has been shown to be a powerful method for obtaining predictions about cosmological observables on linear scales. However, mildly non-linear scales need to be consistently considered when testing gravity theories because a large part of the data comes from those scales. Thus, non-linear corrections to the predictions on observables coming from the linear analysis can help in discriminating among different gravity theories. We proceed firstly by identifying the operators which need to be included in the effective field theory Lagrangian in order to go beyond linear order in perturbations, and then we construct the corresponding non-linear action. Moreover, we present the complete recipe to map any single-field dark energy and modified gravity model into the non-linear effective field theory framework by considering a general action in the Arnowitt-Deser-Misner formalism. In order to illustrate this recipe we map the beyond-Horndeski theory and low-energy Hořava gravity into the effective field theory formalism. As a final step we derive the fourth-order action in terms of the curvature perturbation. This allows us to identify the non-linear contributions coming from the linear-order perturbations, which at the next order act like source terms. Moreover, we confirm that the stability requirements, ensuring the positivity of the kinetic term and of the speed of propagation for the scalar mode, are automatically satisfied once the viability of the theory is demanded at the linear level. The approach we present here will make it possible to construct, in a model-independent way, all the relevant predictions on observables at mildly non-linear scales.

  13. First study of the evolution of the SeDeM expert system parameters based on percolation theory: Monitoring of their critical behavior.

    PubMed

    Galdón, Eduardo; Casas, Marta; Gayango, Manuel; Caraballo, Isidoro

    2016-12-01

    A deep understanding of products and processes has become a requirement for pharmaceutical industries following the Quality by Design principles promoted by the regulatory authorities. With this aim, the SeDeM expert system was developed as a useful preformulation tool to predict whether drugs and excipients are likely to be processable by direct compression. The SeDeM system is a step forward in the rational development of a formulation, allowing the normalisation of the rheological parameters and the identification of the weaknesses and strengths of a powder or a powder blend. However, this method is based on the assumption of a linear behavior of disordered systems. As percolation theory has demonstrated, powder blends behave as non-linear systems that can suffer abrupt changes in their properties near geometrical phase transitions of the components. The aim of this paper was to analyze for the first time the evolution of the SeDeM parameters in drug/excipient powder blends from the point of view of percolation theory and to compare the changes predicted by SeDeM with the predictions of percolation theory. For this purpose, powder blends of lactose and theophylline with varying concentrations of the model drug were prepared, and the SeDeM analysis was applied to each blend in order to monitor the evolution of its properties. In parallel, percolation thresholds were estimated for these powder blends, and critical points were found for important rheological parameters such as the powder flow. Finally, the predictions of percolation theory and SeDeM were compared, concluding that percolation theory can complement the SeDeM method for a more accurate estimation of the Design Space. Copyright © 2016 Elsevier B.V. All rights reserved.

  14. Computation of output feedback gains for linear stochastic systems using the Zangwill-Powell Method

    NASA Technical Reports Server (NTRS)

    Kaufman, H.

    1975-01-01

    Because conventional optimal linear regulator theory results in a controller which requires the capability of measuring and/or estimating the entire state vector, it is of interest to consider procedures for computing controls which are restricted to be linear feedback functions of a lower-dimensional output vector and which take into account the presence of measurement noise and process uncertainty. To this end, a stochastic linear model was developed that accounts for process parameter and initial-condition uncertainty, measurement noise, and a restricted number of measurable outputs. Optimization with respect to the corresponding output feedback gains was then performed for both finite and infinite time performance indices, without gradient computation, by using Zangwill's modification of a procedure originally proposed by Powell. Results using a seventh-order process show the proposed procedures to be very effective.
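    The core idea above can be sketched in a few lines: minimize a finite-horizon quadratic cost over a restricted output-feedback gain using Powell's derivative-free method. This is a toy illustration with an invented plant, not the LRC program's actual setup; SciPy's Powell implementation stands in for the Zangwill-Powell procedure.

```python
import numpy as np
from scipy.optimize import minimize

# Toy plant (assumed for illustration): 3 states, 1 input, 1 measured output
A = np.array([[0.9, 0.1, 0.0],
              [0.0, 0.8, 0.2],
              [0.1, 0.0, 0.7]])
B = np.array([[0.0], [0.0], [1.0]])
C = np.array([[1.0, 0.0, 0.0]])   # only the first state is measurable
Q, R = np.eye(3), np.array([[1.0]])

rng = np.random.default_rng(0)
x0s = rng.standard_normal((8, 3))  # ensemble of random initial states

def cost(k_flat, horizon=50):
    """Finite-horizon LQ cost of the output feedback u = -K y, averaged
    over the ensemble of initial states (a proxy for initial uncertainty)."""
    K = k_flat.reshape(1, 1)
    J = 0.0
    for x0 in x0s:
        x = x0.copy()
        for _ in range(horizon):
            u = -K @ (C @ x)
            J += x @ Q @ x + float(u @ R @ u)
            x = A @ x + (B @ u).ravel()
    return J / len(x0s)

# Powell's conjugate-direction search needs no gradient computation
res = minimize(cost, x0=np.zeros(1), method="Powell")
print("output feedback gain:", res.x, "cost:", res.fun)
```

    The same pattern extends to larger gain matrices by flattening K into the decision vector.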

  15. On use of image quality metrics for perceptual blur modeling: image/video compression case

    NASA Astrophysics Data System (ADS)

    Cha, Jae H.; Olson, Jeffrey T.; Preece, Bradley L.; Espinola, Richard L.; Abbott, A. Lynn

    2018-02-01

    Linear system theory is employed to make target acquisition performance predictions for electro-optical/infrared imaging systems where the modulation transfer function (MTF) may be imposed by a nonlinear degradation process. Previous research relying on image quality metric (IQM) methods, which heuristically estimate the perceived MTF, has suggested that an average perceived MTF can be used to model some types of degradation, such as image compression. Here, we discuss the validity of the IQM approach by mathematically analyzing the associated heuristics from the perspective of reliability, robustness, and tractability. Experiments with standard images compressed by x.264 encoding suggest that the compression degradation can be estimated by a perceived MTF within boundaries defined by well-behaved curves with marginal error. Our results confirm that the IQM linearizer methodology provides a credible tool for sensor performance modeling.

  16. Models for the propensity score that contemplate the positivity assumption and their application to missing data and causality.

    PubMed

    Molina, J; Sued, M; Valdora, M

    2018-06-05

    Generalized linear models are often assumed to fit propensity scores, which are used to compute inverse probability weighted (IPW) estimators. To derive the asymptotic properties of IPW estimators, the propensity score is supposed to be bounded away from zero. This condition is known in the literature as strict positivity (or the positivity assumption), and, in practice, when it does not hold, IPW estimators are very unstable and have large variability. Although strict positivity is often assumed, it does not hold when some of the covariates are unbounded. In real data sets, a data-generating process that violates the positivity assumption may lead to wrong inference because of the inaccuracy of the estimates. In this work, we attempt to reconcile the strict positivity condition with the theory of generalized linear models by incorporating an extra parameter, which results in an explicit lower bound for the propensity score. The additional parameter also serves to fulfil the overlap assumption in the causal framework. Copyright © 2018 John Wiley & Sons, Ltd.
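    The effect of an explicit lower bound on the propensity score can be illustrated on synthetic data. The sketch below is a toy example (the data-generating process and the bound δ are invented, and this is not the authors' estimator): bounding the propensity at δ caps the inverse probability weights at 1/δ.

```python
import numpy as np
from scipy.special import expit

rng = np.random.default_rng(1)
n = 5000
x = rng.standard_normal(n)                      # unbounded covariate
p_true = expit(-2.0 + 1.5 * x)                  # propensity can approach 0
t = rng.binomial(1, p_true)                     # treatment indicator
y = 2.0 + 0.5 * x + rng.standard_normal(n)      # outcome (fully simulated;
                                                # only treated units enter the estimator)

def ipw_mean(p):
    """Horvitz-Thompson style IPW estimate of E[y] from treated units."""
    return np.mean(t * y / p)

delta = 0.05
p_bounded = delta + (1.0 - delta) * p_true      # explicit lower bound delta

plain = ipw_mean(p_true)
bounded = ipw_mean(p_bounded)
print(f"unbounded-propensity IPW: {plain:.3f}  (variance-prone weights)")
print(f"bounded-propensity  IPW: {bounded:.3f} (weights capped at 1/{delta})")
```

    The bounded version trades a little bias for much smaller weight variability, which is exactly the tension the paper addresses by building the bound into the propensity model itself.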

  17. State and actuator fault estimation observer design integrated in a riderless bicycle stabilization system.

    PubMed

    Brizuela Mendoza, Jorge Aurelio; Astorga Zaragoza, Carlos Manuel; Zavala Río, Arturo; Pattalochi, Leo; Canales Abarca, Francisco

    2016-03-01

    This paper deals with an observer design for Linear Parameter Varying (LPV) systems with high-order time-varying parameter dependency. The proposed design, considered the main contribution of this paper, corresponds to an observer for the estimation of the actuator fault and the system state, considering measurement noise at the system outputs. The observer gains are computed by extending linear systems theory to polynomial LPV systems, in such a way that the observer preserves the characteristics of the LPV system. As a result, the actuator fault estimate is ready to be used in a Fault Tolerant Control scheme, where the estimated state, with reduced noise, should be used to generate the control law. The effectiveness of the proposed methodology has been tested using a riderless bicycle model with dependency on the translational velocity v, where the control objective corresponds to the stabilization of the system towards the upright position despite the variation of v along the closed-loop system trajectories. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.

  18. From neurons to circuits: linear estimation of local field potentials.

    PubMed

    Rasch, Malte; Logothetis, Nikos K; Kreiman, Gabriel

    2009-11-04

    Extracellular physiological recordings are typically separated into two frequency bands: local field potentials (LFPs) (a circuit property) and spiking multiunit activity (MUA). Recently, there has been increased interest in LFPs because of their correlation with functional magnetic resonance imaging blood oxygenation level-dependent measurements and the possibility of studying local processing and neuronal synchrony. To further understand the biophysical origin of LFPs, we asked whether it is possible to estimate their time course based on the spiking activity from the same electrode or nearby electrodes. We used "signal estimation theory" to show that a linear filter operation on the activity of one or a few neurons can explain a significant fraction of the LFP time course in the macaque monkey primary visual cortex. The linear filter used to estimate the LFPs had a stereotypical shape characterized by a sharp downstroke at negative time lags and a slower positive upstroke for positive time lags. The filter was similar across different neocortical regions and behavioral conditions, including spontaneous activity and visual stimulation. The estimations had a spatial resolution of approximately 1 mm and a temporal resolution of approximately 200 ms. By considering a causal filter, we observed a temporal asymmetry such that the positive time lags in the filter contributed more to the LFP estimation than the negative time lags. Additionally, we showed that spikes occurring within approximately 10 ms of spikes from nearby neurons yielded better estimation accuracies than nonsynchronous spikes. In summary, our results suggest that at least some circuit-level local properties of the field potentials can be predicted from the activity of one or a few neurons.
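    The linear-filter estimation described above can be sketched on synthetic data (the spike train, ground-truth filter shape, and noise level below are invented for illustration, not the monkey recordings): lagged spike regressors are fit to the LFP-like signal by least squares.

```python
import numpy as np

rng = np.random.default_rng(2)
T, L = 2000, 41                      # samples; filter length (symmetric lags)

# Synthetic "spike train" and a ground-truth filter with a sharp downstroke
# followed by a slower positive upstroke, mimicking the shape reported above
spikes = rng.binomial(1, 0.1, T).astype(float)
lags = np.arange(L) - L // 2
true_filt = -np.exp(-((lags + 5) / 3.0) ** 2) + 0.6 * np.exp(-((lags - 8) / 6.0) ** 2)
lfp = np.convolve(spikes, true_filt, mode="same") + 0.1 * rng.standard_normal(T)

# Design matrix of lagged spike counts; least-squares filter estimate
X = np.column_stack([np.roll(spikes, k) for k in lags])
w, *_ = np.linalg.lstsq(X, lfp, rcond=None)

lfp_hat = X @ w
r = np.corrcoef(lfp_hat, lfp)[0, 1]
print(f"fraction of LFP variance explained: r^2 = {r**2:.2f}")
```

    Restricting `lags` to non-negative values would give the causal-filter variant discussed in the abstract.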

  19. From neurons to circuits: linear estimation of local field potentials

    PubMed Central

    Rasch, Malte; Logothetis, Nikos K.; Kreiman, Gabriel

    2010-01-01

    Extracellular physiological recordings are typically separated into two frequency bands: local field potentials (LFPs, a circuit property) and spiking multi-unit activity (MUA). There has been increased interest in LFPs due to their correlation with fMRI measurements and the possibility of studying local processing and neuronal synchrony. To further understand the biophysical origin of LFPs, we asked whether it is possible to estimate their time course based on the spiking activity from the same or nearby electrodes. We used Signal Estimation Theory to show that a linear filter operation on the activity of one/few neurons can explain a significant fraction of the LFP time course in the macaque primary visual cortex. The linear filter used to estimate the LFPs had a stereotypical shape characterized by a sharp downstroke at negative time lags and a slower positive upstroke for positive time lags. The filter was similar across neocortical regions and behavioral conditions including spontaneous activity and visual stimulation. The estimations had a spatial resolution of ~1 mm and a temporal resolution of ~200 ms. By considering a causal filter, we observed a temporal asymmetry such that the positive time lags in the filter contributed more to the LFP estimation than negative time lags. Additionally, we showed that spikes occurring within ~10 ms of spikes from nearby neurons yielded better estimation accuracies than nonsynchronous spikes. In sum, our results suggest that at least some circuit-level local properties of the field potentials can be predicted from the activity of one or a few neurons. PMID:19889990

  20. Modern control concepts in hydrology. [parameter identification in adaptive stochastic control approach

    NASA Technical Reports Server (NTRS)

    Duong, N.; Winn, C. B.; Johnson, G. R.

    1975-01-01

    Two approaches to an identification problem in hydrology are presented, based upon concepts from modern control and estimation theory. The first approach treats the identification of unknown parameters in a hydrologic system subject to noisy inputs as an adaptive linear stochastic control problem; the second approach alters the model equation to account for the random part in the inputs, and then uses a nonlinear estimation scheme to estimate the unknown parameters. Both approaches use state-space concepts. The identification schemes are sequential and adaptive and can handle either time-invariant or time-dependent parameters. They are used to identify parameters in the Prasad model of rainfall-runoff. The results obtained are encouraging and confirm the results from two previous studies; the first using numerical integration of the model equation along with a trial-and-error procedure, and the second using a quasi-linearization technique. The proposed approaches offer a systematic way of analyzing the rainfall-runoff process when the input data are imbedded in noise.
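    The sequential, adaptive flavor of such identification schemes can be illustrated with a plain recursive least-squares update (a generic sketch with made-up parameters and regressors, not the Prasad rainfall-runoff formulation):

```python
import numpy as np

rng = np.random.default_rng(3)
theta_true = np.array([0.7, 0.2])      # unknown "model parameters"
P = np.eye(2) * 100.0                  # parameter covariance (large = vague prior)
theta = np.zeros(2)                    # initial estimate

for _ in range(500):
    phi = rng.standard_normal(2)                        # regressor (noisy inputs)
    y = phi @ theta_true + 0.1 * rng.standard_normal()  # noisy observation
    # Sequential update: gain, estimate, covariance
    K = P @ phi / (1.0 + phi @ P @ phi)
    theta = theta + K * (y - phi @ theta)
    P = P - np.outer(K, phi @ P)

print("estimated parameters:", np.round(theta, 3))
```

    Because the update is recursive, the same loop tracks slowly time-dependent parameters if a forgetting factor is introduced.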

  1. Improvements in aircraft extraction programs

    NASA Technical Reports Server (NTRS)

    Balakrishnan, A. V.; Maine, R. E.

    1976-01-01

    Flight data from an F-8 Corsair and a Cessna 172 was analyzed to demonstrate specific improvements in the LRC parameter extraction computer program. The Cramer-Rao bounds were shown to provide a satisfactory relative measure of goodness of parameter estimates. It was not used as an absolute measure due to an inherent uncertainty within a multiplicative factor, traced in turn to the uncertainty in the noise bandwidth in the statistical theory of parameter estimation. The measure was also derived on an entirely nonstatistical basis, yielding thereby also an interpretation of the significance of off-diagonal terms in the dispersion matrix. The distinction between coefficients as linear and non-linear was shown to be important in its implication to a recommended order of parameter iteration. Techniques of improving convergence generally, were developed, and tested out on flight data. In particular, an easily implemented modification incorporating a gradient search was shown to improve initial estimates and thus remove a common cause for lack of convergence.

  2. Modern control concepts in hydrology

    NASA Technical Reports Server (NTRS)

    Duong, N.; Johnson, G. R.; Winn, C. B.

    1974-01-01

    Two approaches to an identification problem in hydrology are presented based upon concepts from modern control and estimation theory. The first approach treats the identification of unknown parameters in a hydrologic system subject to noisy inputs as an adaptive linear stochastic control problem; the second approach alters the model equation to account for the random part in the inputs, and then uses a nonlinear estimation scheme to estimate the unknown parameters. Both approaches use state-space concepts. The identification schemes are sequential and adaptive and can handle either time invariant or time dependent parameters. They are used to identify parameters in the Prasad model of rainfall-runoff. The results obtained are encouraging and conform with results from two previous studies; the first using numerical integration of the model equation along with a trial-and-error procedure, and the second, by using a quasi-linearization technique. The proposed approaches offer a systematic way of analyzing the rainfall-runoff process when the input data are imbedded in noise.

  3. Analysis of high aspect ratio jet flap wings of arbitrary geometry.

    NASA Technical Reports Server (NTRS)

    Lissaman, P. B. S.

    1973-01-01

    Paper presents a design technique for rapidly computing lift, induced drag, and spanwise loading of unswept jet flap wings of arbitrary thickness, chord, twist, blowing, and jet angle, including discontinuities. Linear theory is used, extending Spence's method for elliptically loaded jet flap wings. Curves for uniformly blown rectangular wings are presented for direct performance estimation. Arbitrary planforms require a simple computer program. Method of reducing wing to equivalent stretched, twisted, unblown planform for hand calculation is also given. Results correlate with limited existing data, and show lifting line theory is reasonable down to aspect ratios of 5.

  4. H∞ state estimation for discrete-time memristive recurrent neural networks with stochastic time-delays

    NASA Astrophysics Data System (ADS)

    Liu, Hongjian; Wang, Zidong; Shen, Bo; Alsaadi, Fuad E.

    2016-07-01

    This paper deals with the robust H∞ state estimation problem for a class of memristive recurrent neural networks with stochastic time-delays. The stochastic time-delays under consideration are governed by a Bernoulli-distributed stochastic sequence. The purpose of the addressed problem is to design a robust state estimator such that the dynamics of the estimation error is exponentially stable in the mean square and the prescribed H∞ performance constraint is met. By utilizing difference inclusion theory and choosing a proper Lyapunov-Krasovskii functional, the existence condition for the desired estimator is derived. Based on this condition, the explicit expression of the estimator gain is given in terms of the solution to a linear matrix inequality. Finally, a numerical example is employed to demonstrate the effectiveness and applicability of the proposed estimation approach.
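    The paper obtains its gain from a linear matrix inequality; as a simpler, hedged stand-in, the sketch below computes a steady-state estimator gain from a discrete algebraic Riccati equation for an invented two-state linear system and checks that the error dynamics are stable (this is not the H∞ design of the paper, just the common structure of gain-based state estimation).

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Toy discrete-time system (assumed for illustration)
A = np.array([[0.8, 0.1], [0.05, 0.9]])
C = np.array([[1.0, 0.0]])
Qw = 0.01 * np.eye(2)    # process noise covariance
Rv = 0.1 * np.eye(1)     # measurement noise covariance

# Steady-state Kalman gain via the dual Riccati equation
P = solve_discrete_are(A.T, C.T, Qw, Rv)
L = A @ P @ C.T @ np.linalg.inv(C @ P @ C.T + Rv)

# Estimator xhat+ = A xhat + L (y - C xhat); error dynamics e+ = (A - L C) e
eigs = np.linalg.eigvals(A - L @ C)
print("estimator gain L:", L.ravel())
print("error-dynamics spectral radius:", np.abs(eigs).max())
```

    In the LMI formulation the gain is instead a decision variable constrained so that a quadratic Lyapunov function certifies the same kind of stable error dynamics.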

  5. First-principles simulations of doping-dependent mesoscale screening of adatoms in graphene

    NASA Astrophysics Data System (ADS)

    Mostofi, Arash; Corsetti, Fabiano; Wong, Dillon; Crommie, Michael; Lischner, Johannes

    Adsorbed atoms and molecules play an important role in controlling and tuning the functional properties of 2D materials. Understanding and predicting this phenomenon from theory is challenging because of the need to capture both the local chemistry of the adsorbate-substrate interaction and its complex interplay with the long-range screening response of the substrate. To address this challenge, we have developed a first-principles multi-scale approach that combines linear-scaling density-functional theory, continuum screening theory and large-scale tight-binding simulations. Focussing on the case of a calcium adatom on graphene, we draw comparison between the effect of (i) non-linearity, (ii) intraband and interband transitions, and (iii) the exchange-correlation potential, thus providing insight into the relative importance of these different factors on the screening response. We also determine the charge transfer from the adatom to the graphene substrate (the key parameter used in continuum screening models), showing it to be significantly larger than previous estimates. AM and FC acknowledge support of the EPSRC under Grant EP/J015059/1, and JL under Grant EP/N005244/1.

  6. Decision analysis with cumulative prospect theory.

    PubMed

    Bayoumi, A M; Redelmeier, D A

    2000-01-01

    Individuals sometimes express preferences that do not follow expected utility theory. Cumulative prospect theory adjusts for some phenomena by using decision weights rather than probabilities when analyzing a decision tree. The authors examined how probability transformations from cumulative prospect theory might alter a decision analysis of a prophylactic therapy in AIDS, eliciting utilities from patients with HIV infection (n = 75) and calculating expected outcomes using an established Markov model. They next focused on transformations of three sets of probabilities: 1) the probabilities used in calculating standard-gamble utility scores; 2) the probabilities of being in discrete Markov states; 3) the probabilities of transitioning between Markov states. The same prophylaxis strategy yielded the highest quality-adjusted survival under all transformations. For the average patient, prophylaxis appeared relatively less advantageous when standard-gamble utilities were transformed. Prophylaxis appeared relatively more advantageous when state probabilities were transformed and relatively less advantageous when transition probabilities were transformed. Transforming standard-gamble and transition probabilities simultaneously decreased the gain from prophylaxis by almost half. Sensitivity analysis indicated that even near-linear probability weighting transformations could substantially alter quality-adjusted survival estimates. The magnitude of benefit estimated in a decision-analytic model can change significantly after using cumulative prospect theory. Incorporating cumulative prospect theory into decision analysis can provide a form of sensitivity analysis and may help describe when people deviate from expected utility theory.
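    The probability transformation at the heart of the analysis can be sketched as follows (the gamble and γ value are illustrative; the weighting function is the standard Tversky-Kahneman form, which may differ from the exact specification used by the authors):

```python
import numpy as np

def w(p, gamma=0.61):
    """Tversky-Kahneman probability weighting function."""
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

# A simple gamble over gains: outcomes (utilities) sorted best-first
outcomes = np.array([1.0, 0.5, 0.0])
probs = np.array([0.2, 0.5, 0.3])

# Decision weights = differences of the transformed cumulative distribution
cum = np.cumsum(probs)                     # P(outcome at least this good)
weights = np.diff(np.concatenate(([0.0], w(cum))))

eu = probs @ outcomes                      # expected utility
cpt = weights @ outcomes                   # cumulative-prospect value
print(f"expected utility {eu:.3f} vs CPT value {cpt:.3f}")
```

    Applying the same transformation to standard-gamble or Markov transition probabilities, as in the study, simply changes which probabilities are fed through w.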

  7. Variable Selection with Prior Information for Generalized Linear Models via the Prior LASSO Method.

    PubMed

    Jiang, Yuan; He, Yunxiao; Zhang, Heping

    LASSO is a popular statistical tool often used in conjunction with generalized linear models that can simultaneously select variables and estimate parameters. When there are many variables of interest, as in current biological and biomedical studies, the power of LASSO can be limited. Fortunately, a great deal of biological and biomedical data has been collected, and it may contain useful information about the importance of certain variables. This paper proposes an extension of LASSO, namely prior LASSO (pLASSO), to incorporate that prior information into penalized generalized linear models. The goal is achieved by adding to the LASSO criterion function an additional measure of the discrepancy between the prior information and the model. For linear regression, the whole solution path of the pLASSO estimator can be found with a procedure similar to the Least Angle Regression (LARS). Asymptotic theories and simulation results show that pLASSO provides significant improvement over LASSO when the prior information is relatively accurate. When the prior information is less reliable, pLASSO shows great robustness to the misspecification. We illustrate the application of pLASSO using a real data set from a genome-wide association study.
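    A minimal sketch of the idea for linear regression (the quadratic prior-discrepancy term, tuning constants, and data below are illustrative assumptions, not the exact pLASSO criterion): proximal gradient descent on a lasso objective augmented with a penalty pulling the fit toward a prior guess.

```python
import numpy as np

rng = np.random.default_rng(4)
n, p = 200, 10
X = rng.standard_normal((n, p))
beta_true = np.zeros(p); beta_true[:3] = [2.0, -1.5, 1.0]
y = X @ beta_true + 0.5 * rng.standard_normal(n)

beta_prior = np.zeros(p); beta_prior[:3] = [1.8, -1.2, 0.8]  # prior information

lam, eta = 0.1, 1.0    # sparsity weight; trust in the prior
step = 1.0 / (np.linalg.eigvalsh(X.T @ X / n).max() + eta)

beta = np.zeros(p)
for _ in range(2000):   # proximal gradient (ISTA) on the penalized criterion
    grad = X.T @ (X @ beta - y) / n + eta * (beta - beta_prior)
    z = beta - step * grad
    beta = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold

print("estimate:", np.round(beta, 2))
```

    Setting eta to zero recovers ordinary lasso; increasing it shrinks the estimate toward the prior, which is the trade-off the paper studies.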

  8. Can a minimalist model of wind forced baroclinic Rossby waves produce reasonable results?

    NASA Astrophysics Data System (ADS)

    Watanabe, Wandrey B.; Polito, Paulo S.; da Silveira, Ilson C. A.

    2016-04-01

    Linear theory predicts that Rossby waves are the large-scale mechanism of adjustment to perturbations of the geophysical fluid. Satellite measurements of sea level anomaly (SLA) have provided strong evidence of the existence of these waves. Recent studies suggest that the variability in the altimeter records is mostly due to mesoscale nonlinear eddies, challenging the original interpretation of westward propagating features as Rossby waves. The objective of this work is to test whether a classic linear dynamic model is a reasonable explanation for the observed SLA. A linear, reduced-gravity, non-dispersive Rossby wave model is used to estimate the SLA forced by direct and remote wind stress. Correlations between model results and observations reach 0.88. The best agreement is in the tropical region of all ocean basins. These correlations decrease towards insignificance in mid-latitudes. The relative contributions of eastern boundary (remote) forcing and local wind forcing to the generation of Rossby waves are also estimated and suggest that the main wave-forming mechanism is the remote forcing. The results suggest that linear long baroclinic Rossby wave dynamics explain a significant part of the annual SLA variability, at least in the tropical oceans.
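    Along a latitude circle, the non-dispersive long-wave dynamics reduce to one-way westward advection of SLA driven by wind forcing. A minimal numerical sketch (grid, phase speed, and forcing all invented for illustration, not the paper's configuration) is:

```python
import numpy as np

# 1-D long (non-dispersive) Rossby wave: eta_t - c * eta_x = F(x, t),
# propagating westward (c > 0) away from the eastern boundary.
nx, nt = 200, 2000
dx, dt = 1.0, 0.4
c = 1.0                                   # phase speed (CFL: c*dt/dx < 1)
x = np.arange(nx) * dx
eta = np.zeros(nx)

def forcing(t):
    """Hypothetical periodic wind-stress-curl forcing, localized in x."""
    return 0.01 * np.sin(2 * np.pi * t / 500.0) * np.exp(-((x - 150) / 20.0) ** 2)

for n in range(nt):
    # Upwind scheme for westward advection; eastern boundary held at zero
    eta[:-1] = eta[:-1] + c * dt / dx * (eta[1:] - eta[:-1]) + dt * forcing(n * dt)[:-1]
    eta[-1] = 0.0

print("SLA range after spin-up:", float(eta.min()), float(eta.max()))
```

    Replacing the zero eastern-boundary condition with a prescribed signal is how the remote (boundary) forcing contribution would enter such a model.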

  9. Control of discrete time systems based on recurrent Super-Twisting-like algorithm.

    PubMed

    Salgado, I; Kamal, S; Bandyopadhyay, B; Chairez, I; Fridman, L

    2016-09-01

    Most of the research in sliding mode theory has been carried out in continuous time to solve estimation and control problems. In discrete time, however, results on high-order sliding modes are less developed. In this paper, a discrete-time super-twisting-like algorithm (DSTA) is proposed to solve control and state estimation problems. The stability proof is developed in terms of the discrete-time Lyapunov approach and linear matrix inequality theory. The system trajectories are ultimately bounded inside a small region that depends on the sampling period. The DSTA was tested in simulation as a controller for a Furuta pendulum and for a DC motor supplied by a DSTA signal differentiator. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
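    A minimal discrete-time sketch of a super-twisting-like differentiator (a plain Euler discretization with textbook gains; the parameters are illustrative and not the paper's DSTA design) shows the ultimately bounded behavior described above:

```python
import numpy as np

dt = 1e-3
t = np.arange(0.0, 5.0, dt)
f = np.sin(2 * np.pi * 0.5 * t)            # signal whose derivative we estimate
Lb = 15.0                                   # bound on |f''| (here pi^2 < 15)
lam1, lam2 = 1.5 * np.sqrt(Lb), 1.1 * Lb    # standard super-twisting gains

z0, z1 = 0.0, 0.0
deriv = np.empty_like(t)
for k, fk in enumerate(f):
    e = z0 - fk
    # Super-twisting structure: sqrt term plus integral of the sign
    z0 += dt * (-lam1 * np.sqrt(abs(e)) * np.sign(e) + z1)
    z1 += dt * (-lam2 * np.sign(e))
    deriv[k] = z1

true_deriv = np.pi * np.cos(2 * np.pi * 0.5 * t)
err = np.abs(deriv[len(t) // 2:] - true_deriv[len(t) // 2:]).max()
print(f"max derivative error after transient: {err:.3f}")
```

    The residual error shrinks with the sampling period dt, mirroring the "small region dependent on the sampling period" in the abstract.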

  10. Cosmic velocity-gravity relation in redshift space

    NASA Astrophysics Data System (ADS)

    Colombi, Stéphane; Chodorowski, Michał J.; Teyssier, Romain

    2007-02-01

    We propose a simple way to estimate the parameter β ~= Ω0.6/b from 3D galaxy surveys, where Ω is the non-relativistic matter-density parameter of the Universe and b is the bias between the galaxy distribution and the total matter distribution. Our method consists in measuring the relation between the cosmological velocity and gravity fields, and thus requires peculiar velocity measurements. The relation is measured directly in redshift space, so there is no need to reconstruct the density field in real space. In linear theory, the radial components of the gravity and velocity fields in redshift space are expected to be tightly correlated, with a slope given by the distant observer approximation of linear theory. We test this relation extensively using controlled numerical experiments based on a cosmological N-body simulation. To perform the measurements, we propose a new and rather simple adaptive interpolation scheme to estimate the velocity and the gravity field on a grid. One of the most striking results is that non-linear effects, including `fingers of God', affect mainly the tails of the joint probability distribution function (PDF) of the velocity and gravity fields: the 1-1.5 σ region around the maximum of the PDF is dominated by the linear theory regime, both in real and redshift space. This is understood explicitly by using the spherical collapse model as a proxy for non-linear dynamics. Applications of the method to real galaxy catalogues are discussed, including a preliminary investigation on homogeneous (volume-limited) `galaxy' samples extracted from the simulation with simple prescriptions based on halo and substructure identification, to quantify the effects of the bias between the galaxy distribution and the total matter distribution, as well as the effects of shot noise.

  11. Accurate frequency domain measurement of the best linear time-invariant approximation of linear time-periodic systems including the quantification of the time-periodic distortions

    NASA Astrophysics Data System (ADS)

    Louarroudi, E.; Pintelon, R.; Lataire, J.

    2014-10-01

    Time-periodic (TP) phenomena occurring, for instance, in wind turbines, helicopters, anisotropic shaft-bearing systems, and cardiovascular/respiratory systems, are often not addressed when classical frequency response function (FRF) measurements are performed. As the traditional FRF concept is based on linear time-invariant (LTI) system theory, it is only approximately valid for systems with varying dynamics. Accordingly, the quantification of any deviation from this ideal LTI framework is more than welcome. The “measure of deviation” allows us to define the notion of the best LTI (BLTI) approximation, which yields the best - in the mean-square sense - LTI description of a linear time-periodic (LTP) system. By taking the TP effects into consideration, it is shown in this paper that the variability of the BLTI measurement can be reduced significantly compared with that of classical FRF estimators. From a single experiment, the proposed identification methods can handle (non-)linear time-periodic [(N)LTP] systems in open loop with a quantification of (i) the noise and/or the NL distortions, (ii) the TP distortions and (iii) the transient (leakage) errors. Besides, a geometrical interpretation of the BLTI approximation is provided, leading to a framework called vector FRF analysis. The theory presented is supported by numerical simulations as well as real measurements mimicking the well-known mechanical Mathieu oscillator.

  12. A simple approach to nonlinear estimation of physical systems

    USGS Publications Warehouse

    Christakos, G.

    1988-01-01

    Recursive algorithms for estimating the states of nonlinear physical systems are developed. This requires some key hypotheses regarding the structure of the underlying processes. Members of this class of random processes have several desirable properties for the nonlinear estimation of random signals. An assumption is made about the form of the estimator, which may then accommodate a wide range of applications. Under this assumption, the estimation algorithm is mathematically suboptimal but effective and computationally attractive. It compares favorably to Taylor series-type filters, nonlinear filters which approximate the probability density by Edgeworth or Gram-Charlier series, as well as to conventional statistical linearization-type estimators. To link theory with practice, some numerical results for a simulated system are presented, in which the responses from the proposed and the extended Kalman algorithms are compared. © 1988.
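    For reference, the extended Kalman baseline mentioned above can be sketched in a few lines for a scalar system (the nonlinear dynamics and measurement below are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(5)
qw, rv = 0.01, 0.1                      # process / measurement noise variances

def f(x): return 0.9 * x + 0.2 * np.sin(x)     # nonlinear dynamics
def h(x): return x + 0.1 * x**2                # nonlinear measurement

x, xhat, P = 1.5, 0.0, 1.0
for _ in range(100):
    # True system and noisy measurement
    x = f(x) + np.sqrt(qw) * rng.standard_normal()
    y = h(x) + np.sqrt(rv) * rng.standard_normal()
    # EKF: linearize f and h about the current estimate
    F = 0.9 + 0.2 * np.cos(xhat)
    xpred, Ppred = f(xhat), F * P * F + qw
    H = 1.0 + 0.2 * xpred
    K = Ppred * H / (H * Ppred * H + rv)
    xhat = xpred + K * (y - h(xpred))
    P = (1 - K * H) * Ppred

print(f"true state {x:.3f}, EKF estimate {xhat:.3f}")
```

    The recursive estimators of the abstract keep this predict-correct structure but replace the Taylor linearization with the assumed estimator form.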

  13. Assessing the performance of dynamical trajectory estimates

    NASA Astrophysics Data System (ADS)

    Bröcker, Jochen

    2014-06-01

    Estimating trajectories and parameters of dynamical systems from observations is a problem frequently encountered in various branches of science; geophysicists, for example, refer to this problem as data assimilation. Unlike in estimation problems with exchangeable observations, in data assimilation the observations cannot easily be divided into separate sets for estimation and validation; this creates serious problems, since simply using the same observations for estimation and validation might result in overly optimistic performance assessments. To circumvent this problem, a result is presented which allows us to estimate this optimism, thus allowing for a more realistic performance assessment in data assimilation. The presented approach becomes particularly simple for data assimilation methods employing a linear error feedback (such as synchronization schemes, nudging, incremental 3DVAR and 4DVar, and various Kalman filter approaches). Numerical examples considering a high gain observer confirm the theory.

  14. Assessing the performance of dynamical trajectory estimates.

    PubMed

    Bröcker, Jochen

    2014-06-01

    Estimating trajectories and parameters of dynamical systems from observations is a problem frequently encountered in various branches of science; geophysicists, for example, refer to this problem as data assimilation. Unlike in estimation problems with exchangeable observations, in data assimilation the observations cannot easily be divided into separate sets for estimation and validation; this creates serious problems, since simply using the same observations for estimation and validation might result in overly optimistic performance assessments. To circumvent this problem, a result is presented which allows us to estimate this optimism, thus allowing for a more realistic performance assessment in data assimilation. The presented approach becomes particularly simple for data assimilation methods employing a linear error feedback (such as synchronization schemes, nudging, incremental 3DVAR and 4DVar, and various Kalman filter approaches). Numerical examples considering a high gain observer confirm the theory.

  15. Non-linear interaction of a detonation/vorticity wave

    NASA Technical Reports Server (NTRS)

    Lasseigne, D. G.; Jackson, T. L.; Hussaini, M. Y.

    1991-01-01

    The interaction of an oblique, overdriven detonation wave with a vorticity disturbance is investigated by a direct two-dimensional numerical simulation using a multi-domain, finite-difference solution of the compressible Euler equations. The results are compared to those of linear theory, which predict that the effect of exothermicity on the interaction is relatively small except possibly near a critical angle where linear theory no longer holds. It is found that the steady-state computational results agree with the results of linear theory. However, for cases with incident angle near the critical angle, moderate disturbance amplitudes, and/or sudden transient encounter with a disturbance, the effects of exothermicity are more pronounced than predicted by linear theory. Finally, it is found that linear theory correctly determines the critical angle.

  16. Estimating the effect of treatment rate changes when treatment benefits are heterogeneous: antibiotics and otitis media.

    PubMed

    Park, Tae-Ryong; Brooks, John M; Chrischilles, Elizabeth A; Bergus, George

    2008-01-01

    Contrast methods to assess the health effects of a treatment rate change when treatment benefits are heterogeneous across patients. Antibiotic prescribing for children with otitis media (OM) in Iowa Medicaid is the empirical example. Instrumental variable (IV) and linear probability model (LPM) methods are used to estimate the effect of antibiotic treatments on cure probabilities for children with OM in Iowa Medicaid. Local-area physician supply per capita is the instrument in the IV models. Estimates are contrasted in terms of their ability to make inferences for patients whose treatment choices may be affected by a change in population treatment rates. The instrument was positively related to the probability of being prescribed an antibiotic. LPM estimates showed a positive effect of antibiotics on OM patient cure probability, while IV estimates showed no relationship between antibiotics and patient cure probability. Linear probability model estimation yields the average effects of the treatment on patients who were treated. IV estimation yields the average effects for patients whose treatment choices were affected by the instrument. As antibiotic treatment effects are heterogeneous across OM patients, our estimates from these approaches are aligned with clinical evidence and theory. The average estimate for treated patients (higher severity) from the LPM model is greater than estimates for patients whose treatment choices are affected by the instrument (lower severity) from the IV models. Based on our IV estimates, it appears that lowering antibiotic use in OM patients in Iowa Medicaid did not result in lost cures.
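
    A toy simulation (hypothetical numbers, not the Iowa Medicaid data) makes the contrast concrete: when only high-severity patients benefit from treatment and the instrument shifts treatment only for low-severity patients, the LPM contrast recovers the effect on the treated, while the IV (Wald) estimate recovers the near-zero effect for the marginal patients.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# Severity drives the heterogeneity: high-severity children benefit
# from antibiotics (+0.3 cure probability), low-severity do not.
severity = rng.binomial(1, 0.5, n)
benefit = np.where(severity == 1, 0.3, 0.0)

# High-severity children are always treated; low-severity children are
# treated only where the instrument z (physician supply) is high.
z = rng.binomial(1, 0.5, n)
d = np.where(severity == 1, 1, z)

# Cure outcome: common baseline probability plus heterogeneous benefit.
y = rng.binomial(1, 0.4 + benefit * d)

ols = y[d == 1].mean() - y[d == 0].mean()            # LPM-style contrast
wald = ((y[z == 1].mean() - y[z == 0].mean())
        / (d[z == 1].mean() - d[z == 0].mean()))     # IV (Wald) estimate
print(ols, wald)   # ols near 0.2 (effect on the treated), wald near 0
```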

  17. Log-normal frailty models fitted as Poisson generalized linear mixed models.

    PubMed

    Hirsch, Katharina; Wienke, Andreas; Kuss, Oliver

    2016-12-01

    The equivalence of a survival model with a piecewise constant baseline hazard function and a Poisson regression model has been known for decades. As shown in recent studies, this equivalence carries over to clustered survival data: A frailty model with a log-normal frailty term can be interpreted and estimated as a generalized linear mixed model with a binary response, a Poisson likelihood, and a specific offset. Proceeding this way, statistical theory and software for generalized linear mixed models are readily available for fitting frailty models. This gain in flexibility comes at the small price of (1) having to fix the number of pieces for the baseline hazard in advance and (2) having to "explode" the data set by the number of pieces. In this paper we extend the simulations of former studies by using a more realistic baseline hazard (Gompertz) and by comparing the model under consideration with competing models. Furthermore, the SAS macro %PCFrailty is introduced to apply the Poisson generalized linear mixed approach to frailty models. The simulations show good results for the shared frailty model. Our new %PCFrailty macro provides proper estimates, especially in the case of 4 events per piece. The suggested Poisson generalized linear mixed approach for log-normal frailty models based on the %PCFrailty macro provides several advantages in the analysis of clustered survival data with respect to more flexible modelling of fixed and random effects, exact (in the sense of non-approximate) maximum likelihood estimation, and standard errors and different types of confidence intervals for all variance parameters. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
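
    The "explosion" step described above can be sketched as follows (a generic piecewise-exponential data expansion, not the %PCFrailty macro itself): each subject's follow-up is split at the chosen cut points into one pseudo-observation per interval at risk, carrying an exposure time, whose logarithm becomes the Poisson offset, and a 0/1 event indicator.

```python
import numpy as np

def explode(time, event, cuts):
    """Split each subject's follow-up at the cut points.

    Returns one pseudo-observation per (subject, interval at risk):
    (subject id, interval index, exposure time, 0/1 event indicator).
    A Poisson model then uses log(exposure) as its offset.
    """
    rows = []
    edges = np.concatenate(([0.0], np.asarray(cuts, dtype=float)))
    for i, (t, d) in enumerate(zip(time, event)):
        for j in range(len(edges) - 1):
            lo, hi = edges[j], edges[j + 1]
            if t <= lo:            # subject no longer at risk here
                break
            rows.append((i, j, min(t, hi) - lo, int(d and t <= hi)))
    return rows

rows = explode(time=[0.5, 2.3, 4.0], event=[1, 0, 1], cuts=[1.0, 3.0, 5.0])
for row in rows:
    print(row)
```

    The exploded rows can then be handed to any Poisson GLM/GLMM routine with a piece-specific intercept and log(exposure) offset.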

  18. Modern digital flight control system design for VTOL aircraft

    NASA Technical Reports Server (NTRS)

    Broussard, J. R.; Berry, P. W.; Stengel, R. F.

    1979-01-01

    Methods for and results from the design and evaluation of a digital flight control system (DFCS) for a CH-47B helicopter are presented. The DFCS employed proportional-integral control logic to provide rapid, precise response to automatic or manual guidance commands while following conventional or spiral-descent approach paths. It contained altitude- and velocity-command modes, and it adapted to varying flight conditions through gain scheduling. Extensive use was made of linear systems analysis techniques. The DFCS was designed using linear-optimal estimation and control theory, and the effects of gain scheduling were assessed by examination of closed-loop eigenvalues and time responses.

  19. Linear and Non-Linear Dielectric Response of Periodic Systems from Quantum Monte Carlo

    NASA Astrophysics Data System (ADS)

    Umari, Paolo

    2006-03-01

    We present a novel approach that makes it possible to calculate the dielectric response of periodic systems in the quantum Monte Carlo formalism. We employ a many-body generalization of the electric enthalpy functional, where the coupling with the field is expressed via the Berry-phase formulation of the macroscopic polarization. A self-consistent local Hamiltonian then determines the ground-state wavefunction, allowing for accurate diffusion quantum Monte Carlo calculations in which the polarization's fixed point is estimated from the average over an iterative sequence. The polarization is sampled through forward-walking. This approach has been validated for the case of the polarizability of an isolated hydrogen atom, and then applied to a periodic system. We then calculate the linear susceptibility and second-order hyper-susceptibility of molecular-hydrogen chains with different bond-length alternations, and assess the quality of nodal surfaces derived from density-functional theory or from Hartree-Fock. The results found are in excellent agreement with the best estimates obtained from the extrapolation of quantum-chemistry calculations. P. Umari, A. J. Williamson, G. Galli, and N. Marzari, Phys. Rev. Lett. 95, 207602 (2005).

  20. Techniques for the estimation of leaf area index using spectral data

    NASA Technical Reports Server (NTRS)

    Badhwar, G. D.; Shen, S. S.

    1984-01-01

    Based on the radiative transport theory of a homogeneous canopy, a new approach is developed for obtaining transformations of spectral data used to estimate leaf area index (LAI). The transformations, which are obtained without any ground knowledge of LAI, show low sensitivity to soil variability and are linearly related to LAI through relationships that are predictable from leaf reflectance and transmittance properties and canopy reflectance models. Evaluation of the SAIL (scattering by arbitrarily inclined leaves) model is considered. Using only nadir view data, results obtained on winter and spring wheat and corn crops are presented.

  1. Second-Order Two-Sided Estimates in Nonlinear Elliptic Problems

    NASA Astrophysics Data System (ADS)

    Cianchi, Andrea; Maz'ya, Vladimir G.

    2018-05-01

    Best possible second-order regularity is established for solutions to p-Laplacian type equations with p ∈ (1, ∞) and a square-integrable right-hand side. Our results provide a nonlinear counterpart of the classical L²-coercivity theory for linear problems, which is missing in the existing literature. Both local and global estimates are obtained. The latter apply to solutions to either Dirichlet or Neumann boundary value problems. Minimal regularity on the boundary of the domain is required, although our conclusions are new even for smooth domains. If the domain is convex, no regularity of its boundary is needed at all.

  2. Quantum Hamiltonian identification from measurement time traces.

    PubMed

    Zhang, Jun; Sarovar, Mohan

    2014-08-22

    Precise identification of parameters governing quantum processes is a critical task for quantum information and communication technologies. In this Letter, we consider a setting where system evolution is determined by a parametrized Hamiltonian, and the task is to estimate these parameters from temporal records of a restricted set of system observables (time traces). Based on the notion of system realization from linear systems theory, we develop a constructive algorithm that provides estimates of the unknown parameters directly from these time traces. We illustrate the algorithm and its robustness to measurement noise by applying it to a one-dimensional spin chain model with variable couplings.
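
    The setting can be illustrated with a deliberately simplified stand-in for the authors' realization-based algorithm: for a single qubit with H = ω σ_x prepared in |0⟩, the time trace of one observable is ⟨σ_z(t)⟩ = cos(2ωt), so the Hamiltonian parameter ω can be recovered from a noisy trace by least squares (a plain grid search here; the Letter's constructive algorithm is more general).

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulate the time trace <sigma_z>(t) = cos(2*omega*t) for H = omega*sigma_x
# acting on |0>, with additive measurement noise.
omega_true = 0.7
t = np.linspace(0.0, 20.0, 400)
trace = np.cos(2 * omega_true * t) + 0.05 * rng.standard_normal(t.size)

# Recover omega by least squares over a parameter grid.
grid = np.linspace(0.1, 2.0, 2000)
errors = [np.sum((trace - np.cos(2 * w * t)) ** 2) for w in grid]
omega_hat = grid[int(np.argmin(errors))]
print(omega_hat)
```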

  3. Refinement of Timoshenko Beam Theory for Composite and Sandwich Beams Using Zigzag Kinematics

    NASA Technical Reports Server (NTRS)

    Tessler, Alexander; DiSciuva, Marco; Gherlone, Marco

    2007-01-01

    A new refined theory for laminated-composite and sandwich beams that contains the kinematics of the Timoshenko Beam Theory as a proper baseline subset is presented. This variationally consistent theory is derived from the virtual work principle and employs a novel piecewise linear zigzag function that provides a more realistic representation of the deformation states of transverse shear flexible beams than other similar theories. This new zigzag function is unique in that it vanishes at the top and bottom bounding surfaces of a beam. The formulation does not enforce continuity of the transverse shear stress across the beam's cross-section, yet is robust. Two major shortcomings that are inherent in the previous zigzag theories, shear-force inconsistency and difficulties in simulating clamped boundary conditions, and that have greatly limited the utility of these previous theories are discussed in detail. An approach that has successfully resolved these shortcomings is presented herein. This new theory can be readily extended to plate and shell structures, and should be useful for obtaining accurate estimates of structural response of laminated composites.

  4. Quantum corrections to the generalized Proca theory via a matter field

    NASA Astrophysics Data System (ADS)

    Amado, André; Haghani, Zahra; Mohammadi, Azadeh; Shahidi, Shahab

    2017-09-01

    We study the quantum corrections to the generalized Proca theory via matter loops. We consider two types of interactions, linear and nonlinear in the vector field. Calculating the one-loop correction to the vector field propagator, three- and four-point functions, we show that the nonlinear interactions are harmless, although they renormalize the theory. The linear matter-vector field interactions introduce ghost degrees of freedom to the generalized Proca theory. Treating the theory as an effective theory, we calculate the energy scale up to which the theory remains healthy.

  5. Propulsion of a fin whale (Balaenoptera physalus): why the fin whale is a fast swimmer.

    PubMed

    Bose, N; Lien, J

    1989-07-22

    Measurements of an immature fin whale (Balaenoptera physalus), which died as a result of entrapment in fishing gear near Frenchmans Cove, Newfoundland (47 degrees 9' N, 55 degrees 25' W), were made to obtain estimates of volume and surface area of the animal. Detailed measurements of the flukes, both planform and sections, were also obtained. A strip theory was developed to calculate the hydrodynamic performance of the whale's flukes as an oscillating propeller. This method is based on linear, two-dimensional, small-amplitude, unsteady hydrofoil theory with correction factors used to account for the effects of finite span and finite amplitude motion. These correction factors were developed from theoretical results of large-amplitude heaving motion and unsteady lifting-surface theory. A model that makes an estimate of the effects of viscous flow on propeller performance was superimposed on the potential-flow results. This model estimates the drag of the hydrofoil sections by assuming that the drag is similar to that of a hydrofoil section in steady flow. The performance characteristics of the flukes of the fin whale were estimated by using this method. The effects of the different correction factors, and of the frictional drag of the fluke sections, are emphasized. Frictional effects in particular were found to reduce the hydrodynamic efficiency of the flukes significantly. The results are discussed and compared with the known characteristics of fin-whale swimming.

  6. Constitutive Modeling, Nonlinear Behavior, and the Stress-Optic Law

    DTIC Science & Technology

    2011-01-01

    estimates of D̂ from dynamic mechanical measurements. Some results are shown in Figure 58 for a filled EPDM rubber [116]. There is rough agreement with ... elastomers and filler-reinforced rubber. 5.1 Linearity and the superposition principle: The problem of analyzing viscoelastic mechanical behavior is greatly ... deformation such as shear. For crosslinked rubber the strain can be defined in terms of the strain function suggested by the statistical theories of

  7. Breakthroughs in Low-Profile Leaky-Wave HPM Antennas

    DTIC Science & Technology

    2016-09-21

    information is estimated to average 1 hour per response, including the time for reviewing instructions, searching existing data sources, gathering and ... traveling, fast-wave, leaky-wave class. 1.1. Overview of Previous Activities (1st thru 11th Quarter): During the first quarter, we prepared and ... theory to guide the design of high-gain configurations (again, limited to 2D, H-plane representations) for linear, forward traveling-wave, leaky

  8. Saturation and energy-conversion efficiency of auroral kilometric radiation

    NASA Technical Reports Server (NTRS)

    Wu, C. S.; Tsai, S. T.; Xu, M. J.; Shen, J. W.

    1981-01-01

    A quasi-linear theory is used to study the saturation level of the auroral kilometric radiation. The investigation is based on the assumption that the emission is due to a cyclotron maser instability as suggested by Wu and Lee and Lee et al. The thermodynamic bound on the radiation energy is also estimated separately. The energy-conversion efficiency of the radiation process is discussed. The results are consistent with observations.

  9. A Mathematical Theory of Command and Control Structures.

    DTIC Science & Technology

    1984-08-30

    minimize the functional J over the space of all linear maps, then [garbled OCR of the first-order optimality conditions] for all i = 1, ..., N, and ... Castanon, G. C. Verghese, A. S. Willsky, "A Scattering Framework for Decentralized Estimation Problems," MIT/LIDS paper 1075, March 1981, Submitted to

  10. A Resume of Stochastic, Time-Varying, Linear System Theory with Application to Active-Sonar Signal-Processing Problems

    DTIC Science & Technology

    1981-06-15

    [Table-of-contents residue: relationships; 3. Normalized energy in ambiguity function for i = 0] SACLANTCEN SR-50: A RESUME OF STOCHASTIC, TIME-VARYING, LINEAR SYSTEM THEORY WITH ... the order in which systems are concatenated is unimportant. These results are exactly analogous to the results of time-invariant linear system theory in ... REFERENCES 1. MEIER, L. A résumé of deterministic time-varying linear system theory with application to active sonar signal processing problems, SACLANTCEN

  11. Analytic-continuation approach to the resummation of divergent series in Rayleigh-Schrödinger perturbation theory

    NASA Astrophysics Data System (ADS)

    Mihálka, Zsuzsanna É.; Surján, Péter R.

    2017-12-01

    The method of analytic continuation is applied to estimate eigenvalues of linear operators from finite-order results of perturbation theory, even in cases when the latter is divergent. Given a finite number of terms E^(k), k = 1, 2, ..., M, resulting from a Rayleigh-Schrödinger perturbation calculation, scaling these numbers by μ^k (μ being the perturbation parameter) we form the sum E(μ) = Σ_k μ^k E^(k) for small μ values for which the finite series is convergent to a certain numerical accuracy. Extrapolating the function E(μ) to μ = 1 yields an estimation of the exact solution of the problem. For divergent series, this procedure may serve as a resummation tool provided the perturbation problem has a nonzero radius of convergence. As illustrations, we treat the anharmonic (quartic) oscillator and an example from the many-electron correlation problem.
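
    A minimal numerical sketch of the procedure, using a toy series E^(k) = (-2)^k with radius of convergence 1/2 (so the series diverges at μ = 1, while the analytic continuation 1/(1 + 2μ) equals 1/3 there): partial sums are evaluated at small μ and extrapolated to μ = 1 with a [1/1] rational approximant fitted by linear least squares. The rational fit is one convenient choice of extrapolant, not the only one.

```python
import numpy as np

# Toy divergent series: E^(k) = (-2)**k, so E(mu) converges only for
# |mu| < 1/2 but continues analytically to 1/(1 + 2*mu); the exact
# value at mu = 1 is 1/3.
coeffs = [(-2.0) ** k for k in range(26)]

mus = np.linspace(0.02, 0.25, 40)
E = np.array([sum(c * mu ** k for k, c in enumerate(coeffs)) for mu in mus])

# Fit the [1/1] rational approximant E(mu) ~ (a0 + a1*mu)/(1 + b1*mu)
# via the linearized least-squares system  a0 + a1*mu - b1*mu*E = E.
design = np.column_stack([np.ones_like(mus), mus, -mus * E])
a0, a1, b1 = np.linalg.lstsq(design, E, rcond=None)[0]

E_at_1 = (a0 + a1) / (1.0 + b1)
print(E_at_1)   # close to 1/3
```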

  12. Methods for the accurate estimation of confidence intervals on protein folding ϕ-values

    PubMed Central

    Ruczinski, Ingo; Sosnick, Tobin R.; Plaxco, Kevin W.

    2006-01-01

    ϕ-Values provide an important benchmark for the comparison of experimental protein folding studies to computer simulations and theories of the folding process. Despite the growing importance of ϕ measurements, however, formulas to quantify the precision with which ϕ is measured have seen little significant discussion. Moreover, a commonly employed method for the determination of standard errors on ϕ estimates assumes that estimates of the changes in free energy of the transition and folded states are independent. Here we demonstrate that this assumption is usually incorrect and that this typically leads to the underestimation of ϕ precision. We derive an analytical expression for the precision of ϕ estimates (assuming linear chevron behavior) that explicitly takes this dependence into account. We also describe an alternative method that implicitly corrects for the effect. By simulating experimental chevron data, we show that both methods accurately estimate ϕ confidence intervals. We also explore the effects of the commonly employed techniques of calculating ϕ from kinetics estimated at non-zero denaturant concentrations and via the assumption of parallel chevron arms. We find that these approaches can produce significantly different estimates for ϕ (again, even for truly linear chevron behavior), indicating that they are not equivalent, interchangeable measures of transition state structure. Lastly, we describe a Web-based implementation of the above algorithms for general use by the protein folding community. PMID:17008714
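
    The covariance effect can be illustrated with a generic first-order (delta-method) propagation for ϕ = x/y, where x stands for the ΔΔG of the transition state and y for the ΔΔG of folding (the numbers below are illustrative, and this is not the paper's exact expression): the variance of ϕ picks up a term in cov(x, y) that is missed when the two free-energy changes are treated as independent.

```python
import numpy as np

def phi_variance(x, y, var_x, var_y, cov_xy):
    """First-order variance of phi = x/y, including the covariance term."""
    return var_x / y**2 + x**2 * var_y / y**4 - 2 * x * cov_xy / y**3

x, y = 1.2, 3.0                          # illustrative ddG values (kcal/mol)
var_x, var_y, cov_xy = 0.04, 0.09, 0.03  # correlation 0.5

analytic = phi_variance(x, y, var_x, var_y, cov_xy)

# Monte Carlo check with correlated Gaussian errors.
rng = np.random.default_rng(3)
cov = np.array([[var_x, cov_xy], [cov_xy, var_y]])
draws = rng.multivariate_normal([x, y], cov, size=1_000_000)
mc = np.var(draws[:, 0] / draws[:, 1])
print(analytic, mc)
```

    With a positive covariance, dropping the cross term (setting cov_xy = 0) overstates the variance of ϕ, consistent with the underestimated precision discussed above.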

  13. Quadratic semiparametric Von Mises calculus

    PubMed Central

    Robins, James; Li, Lingling; Tchetgen, Eric

    2009-01-01

    We discuss a new method of estimation of parameters in semiparametric and nonparametric models. The method is based on U-statistics constructed from quadratic influence functions. The latter extend ordinary linear influence functions of the parameter of interest as defined in semiparametric theory, and represent second-order derivatives of this parameter. For parameters for which the matching cannot be perfect the method leads to a bias-variance trade-off, and results in estimators that converge at a slower than n^(-1/2) rate. In a number of examples the resulting rate can be shown to be optimal. We are particularly interested in estimating parameters in models with a nuisance parameter of high dimension or low regularity, where the parameter of interest cannot be estimated at the n^(-1/2) rate. PMID:23087487

  14. A nonlinear Kalman filtering approach to embedded control of turbocharged diesel engines

    NASA Astrophysics Data System (ADS)

    Rigatos, Gerasimos; Siano, Pierluigi; Arsie, Ivan

    2014-10-01

    The development of efficient embedded control for turbocharged Diesel engines requires the programming of elaborate nonlinear control and filtering methods. To this end, in this paper nonlinear control for turbocharged Diesel engines is developed with the use of Differential flatness theory and the Derivative-free nonlinear Kalman Filter. It is shown that the dynamic model of the turbocharged Diesel engine is differentially flat and admits dynamic feedback linearization. It is also shown that the dynamic model can be written in the linear Brunovsky canonical form, for which a state feedback controller can be easily designed. To compensate for modeling errors and external disturbances the Derivative-free nonlinear Kalman Filter is used and redesigned as a disturbance observer. The filter consists of the Kalman Filter recursion on the linearized equivalent of the Diesel engine model and of an inverse transformation based on differential flatness theory, which makes it possible to obtain estimates of the state variables of the initial nonlinear model. Once the disturbance variables are identified, they can be compensated for by including an additional control term in the feedback loop. The efficiency of the proposed control method is tested through simulation experiments.

  15. A parametric model for the changes in the complex valued conductivity of a lung during tidal breathing

    NASA Astrophysics Data System (ADS)

    Nordebo, Sven; Dalarsson, Mariana; Khodadad, Davood; Müller, Beat; Waldmann, Andreas D.; Becher, Tobias; Frerichs, Inez; Sophocleous, Louiza; Sjöberg, Daniel; Seifnaraghi, Nima; Bayford, Richard

    2018-05-01

    Classical homogenization theory based on the Hashin–Shtrikman coated ellipsoids is used to model the changes in the complex valued conductivity (or admittivity) of a lung during tidal breathing. Here, the lung is modeled as a two-phase composite material where the alveolar air-filling corresponds to the inclusion phase. The theory predicts a linear relationship between the real and the imaginary parts of the change in the complex valued conductivity of a lung during tidal breathing, where the loss cotangent of the change is approximately the same as that of the effective background conductivity and hence easy to estimate. The theory is illustrated with numerical examples based on realistic parameter values and frequency ranges used with electrical impedance tomography (EIT). The theory may be potentially useful for imaging and clinical evaluations in connection with lung EIT for respiratory management and control.

  16. Bounding the electrostatic free energies associated with linear continuum models of molecular solvation.

    PubMed

    Bardhan, Jaydeep P; Knepley, Matthew G; Anitescu, Mihai

    2009-03-14

    The importance of electrostatic interactions in molecular biology has driven extensive research toward the development of accurate and efficient theoretical and computational models. Linear continuum electrostatic theory has been surprisingly successful, but the computational costs associated with solving the associated partial differential equations (PDEs) preclude the theory's use in most dynamical simulations. Modern generalized-Born models for electrostatics can reproduce PDE-based calculations to within a few percent and are extremely computationally efficient but do not always faithfully reproduce interactions between chemical groups. Recent work has shown that a boundary-integral-equation formulation of the PDE problem leads naturally to a new approach called boundary-integral-based electrostatics estimation (BIBEE) to approximate electrostatic interactions. In the present paper, we prove that the BIBEE method can be used to rigorously bound the actual continuum-theory electrostatic free energy. The bounds are validated using a set of more than 600 proteins. Detailed numerical results are presented for structures of the peptide met-enkephalin taken from a molecular-dynamics simulation. These bounds, in combination with our demonstration that the BIBEE methods accurately reproduce pairwise interactions, suggest a new approach toward building a highly accurate yet computationally tractable electrostatic model.

  17. Bounding the electrostatic free energies associated with linear continuum models of molecular solvation

    NASA Astrophysics Data System (ADS)

    Bardhan, Jaydeep P.; Knepley, Matthew G.; Anitescu, Mihai

    2009-03-01

    The importance of electrostatic interactions in molecular biology has driven extensive research toward the development of accurate and efficient theoretical and computational models. Linear continuum electrostatic theory has been surprisingly successful, but the computational costs associated with solving the associated partial differential equations (PDEs) preclude the theory's use in most dynamical simulations. Modern generalized-Born models for electrostatics can reproduce PDE-based calculations to within a few percent and are extremely computationally efficient but do not always faithfully reproduce interactions between chemical groups. Recent work has shown that a boundary-integral-equation formulation of the PDE problem leads naturally to a new approach called boundary-integral-based electrostatics estimation (BIBEE) to approximate electrostatic interactions. In the present paper, we prove that the BIBEE method can be used to rigorously bound the actual continuum-theory electrostatic free energy. The bounds are validated using a set of more than 600 proteins. Detailed numerical results are presented for structures of the peptide met-enkephalin taken from a molecular-dynamics simulation. These bounds, in combination with our demonstration that the BIBEE methods accurately reproduce pairwise interactions, suggest a new approach toward building a highly accurate yet computationally tractable electrostatic model.

  18. Experiments on stress dependent borehole acoustic waves.

    PubMed

    Hsu, Chaur-Jian; Kane, Michael R; Winkler, Kenneth; Wang, Canyun; Johnson, David Linton

    2011-10-01

    In the laboratory setup, a borehole traverses a dry sandstone formation, which is subjected to a controlled uniaxial stress in the direction perpendicular to the borehole axis. Measurements are made in a single loading-unloading stress cycle from zero to 10 MPa and then back down to zero stress. The applied stress and the presence of the borehole induce anisotropy in the bulk of the material and stress concentration around the borehole, both azimuthally and radially. Acoustic waves are generated and detected in the water-filled borehole, including compressional and shear headwaves, as well as modes of monopole, dipole, quadrupole, and higher-order azimuthal symmetries. The linear and nonlinear elastic parameters of the formation material are independently quantified and utilized in conjunction with elastic theories to predict the characteristics of various borehole waves at zero and finite stress conditions. For example, an analytic theory is developed and successfully used to estimate the changes in the low-frequency monopole tube mode resulting from uniaxial stress, utilizing the measured third-order elasticity parameters of the material. Comparisons between various measurements, as well as between experiments and theories, are also presented. © 2011 Acoustical Society of America

  19. Wing box transonic-flutter suppression using piezoelectric self-sensing actuators attached to skin

    NASA Astrophysics Data System (ADS)

    Otiefy, R. A. H.; Negm, H. M.

    2010-12-01

    The main objective of this research is to study the capability of piezoelectric (PZT) self-sensing actuators to suppress transonic wing-box flutter, which is a flow-structure interaction phenomenon. The unsteady general-frequency modified transonic small disturbance (TSD) equation is used to model the transonic flow about the wing. The wing box structure and piezoelectric actuators are modeled using the equivalent plate method, which is based on the first-order shear deformation plate theory (FSDPT). The piezoelectric actuators are bonded to the skin. The optimal electromechanical coupling conditions between the piezoelectric actuators and the wing are collected from previous work. Three different control strategies, a linear quadratic Gaussian (LQG) controller, which combines the linear quadratic regulator (LQR) with the Kalman filter estimator (KFE), an optimal static output feedback (SOF) controller, and a classic feedback controller (CFC), are studied and compared. The optimum actuator and sensor locations are determined using the norm of feedback control gains (NFCG) and the norm of Kalman filter estimator gains (NKFEG), respectively. A genetic algorithm (GA) optimization technique is used to calculate the controller and estimator parameters to achieve a target response.

  20. The design and analysis of simple low speed flap systems with the aid of linearized theory computer programs

    NASA Technical Reports Server (NTRS)

    Carlson, Harry W.

    1985-01-01

    The purpose here is to show how two linearized theory computer programs in combination may be used for the design of low speed wing flap systems capable of high levels of aerodynamic efficiency. A fundamental premise of the study is that high levels of aerodynamic performance for flap systems can be achieved only if the flow about the wing remains predominantly attached. Based on this premise, a wing design program is used to provide idealized attached flow camber surfaces from which candidate flap systems may be derived, and, in a following step, a wing evaluation program is used to provide estimates of the aerodynamic performance of the candidate systems. Design strategies and techniques that may be employed are illustrated through a series of examples. Applicability of the numerical methods to the analysis of a representative flap system (although not a system designed by the process described here) is demonstrated in a comparison with experimental data.

  1. Energy harvesting with stacked dielectric elastomer transducers: Nonlinear theory, optimization, and linearized scaling law

    NASA Astrophysics Data System (ADS)

    Tutcuoglu, A.; Majidi, C.

    2014-12-01

    Using principles of damped harmonic oscillation with continuous media, we examine electrostatic energy harvesting with a "soft-matter" array of dielectric elastomer (DE) transducers. The array is composed of infinitely thin and deformable electrodes separated by layers of insulating elastomer. During vibration, it deforms longitudinally, resulting in a change in the capacitance and electrical enthalpy of the charged electrodes. Depending on the phase of electrostatic loading, the DE array can function as either an actuator that amplifies small vibrations or a generator that converts these external excitations into electrical power. Both cases are addressed with a comprehensive theory that accounts for the influence of viscoelasticity, dielectric breakdown, and electromechanical coupling induced by Maxwell stress. In the case of a linearized Kelvin-Voigt model of the dielectric, we obtain a closed-form estimate for the electrical power output and a scaling law for DE generator design. For the complete nonlinear model, we obtain the optimal electrostatic voltage input for maximum electrical power output.

  2. Customized Steady-State Constraints for Parameter Estimation in Non-Linear Ordinary Differential Equation Models

    PubMed Central

    Rosenblatt, Marcus; Timmer, Jens; Kaschek, Daniel

    2016-01-01

    Ordinary differential equation models have become a wide-spread approach to analyze dynamical systems and understand underlying mechanisms. Model parameters are often unknown and have to be estimated from experimental data, e.g., by maximum-likelihood estimation. In particular, models of biological systems contain a large number of parameters. To reduce the dimensionality of the parameter space, steady-state information is incorporated in the parameter estimation process. For non-linear models, analytical steady-state calculation typically leads to higher-order polynomial equations for which no closed-form solutions can be obtained. This can be circumvented by solving the steady-state equations for kinetic parameters, which results in a linear equation system with comparatively simple solutions. At the same time multiplicity of steady-state solutions is avoided, which otherwise is problematic for optimization. When solved for kinetic parameters, however, steady-state constraints tend to become negative for particular model specifications, thus, generating new types of optimization problems. Here, we present an algorithm based on graph theory that derives non-negative, analytical steady-state expressions by stepwise removal of cyclic dependencies between dynamical variables. The algorithm avoids multiple steady-state solutions by construction. We show that our method is applicable to most common classes of biochemical reaction networks containing inhibition terms, mass-action and Hill-type kinetic equations. Comparing the performance of parameter estimation for different analytical and numerical methods of incorporating steady-state information, we show that our approach is especially well-tailored to guarantee a high success rate of optimization. PMID:27243005
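
    The key idea, that steady-state equations which are polynomial in the dynamic variables become linear when solved for the kinetic parameters instead, can be seen in a toy production-degradation chain (an illustrative model, not one from the paper):

```python
# Toy network: source -> A -> B -> degradation,
#   dA/dt = s - k1*A,   dB/dt = k1*A - k2*B.
# At a prescribed steady state (A_ss, B_ss) the constraints are linear
# in the kinetic parameters:  k1 = s / A_ss,  k2 = s / B_ss.
s, A_ss, B_ss = 2.0, 4.0, 5.0
k1 = s / A_ss
k2 = s / B_ss

# Verify by forward Euler integration from an arbitrary initial state.
A, B = 0.0, 0.0
dt = 0.001
for _ in range(200_000):
    A, B = A + dt * (s - k1 * A), B + dt * (k1 * A - k2 * B)
print(A, B)   # converges to (A_ss, B_ss)
```

    Solving for (k1, k2) given (A_ss, B_ss) requires no polynomial root finding and yields a unique steady state by construction, which is the property the algorithm above exploits.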

  3. Customized Steady-State Constraints for Parameter Estimation in Non-Linear Ordinary Differential Equation Models.

    PubMed

    Rosenblatt, Marcus; Timmer, Jens; Kaschek, Daniel

    2016-01-01

    Ordinary differential equation models have become a widespread approach to analyze dynamical systems and understand underlying mechanisms. Model parameters are often unknown and have to be estimated from experimental data, e.g., by maximum-likelihood estimation. In particular, models of biological systems contain a large number of parameters. To reduce the dimensionality of the parameter space, steady-state information is incorporated in the parameter estimation process. For non-linear models, analytical steady-state calculation typically leads to higher-order polynomial equations for which no closed-form solutions can be obtained. This can be circumvented by solving the steady-state equations for kinetic parameters instead, which results in a linear equation system with comparatively simple solutions. At the same time, multiplicity of steady-state solutions is avoided, which otherwise is problematic for optimization. When solved for kinetic parameters, however, steady-state constraints tend to become negative for particular model specifications, thus generating new types of optimization problems. Here, we present an algorithm based on graph theory that derives non-negative, analytical steady-state expressions by stepwise removal of cyclic dependencies between dynamical variables. The algorithm avoids multiple steady-state solutions by construction. We show that our method is applicable to most common classes of biochemical reaction networks containing inhibition terms, mass-action, and Hill-type kinetic equations. Comparing the performance of parameter estimation for different analytical and numerical methods of incorporating steady-state information, we show that our approach is especially well tailored to guarantee a high success rate of optimization.

  4. Discriminative Learning of Receptive Fields from Responses to Non-Gaussian Stimulus Ensembles

    PubMed Central

    Meyer, Arne F.; Diepenbrock, Jan-Philipp; Happel, Max F. K.; Ohl, Frank W.; Anemüller, Jörn

    2014-01-01

    Analysis of sensory neurons' processing characteristics requires simultaneous measurement of presented stimuli and concurrent spike responses. The functional transformation from high-dimensional stimulus space to the binary space of spike and non-spike responses is commonly described with linear-nonlinear models, whose linear filter component describes the neuron's receptive field. From a machine learning perspective, this corresponds to the binary classification problem of discriminating spike-eliciting from non-spike-eliciting stimulus examples. The classification-based receptive field (CbRF) estimation method proposed here adapts a linear large-margin classifier to optimally predict experimental stimulus-response data and subsequently interprets learned classifier weights as the neuron's receptive field filter. Computational learning theory provides a theoretical framework for learning from data and guarantees optimality in the sense that the risk of erroneously assigning a spike-eliciting stimulus example to the non-spike class (and vice versa) is minimized. Efficacy of the CbRF method is validated with simulations and for auditory spectro-temporal receptive field (STRF) estimation from experimental recordings in the auditory midbrain of Mongolian gerbils. Acoustic stimulation is performed with frequency-modulated tone complexes that mimic properties of natural stimuli, specifically non-Gaussian amplitude distribution and higher-order correlations. Results demonstrate that the proposed approach successfully identifies correct underlying STRFs, even in cases where second-order methods based on the spike-triggered average (STA) do not. Applied to small data samples, the method is shown to converge with smaller amounts of experimental recordings and with lower estimation variance than the generalized linear model and recent information theoretic methods. Thus, CbRF estimation may prove useful for investigation of neuronal processes in response to natural stimuli and in settings where rapid adaptation is induced by experimental design. PMID:24699631
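    A minimal sketch of the classification view of receptive-field estimation, using plain logistic regression trained by gradient descent as a stand-in for the paper's large-margin classifier. The filter shape, the Laplace (non-Gaussian) stimulus ensemble, and all constants are assumed for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
D, N = 20, 5000
w_true = np.sin(np.linspace(0, np.pi, D))     # assumed "receptive field" filter
X = rng.laplace(size=(N, D))                  # non-Gaussian stimulus ensemble
p = 1 / (1 + np.exp(-(X @ w_true - 2.0)))     # spike probability per stimulus
y = (rng.random(N) < p).astype(float)         # binary spike / no-spike labels

# Logistic regression by gradient descent; the learned weight vector is
# interpreted as the receptive-field filter estimate.
w = np.zeros(D); b = 0.0; lr = 0.1
for _ in range(300):
    z = 1 / (1 + np.exp(-(X @ w + b)))
    g = z - y
    w -= lr * (X.T @ g) / N
    b -= lr * g.mean()

corr = np.corrcoef(w, w_true)[0, 1]
print(round(corr, 2))   # high correlation: filter direction recovered
```

The key point mirrored from the abstract: because the weights are fit discriminatively, recovery of the filter direction does not rely on the stimulus distribution being Gaussian.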

  5. Discriminative learning of receptive fields from responses to non-Gaussian stimulus ensembles.

    PubMed

    Meyer, Arne F; Diepenbrock, Jan-Philipp; Happel, Max F K; Ohl, Frank W; Anemüller, Jörn

    2014-01-01

    Analysis of sensory neurons' processing characteristics requires simultaneous measurement of presented stimuli and concurrent spike responses. The functional transformation from high-dimensional stimulus space to the binary space of spike and non-spike responses is commonly described with linear-nonlinear models, whose linear filter component describes the neuron's receptive field. From a machine learning perspective, this corresponds to the binary classification problem of discriminating spike-eliciting from non-spike-eliciting stimulus examples. The classification-based receptive field (CbRF) estimation method proposed here adapts a linear large-margin classifier to optimally predict experimental stimulus-response data and subsequently interprets learned classifier weights as the neuron's receptive field filter. Computational learning theory provides a theoretical framework for learning from data and guarantees optimality in the sense that the risk of erroneously assigning a spike-eliciting stimulus example to the non-spike class (and vice versa) is minimized. Efficacy of the CbRF method is validated with simulations and for auditory spectro-temporal receptive field (STRF) estimation from experimental recordings in the auditory midbrain of Mongolian gerbils. Acoustic stimulation is performed with frequency-modulated tone complexes that mimic properties of natural stimuli, specifically non-Gaussian amplitude distribution and higher-order correlations. Results demonstrate that the proposed approach successfully identifies correct underlying STRFs, even in cases where second-order methods based on the spike-triggered average (STA) do not. Applied to small data samples, the method is shown to converge with smaller amounts of experimental recordings and with lower estimation variance than the generalized linear model and recent information theoretic methods. Thus, CbRF estimation may prove useful for investigation of neuronal processes in response to natural stimuli and in settings where rapid adaptation is induced by experimental design.

  6. A theory of fine structure image models with an application to detection and classification of dementia.

    PubMed

    O'Neill, William; Penn, Richard; Werner, Michael; Thomas, Justin

    2015-06-01

    Estimation of stochastic process models from data is a common application of time series analysis methods. Such system identification processes are often cast as hypothesis testing exercises whose intent is to estimate model parameters and test them for statistical significance. Ordinary least squares (OLS) regression and the Levenberg-Marquardt algorithm (LMA) have proven invaluable computational tools for models described by non-homogeneous, linear, stationary, ordinary differential equations. In this paper we extend stochastic model identification to linear, stationary, partial differential equations in two independent variables (2D) and show that OLS and LMA apply equally well to these systems. The method employs an original nonparametric statistic as a test for the significance of estimated parameters. We show that gray scale and color images are special cases of 2D systems satisfying a particular autoregressive partial difference equation which estimates an analogous partial differential equation. Several applications to medical image modeling and classification illustrate the method by correctly classifying demented and normal OLS models of axial magnetic resonance brain scans according to subject Mini Mental State Exam (MMSE) scores. Comparison with 13 image classifiers from the literature indicates our classifier is at least 14 times faster than any of them and has a classification accuracy better than all but one. Our modeling method applies to any linear, stationary, partial differential equation and is readily extended to 3D whole-organ systems. Further, in addition to being a robust image classifier, estimated image models offer insights into which parameters carry the most diagnostic image information and thereby suggest that finer divisions could be made within a class. Image models can be estimated in milliseconds, which translates to whole-organ models in seconds; such runtimes could make real-time medicine and surgery modeling possible.
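    The idea that an image satisfies an autoregressive partial difference equation whose coefficients OLS can estimate is easy to illustrate on synthetic data. The two-neighbour causal model and all constants below are toy assumptions, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthesize an image from a known causal 2D autoregression (toy stand-in):
#   I[i,j] = a*I[i-1,j] + b*I[i,j-1] + noise
a_true, b_true = 0.5, 0.3
H, W = 64, 64
img = np.zeros((H, W))
noise = rng.normal(scale=0.1, size=(H, W))
for i in range(1, H):
    for j in range(1, W):
        img[i, j] = a_true*img[i-1, j] + b_true*img[i, j-1] + noise[i, j]

# OLS estimation of the partial-difference-equation coefficients:
# regress each pixel on its upper and left neighbours.
y = img[1:, 1:].ravel()
Xd = np.column_stack([img[:-1, 1:].ravel(), img[1:, :-1].ravel()])
coef, *_ = np.linalg.lstsq(Xd, y, rcond=None)
print(coef.round(2))   # ≈ [0.5 0.3]
```

In the paper's setting the estimated coefficients (rather than the pixels themselves) become the features used to classify scans.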

  7. Gaussian Curvature as an Identifier of Shell Rigidity

    NASA Astrophysics Data System (ADS)

    Harutyunyan, Davit

    2017-11-01

    In the paper we deal with shells with non-zero Gaussian curvature. We derive sharp Korn's first (linear geometric rigidity estimate) and second inequalities on that kind of shell for zero or periodic Dirichlet, Neumann, and Robin type boundary conditions. We prove that if the Gaussian curvature is positive, then the optimal constant in the first Korn inequality scales like h, and if the Gaussian curvature is negative, then the Korn constant scales like h^(4/3), where h is the thickness of the shell. These results have a classical flavour in continuum mechanics, in particular shell theory. The Korn first inequalities are the linear version of the famous geometric rigidity estimate by Friesecke et al. for plates in Arch Ration Mech Anal 180(2):183-236, 2006 (where they show that the Korn constant in the nonlinear Korn's first inequality scales like h^2), extended to shells with nonzero curvature. We also recover the uniform Korn-Poincaré inequality proven for "boundary-less" shells by Lewicka and Müller in Annales de l'Institut Henri Poincaré (C) Analyse Non Linéaire 28(3):443-469, 2011 in the setting of our problem. The new estimates can also be applied to find the scaling law for the critical buckling load of the shell under in-plane loads as well as to derive energy scaling laws in the pre-buckled regime. The exponents 1 and 4/3 in the present work appear for the first time in any sharp geometric rigidity estimate.

  8. Quantum Chemically Estimated Abraham Solute Parameters Using Multiple Solvent-Water Partition Coefficients and Molecular Polarizability.

    PubMed

    Liang, Yuzhen; Xiong, Ruichang; Sandler, Stanley I; Di Toro, Dominic M

    2017-09-05

    Polyparameter Linear Free Energy Relationships (pp-LFERs), also called Linear Solvation Energy Relationships (LSERs), are used to predict many environmentally significant properties of chemicals. A method is presented for computing the necessary chemical parameters, the Abraham parameters (AP), used by many pp-LFERs. It employs quantum chemical calculations and uses only the chemical's molecular structure. The method computes the Abraham E parameter using the density functional theory computed molecular polarizability and the Clausius-Mossotti equation relating the index of refraction to the molecular polarizability, estimates the Abraham V as the COSMO-calculated molecular volume, and computes the remaining APs S, A, and B jointly with a multiple linear regression using sixty-five solvent-water partition coefficients computed using the quantum mechanical COSMO-SAC solvation model. These solute parameters, referred to as Quantum Chemically estimated Abraham Parameters (QCAP), are further adjusted by fitting to experimentally based APs using QCAP parameters as the independent variables so that they are compatible with existing Abraham pp-LFERs. QCAP and adjusted QCAP for 1827 neutral chemicals are included. For 24 solvent-water systems including octanol-water, predicted log solvent-water partition coefficients using adjusted QCAP have the smallest root-mean-square errors (RMSEs, 0.314-0.602) compared to predictions made using APs estimated with the molecular-fragment-based method ABSOLV (0.45-0.716). For munition and munition-like compounds, adjusted QCAP has much lower RMSE (0.860) than does ABSOLV (4.45), which essentially fails for these compounds.

  9. The response of multidegree-of-freedom systems with quadratic non-linearities to a harmonic parametric resonance

    NASA Astrophysics Data System (ADS)

    Nayfeh, A. H.

    1983-09-01

    An analysis is presented of the response of multidegree-of-freedom systems with quadratic non-linearities to a harmonic parametric excitation in the presence of an internal resonance of the combination type ω3 ≈ ω2 + ω1, where the ωn are the linear natural frequencies of the systems. In the case of a fundamental resonance of the third mode (i.e., Ω ≈ ω3, where Ω is the frequency of the excitation), one can identify two critical values ζ1 and ζ2, where ζ2 ⩾ ζ1, of the amplitude F of the excitation. The value F = ζ2 corresponds to the transition from stable to unstable solutions. When F < ζ1, the motion decays to zero according to both linear and non-linear theories. When F > ζ2, the motion grows exponentially with time according to the linear theory, but the non-linearity limits the motion to a finite-amplitude steady state. The amplitude of the third mode, which is directly excited, is independent of F, whereas the amplitudes of the first and second modes, which are indirectly excited through the internal resonance, are functions of F. When ζ1 ⩽ F ⩽ ζ2, the motion decays or achieves a finite-amplitude steady state depending on the initial conditions according to the non-linear theory, whereas it decays to zero according to the linear theory. This is an example of subcritical instability. In the case of a fundamental resonance of either the first or second mode, the trivial response is the only possible steady state. When F ⩽ ζ2, the motion decays to zero according to both linear and non-linear theories. When F > ζ2, the motion grows exponentially with time according to the linear theory, but it is aperiodic according to the non-linear theory. Experiments are being planned to check these theoretical results.

  10. Tidal Channel Dynamics and Muddy Substrates: A Comparison between a Wave Dominated and a Tidal Dominated System

    DTIC Science & Technology

    2012-09-30

    standard linear wave theory. Suspended sediment concentration (SSC) was estimated using the backscatter signal of the ADCP and the turbidity value...measured by the OBS when present. The OBS turbidity signal was calibrated against SSC measured in a laboratory tank, using sediments collected on the...link the geotechnical properties of sediment substrates to the spatial and hydrodynamic characteristics of tidal channels • To develop new

  11. Electronic Structure Methods Based on Density Functional Theory

    DTIC Science & Technology

    2010-01-01

    0188 The public reporting burden for this collection of information is estimated to average 1 hour per response, including the time for reviewing...chapter in the ASM Handbook , Volume 22A: Fundamentals of Modeling for Metals Processing, 2010. PAO Case Number: 88ABW-2009-3258; Clearance Date: 16 Jul...are represented using a linear combination, or basis, of plane waves. Over time several methods were developed to avoid the large number of planewaves

  12. Design of Supersonic Transport Flap Systems for Thrust Recovery at Subsonic Speeds

    NASA Technical Reports Server (NTRS)

    Mann, Michael J.; Carlson, Harry W.; Domack, Christopher S.

    1999-01-01

    A study of the subsonic aerodynamics of hinged flap systems for supersonic cruise commercial aircraft has been conducted using linear attached-flow theory that has been modified to include an estimate of attainable leading-edge thrust and an approximate representation of vortex forces. Comparisons of theoretical predictions with experimental results show that the theory gives a reasonably good and generally conservative estimate of the performance of an efficient flap system and provides a good estimate of the leading- and trailing-edge deflection angles necessary for optimum performance. A substantial reduction in the area of the inboard region of the leading-edge flap has only a minor effect on the performance and the optimum deflection angles. Changes in the size of the outboard leading-edge flap show that performance is greatest when this flap has a chord equal to approximately 30 percent of the wing chord. A study was also made of the performance of various combinations of individual leading- and trailing-edge flaps, and the results show that aerodynamic efficiencies as high as 85 percent of full suction are predicted.

  13. Vehicle dynamics control of four in-wheel motor drive electric vehicle using gain scheduling based on tyre cornering stiffness estimation

    NASA Astrophysics Data System (ADS)

    Xiong, Lu; Yu, Zhuoping; Wang, Yang; Yang, Chen; Meng, Yufeng

    2012-06-01

    This paper focuses on the vehicle dynamic control system for a four in-wheel motor drive electric vehicle, aiming at improving vehicle stability under critical driving conditions. The vehicle dynamics controller is composed of three modules, i.e., motion-following control, control allocation, and vehicle state estimation. Considering the strong nonlinearity of the tyres under critical driving conditions, the yaw motion of the vehicle is regulated by gain scheduling control based on linear quadratic regulator theory. The feed-forward and feedback gains of the controller are updated in real time by online estimation of the tyre cornering stiffness, so as to ensure the control robustness against environmental disturbances as well as parameter uncertainty. The control allocation module allocates the calculated generalised force requirements to each in-wheel motor based on quadratic programming theory while taking the tyre longitudinal/lateral force coupling characteristic into consideration. Simulations under a variety of driving conditions are carried out to verify the control algorithm. Simulation results indicate that the proposed vehicle stability controller can effectively stabilise the vehicle motion under critical driving conditions.
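    A rough sketch of gain scheduling on a cornering-stiffness estimate, using a one-state yaw model and a fixed-point Riccati iteration. The model structure and every number below are assumptions for illustration; the paper's vehicle model and controller are far richer:

```python
import numpy as np

def dlqr_gain(A, B, Q, R, iters=500):
    """Solve the discrete Riccati equation by fixed-point iteration; return gain K."""
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K

# Toy single-state yaw model (assumed): state r (yaw rate), input u
# (corrective yaw moment). The dynamics depend on the tyre cornering
# stiffness C, so the LQR gain is re-scheduled whenever the online
# estimate of C changes.
dt, Iz = 0.01, 2500.0
Q, R = np.array([[1.0]]), np.array([[0.01]])
for C in (8e4, 5e4, 2e4):                 # stiffness estimates: dry -> slippery
    A = np.array([[1.0 - dt * C / Iz]])
    B = np.array([[dt / Iz]])
    K = dlqr_gain(A, B, Q, R)
    print(f"C={C:.0e}  K={K[0, 0]:.4f}")
```

The pattern generalizes directly to the full two-state bicycle model: refit A and B from the current stiffness estimate, resolve the Riccati equation, and apply the new gain.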

  14. Feasibility of combining linear theory and impact theory methods for the analysis and design of high speed configurations

    NASA Technical Reports Server (NTRS)

    Brooke, D.; Vondrasek, D. V.

    1978-01-01

    The aerodynamic influence coefficients calculated using an existing linear theory program were used to modify the pressures calculated using impact theory. Application of the combined approach to several wing-alone configurations shows that the combined approach gives improved predictions of the local pressure and loadings over either linear theory alone or impact theory alone. The approach not only removes most of the short-comings of the individual methods, as applied in the Mach 4 to 8 range, but also provides the basis for an inverse design procedure applicable to high speed configurations.

  15. A neuro approach to solve fuzzy Riccati differential equations

    NASA Astrophysics Data System (ADS)

    Shahrir, Mohammad Shazri; Kumaresan, N.; Kamali, M. Z. M.; Ratnavelu, Kurunathan

    2015-10-01

    There are many applications of optimal control theory, especially in the area of control systems in engineering. In this paper, the fuzzy quadratic Riccati differential equation is estimated using neural networks (NN). Previous works have shown reliable results using the Runge-Kutta 4th-order method (RK4). The solution can be achieved by solving the first-order non-linear ordinary differential equation (ODE) that commonly arises from the Riccati differential equation. Research has shown improved results relative to the RK4 method. It can be said that the NN approach shows promising results, with the advantage of continuous estimation and improved accuracy over RK4.
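    For reference, the RK4 baseline the NN approach is compared against can be stated in a few lines. Crisp (non-fuzzy) coefficients are assumed here purely for illustration:

```python
# Classical RK4 for a scalar Riccati equation y' = a + b*y + c*y^2,
# the baseline method the NN approach is compared against.
def rk4(f, y0, t0, t1, n):
    h = (t1 - t0) / n
    t, y = t0, y0
    for _ in range(n):
        k1 = f(t, y)
        k2 = f(t + h/2, y + h*k1/2)
        k3 = f(t + h/2, y + h*k2/2)
        k4 = f(t + h, y + h*k3)
        y += h * (k1 + 2*k2 + 2*k3 + k4) / 6
        t += h
    return y

f = lambda t, y: 1.0 - y*y          # Riccati: y' = 1 - y^2, y(0) = 0
y_end = rk4(f, 0.0, 0.0, 1.0, 100)
print(round(y_end, 6))              # exact solution tanh(1) ≈ 0.761594
```

This particular Riccati equation has the closed form y(t) = tanh(t), which makes it a convenient accuracy check for any replacement solver.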

  16. Superlinear convergence estimates for a conjugate gradient method for the biharmonic equation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chan, R.H.; Delillo, T.K.; Horn, M.A.

    1998-01-01

    The method of Muskhelishvili for solving the biharmonic equation using conformal mapping is investigated. In [R.H. Chan, T.K. DeLillo, and M.A. Horn, SIAM J. Sci. Comput., 18 (1997), pp. 1571--1582] it was shown, using the Hankel structure, that the linear system in [N.I. Muskhelishvili, Some Basic Problems of the Mathematical Theory of Elasticity, Noordhoff, Groningen, the Netherlands] is the discretization of the identity plus a compact operator, and therefore the conjugate gradient method will converge superlinearly. Estimates are given here of the superlinear convergence in the cases when the boundary curve is analytic or in a Hölder class.
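    The superlinear behaviour of conjugate gradients on an identity-plus-compact discretization can be reproduced with a synthetic SPD matrix whose spectrum clusters at 1. The spectrum below is an assumed stand-in, not Muskhelishvili's operator:

```python
import numpy as np

rng = np.random.default_rng(2)
# Identity-plus-"compact" operator: eigenvalues 1 + mu_k with mu_k -> 0 fast.
n = 200
mu = 2.0 ** -np.arange(n)                  # rapidly clustering spectrum
Qm = np.linalg.qr(rng.normal(size=(n, n)))[0]
A = Qm @ np.diag(1.0 + mu) @ Qm.T          # SPD, spectrum clusters at 1
b = rng.normal(size=n)

# Plain conjugate gradient; residuals fall superlinearly because CG
# effectively deflates one outlying eigenvalue per iteration.
x = np.zeros(n); r = b.copy(); p = r.copy()
res = [np.linalg.norm(r)]
for _ in range(15):
    Ap = A @ p
    alpha = (r @ r) / (p @ Ap)
    x += alpha * p
    r_new = r - alpha * Ap
    beta = (r_new @ r_new) / (r @ r)
    p = r_new + beta * p
    r = r_new
    res.append(np.linalg.norm(r))

print(f"{res[10] / res[0]:.1e}")   # far below any fixed linear-rate prediction
```

The faster the eigenvalues mu_k decay (analytic versus Hölder boundary in the record above), the faster the residual ratios shrink.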

  17. A neuro approach to solve fuzzy Riccati differential equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shahrir, Mohammad Shazri, E-mail: mshazri@gmail.com; Telekom Malaysia, R&D TM Innovation Centre, Lingkaran Teknokrat Timur, 63000 Cyberjaya, Selangor; Kumaresan, N., E-mail: drnk2008@gmail.com

    There are many applications of optimal control theory, especially in the area of control systems in engineering. In this paper, the fuzzy quadratic Riccati differential equation is estimated using neural networks (NN). Previous works have shown reliable results using the Runge-Kutta 4th-order method (RK4). The solution can be achieved by solving the first-order non-linear ordinary differential equation (ODE) that commonly arises from the Riccati differential equation. Research has shown improved results relative to the RK4 method. It can be said that the NN approach shows promising results, with the advantage of continuous estimation and improved accuracy over RK4.

  18. Employing Theories Far beyond Their Limits - Linear Dichroism Theory.

    PubMed

    Mayerhöfer, Thomas G

    2018-05-15

    Using linearly polarized light, it is possible in the case of ordered structures, such as stretched polymers or single crystals, to determine the orientation of the transition moments of electronic and vibrational transitions. This not only helps to resolve overlapping bands, but also to assign the symmetry species of the transitions and to elucidate the structure. To perform spectral evaluation quantitatively, an approach often called "Linear Dichroism Theory" is used. This approach links the relative orientation of the transition moment and polarization direction to the quantity absorbance. This linkage is highly questionable for several reasons. First of all, absorbance is a quantity that is by its definition not compatible with Maxwell's equations. Furthermore, absorbance seems not to be the quantity which is generally compatible with linear dichroism theory. In addition, linear dichroism theory disregards that it is not only the angle between transition moment and polarization direction, but also the angle between sample surface and transition moment, that influences band shape and intensity. Accordingly, the often invoked "magic angle" has never existed, and the orientation distribution influences spectra to a much higher degree than it would if linear dichroism theory held strictly. A last point that is completely ignored by linear dichroism theory is the fact that partially oriented or randomly oriented samples usually consist of ordered domains. It is their size relative to the wavelength of light that can also greatly influence a spectrum. All these findings can help to elucidate orientation to a much higher degree by optical methods than currently thought possible by the users of linear dichroism theory. Hence, it is the goal of this contribution to point out these shortcomings of linear dichroism theory to its users, to stimulate efforts to overcome the long-lasting stagnation of this important field. © 2018 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  19. Estimation and Analysis of Nonlinear Stochastic Systems. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Marcus, S. I.

    1975-01-01

    The algebraic and geometric structures of certain classes of nonlinear stochastic systems were exploited in order to obtain useful stability and estimation results. The class of bilinear stochastic systems (or linear systems with multiplicative noise) was discussed. The stochastic stability of bilinear systems driven by colored noise was considered. Approximate methods for obtaining sufficient conditions for the stochastic stability of bilinear systems evolving on general Lie groups were discussed. Two classes of estimation problems involving bilinear systems were considered. It was proved that, for systems described by certain types of Volterra series expansions or by certain bilinear equations evolving on nilpotent or solvable Lie groups, the optimal conditional mean estimator consists of a finite dimensional nonlinear set of equations. The theory of harmonic analysis was used to derive suboptimal estimators for bilinear systems driven by white noise which evolve on compact Lie groups or homogeneous spaces.

  20. On the rate of convergence of the alternating projection method in finite dimensional spaces

    NASA Astrophysics Data System (ADS)

    Galántai, A.

    2005-10-01

    Using the results of Smith, Solmon, and Wagner [K. Smith, D. Solmon, S. Wagner, Practical and mathematical aspects of the problem of reconstructing objects from radiographs, Bull. Amer. Math. Soc. 83 (1977) 1227-1270] and Nelson and Neumann [S. Nelson, M. Neumann, Generalizations of the projection method with application to SOR theory for Hermitian positive semidefinite linear systems, Numer. Math. 51 (1987) 123-141], we derive new estimates for the speed of the alternating projection method and its relaxed version in finite dimensional spaces. These estimates can be computed in at most O(m^3) arithmetic operations, unlike the estimates in the papers mentioned above, which require spectral information. The new and old estimates are equivalent in many practical cases. In cases when the new estimates are weaker, numerical testing indicates that they approximate the original bounds in the papers mentioned above quite well.
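    The quantity these estimates bound — the contraction rate of alternating projections — is easy to exhibit for two lines through the origin in the plane, where the rate per full cycle is exactly cos²θ (a textbook special case, not the paper's general bound):

```python
import numpy as np

# Von Neumann alternating projections between two lines in R^2; the
# contraction factor per full cycle equals cos^2 of the angle between them.
def proj(u, x):
    """Orthogonal projection of x onto the line spanned by u."""
    u = u / np.linalg.norm(u)
    return u * (u @ x)

theta = 0.3
u1 = np.array([1.0, 0.0])
u2 = np.array([np.cos(theta), np.sin(theta)])
x = np.array([3.0, 4.0])
norms = [np.linalg.norm(x)]
for _ in range(10):
    x = proj(u1, proj(u2, x))      # one full cycle: onto line 2, then line 1
    norms.append(np.linalg.norm(x))

rate = norms[5] / norms[4]
print(round(rate, 6), round(np.cos(theta)**2, 6))  # observed vs predicted rate
```

After the first cycle the iterate lies on the first line, so every subsequent cycle shrinks its norm by exactly cosθ·cosθ; the iterates converge to the intersection, here the origin.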

  1. Application of semiempirical electronic structure theory to compute the force generated by a single surface-mounted switchable rotaxane.

    PubMed

    Sohlberg, Karl; Bazargan, Gloria; Angelo, Joseph P; Lee, Choongkeun

    2017-01-01

    Herein we report a study of the switchable [3]rotaxane reported by Huang et al. (Appl Phys Lett 85(22):5391-5393) that can be mounted to a surface to form a nanomechanical, linear, molecular motor. We demonstrate the application of semiempirical electronic structure theory to predict the average and instantaneous force generated by redox-induced ring shuttling. Detailed analysis of the geometric and electronic structure of the system reveals technical considerations essential to the success of the approach. The force is found to be in the 100-200 pN range, consistent with published experimental estimates. Graphical Abstract: A single surface-mounted switchable rotaxane.

  2. Communication theory of quantum systems. Ph.D. Thesis, 1970

    NASA Technical Reports Server (NTRS)

    Yuen, H. P. H.

    1971-01-01

    Communication theory problems incorporating quantum effects for optical-frequency applications are discussed. Under suitable conditions, a unique quantum channel model corresponding to a given classical space-time varying linear random channel is established. A procedure is described by which a proper density-operator representation applicable to any receiver configuration can be constructed directly from the channel output field. Some examples illustrating the application of our methods to the development of optical quantum channel representations are given. Optimizations of communication system performance under different criteria are considered. In particular, certain necessary and sufficient conditions on the optimal detector in M-ary quantum signal detection are derived. Some examples are presented. Parameter estimation and channel capacity are discussed briefly.

  3. A Linear Theory for Inflatable Plates of Arbitrary Shape

    NASA Technical Reports Server (NTRS)

    McComb, Harvey G., Jr.

    1961-01-01

    A linear small-deflection theory is developed for the elastic behavior of inflatable plates of which Airmat is an example. Included in the theory are the effects of a small linear taper in the depth of the plate. Solutions are presented for some simple problems in the lateral deflection and vibration of constant-depth rectangular inflatable plates.

  4. Ocean tides for satellite geodesy

    NASA Technical Reports Server (NTRS)

    Dickman, S. R.

    1990-01-01

    Spherical harmonic tidal solutions have been obtained at the frequencies of the 32 largest luni-solar tides using prior theory of the author. That theory was developed for turbulent, nonglobal, self-gravitating, and loading oceans possessing realistic bathymetry and linearized bottom friction; the oceans satisfy no-flow boundary conditions at coastlines. In this theory the eddy viscosity and bottom drag coefficients are treated as spatially uniform. Comparison of the predicted degree-2 components of the Mf, P1, and M2 tides with those from numerical and satellite-based tide models allows the ocean friction parameters to be estimated at long and short periods. Using the 32 tide solutions, the frequency dependence of tidal admittance is investigated, and the validity of sideband tide models used in satellite orbit analysis is examined. The implications of admittance variability for oceanic resonances are also explored.

  5. Controlling the non-linear intracavity dynamics of large He-Ne laser gyroscopes

    NASA Astrophysics Data System (ADS)

    Cuccato, D.; Beghi, A.; Belfi, J.; Beverini, N.; Ortolan, A.; Di Virgilio, A.

    2014-02-01

    A model based on Lamb's theory of gas lasers is applied to a He-Ne ring laser (RL) gyroscope to estimate and remove the laser dynamics contribution from the rotation measurements. The intensities of the counter-propagating laser beams exiting one cavity mirror are continuously observed together with a monitor of the laser population inversion. These observables, once properly calibrated with a dedicated procedure, allow us to estimate cold cavity and active medium parameters driving the main part of the non-linearities of the system. The quantitative estimation of intrinsic non-reciprocal effects due to cavity and active medium non-linear coupling plays a key role in testing fundamental symmetries of space-time with RLs. The parameter identification and noise subtraction procedure has been verified by means of a Monte Carlo study of the system, and experimentally tested on the G-PISA RL oriented with the normal to the ring plane almost parallel to the Earth's rotation axis. In this configuration the Earth's rotation rate provides the maximum Sagnac effect while the contribution of the orientation error is reduced to a minimum. After the subtraction of laser dynamics by a Kalman filter, the relative systematic errors of G-PISA reduce from 50 to 5 parts in 10^3 and can be attributed to the residual uncertainties on geometrical scale factor and orientation of the ring.
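    A minimal scalar Kalman filter of the kind used for such noise subtraction. The model below — a constant rotation rate observed through additive white noise, with assumed noise variances — is a deliberately stripped-down illustration; the actual filter also carries the calibrated cavity and active-medium parameters:

```python
import numpy as np

rng = np.random.default_rng(3)
omega_true = 7.29e-5          # Earth rotation rate, rad/s
q, r_var = 1e-16, 1e-10       # assumed process / measurement noise variances
x, P = 0.0, 1.0               # state estimate and its variance
for _ in range(2000):
    z = omega_true + rng.normal(scale=r_var**0.5)   # noisy Sagnac reading
    P = P + q                                       # predict
    K = P / (P + r_var)                             # Kalman gain
    x = x + K * (z - x)                             # update
    P = (1 - K) * P
print(f"{x:.2e}")   # ≈ 7.29e-05
```

The small process-noise variance keeps the steady-state gain nonzero, so the filter can still track slow drifts instead of freezing on an average.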

  6. A statistical methodology for estimating transport parameters: Theory and applications to one-dimensional advective-dispersive systems

    USGS Publications Warehouse

    Wagner, Brian J.; Gorelick, Steven M.

    1986-01-01

    A simulation nonlinear multiple-regression methodology for estimating parameters that characterize the transport of contaminants is developed and demonstrated. Finite difference contaminant transport simulation is combined with a nonlinear weighted least squares multiple-regression procedure. The technique provides optimal parameter estimates and gives statistics for assessing the reliability of these estimates under certain general assumptions about the distributions of the random measurement errors. Monte Carlo analysis is used to estimate parameter reliability for a hypothetical homogeneous soil column for which concentration data contain large random measurement errors. The value of data collected spatially versus data collected temporally was investigated for estimation of velocity, dispersion coefficient, effective porosity, first-order decay rate, and zero-order production. The use of spatial data gave estimates that were 2–3 times more reliable than estimates based on temporal data for all parameters except velocity. Comparison of estimated linear and nonlinear confidence intervals based upon Monte Carlo analysis showed that the linear approximation is poor for the dispersion coefficient and zero-order production coefficient when data are collected over time. In addition, examples demonstrate transport parameter estimation for two real one-dimensional systems. First, the longitudinal dispersivity and effective porosity of an unsaturated soil are estimated using laboratory column data. We compare the reliability of estimates based upon data from individual laboratory experiments versus estimates based upon pooled data from several experiments. Second, the simulation nonlinear regression procedure is extended to include an additional governing equation that describes delayed storage during contaminant transport. The model is applied to analyze the trends, variability, and interrelationship of parameters in a mountain stream in northern California.
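    The core simulate-then-regress loop can be sketched with the analytic one-dimensional pulse solution and a brute-force least-squares search. The Green's-function model, the grid search, and all constants are stand-ins for the paper's finite-difference simulation and weighted nonlinear regression:

```python
import numpy as np

rng = np.random.default_rng(4)
# Analytic 1D advection-dispersion response to an instantaneous unit pulse.
def model(x, t, v, D):
    return np.exp(-(x - v*t)**2 / (4*D*t)) / np.sqrt(4*np.pi*D*t)

# Synthetic noisy spatial observations at a single time.
x = np.linspace(0.0, 20.0, 80)
t = 5.0
v_true, D_true = 2.0, 0.5
obs = model(x, t, v_true, D_true) + rng.normal(scale=0.002, size=x.size)

# Nonlinear least squares by exhaustive grid search over (v, D).
vs = np.linspace(1.0, 3.0, 101)
Ds = np.linspace(0.1, 1.0, 91)
sse = [(v, D, np.sum((obs - model(x, t, v, D))**2)) for v in vs for D in Ds]
v_hat, D_hat, _ = min(sse, key=lambda s: s[2])
print(v_hat, D_hat)   # grid point near (2.0, 0.5)
```

A Gauss-Newton or Levenberg-Marquardt step would replace the grid search in practice; the grid keeps the sketch dependency-free and makes the least-squares surface explicit.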

  7. Broadband implementation of coprime linear microphone arrays for direction of arrival estimation.

    PubMed

    Bush, Dane; Xiang, Ning

    2015-07-01

    Coprime arrays represent a form of sparse sensing which can achieve narrow beams using relatively few elements, exceeding the spatial Nyquist sampling limit. The purpose of this paper is to expand on and experimentally validate coprime array theory in an acoustic implementation. Two nested sparse uniform linear subarrays with coprime numbers of elements (M and N) each produce grating lobes that overlap with one another completely in just one direction. When the subarray outputs are combined it is possible to retain the shared beam while mostly canceling the other superfluous grating lobes. In this way a small number of microphones (N+M-1) creates a narrow beam at higher frequencies, comparable to a densely populated uniform linear array of MN microphones. In this work, beampatterns are simulated for a range of single frequencies, as well as bands of frequencies. Narrowband experimental beampatterns are shown to correspond with simulated results even at frequencies other than the array's design frequency. Narrowband side lobe locations are shown to correspond to the theoretical values. Side lobes in the directional pattern are mitigated by increasing the bandwidth of the analyzed signals. Direction of arrival estimation is also implemented for two simultaneous noise sources in a free-field condition.
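The product-processing idea can be reproduced numerically in a few lines. The sketch below is an assumption-laden toy, not the authors' configuration: element counts M = 4 and N = 5 and half-wavelength-based spacings are chosen for illustration. Each subarray's grating lobes land on nulls of the other, so only the shared broadside beam survives the product.

```python
import numpy as np

def subarray_pattern(n_elem, spacing_wl, theta, theta0=0.0):
    # normalized beampattern of a uniform linear array with element
    # positions k * spacing_wl (in wavelengths), steered to theta0
    pos = np.arange(n_elem) * spacing_wl
    phase = 2*np.pi * pos[:, None] * (np.sin(theta) - np.sin(theta0))
    return np.abs(np.exp(1j*phase).sum(axis=0)) / n_elem

M, N = 4, 5                                  # coprime element counts
theta = np.linspace(-np.pi/2, np.pi/2, 2001)
bM = subarray_pattern(M, N*0.5, theta)       # M elements spaced N*lambda/2
bN = subarray_pattern(N, M*0.5, theta)       # N elements spaced M*lambda/2
b = bM * bN  # grating lobes coincide only at broadside; shared beam survives
```

The combined pattern `b` peaks at broadside with a beamwidth comparable to an MN-element dense array, while the residual side lobes stay well below the main beam.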

  8. Estimating epidemic arrival times using linear spreading theory

    NASA Astrophysics Data System (ADS)

    Chen, Lawrence M.; Holzer, Matt; Shapiro, Anne

    2018-01-01

    We study the dynamics of a spatially structured model of worldwide epidemics and formulate predictions for arrival times of the disease at any city in the network. The model is composed of a system of ordinary differential equations describing a meta-population susceptible-infected-recovered compartmental model defined on a network where each node represents a city and the edges represent the flight paths connecting cities. Making use of the linear determinacy of the system, we consider spreading speeds and arrival times in the system linearized about the unstable disease-free state and compare these to arrival times in the nonlinear system. Two predictions are presented. The first is based upon expansion of the heat kernel for the linearized system. The second assumes that the dominant transmission pathway between any two cities can be approximated by a one-dimensional lattice or a homogeneous tree and gives a uniform prediction for arrival times independent of the specific network features. We test these predictions on a real network describing worldwide airline traffic.

  9. On the Stationarity of Multiple Autoregressive Approximants: Theory and Algorithms

    DTIC Science & Technology

    1976-08-01

    … Hannan and Terrell (1972) consider problems of a similar nature. Efficient estimates of A(1), …, A(p) … [References, fragmentary:] "Autoregressive model fitting for control," Ann. Inst. Statist. Math., 23, 163-180; Hannan, E. J. (1970), Multiple Time Series, New York: John Wiley; Hannan, E. J. and Terrell, R. D. (1972), "Time series regression with linear constraints," International Economic Review, 13, 189-200; Masani, P. …

  10. An ab initio study of HCuCO

    NASA Technical Reports Server (NTRS)

    Bauschlicher, Charles W., Jr.

    1994-01-01

    HCuCO is studied using a large Gaussian basis set at the coupled cluster singles and doubles level of theory, including a perturbational estimate of the connected triples (CCSD(T)). In contrast with CuCO, HCuCO is linear. The Cu-CO bond in HCuCO is significantly stronger than in CuCO. These differences between HCuCO and CuCO are discussed in terms of the Cu-H bond polarizing the Cu 4s electron away from the CO.

  11. Flows of dioxins and furans in coastal food webs: inverse modeling, sensitivity analysis, and applications of linear system theory.

    PubMed

    Saloranta, Tuomo M; Andersen, Tom; Naes, Kristoffer

    2006-01-01

    Rate constant bioaccumulation models are applied to simulate the flow of polychlorinated dibenzo-p-dioxins and dibenzofurans (PCDD/Fs) in the coastal marine food web of Frierfjorden, a contaminated fjord in southern Norway. We apply two different parameterizations of the rate constants in the model, global sensitivity analysis of the models using the Extended Fourier Amplitude Sensitivity Test (Extended FAST) method, and results from general linear system theory, in order to obtain more thorough insight into the system's behavior and the flow pathways of the PCDD/Fs. We calibrate our models against observed body concentrations of PCDD/Fs in the food web of Frierfjorden. Differences between the predictions from the two models (using the same forcing and parameter values) are of the same magnitude as their individual deviations from observations, and the models can be said to perform about equally well in our case. Sensitivity analysis indicates that the success or failure of the models in predicting the PCDD/F concentrations in the food web organisms depends highly on adequate estimation of the truly dissolved concentrations in water and sediment pore water. We discuss the pros and cons of such models in understanding and estimating the present and future concentrations and bioaccumulation of persistent organic pollutants in aquatic food webs.

  12. Status of linear boundary-layer stability and the e to the nth method, with emphasis on swept-wing applications

    NASA Technical Reports Server (NTRS)

    Hefner, J. N.; Bushnell, D. M.

    1980-01-01

    The state of the art in applying linear stability theory and the e to the nth power method to transition prediction and laminar flow control design is summarized, with analyses of previously published low-disturbance, swept-wing data presented. For any set of transition data with similar stream disturbance levels and spectra, the e to the nth power method for estimating the beginning of transition works reasonably well; however, the value of n can vary significantly, depending upon variations in disturbance field or receptivity. Where disturbance levels are high, the values of n are appreciably below the usual average value of 9 to 10 obtained for relatively low disturbance levels. It is recommended that the design of laminar flow control systems be based on conservative estimates of n and that, in considering the values of n obtained from different analytical approaches or investigations, the designer explore the various assumptions which entered into the analyses.

  13. Methods of sequential estimation for determining initial data in numerical weather prediction. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Cohn, S. E.

    1982-01-01

    Numerical weather prediction (NWP) is an initial-value problem for a system of nonlinear differential equations, in which initial values are known incompletely and inaccurately. Observational data available at the initial time must therefore be supplemented by data available prior to the initial time, a problem known as meteorological data assimilation. A further complication in NWP is that solutions of the governing equations evolve on two different time scales, a fast one and a slow one, whereas fast scale motions in the atmosphere are not reliably observed. This leads to the so-called initialization problem: initial values must be constrained to result in a slowly evolving forecast. The theory of estimation of stochastic dynamic systems provides a natural approach to such problems. For linear stochastic dynamic models, the Kalman-Bucy (KB) sequential filter is the optimal data assimilation method; for linear models, the optimal combined data assimilation-initialization method is a modified version of the KB filter.
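The forecast/analysis cycle of a KB-type sequential filter is easy to see in the scalar discrete-time case. The sketch below is a generic illustration, not an NWP model: the dynamics coefficient, process noise, and observation noise values are all assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
a, q, r = 0.95, 0.1, 0.5            # dynamics, process var, obs var (illustrative)

# simulate a scalar linear stochastic model and noisy observations of it
x, xs, ys = 1.0, [], []
for _ in range(200):
    x = a*x + rng.normal(scale=np.sqrt(q)); xs.append(x)
    ys.append(x + rng.normal(scale=np.sqrt(r)))

# sequential filter: forecast with the model, then update with the observation
xhat, p, est = 0.0, 1.0, []
for y in ys:
    xhat, p = a*xhat, a*a*p + q              # forecast step
    k = p / (p + r)                          # Kalman gain
    xhat, p = xhat + k*(y - xhat), (1-k)*p   # analysis (update) step
    est.append(xhat)

mse_filter = np.mean((np.array(est) - np.array(xs))**2)
mse_raw = np.mean((np.array(ys) - np.array(xs))**2)
```

Blending the model forecast with each observation, weighted by the gain, yields estimates with markedly lower error than the raw observations alone.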

  14. A simple model of ultrasound propagation in a cavitating liquid. Part I: Theory, nonlinear attenuation and traveling wave generation.

    PubMed

    Louisnard, O

    2012-01-01

    The bubbles involved in sonochemistry and other applications of cavitation oscillate inertially. A correct estimation of the wave attenuation in such bubbly media requires a realistic estimation of the power dissipated by the oscillation of each bubble, by thermal diffusion in the gas and viscous friction in the liquid. Both quantities are calculated numerically for a single inertial bubble driven at 20 kHz, and are found to be several orders of magnitude larger than the linear prediction. Viscous dissipation is found to be the predominant cause of energy loss for sufficiently small bubbles. Then, the classical nonlinear Caflish equations describing the propagation of acoustic waves in a bubbly liquid are recast and simplified conveniently. The main harmonic part of the sound field is found to fulfill a nonlinear Helmholtz equation, where the imaginary part of the squared wave number is directly correlated with the energy lost by a single bubble. For low acoustic driving, linear theory is recovered, but for larger drivings, namely above the Blake threshold, the attenuation coefficient is found to be more than 3 orders of magnitude larger than the linear prediction. A huge attenuation of the wave is thus expected in regions where inertial bubbles are present, which is confirmed by numerical simulations of the nonlinear Helmholtz equation in a 1D standing wave configuration. The expected strong attenuation is not only observed but, furthermore, the examination of the phase between the pressure field and its gradient clearly demonstrates that a traveling wave appears in the medium. Copyright © 2011 Elsevier B.V. All rights reserved.

  15. Redshift-space distortions around voids

    NASA Astrophysics Data System (ADS)

    Cai, Yan-Chuan; Taylor, Andy; Peacock, John A.; Padilla, Nelson

    2016-11-01

    We have derived estimators for the linear growth rate of density fluctuations using the cross-correlation function (CCF) of voids and haloes in redshift space. In linear theory, this CCF contains only monopole and quadrupole terms. At scales greater than the void radius, linear theory is a good match to voids traced out by haloes; small-scale random velocities are unimportant at these radii, only tending to cause small and often negligible elongation of the CCF near its origin. By extracting the monopole and quadrupole from the CCF, we measure the linear growth rate without prior knowledge of the void profile or velocity dispersion. We recover the linear growth parameter β to 9 per cent precision from an effective volume of 3 (h-1 Gpc)3 using voids with radius > 25 h-1 Mpc. Smaller voids are predominantly sub-voids, which may be more sensitive to the random velocity dispersion; they introduce noise and do not help to improve measurements. Adding velocity dispersion as a free parameter allows us to use information at radii as small as half of the void radius. The uncertainty on β is reduced to 5 per cent. Voids show diverse shapes in redshift space, and can appear either elongated or flattened along the line of sight. This can be explained by the competing amplitudes of the local density contrast, plus the radial velocity profile and its gradient. The distortion pattern is therefore determined solely by the void profile and is different for void-in-cloud and void-in-void. This diversity of redshift-space void morphology complicates measurements of the Alcock-Paczynski effect using voids.
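Extracting the monopole and quadrupole from a correlation function is a Legendre projection, xi_ell(s) = (2*ell+1)/2 * Integral of xi(s, mu) P_ell(mu) over mu from -1 to 1. The sketch below demonstrates this on a toy xi(s, mu) built from arbitrary illustrative profiles (the grids and amplitudes are assumptions, not the paper's measurements).

```python
import numpy as np

def legendre_multipole(xi_s_mu, mu, ell):
    # xi_ell(s) = (2*ell + 1)/2 * integral_{-1}^{1} xi(s, mu) P_ell(mu) dmu
    P = {0: np.ones_like(mu), 2: 0.5*(3*mu**2 - 1)}[ell]
    dmu = mu[1] - mu[0]
    return (2*ell + 1)/2 * (xi_s_mu * P).sum(axis=-1) * dmu

# midpoint mu grid and a toy CCF: xi(s, mu) = xi0(s) + xi2(s) * P2(mu)
n = 4000
mu = -1 + (np.arange(n) + 0.5) * (2.0/n)
s = np.linspace(10.0, 50.0, 5)
xi0 = 1.0 / s**2                     # illustrative monopole profile
xi2 = -0.5 / s**2                    # illustrative quadrupole profile
xi = xi0[:, None] + xi2[:, None] * 0.5*(3*mu**2 - 1)

xi0_hat = legendre_multipole(xi, mu, 0)   # recovers xi0
xi2_hat = legendre_multipole(xi, mu, 2)   # recovers xi2
```

Because Legendre polynomials are orthogonal on [-1, 1], the projection cleanly separates the two terms that linear theory predicts for the CCF.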

  16. Quasi-linear theory via the cumulant expansion approach

    NASA Technical Reports Server (NTRS)

    Jones, F. C.; Birmingham, T. J.

    1974-01-01

    The cumulant expansion technique of Kubo was used to derive an integro-differential equation for f, the average one-particle distribution function for particles being accelerated by electric and magnetic fluctuations of a general nature. For a very restricted class of fluctuations, the f equation degenerates exactly to a differential equation of Fokker-Planck type. Quasi-linear theory, including the adiabatic assumption, is an exact theory for this limited class of fluctuations. For more physically realistic fluctuations, however, quasi-linear theory is at best approximate.

  17. Flatness-based control and Kalman filtering for a continuous-time macroeconomic model

    NASA Astrophysics Data System (ADS)

    Rigatos, G.; Siano, P.; Ghosh, T.; Busawon, K.; Binns, R.

    2017-11-01

    The article proposes flatness-based control for a nonlinear macro-economic model of the UK economy. The differential flatness properties of the model are proven. This makes it possible to introduce a transformation (diffeomorphism) of the system's state variables and to express the state-space description of the model in the linear canonical (Brunovsky) form, in which both the feedback control and the state estimation problem can be solved. For the linearized equivalent model of the macroeconomic system, stabilizing feedback control can be achieved using pole placement methods. Moreover, to implement stabilizing feedback control of the system by measuring only a subset of its state vector elements, the Derivative-free nonlinear Kalman Filter is used. This consists of the Kalman Filter recursion applied on the linearized equivalent model of the financial system and of an inverse transformation that is based again on differential flatness theory. The asymptotic stability properties of the control scheme are confirmed.

  18. Ion radial diffusion in an electrostatic impulse model for stormtime ring current formation

    NASA Technical Reports Server (NTRS)

    Chen, Margaret W.; Schulz, Michael; Lyons, Larry R.; Gorney, David J.

    1992-01-01

    Two refinements to the quasi-linear theory of ion radial diffusion are proposed and examined analytically with simulations of particle trajectories. The resonance-broadening correction by Dungey (1965) is applied to the quasi-linear diffusion theory by Faelthammar (1965) for an individual model storm. Quasi-linear theory is then applied to the mean diffusion coefficients resulting from simulations of particle trajectories in 20 model storms. The correction for drift-resonance broadening results in quasi-linear diffusion coefficients with discrepancies from the corresponding simulated values that are reduced by a factor of about 3. Further reductions in the discrepancies are noted following the averaging of the quasi-linear diffusion coefficients, the simulated coefficients, and the resonance-broadened coefficients for the 20 storms. Quasi-linear theory provides good descriptions of particle transport for a single storm but performs even better in conjunction with the present ensemble-averaging.

  19. Radial orbit error reduction and sea surface topography determination using satellite altimetry

    NASA Technical Reports Server (NTRS)

    Engelis, Theodossios

    1987-01-01

    A method is presented in satellite altimetry that attempts to simultaneously determine the geoid and sea surface topography with minimum wavelengths of about 500 km and to reduce the radial orbit error caused by geopotential errors. The modeling of the radial orbit error is made using the linearized Lagrangian perturbation theory. Secular and second order effects are also included. After a rather extensive validation of the linearized equations, alternative expressions of the radial orbit error are derived. Numerical estimates for the radial orbit error and geoid undulation error are computed using the differences of two geopotential models as potential coefficient errors, for a SEASAT orbit. To provide statistical estimates of the radial distances and the geoid, a covariance propagation is made based on the full geopotential covariance. Accuracy estimates for the SEASAT orbits are given which agree quite well with already published results. Observation equations are developed using sea surface heights and crossover discrepancies as observables. A minimum variance solution with prior information provides estimates of parameters representing the sea surface topography and corrections to the gravity field that is used for the orbit generation. The simulation results show that the method can be used to effectively reduce the radial orbit error and recover the sea surface topography.

  20. Newton's method: A link between continuous and discrete solutions of nonlinear problems

    NASA Technical Reports Server (NTRS)

    Thurston, G. A.

    1980-01-01

    Newton's method for nonlinear mechanics problems replaces the governing nonlinear equations by an iterative sequence of linear equations. When the linear equations are linear differential equations, the equations are usually solved by numerical methods. The iterative sequence in Newton's method can exhibit poor convergence properties when the nonlinear problem has multiple solutions for a fixed set of parameters, unless the iterative sequences are aimed at solving for each solution separately. The theory of the linear differential operators is often a better guide for solution strategies in applying Newton's method than the theory of linear algebra associated with the numerical analogs of the differential operators. In fact, the theory for the differential operators can suggest the choice of numerical linear operators. In this paper the method of variation of parameters from the theory of linear ordinary differential equations is examined in detail in the context of Newton's method to demonstrate how it might be used as a guide for numerical solutions.
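The point about multiple solutions and aiming the iterative sequence can be seen already in the scalar case: Newton's method replaces f(x) = 0 by the linear update x_{n+1} = x_n - f(x_n)/f'(x_n), and the starting guess selects which solution the sequence converges to. This generic sketch is illustrative, not the paper's mechanics problem.

```python
def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    # solve f(x) = 0 by iterating the linearization of f about x_n
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("Newton iteration did not converge")

f = lambda x: x**2 - 2.0          # two solutions: +sqrt(2) and -sqrt(2)
fp = lambda x: 2.0*x

root_pos = newton(f, fp, 1.0)     # aimed at the positive solution
root_neg = newton(f, fp, -1.0)    # aimed at the negative solution
```

Starting near one root converges quadratically to that root; a starting guess between the two basins is what produces the poor convergence the abstract warns about.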

  1. Three-dimensional analysis of magnetometer array data

    NASA Technical Reports Server (NTRS)

    Richmond, A. D.; Baumjohann, W.

    1984-01-01

    A technique is developed for mapping magnetic variation fields in three dimensions using data from an array of magnetometers, based on the theory of optimal linear estimation. The technique is applied to data from the Scandinavian Magnetometer Array. Estimates of the spatial power spectra for the internal and external magnetic variations are derived, which in turn provide estimates of the spatial autocorrelation functions of the three magnetic variation components. Statistical errors involved in mapping the external and internal fields are quantified and displayed over the mapping region. Examples of field mapping and of separation into external and internal components are presented. A comparison between the three-dimensional field separation and a two-dimensional separation from a single chain of stations shows that significant differences can arise in the inferred internal component.

  2. Cosmological Perturbation Theory and the Spherical Collapse model - I. Gaussian initial conditions

    NASA Astrophysics Data System (ADS)

    Fosalba, Pablo; Gaztanaga, Enrique

    1998-12-01

    We present a simple and intuitive approximation for solving the perturbation theory (PT) of small cosmic fluctuations. We consider only the spherically symmetric or monopole contribution to the PT integrals, which yields the exact result for tree-graphs (i.e. at leading order). We find that the non-linear evolution in Lagrangian space is then given by a simple local transformation over the initial conditions, although it is not local in Eulerian space. This transformation is found to be described by the spherical collapse (SC) dynamics, as it is the exact solution in the shearless (and therefore local) approximation in Lagrangian space. Taking advantage of this property, it is straightforward to derive the one-point cumulants, xi_J, for both the unsmoothed and smoothed density fields to arbitrary order in the perturbative regime. To leading-order this reproduces, and provides us with a simple explanation for, the exact results obtained by Bernardeau. We then show that the SC model leads to accurate estimates for the next corrective terms when compared with the results derived in the exact perturbation theory making use of the loop calculations. The agreement is within a few per cent for the hierarchical ratios S_J = xi_J/xi_2^(J-1). We compare our analytic results with N-body simulations, which turn out to be in very good agreement up to scales where sigma~1. A similar treatment is presented to estimate higher order corrections in the Zel'dovich approximation. These results represent a powerful and readily usable tool to produce analytical predictions that describe the gravitational clustering of large-scale structure in the weakly non-linear regime.

  3. New formulations for tsunami runup estimation

    NASA Astrophysics Data System (ADS)

    Kanoglu, U.; Aydin, B.; Ceylan, N.

    2017-12-01

    We evaluate shoreline motion and maximum runup in two ways. First, we use the linear shallow water-wave equations over a sloping beach and solve them as an initial-boundary value problem, similar to the nonlinear solution of Aydın and Kanoglu (2017, Pure Appl. Geophys., https://doi.org/10.1007/s00024-017-1508-z). The methodology we present here is simple; it involves eigenfunction expansion and, hence, avoids integral transform techniques. We then use several different types of initial wave profiles with and without initial velocity, estimate shoreline properties, and confirm the classical runup invariance between linear and nonlinear theories. Second, we use the nonlinear shallow water-wave solution of Kanoglu (2004, J. Fluid Mech. 513, 363-372) to estimate maximum runup. Kanoglu (2004) presented a simple integral solution for the nonlinear shallow water-wave equations using the classical Carrier and Greenspan transformation, and further reduced shoreline position and velocity to a simpler integral formulation. In addition, Tinti and Tonini (2005, J. Fluid Mech. 535, 33-64) defined an initial condition in a form very convenient for near-shore events. We use a Tinti and Tonini (2005) type initial condition in Kanoglu's (2004) shoreline integral solution, which leads to further simplified estimates for shoreline position and velocity, i.e., an algebraic relation. We then use this algebraic runup estimate to investigate the effect of earthquake source parameters on maximum runup and present results similar to those of Sepulveda and Liu (2016, Coast. Eng. 112, 57-68).

  4. Going through a quantum phase

    NASA Technical Reports Server (NTRS)

    Shapiro, Jeffrey H.

    1992-01-01

    Phase measurements on a single-mode radiation field are examined from a system-theoretic viewpoint. Quantum estimation theory is used to establish the primacy of the Susskind-Glogower (SG) phase operator; its phase eigenkets generate the probability operator measure (POM) for maximum likelihood phase estimation. A commuting observables description for the SG-POM on a signal × apparatus state space is derived. It is analogous to the signal-band × image-band formulation for optical heterodyne detection. Because heterodyning realizes the annihilation operator POM, this analogy may help realize the SG-POM. The wave function representation associated with the SG-POM is then used to prove the duality between the phase measurement and the number operator measurement, from which a number-phase uncertainty principle is obtained, via Fourier theory, without recourse to linearization. Fourier theory is also employed to establish the principle of number-ket causality, leading to a Paley-Wiener condition that must be satisfied by the phase-measurement probability density function (PDF) for a single-mode field in an arbitrary quantum state. Finally, a two-mode phase measurement is shown to afford phase-conjugate quantum communication at zero error probability with finite average photon number. Application of this construct to interferometric precision measurements is briefly discussed.

  5. A general theory of linear cosmological perturbations: scalar-tensor and vector-tensor theories

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lagos, Macarena; Baker, Tessa; Ferreira, Pedro G.

    We present a method for parametrizing linear cosmological perturbations of theories of gravity, around homogeneous and isotropic backgrounds. The method is sufficiently general and systematic that it can be applied to theories with any degrees of freedom (DoFs) and arbitrary gauge symmetries. In this paper, we focus on scalar-tensor and vector-tensor theories, invariant under linear coordinate transformations. In the case of scalar-tensor theories, we use our framework to recover the simple parametrizations of linearized Horndeski and "Beyond Horndeski" theories, and also find higher-derivative corrections. In the case of vector-tensor theories, we first construct the most general quadratic action for perturbations that leads to second-order equations of motion, which propagates two scalar DoFs. Then we specialize to the case in which the vector field is time-like (à la Einstein-Aether gravity), where the theory only propagates one scalar DoF. As a result, we identify the complete forms of the quadratic actions for perturbations, and the number of free parameters that need to be defined, to cosmologically characterize these two broad classes of theories.

  6. A comparison of methods for DPLL loop filter design

    NASA Technical Reports Server (NTRS)

    Aguirre, S.; Hurd, W. J.; Kumar, R.; Statman, J.

    1986-01-01

    Four design methodologies for loop filters for a class of digital phase-locked loops (DPLLs) are presented. The first design maps an optimum analog filter into the digital domain; the second approach designs a filter that minimizes, in discrete time, a weighted combination of the variance of the phase error due to noise and the sum square of the deterministic phase error component; the third method uses Kalman filter estimation theory to design a filter composed of a least squares fading memory estimator and a predictor. The last design relies on classical theory, including rules for the design of compensators. Linear analysis is used throughout the article to compare different designs, and includes stability, steady state performance and transient behavior of the loops. Design methodology is not critical when the loop update rate can be made high relative to loop bandwidth, as the performance approaches that of continuous time. For low update rates, however, the minimization method is significantly superior to the other methods.

  7. Homogenization via the strong-permittivity-fluctuation theory with nonzero depolarization volume

    NASA Astrophysics Data System (ADS)

    Mackay, Tom G.

    2004-08-01

    The depolarization dyadic provides the scattering response of a single inclusion particle embedded within a homogeneous background medium. These dyadics play a central role in formalisms used to estimate the effective constitutive parameters of homogenized composite mediums (HCMs). Conventionally, the inclusion particle is taken to be vanishingly small; this allows the pointwise singularity of the dyadic Green function associated with the background medium to be employed as the depolarization dyadic. A more accurate approach is pursued in this communication by taking into account the nonzero spatial extent of inclusion particles. Depolarization dyadics corresponding to inclusion particles of nonzero volume are incorporated within the strong-permittivity-fluctuation theory (SPFT). The linear dimensions of inclusion particles are assumed to be small relative to the electromagnetic wavelength(s) and the SPFT correlation length. The influence of the size of inclusion particles upon SPFT estimates of the HCM constitutive parameters is investigated for anisotropic dielectric HCMs. In particular, the interplay between correlation length and inclusion size is explored.

  8. Cosmological perturbation theory and the spherical collapse model - II. Non-Gaussian initial conditions

    NASA Astrophysics Data System (ADS)

    Gaztanaga, Enrique; Fosalba, Pablo

    1998-12-01

    In Paper I of this series, we introduced the spherical collapse (SC) approximation in Lagrangian space as a way of estimating the cumulants xi_J of density fluctuations in cosmological perturbation theory (PT). Within this approximation, the dynamics is decoupled from the statistics of the initial conditions, so we are able to present here the cumulants for generic non-Gaussian initial conditions, which can be estimated to arbitrary order including the smoothing effects. The SC model turns out to recover the exact leading-order non-linear contributions up to terms involving non-local integrals of the J-point functions. We argue that for the hierarchical ratios S_J, these non-local terms are subdominant and tend to compensate each other. The resulting predictions show a non-trivial time evolution that can be used to discriminate between models of structure formation. We compare these analytic results with non-Gaussian N-body simulations, which turn out to be in very good agreement up to scales where sigma<~1.

  9. Steering of Frequency Standards by the Use of Linear Quadratic Gaussian Control Theory

    NASA Technical Reports Server (NTRS)

    Koppang, Paul; Leland, Robert

    1996-01-01

    Linear quadratic Gaussian control is a technique that uses Kalman filtering to estimate a state vector used for input into a control calculation. A control correction is calculated by minimizing a quadratic cost function that is dependent on both the state vector and the control amount. Different penalties, chosen by the designer, are assessed by the controller as the state vector and control amount vary from given optimal values. With this feature controllers can be designed to force the phase and frequency differences between two standards to zero either more or less aggressively depending on the application. Data will be used to show how using different parameters in the cost function analysis affects the steering and the stability of the frequency standards.
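The trade-off the abstract describes, between steering aggressively and steering gently, is set by the relative penalties in the quadratic cost. A scalar discrete-time LQR gain makes this concrete; the state-space coefficients and penalty values below are illustrative assumptions, not the frequency-standard model.

```python
def dlqr_gain(a, b, q, r, iters=1000):
    # iterate the scalar discrete algebraic Riccati equation to its fixed
    # point, then return the steady-state feedback gain k (u = -k * x)
    p = q
    for _ in range(iters):
        k = b*p*a / (r + b*p*b)
        p = q + a*p*(a - b*k)
    return k

a, b, q = 1.0, 1.0, 1.0              # integrator-like phase dynamics (toy)
k_aggressive = dlqr_gain(a, b, q, r=0.1)   # cheap control: steer hard
k_gentle = dlqr_gain(a, b, q, r=10.0)      # expensive control: steer softly
```

A small control penalty r drives the phase/frequency error to zero quickly (large gain), while a large r steers more gently; in both cases the closed-loop pole a - b*k stays inside the unit circle.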

  10. Experimental investigation of three-wave interactions of capillary surface-waves

    NASA Astrophysics Data System (ADS)

    Berhanu, Michael; Cazaubiel, Annette; Deike, Luc; Jamin, Timothee; Falcon, Eric

    2014-11-01

    We report experiments studying the non-linear interaction between two crossing wave-trains of gravity-capillary surface waves generated in a closed laboratory tank. Using a capacitive wave gauge and the Diffusive Light Photography method, we detect a third wave of smaller amplitude whose frequency and wavenumber are in agreement with the weakly non-linear triadic resonance interaction mechanism. By performing experiments in stationary and transient regimes and taking into account the viscous dissipation, we estimate directly the growth rate of the resonant mode in comparison with theory. These results confirm, at least qualitatively, and extend earlier experimental results obtained only for a unidirectional wave train. Finally we discuss the relevance of three-wave interaction mechanisms in recent experiments studying capillary wave turbulence.

  11. An algorithm for the numerical solution of linear differential games

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Polovinkin, E S; Ivanov, G E; Balashov, M V

    2001-10-31

    A numerical algorithm for the construction of stable Krasovskii bridges, Pontryagin alternating sets, and also of piecewise program strategies solving two-person linear differential (pursuit or evasion) games on a fixed time interval is developed on the basis of a general theory. The aim of the first player (the pursuer) is to hit a prescribed target (terminal) set by the phase vector of the control system at the prescribed time. The aim of the second player (the evader) is the opposite. A description of numerical algorithms used in the solution of differential games of the type under consideration is given, and estimates of the errors resulting from the approximation of the game sets by polyhedra are presented.

  12. Double power series method for approximating cosmological perturbations

    NASA Astrophysics Data System (ADS)

    Wren, Andrew J.; Malik, Karim A.

    2017-04-01

    We introduce a double power series method for finding approximate analytical solutions for systems of differential equations commonly found in cosmological perturbation theory. The method was set out, in a noncosmological context, by Feshchenko, Shkil' and Nikolenko (FSN) in 1966, and is applicable to cases where perturbations are on subhorizon scales. The FSN method is essentially an extension of the well known Wentzel-Kramers-Brillouin (WKB) method for finding approximate analytical solutions for ordinary differential equations. The FSN method we use is applicable well beyond perturbation theory to solve systems of ordinary differential equations, linear in the derivatives, that also depend on a small parameter, which here we take to be related to the inverse wave-number. We use the FSN method to find new approximate oscillating solutions in linear order cosmological perturbation theory for a flat radiation-matter universe. Together with this model's well-known growing and decaying Mészáros solutions, these oscillating modes provide a complete set of subhorizon approximations for the metric potential, radiation and matter perturbations. Comparison with numerical solutions of the perturbation equations shows that our approximations can be made accurate to within a typical error of 1%, or better. We also set out a heuristic method for error estimation. A Mathematica notebook which implements the double power series method is made available online.

  13. A closed form slug test theory for high permeability aquifers.

    PubMed

    Ostendorf, David W; DeGroot, Don J; Dunaj, Philip J; Jakubowski, Joseph

    2005-01-01

    We incorporate a linear estimate of casing friction into the analytical slug test theory of Springer and Gelhar (1991) for high permeability aquifers. The modified theory elucidates the influence of inertia and casing friction on consistent, closed form equations for the free surface, pressure, and velocity fluctuations for overdamped and underdamped conditions. A consistent, but small, correction for kinetic energy is included as well. A characteristic velocity linearizes the turbulent casing shear stress so that an analytical solution for attenuated, phase shifted pressure fluctuations fits a single parameter (damping frequency) to transducer data from any depth in the casing. Underdamped slug tests of 0.3, 0.6, and 1 m amplitudes at five transducer depths in a 5.1 cm diameter PVC well 21 m deep in the Plymouth-Carver Aquifer yield a consistent hydraulic conductivity of 1.5 × 10^-3 m/s. The Springer and Gelhar (1991) model underestimates the hydraulic conductivity for these tests by as much as 25% by improperly ascribing smooth turbulent casing friction to the aquifer. The match point normalization of Butler (1998) agrees with our fitted hydraulic conductivity, however, when friction is included in the damping frequency. Zurbuchen et al. (2002) use a numerical model to establish a similar sensitivity of hydraulic conductivity to nonlinear casing friction.

  14. Implementation of a channelized Hotelling observer model to assess image quality of x-ray angiography systems.

    PubMed

    Favazza, Christopher P; Fetterly, Kenneth A; Hangiandreou, Nicholas J; Leng, Shuai; Schueler, Beth A

    2015-01-01

    Evaluation of flat-panel angiography equipment through conventional image quality metrics is limited by the scope of standard spatial-domain image quality metrics, such as contrast-to-noise ratio and spatial resolution, or by restricted access to appropriate data to calculate Fourier domain measurements, such as modulation transfer function, noise power spectrum, and detective quantum efficiency. Observer models have been shown capable of overcoming these limitations and are able to comprehensively evaluate medical-imaging systems. We present a spatial domain-based channelized Hotelling observer model to calculate the detectability index (DI) of different-sized disks and to compare the performance of different imaging conditions and angiography systems. When appropriate, changes in DIs were compared to expectations based on the classical Rose model of signal detection to assess linearity of the model with quantum signal-to-noise ratio (SNR) theory. For these experiments, the estimated uncertainty of the DIs was less than 3%, allowing for precise comparison of imaging systems or conditions. For most experimental variables, DI changes were linear with expectations based on quantum SNR theory. DIs calculated for the smallest objects demonstrated nonlinearity with quantum SNR theory due to system blur. Two angiography systems with different detector element sizes were shown to perform similarly across the majority of the detection tasks.
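
The linearity check against quantum SNR theory can be illustrated with a toy calculation. This is a hedged sketch of the classical Rose model only (SNR proportional to contrast, object diameter, and the square root of photon fluence); the function name and all numbers are illustrative, not from the paper.

```python
import math

def rose_snr(contrast, diameter_mm, fluence_per_mm2):
    """Classical Rose-model signal-to-noise ratio for a uniform disk:
    SNR = contrast * diameter * sqrt(photon fluence)."""
    return contrast * diameter_mm * math.sqrt(fluence_per_mm2)

# Quantum-limited scaling: doubling the dose (fluence) should raise
# the SNR, and hence a linear detectability index, by sqrt(2).
base = rose_snr(0.05, 2.0, 1.0e4)
doubled = rose_snr(0.05, 2.0, 2.0e4)
```

A detectability index that tracks this sqrt(2) factor is behaving linearly with quantum SNR theory; departures (as reported for the smallest disks) indicate system blur or other non-quantum effects.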

  15. Assessment of pollutant mean concentrations in the Yangtze estuary based on MSN theory.

    PubMed

    Ren, Jing; Gao, Bing-Bo; Fan, Hai-Mei; Zhang, Zhi-Hong; Zhang, Yao; Wang, Jin-Feng

    2016-12-15

    Reliable assessment of water quality is a critical issue for estuaries. Nutrient concentrations show significant spatial distinctions between areas under the influence of fresh-sea water interaction and anthropogenic effects. For this situation, given the limitations of general mean estimation approaches, a new method for surfaces with non-homogeneity (MSN) was applied to obtain optimal linear unbiased estimates of the mean nutrient concentrations in the study area in the Yangtze estuary from 2011 to 2013. Other mean estimation methods, including block Kriging (BK), simple random sampling (SS) and stratified sampling (ST) inference, were applied simultaneously for comparison. Their performance was evaluated by estimation error. The results show that MSN had the highest accuracy, while SS had the highest estimation error. ST and BK were intermediate in terms of their performance. Thus, MSN is an appropriate method that can be adopted to reduce the uncertainty of mean pollutant estimation in estuaries. Copyright © 2016 Elsevier Ltd. All rights reserved.

  16. A Comparison Between Internal Waves Observed in the Southern Ocean and Lee Wave Generation Theory

    NASA Astrophysics Data System (ADS)

    Nikurashin, M.; Benthuysen, J.; Naveira Garabato, A.; Polzin, K. L.

    2016-02-01

    Direct observations in the Southern Ocean report enhanced internal wave activity and turbulence within a few kilometers above rough bottom topography. The enhancement is co-located with the deep-reaching fronts of the Antarctic Circumpolar Current, suggesting that the internal waves and turbulence are sustained by near-bottom flows interacting with rough topography. Recent numerical simulations confirm that oceanic flows impinging on rough small-scale topography are very effective generators of internal gravity waves and predict vigorous wave radiation, breaking, and turbulence within a kilometer above the bottom. However, a linear lee wave generation theory applied to the observed bottom topography and mean flow characteristics has been shown to overestimate the observed rates of the turbulent energy dissipation. In this study, we compare the linear lee wave theory with the internal wave kinetic energy estimated from finestructure data collected as part of the Diapycnal and Isopycnal Mixing Experiment in the Southern Ocean (DIMES). We show that the observed internal wave kinetic energy levels are generally in agreement with the theory. Consistent with the lee wave theory, the observed internal wave kinetic energy scales quadratically with the mean flow speed, stratification, and topographic roughness. The correlation coefficient between the observed internal wave kinetic energy and mean flow and topography parameters reaches 0.6-0.8 for the 100-800 m vertical wavelengths, consistent with the dominant lee wave wavelengths, and drops to 0.2-0.5 for wavelengths outside this range. The better agreement of the lee wave theory with the observed internal wave kinetic energy than with the observed turbulent energy dissipation suggests remote breaking of internal waves.

  17. An enstrophy-based linear and nonlinear receptivity theory

    NASA Astrophysics Data System (ADS)

    Sengupta, Aditi; Suman, V. K.; Sengupta, Tapan K.; Bhaumik, Swagata

    2018-05-01

    In the present research, a new theory of instability based on enstrophy is presented for incompressible flows. Explaining instability through enstrophy is counter-intuitive, as enstrophy has usually been associated with dissipation for the Navier-Stokes equation (NSE). The developed theory is valid for both linear and nonlinear stages of disturbance growth. A previously developed nonlinear theory of incompressible flow instability based on total mechanical energy, described in the work of Sengupta et al. ["Vortex-induced instability of an incompressible wall-bounded shear layer," J. Fluid Mech. 493, 277-286 (2003)], is used for comparison with the present enstrophy-based theory. The developed equations for disturbance enstrophy and disturbance mechanical energy are derived from the NSE without any simplifying assumptions, in contrast to other classical linear/nonlinear theories. The theory is tested for bypass transition caused by a free stream convecting vortex over a zero pressure gradient boundary layer. We explain the creation of smaller scales in the flow by a cascade of enstrophy, which creates rotationality in general inhomogeneous flows. Linear and nonlinear versions of the theory help explain the vortex-induced instability problem under consideration.

  18. Comparison of Linear Induction Motor Theories for the LIMRV and TLRV Motors

    DOT National Transportation Integrated Search

    1978-01-01

    The Oberretl, Yamamura, and Mosebach theories of the linear induction motor are described and also applied to predict performance characteristics of the TLRV & LIMRV linear induction motors. The effect of finite motor width and length on performance ...

  19. Effects of measurement errors on psychometric measurements in ergonomics studies: Implications for correlations, ANOVA, linear regression, factor analysis, and linear discriminant analysis.

    PubMed

    Liu, Yan; Salvendy, Gavriel

    2009-05-01

    This paper aims to demonstrate the effects of measurement errors on psychometric measurements in ergonomics studies. A variety of sources can cause random measurement errors in ergonomics studies, and these errors can distort virtually every statistic computed and lead investigators to erroneous conclusions. The effects of measurement errors on the five most widely used statistical analysis tools are discussed and illustrated: correlation; ANOVA; linear regression; factor analysis; linear discriminant analysis. It is shown that measurement errors can greatly attenuate correlations between variables, reduce the statistical power of ANOVA, distort (overestimate, underestimate or even change the sign of) regression coefficients, underrate the explanatory contributions of the most important factors in factor analysis, and depreciate the significance of the discriminant function and the discrimination abilities of individual variables in discriminant analysis. The discussion is restricted to subjective scales and survey methods and their reliability estimates. Other methods applied in ergonomics research, such as physical and electrophysiological measurements and chemical and biomedical analysis methods, also have issues of measurement error, but they are beyond the scope of this paper. As there has been increasing interest in the development and testing of theories in ergonomics research, it has become very important for ergonomics researchers to understand the effects of measurement errors on their experimental results, which the authors believe is critical to progress in theory development and cumulative knowledge in the ergonomics field.
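
The attenuation effect on correlations has a classical closed form (Spearman's correction for attenuation): the observed correlation equals the true correlation scaled by the square root of the product of the two measures' reliabilities. A minimal sketch with illustrative numbers:

```python
import math

def attenuated_correlation(true_r, rel_x, rel_y):
    """Spearman's classical attenuation formula: the correlation observed
    between two error-contaminated measures equals the true correlation
    times the square root of the product of the two reliabilities."""
    return true_r * math.sqrt(rel_x * rel_y)

# A true correlation of 0.60 measured with scales of reliability
# 0.7 and 0.8 shrinks to roughly 0.45 in the observed data.
r_obs = attenuated_correlation(0.60, 0.7, 0.8)
```

The same formula, read in reverse, is the disattenuation correction researchers can apply when reliability estimates for both scales are available.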

  20. Linear stability theory and three-dimensional boundary layer transition

    NASA Technical Reports Server (NTRS)

    Spall, Robert E.; Malik, Mujeeb R.

    1992-01-01

    The viewgraphs and discussion of linear stability theory and three-dimensional boundary layer transition are provided. The ability to predict, using analytical tools, the location of boundary layer transition over aircraft-type configurations is of great importance to designers interested in laminar flow control (LFC). The e(sup N) method has proven to be fairly effective in predicting, in a consistent manner, the location of the onset of transition for simple geometries in low disturbance environments. This method provides a correlation between the most amplified single normal mode and the experimental location of the onset of transition. Studies indicate that values of N between 8 and 10 correlate well with the onset of transition. For most previous calculations, the mean flows were restricted to two-dimensional or axisymmetric cases, or have employed simple three-dimensional mean flows (e.g., rotating disk, infinite swept wing, or tapered swept wing with straight isobars). Unfortunately, for flows over general wing configurations, and for nearly all flows over fuselage-type bodies at incidence, the analysis of fully three-dimensional flow fields is required. Results obtained for the linear stability of fully three-dimensional boundary layers formed over both wing and fuselage-type geometries, and for both high and low speed flows, are discussed. When possible, transition estimates from the e(sup N) method are compared to experimentally determined locations. The stability calculations are made using a modified version of the linear stability code COSAL. Mean flows were computed using both Navier-Stokes and boundary-layer codes.

  1. Estimation of nonlinear damping in second order distributed parameter systems

    NASA Technical Reports Server (NTRS)

    Banks, H. T.; Reich, Simeon; Rosen, I. G.

    1989-01-01

    An approximation and convergence theory for the identification of nonlinear damping in abstract wave equations is developed. It is assumed that the unknown dissipation mechanism to be identified can be described by a maximal monotone operator acting on the generalized velocity. The stiffness is assumed to be linear and symmetric. Functional analytic techniques are used to establish that solutions to a sequence of finite dimensional (Galerkin) approximating identification problems in some sense approximate a solution to the original infinite dimensional inverse problem.

  2. Effects of shock on hypersonic boundary layer stability

    NASA Astrophysics Data System (ADS)

    Pinna, F.; Rambaud, P.

    2013-06-01

    The design of hypersonic vehicles requires the estimate of the laminar to turbulent transition location for an accurate sizing of the thermal protection system. Linear stability theory is a fast scientific way to study the problem. Recent improvements in computational capabilities allow computing the flow around a full vehicle instead of using only simplified boundary layer equations. In this paper, the effect of the shock is studied on a mean flow provided by steady Computational Fluid Dynamics (CFD) computations and simplified boundary layer calculations.

  3. Superresolution restoration of an image sequence: adaptive filtering approach.

    PubMed

    Elad, M; Feuer, A

    1999-01-01

    This paper presents a new method based on adaptive filtering theory for superresolution restoration of continuous image sequences. The proposed methodology suggests least squares (LS) estimators which adapt in time, based on adaptive filters, least mean squares (LMS) or recursive least squares (RLS). The adaptation enables the treatment of linear space and time-variant blurring and arbitrary motion, both of them assumed known. The proposed new approach is shown to be of relatively low computational requirements. Simulations demonstrating the superresolution restoration algorithms are presented.
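
The LMS update mentioned above can be sketched in a few lines. This is a generic least-mean-squares system-identification example (all signals, the tap count, and the step size are made up for illustration), not the paper's superresolution algorithm:

```python
import random

def lms(x, d, n_taps=2, mu=0.05):
    """LMS adaptive FIR filter: nudge the weights along the negative
    gradient of the instantaneous squared error e[n] = d[n] - w . x[n]."""
    w = [0.0] * n_taps
    for n in range(n_taps - 1, len(x)):
        window = x[n - n_taps + 1:n + 1][::-1]   # [x[n], x[n-1], ...]
        y = sum(wi * xi for wi, xi in zip(w, window))
        e = d[n] - y                              # instantaneous error
        w = [wi + 2.0 * mu * e * xi for wi, xi in zip(w, window)]
    return w

# Identify a known (hypothetical) 2-tap system h = [0.5, -0.25]
# from noiseless input/output data.
random.seed(0)
x = [random.uniform(-1.0, 1.0) for _ in range(2000)]
h = [0.5, -0.25]
d = [h[0] * x[0]] + [h[0] * x[n] + h[1] * x[n - 1] for n in range(1, len(x))]
w_hat = lms(x, d)
```

In the paper's setting the same adaptation idea is applied to time-variant blur and motion; the sketch above only shows why the per-sample update makes the estimator cheap to run.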

  4. Spatial resolution of the electrical conductance of ionic fluids using a Green-Kubo method.

    PubMed

    Jones, R E; Ward, D K; Templeton, J A

    2014-11-14

    We present a Green-Kubo method to spatially resolve transport coefficients in compositionally heterogeneous mixtures. We develop the underlying theory based on well-known results from mixture theory, Irving-Kirkwood field estimation, and linear response theory. Then, using standard molecular dynamics techniques, we apply the methodology to representative systems. With a homogeneous salt water system, where the expectation of the distribution of conductivity is clear, we demonstrate the sensitivities of the method to system size, and other physical and algorithmic parameters. Then we present a simple model of an electrochemical double layer where we explore the resolution limit of the method. In this system, we observe significant anisotropy in the wall-normal vs. transverse ionic conductances, as well as near wall effects. Finally, we discuss extensions and applications to more realistic systems such as batteries where detailed understanding of the transport properties in the vicinity of the electrodes is of technological importance.

  5. Bayesian or Laplacian inference, entropy and information theory and information geometry in data and signal processing

    NASA Astrophysics Data System (ADS)

    Mohammad-Djafari, Ali

    2015-01-01

    The main object of this tutorial article is first to review the main inference tools using the Bayesian approach, entropy, information theory and their corresponding geometries. This review is focused mainly on the ways these tools have been used in data, signal and image processing. After a short introduction of the different quantities related to the Bayes rule, the entropy and the Maximum Entropy Principle (MEP), relative entropy and the Kullback-Leibler divergence, and Fisher information, we will study their use in different fields of data and signal processing, such as: entropy in source separation, Fisher information in model order selection, different Maximum Entropy based methods in time series spectral estimation and, finally, general linear inverse problems.

  6. Simulation and analysis of chemical release in the ionosphere

    NASA Astrophysics Data System (ADS)

    Gao, Jing-Fan; Guo, Li-Xin; Xu, Zheng-Wen; Zhao, Hai-Sheng; Feng, Jie

    2018-05-01

    Inhomogeneous ionospheric plasma produced by a single-point chemical release has a simple space-time structure and cannot affect radio waves at frequencies above the Very High Frequency (VHF) band. In order to produce a more complicated ionospheric plasma perturbation structure and trigger instability phenomena, a multiple-point chemical release scheme is presented in this paper. The effects of chemical release on low-latitude ionospheric plasma are estimated by linear instability growth rate theory: a high growth rate corresponds to strong irregularities, a high probability of ionospheric scintillation occurrence, and high scintillation intensity over the scintillation duration. The amplitude and phase scintillations at 150 MHz, 400 MHz, and 1000 MHz are calculated based on the theory of multiple phase screens (MPS) for propagation through the disturbed area.

  7. Estimation of the ARNO model baseflow parameters using daily streamflow data

    NASA Astrophysics Data System (ADS)

    Abdulla, F. A.; Lettenmaier, D. P.; Liang, Xu

    1999-09-01

    An approach is described for estimation of baseflow parameters of the ARNO model, using historical baseflow recession sequences extracted from daily streamflow records. This approach allows four of the model parameters to be estimated without rainfall data, and effectively facilitates partitioning of the parameter estimation procedure so that parsimonious search procedures can be used to estimate the remaining storm response parameters separately. Three methods of optimization are evaluated for estimation of the four baseflow parameters: the downhill Simplex (S), Simulated Annealing combined with the Simplex method (SA), and Shuffled Complex Evolution (SCE). These estimation procedures are explored in conjunction with four objective functions: (1) ordinary least squares; (2) ordinary least squares with Box-Cox transformation; (3) ordinary least squares on prewhitened residuals; (4) ordinary least squares applied to prewhitened residuals with Box-Cox transformation. The effects of changing the seed of the random generator for both the SA and SCE methods are also explored, as are the effects of the bounds of the parameters. Although all schemes converge to the same values of the objective function, the SCE method was found to be less sensitive to these issues than the SA and Simplex schemes. Parameter uncertainty and interactions are investigated through estimation of the variance-covariance matrix and confidence intervals. As expected, the parameters were found to be correlated and the covariance matrix non-diagonal. Furthermore, the linearized confidence interval theory failed for about one-fourth of the catchments, while the maximum likelihood theory did not fail for any of the catchments.
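
As a toy version of the recession-based estimation idea, a single linear-reservoir recession constant can be recovered from streamflow alone by ordinary least squares on log-flows. This is a deliberately simplified stand-in (a one-parameter exponential recession with hypothetical numbers), not the ARNO model's actual baseflow parameterization:

```python
import math

def fit_recession(times, flows):
    """OLS on log-flows: ln Q(t) = ln Q0 - k t, so the slope of the
    log-linear fit gives the recession constant k without rainfall data."""
    n = len(times)
    y = [math.log(q) for q in flows]
    t_bar = sum(times) / n
    y_bar = sum(y) / n
    slope = (sum((t - t_bar) * (yi - y_bar) for t, yi in zip(times, y))
             / sum((t - t_bar) ** 2 for t in times))
    return -slope, math.exp(y_bar + slope * (-t_bar) * -1.0 * -1.0)  # (k, Q0)

# Synthetic noiseless recession: k = 0.1 per day, Q0 = 20 m^3/s.
ts = list(range(30))
qs = [20.0 * math.exp(-0.1 * t) for t in ts]
k_hat, q0_hat = fit_recession(ts, qs)
```

Real recession limbs are noisy and overlap with storm response, which is why the paper compares Simplex, SA, and SCE searches over several prewhitened and transformed least-squares objectives rather than a single log-linear fit.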

  8. How does non-linear dynamics affect the baryon acoustic oscillation?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sugiyama, Naonori S.; Spergel, David N., E-mail: nao.s.sugiyama@gmail.com, E-mail: dns@astro.princeton.edu

    2014-02-01

    We study the non-linear behavior of the baryon acoustic oscillation in the power spectrum and the correlation function by decomposing the dark matter perturbations into the short- and long-wavelength modes. The evolution of the dark matter fluctuations can be described as a global coordinate transformation caused by the long-wavelength displacement vector acting on short-wavelength matter perturbations undergoing non-linear growth. Using this feature, we investigate the well known cancellation of the high-k solutions in the standard perturbation theory. While the standard perturbation theory naturally satisfies the cancellation of the high-k solutions, some of the recently proposed improved perturbation theories do not guarantee the cancellation. We show that this cancellation clarifies the success of the standard perturbation theory at the 2-loop order in describing the amplitude of the non-linear power spectrum even at high-k regions. We propose an extension of the standard 2-loop level perturbation theory model of the non-linear power spectrum that more accurately models the non-linear evolution of the baryon acoustic oscillation than the standard perturbation theory. The model consists of simple and intuitive parts: the non-linear evolution of the smoothed power spectrum without the baryon acoustic oscillations and the non-linear evolution of the baryon acoustic oscillations due to the large-scale velocity of dark matter and due to the gravitational attraction between dark matter particles. Our extended model predicts the smoothing parameter of the baryon acoustic oscillation peak at z = 0.35 as ∼7.7 Mpc/h and describes the small non-linear shift in the peak position due to the galaxy random motions.

  9. Azimuthal Seismic Amplitude Variation with Offset and Azimuth Inversion in Weakly Anisotropic Media with Orthorhombic Symmetry

    NASA Astrophysics Data System (ADS)

    Pan, Xinpeng; Zhang, Guangzhi; Yin, Xingyao

    2018-01-01

    Seismic amplitude variation with offset and azimuth (AVOaz) inversion is well known as a popular and pragmatic tool utilized to estimate fracture parameters. A single set of vertical fractures aligned along a preferred horizontal direction embedded in a horizontally layered medium can be considered as an effective long-wavelength orthorhombic medium. Estimation of Thomsen's weak-anisotropy (WA) parameters and fracture weaknesses plays an important role in characterizing the orthorhombic anisotropy in a weakly anisotropic medium. Our goal is to demonstrate an orthorhombic anisotropic AVOaz inversion approach to describe the orthorhombic anisotropy utilizing the observable wide-azimuth seismic reflection data in a fractured reservoir with the assumption of orthorhombic symmetry. Combining Thomsen's WA theory and the linear-slip model, we first derive a perturbation in the stiffness matrix of a weakly anisotropic medium with orthorhombic symmetry under the assumption of small WA parameters and fracture weaknesses. Using the perturbation matrix and scattering function, we then derive an expression for the linearized PP-wave reflection coefficient in terms of P- and S-wave moduli, density, Thomsen's WA parameters, and fracture weaknesses in such an orthorhombic medium, which avoids the complicated nonlinear relationship between the orthorhombic anisotropy and azimuthal seismic reflection data. Incorporating azimuthal seismic data and Bayesian inversion theory, the maximum a posteriori solutions of Thomsen's WA parameters and fracture weaknesses in a weakly anisotropic medium with orthorhombic symmetry are reasonably estimated using a nonlinear iteratively reweighted least-squares strategy, with the constraints of a Cauchy a priori probability distribution and smooth initial models of the model parameters to enhance the inversion resolution.
The synthetic examples containing a moderate noise demonstrate the feasibility of the derived orthorhombic anisotropic AVOaz inversion method, and the real data illustrate the inversion stabilities of orthorhombic anisotropy in a fractured reservoir.

  10. CO2 flux determination by closed-chamber methods can be seriously biased by inappropriate application of linear regression

    NASA Astrophysics Data System (ADS)

    Kutzbach, L.; Schneider, J.; Sachs, T.; Giebels, M.; Nykänen, H.; Shurpali, N. J.; Martikainen, P. J.; Alm, J.; Wilmking, M.

    2007-07-01

    Closed (non-steady state) chambers are widely used for quantifying carbon dioxide (CO2) fluxes between soils or low-stature canopies and the atmosphere. It is well recognised that covering a soil or vegetation by a closed chamber inherently disturbs the natural CO2 fluxes by altering the concentration gradients between the soil, the vegetation and the overlying air. Thus, the driving factors of CO2 fluxes are not constant during the closed chamber experiment, and no linear increase or decrease of CO2 concentration over time within the chamber headspace can be expected. Nevertheless, linear regression has been applied for calculating CO2 fluxes in many recent, partly influential, studies. This approach was justified by keeping the closure time short and assuming the concentration change over time to be in the linear range. Here, we test if the application of linear regression is really appropriate for estimating CO2 fluxes using closed chambers over short closure times and if the application of nonlinear regression is necessary. We developed a nonlinear exponential regression model from diffusion and photosynthesis theory. This exponential model was tested with four different datasets of CO2 flux measurements (total number: 1764) conducted at three peatland sites in Finland and a tundra site in Siberia. The flux measurements were performed using transparent chambers on vegetated surfaces and opaque chambers on bare peat surfaces. Thorough analyses of residuals demonstrated that linear regression was frequently not appropriate for the determination of CO2 fluxes by closed-chamber methods, even if closure times were kept short. The developed exponential model was well suited for nonlinear regression of the concentration over time c(t) evolution in the chamber headspace and estimation of the initial CO2 fluxes at closure time for the majority of experiments. 
CO2 flux estimates by linear regression can be as low as 40% of the flux estimates of exponential regression for closure times of only two minutes and even lower for longer closure times. The degree of underestimation increased with increasing CO2 flux strength and is dependent on soil and vegetation conditions which can disturb not only the quantitative but also the qualitative evaluation of CO2 flux dynamics. The underestimation effect by linear regression was observed to be different for CO2 uptake and release situations which can lead to stronger bias in the daily, seasonal and annual CO2 balances than in the individual fluxes. To avoid serious bias of CO2 flux estimates based on closed chamber experiments, we suggest further tests using published datasets and recommend the use of nonlinear regression models for future closed chamber studies.
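
The underestimation mechanism can be reproduced with a toy computation: if the headspace concentration follows a saturating exponential, the ordinary least-squares slope over the closure period is systematically smaller than the true initial slope dc/dt at closure. All numbers below are hypothetical; the paper's actual exponential model is derived from diffusion and photosynthesis theory.

```python
import math

# Hypothetical chamber parameters: starting concentration, equilibrium
# concentration, and relaxation rate (ppm, ppm, 1/s).
c0, c_eq, k = 380.0, 800.0, 0.01

def c(t):
    """Saturating exponential concentration curve in the chamber headspace."""
    return c_eq - (c_eq - c0) * math.exp(-k * t)

# The flux-proportional quantity is the slope at the moment of closure.
true_initial_slope = k * (c_eq - c0)              # exact dc/dt at t = 0

# OLS slope over a two-minute (120 s) closure, sampled every second.
ts = list(range(121))
cs = [c(t) for t in ts]
t_bar = sum(ts) / len(ts)
c_bar = sum(cs) / len(cs)
ols_slope = (sum((t - t_bar) * (ci - c_bar) for t, ci in zip(ts, cs))
             / sum((t - t_bar) ** 2 for t in ts))

underestimate = ols_slope / true_initial_slope    # < 1: linear fit is biased low
```

Because the bias grows with the product of the relaxation rate and the closure time, shortening the closure reduces but does not remove it, which is the paper's argument for fitting the exponential model and evaluating its slope at t = 0.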

  11. Parameter estimation in 3D affine and similarity transformation: implementation of variance component estimation

    NASA Astrophysics Data System (ADS)

    Amiri-Simkooei, A. R.

    2018-01-01

    Three-dimensional (3D) coordinate transformations, generally consisting of origin shifts, axes rotations, scale changes, and skew parameters, are widely used in many geomatics applications. Although in some geodetic applications simplified transformation models are used based on the assumption of small transformation parameters, in other fields of application such parameters are indeed large. The algorithms of two recent papers on the weighted total least-squares (WTLS) problem are used for the 3D coordinate transformation. The methodology can be applied to the case when the transformation parameters are generally large, for which no approximate values of the parameters are required. Direct linearization of the rotation and scale parameters is thus not required. The WTLS formulation is employed to take into consideration errors in both the start and target systems in the estimation of the transformation parameters. Two of the well-known 3D transformation methods, namely affine (12, 9, and 8 parameters) and similarity (7 and 6 parameters) transformations, can be handled using the WTLS theory subject to hard constraints. Because the method can be formulated by the standard least-squares theory with constraints, the covariance matrix of the transformation parameters can directly be provided. The above characteristics of the 3D coordinate transformation are implemented in the presence of different variance components, which are estimated using least squares variance component estimation. In particular, the estimability of the variance components is investigated. The efficacy of the proposed formulation is verified on two real data sets.

  12. Research in Applied Mathematics Related to Mathematical System Theory.

    DTIC Science & Technology

    1977-06-01

    This report deals with research results obtained in the field of mathematical system theory. Special emphasis was given to the following areas: (1) Linear system theory over a field: parametrization of multi-input, multi-output systems and the geometric structure of classes of systems of constant dimension. (2) Linear systems over a ring: development of the theory for very general classes of rings. (3) Nonlinear system theory: basic ...

  13. Large-Scale Linear Optimization through Machine Learning: From Theory to Practical System Design and Implementation

    DTIC Science & Technology

    2016-08-10

    AFRL-AFOSR-JP-TR-2016-0073. ... performances on various machine learning tasks, and it naturally lends itself to fast parallel implementations. Despite this, very little work has been ...

  14. A reciprocal theorem for a mixture theory. [development of linearized theory of interacting media

    NASA Technical Reports Server (NTRS)

    Martin, C. J.; Lee, Y. M.

    1972-01-01

    A dynamic reciprocal theorem for a linearized theory of interacting media is developed. The constituents of the mixture are a linear elastic solid and a linearly viscous fluid. In addition to Steel's field equations, boundary conditions and inequalities on the material constants that have been shown by Atkin, Chadwick and Steel to be sufficient to guarantee uniqueness of solution to initial-boundary value problems are used. The elements of the theory are given and two different boundary value problems are considered. The reciprocal theorem is derived with the aid of the Laplace transform and the divergence theorem and this section is concluded with a discussion of the special cases which arise when one of the constituents of the mixture is absent.

  15. Study on longitudinal dispersion relation in one-dimensional relativistic plasma: Linear theory and Vlasov simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, H.; Wu, S. Z.; Zhou, C. T.

    2013-09-15

    The dispersion relation of one-dimensional longitudinal plasma waves in relativistic homogeneous plasmas is investigated with both linear theory and Vlasov simulation in this paper. From the Vlasov-Poisson equations, the linear dispersion relation is derived for the proper one-dimensional Jüttner distribution. The numerically obtained linear dispersion relation, together with an approximate formula for the plasma wave frequency in the long-wavelength limit, is given. The dispersion of the longitudinal wave is also simulated with a relativistic Vlasov code. The real and imaginary parts of the dispersion relation are studied by varying wave number and plasma temperature. Simulation results are in agreement with established linear theory.

  16. Lie algebras and linear differential equations.

    NASA Technical Reports Server (NTRS)

    Brockett, R. W.; Rahimi, A.

    1972-01-01

    Certain symmetry properties possessed by the solutions of linear differential equations are examined. For this purpose, some basic ideas from the theory of finite dimensional linear systems are used together with the work of Wei and Norman on the use of Lie algebraic methods in differential equation theory.

  17. A theory of fine structure image models with an application to detection and classification of dementia

    PubMed Central

    Penn, Richard; Werner, Michael; Thomas, Justin

    2015-01-01

    Background Estimation of stochastic process models from data is a common application of time series analysis methods. Such system identification processes are often cast as hypothesis testing exercises whose intent is to estimate model parameters and test them for statistical significance. Ordinary least squares (OLS) regression and the Levenberg-Marquardt algorithm (LMA) have proven invaluable computational tools for models being described by non-homogeneous, linear, stationary, ordinary differential equations. Methods In this paper we extend stochastic model identification to linear, stationary, partial differential equations in two independent variables (2D) and show that OLS and LMA apply equally well to these systems. The method employs an original nonparametric statistic as a test for the significance of estimated parameters. Results We show gray scale and color images are special cases of 2D systems satisfying a particular autoregressive partial difference equation which estimates an analogous partial differential equation. Several applications to medical image modeling and classification illustrate the method by correctly classifying demented and normal OLS models of axial magnetic resonance brain scans according to subject Mini Mental State Exam (MMSE) scores. Comparison with 13 image classifiers from the literature indicates our classifier is at least 14 times faster than any of them and has a classification accuracy better than all but one. Conclusions Our modeling method applies to any linear, stationary, partial differential equation and the method is readily extended to 3D whole-organ systems. Further, in addition to being a robust image classifier, estimated image models offer insights into which parameters carry the most diagnostic image information and thereby suggest finer divisions could be made within a class. 
Image models can be estimated in milliseconds, which translates to whole-organ models in seconds; such runtimes could make real-time medicine and surgery modeling possible. PMID:26029638
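As a rough illustration of the autoregressive-partial-difference-equation idea, a minimal sketch (not the authors' estimator; the particular neighbor stencil and intercept are assumptions) fits a causal first-order 2D AR model to an image by ordinary least squares:

```python
import numpy as np

def fit_ar2d(img):
    """OLS fit of a causal 2D autoregressive model
        I[i,j] ~ a*I[i-1,j] + b*I[i,j-1] + c*I[i-1,j-1] + d
    over all interior pixels. Returns the coefficients (a, b, c, d),
    which could serve as features for a classifier."""
    y = img[1:, 1:].ravel()
    X = np.column_stack([
        img[:-1, 1:].ravel(),    # neighbor above
        img[1:, :-1].ravel(),    # neighbor to the left
        img[:-1, :-1].ravel(),   # diagonal neighbor
        np.ones(y.size),         # intercept
    ])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef
```

Because every regressor pixel precedes the response pixel in raster order, the driving noise is independent of the regressors and OLS is unbiased for this model.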

  18. Stochastic inversion of cross-borehole radar data from metalliferous vein detection

    NASA Astrophysics Data System (ADS)

    Zeng, Zhaofa; Huai, Nan; Li, Jing; Zhao, Xueyu; Liu, Cai; Hu, Yingsa; Zhang, Ling; Hu, Zuzhi; Yang, Hui

    2017-12-01

    In the exploration and evaluation of metalliferous veins with a cross-borehole radar system, traditional linear inversion methods (least-squares inversion, LSQR) recover only indirect parameters (permittivity, resistivity, or velocity) with which to estimate the target structure; they cannot accurately reflect the geological properties of the vein media. To obtain the intrinsic geological parameters and their internal distribution, in this paper we build a metalliferous-vein model based on stochastic effective medium theory and carry out stochastic inversion and parameter estimation based on a Monte Carlo sampling algorithm. Compared with conventional LSQR, the stochastic inversion yields higher-resolution permittivity and velocity of the target body, so the distribution of anomalies and the target's internal parameters can be estimated more accurately. This provides a new approach to evaluating the properties of complex target media.

  19. An efficient deterministic-probabilistic approach to modeling regional groundwater flow: 1. Theory

    USGS Publications Warehouse

    Yen, Chung-Cheng; Guymon, Gary L.

    1990-01-01

    An efficient probabilistic model is developed and cascaded with a deterministic model for predicting water table elevations in regional aquifers. The objective is to quantify model uncertainty where precise estimates of water table elevations may be required. The probabilistic model is based on the two-point probability method, which only requires prior knowledge of the uncertain variables' means and coefficients of variation. The two-point estimate method is theoretically developed and compared with the Monte Carlo simulation method. The results of comparisons using hypothetical deterministic problems indicate that the two-point estimate method is only generally valid for linear problems where the coefficients of variation of uncertain parameters (for example, storage coefficient and hydraulic conductivity) are small. The two-point estimate method may be applied to slightly nonlinear problems with good results, provided coefficients of variation are small. In such cases, the two-point estimate method is much more efficient than the Monte Carlo method provided the number of uncertain variables is less than eight.
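The two-point estimate method itself is compact enough to sketch. Assuming independent, symmetrically distributed inputs, a Rosenblueth-style two-point estimate evaluates the model at all 2^m combinations of mean ± one standard deviation and weights the results equally (a simplified form; the paper's exact weighting scheme may differ):

```python
import itertools
import math

def two_point_estimate(model, means, sigmas):
    """Rosenblueth-style two-point estimate of the mean and standard
    deviation of model(x) for independent, symmetric uncertain inputs:
    evaluate at all 2^m sign combinations of mu_i +/- sigma_i, with
    equal weights 1/2^m."""
    vals = []
    for signs in itertools.product((-1.0, 1.0), repeat=len(means)):
        x = [m + sg * s for m, s, sg in zip(means, sigmas, signs)]
        vals.append(model(x))
    w = 1.0 / len(vals)
    mean = sum(vals) * w
    var = sum(v * v for v in vals) * w - mean * mean
    return mean, math.sqrt(max(var, 0.0))
```

For a linear model the estimate is exact, which is consistent with the paper's finding that the method is generally valid only for (near-)linear problems with small coefficients of variation; the cost is 2^m model runs, hence the "fewer than eight variables" efficiency threshold against Monte Carlo.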

  20. An Efficient Deterministic-Probabilistic Approach to Modeling Regional Groundwater Flow: 1. Theory

    NASA Astrophysics Data System (ADS)

    Yen, Chung-Cheng; Guymon, Gary L.

    1990-07-01

    An efficient probabilistic model is developed and cascaded with a deterministic model for predicting water table elevations in regional aquifers. The objective is to quantify model uncertainty where precise estimates of water table elevations may be required. The probabilistic model is based on the two-point probability method, which only requires prior knowledge of the uncertain variables' means and coefficients of variation. The two-point estimate method is theoretically developed and compared with the Monte Carlo simulation method. The results of comparisons using hypothetical deterministic problems indicate that the two-point estimate method is only generally valid for linear problems where the coefficients of variation of uncertain parameters (for example, storage coefficient and hydraulic conductivity) are small. The two-point estimate method may be applied to slightly nonlinear problems with good results, provided coefficients of variation are small. In such cases, the two-point estimate method is much more efficient than the Monte Carlo method provided the number of uncertain variables is less than eight.

  1. One-dimensional Numerical Model of Transient Discharges in Air of a Spatial Plasma Ignition Device

    NASA Astrophysics Data System (ADS)

    Saceleanu, Florin N.

    This thesis examines the modes of discharge of a plasma ignition device. Oscilloscope data of the discharge voltage and current are analyzed for various pressures in air at ambient temperature. It is determined that the discharge operates in two modes: a glow discharge and a postulated streamer discharge. Subsequently, a 1-dimensional fluid simulation of plasma using the finite volume method (FVM) is developed to gain insight into the particle kinetics. Transient results of the simulation agree with theories of electric discharges; however, quasi-steady state results were not reached due to the long diffusion time of ions in air. Next, an ordinary differential equation (ODE) is derived to understand the discharge transition. Simulated results were used to estimate the voltage waveform, which describes the ODE's forcing function; additional simulated results were used to estimate the discharge current and the ODE's non-linearity. It is found that the ODE's non-linearity increases exponentially for capacitive discharges. It is postulated that the non-linearity defines the mode transition observed experimentally. The research is motivated by Spatial Plasma Discharge Ignition (SPDI), an innovative ignition system postulated to increase combustion efficiency in automobile engines by up to 9%. The research thus far can only hypothesize SPDI's benefits on combustion, based on the literature review and the modes of discharge.

  2. Estimation of suspended-sediment rating curves and mean suspended-sediment loads

    USGS Publications Warehouse

    Crawford, Charles G.

    1991-01-01

    A simulation study was done to evaluate: (1) the accuracy and precision of parameter estimates for the bias-corrected, transformed-linear and non-linear models obtained by the method of least squares; (2) the accuracy of mean suspended-sediment loads calculated by the flow-duration, rating-curve method using model parameters obtained by the alternative methods. Parameter estimates obtained by least squares for the bias-corrected, transformed-linear model were considerably more precise than those obtained for the non-linear or weighted non-linear model. The accuracy of parameter estimates obtained for the bias-corrected, transformed-linear and weighted non-linear model was similar and was much greater than the accuracy obtained by non-linear least squares. The improved parameter estimates obtained by the bias-corrected, transformed-linear or weighted non-linear model yield estimates of mean suspended-sediment load calculated by the flow-duration, rating-curve method that are more accurate and precise than those obtained for the non-linear model.
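A minimal sketch of the bias-corrected, transformed-linear rating-curve idea: fit ln C = a + b ln Q by OLS, then correct the retransformation bias with the classic exp(s²/2) factor (one standard choice; the study's specific bias correction may differ):

```python
import numpy as np

def fit_rating_curve(q, c):
    """Fit ln(c) = a + b*ln(q) by OLS and return a predictor with the
    classic exp(s^2/2) retransformation bias correction, where s^2 is
    the residual variance in log space (illustrative choice of
    correction, not necessarily the report's)."""
    X = np.column_stack([np.ones_like(q), np.log(q)])
    y = np.log(c)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    s2 = resid @ resid / (len(q) - 2)      # residual variance, log space
    a, b = beta
    # naive exp(a + b*ln q) underestimates the mean; exp(s2/2) corrects it
    return lambda qq: np.exp(a + b * np.log(qq) + s2 / 2.0)
```

Without the exp(s²/2) factor, the retransformed curve estimates the median rather than the mean concentration, which biases the flow-duration load estimate low.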

  3. Basic Research in the Mathematical Foundations of Stability Theory, Control Theory and Numerical Linear Algebra.

    DTIC Science & Technology

    1979-09-01

    without determinantal divisors, Linear and Multilinear Algebra 7(1979), 107-109. 4. The use of integral operators in number theory (with C. Ryavec and...Gersgorin revisited, to appear in Letters in Linear Algebra. 15. A surprising determinantal inequality for real matrices (with C.R. Johnson), to appear in...Analysis: An Essay Concerning the Limitations of Some Mathematical Methods in the Social, Political and Biological Sciences, David Berlinski, MIT Press

  4. A spectral analysis of the domain decomposed Monte Carlo method for linear systems

    DOE PAGES

    Slattery, Stuart R.; Evans, Thomas M.; Wilson, Paul P. H.

    2015-09-08

    The domain decomposed behavior of the adjoint Neumann-Ulam Monte Carlo method for solving linear systems is analyzed using the spectral properties of the linear operator. Relationships for the average length of the adjoint random walks, a measure of convergence speed and serial performance, are made with respect to the eigenvalues of the linear operator. In addition, relationships for the effective optical thickness of a domain in the decomposition are presented based on the spectral analysis and diffusion theory. Using the effective optical thickness, the Wigner rational approximation and the mean chord approximation are applied to estimate the leakage fraction of random walks from a domain in the decomposition as a measure of parallel performance and potential communication costs. The one-speed, two-dimensional neutron diffusion equation is used as a model problem in numerical experiments to test the models for symmetric operators with spectral qualities similar to light water reactor problems. We find, in general, the derived approximations show good agreement with random walk lengths and leakage fractions computed by the numerical experiments.
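To fix ideas, a Neumann-Ulam random-walk solver for x = (I - H)^{-1} b can be sketched in a few lines. This is the forward, collision-estimator variant for a nonnegative H with row sums below one (an assumption made here for simplicity; the paper analyzes the adjoint variant and its domain-decomposed behavior):

```python
import numpy as np

def neumann_ulam(H, b, i, n_walks=20000, rng=None):
    """Estimate component i of x = (I - H)^{-1} b = sum_k H^k b by
    forward Neumann-Ulam random walks with the collision estimator.
    Assumes H >= 0 with row sums < 1, so transitions use p_sj = h_sj
    and absorption probability 1 - sum_j h_sj (unit walk weights)."""
    rng = rng or np.random.default_rng(0)
    n = len(b)
    absorb = 1.0 - H.sum(axis=1)           # absorption probabilities
    total = 0.0
    for _ in range(n_walks):
        s = i
        while True:
            total += b[s]                  # tally b at every visited state
            if rng.random() < absorb[s]:
                break                      # walk absorbed
            # conditional on survival, move to j with prob h_sj / rowsum,
            # so the unconditional transition probability is exactly h_sj
            s = rng.choice(n, p=H[s] / H[s].sum())
    return total / n_walks
```

The expected tally of one walk telescopes to b_i + (Hb)_i + (H²b)_i + ..., i.e. x_i; the mean walk length grows as the spectral radius of H approaches one, which is the connection to convergence speed analyzed in the paper.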

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dechant, Lawrence J.

    Wave packet analysis provides a connection between linear small disturbance theory and subsequent nonlinear turbulent spot flow behavior. The traditional association between linear stability analysis and nonlinear wave form is developed via the method of stationary phase, whereby asymptotic (simplified) mean flow solutions are used to estimate dispersion behavior and stationary phase approximations are used to invert the associated Fourier transform. The resulting process typically requires nonlinear algebraic equation inversions that are best performed numerically, which partially mitigates the value of the approximation as compared to more complete approaches, e.g. DNS or linear/nonlinear adjoint methods. To obtain a simpler, closed-form analytical result, the complete packet solution is modeled via approximate amplitude (linear convected kinematic wave initial value problem) and local sinusoidal (wave equation) expressions. Significantly, the initial value for the kinematic wave transport expression follows from a separable variable coefficient approximation to the linearized pressure fluctuation Poisson expression. The resulting amplitude solution, while approximate in nature, nonetheless appears to mimic many of the global features, e.g. transitional flow intermittency and pressure fluctuation magnitude behavior. A low-wave-number wave packet model also recovers meaningful auto-correlation and low-frequency spectral behaviors.

  6. Robust estimation for partially linear models with large-dimensional covariates

    PubMed Central

    Zhu, LiPing; Li, RunZe; Cui, HengJian

    2014-01-01

    We are concerned with robust estimation procedures to estimate the parameters in partially linear models with large-dimensional covariates. To enhance the interpretability, we suggest implementing a nonconcave regularization method in the robust estimation procedure to select important covariates from the linear component. We establish the consistency for both the linear and the nonlinear components when the covariate dimension diverges at the rate of o(n), where n is the sample size. We show that the robust estimate of the linear component performs asymptotically as well as its oracle counterpart which assumes the baseline function and the unimportant covariates were known a priori. With a consistent estimator of the linear component, we estimate the nonparametric component by a robust local linear regression. It is proved that the robust estimate of the nonlinear component performs asymptotically as well as if the linear component were known in advance. Comprehensive simulation studies are carried out and an application is presented to examine the finite-sample performance of the proposed procedures. PMID:24955087

  7. Robust estimation for partially linear models with large-dimensional covariates.

    PubMed

    Zhu, LiPing; Li, RunZe; Cui, HengJian

    2013-10-01

    We are concerned with robust estimation procedures to estimate the parameters in partially linear models with large-dimensional covariates. To enhance the interpretability, we suggest implementing a nonconcave regularization method in the robust estimation procedure to select important covariates from the linear component. We establish the consistency for both the linear and the nonlinear components when the covariate dimension diverges at the rate of o(n), where n is the sample size. We show that the robust estimate of the linear component performs asymptotically as well as its oracle counterpart which assumes the baseline function and the unimportant covariates were known a priori. With a consistent estimator of the linear component, we estimate the nonparametric component by a robust local linear regression. It is proved that the robust estimate of the nonlinear component performs asymptotically as well as if the linear component were known in advance. Comprehensive simulation studies are carried out and an application is presented to examine the finite-sample performance of the proposed procedures.

  8. Analytical method to estimate waterflood performance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cremonini, A.S.

    A method to predict oil production resulting from the injection of immiscible fluids is described. The method is based on two models: one of them considers the vertical and displacement efficiencies, assuming unit areal efficiency and, therefore, linear flow. It is a layered model without crossflow in which Buckley-Leverett's displacement theory is used for each layer. The results obtained in the linear model are applied to a stream-channel model similar to the one used by Higgins and Leighton. In this way, areal efficiency is taken into account. The principal innovation is the possibility of applying different relative permeability curves to each layer. A numerical example in a five-spot pattern which uses relative permeability data obtained from reservoir core samples is presented.

  9. Statistical behavior of ten million experimental detection limits

    NASA Astrophysics Data System (ADS)

    Voigtman, Edward; Abraham, Kevin T.

    2011-02-01

    Using a lab-constructed laser-excited fluorimeter, together with bootstrapping methodology, the authors have generated many millions of experimental linear calibration curves for the detection of rhodamine 6G tetrafluoroborate in ethanol solutions. The detection limits computed from them are in excellent agreement with both previously published theory and with comprehensive Monte Carlo computer simulations. Currie decision levels and Currie detection limits, each in the theoretical, chemical content domain, were found to be simply scaled reciprocals of the non-centrality parameter of the non-central t distribution that characterizes univariate linear calibration curves that have homoscedastic, additive Gaussian white noise. Accurate and precise estimates of the theoretical, content domain Currie detection limit for the experimental system, with 5% (each) probabilities of false positives and false negatives, are presented.
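The content-domain Currie decision level for a homoscedastic linear calibration can be sketched as follows. The formula used is the common single-future-blank form with design factor η = 1 + 1/N + x̄²/Sxx, and the t quantile is passed in by the caller so the sketch stays dependency-free; this is an illustration, not necessarily the exact expressions of the study:

```python
import numpy as np

def currie_decision_level(x, y, t_crit):
    """Content-domain Currie decision level x_C for a homoscedastic
    linear calibration y = a + b*x with additive Gaussian white noise.
    Uses the common single-future-blank design factor
        eta = 1 + 1/N + xbar^2 / Sxx,
    giving x_C = t_crit * s_r * sqrt(eta) / b. t_crit is the upper-alpha
    t quantile with N-2 degrees of freedom, supplied by the caller."""
    N = len(x)
    b, a = np.polyfit(x, y, 1)                 # slope, intercept
    resid = y - (a + b * x)
    s_r = np.sqrt(resid @ resid / (N - 2))     # residual standard deviation
    sxx = ((x - x.mean()) ** 2).sum()
    eta = 1.0 + 1.0 / N + x.mean() ** 2 / sxx
    return t_crit * s_r * np.sqrt(eta) / b
```

Note the 1/b scaling: the decision level in the content domain is essentially an inverse-slope-scaled noise level, consistent with the abstract's observation that decision levels and detection limits are simply scaled reciprocals of the non-centrality parameter.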

  10. Multi-Axis Identifiability Using Single-Surface Parameter Estimation Maneuvers on the X-48B Blended Wing Body

    NASA Technical Reports Server (NTRS)

    Ratnayake, Nalin A.; Koshimoto, Ed T.; Taylor, Brian R.

    2011-01-01

    The problem of parameter estimation on hybrid-wing-body type aircraft is complicated by the fact that many design candidates for such aircraft involve a large number of aerodynamic control effectors that act in coplanar motion. This fact adds to the complexity already present in the parameter estimation problem for any aircraft with a closed-loop control system. Decorrelation of system inputs must be performed in order to ascertain individual surface derivatives with any sort of mathematical confidence. Non-standard control surface configurations, such as clamshell surfaces and drag-rudder modes, further complicate the modeling task. In this paper, asymmetric, single-surface maneuvers are used to excite multiple axes of aircraft motion simultaneously. Time history reconstructions of the moment coefficients computed by the solved regression models are then compared to each other in order to assess relative model accuracy. The reduced flight-test time required for inner surface parameter estimation using multi-axis methods was found to come at the cost of slightly reduced accuracy and statistical confidence for linear regression methods. Since the multi-axis maneuvers captured parameter estimates similar to both longitudinal and lateral-directional maneuvers combined, the number of test points required for the inner, aileron-like surfaces could in theory have been reduced by 50%. While trends were similar, however, individual parameters as estimated by a multi-axis model typically differed from those estimated by a single-axis model by an average absolute difference of roughly 15-20%, with decreased statistical significance. The multi-axis model exhibited an increase in overall fit error of roughly 1-5% for the linear regression estimates with respect to the single-axis model, when applied to flight data designed for each, respectively.

  11. Rouse-Bueche Theory and The Calculation of The Monomeric Friction Coefficient in a Filled System

    NASA Astrophysics Data System (ADS)

    Martinetti, Luca; Macosko, Christopher; Bates, Frank

    According to flexible chain theories of viscoelasticity, all relaxation and retardation times of a polymer melt (hence, any dynamic property such as the diffusion coefficient) depend on the monomeric friction coefficient, ζ0, i.e. the average drag force per monomer per unit velocity encountered by a Gaussian submolecule moving through its free-draining surroundings. Direct experimental access to ζ0 relies on the availability of a suitable polymer dynamics model. Thus far, no method has been suggested that is applicable to filled systems, such as filled rubbers or microphase-segregated A-B-A thermoplastic elastomers at temperatures where one of the blocks is glassy. Building upon the procedure proposed by Ferry for entangled and unfilled polymer melts, the Rouse-Bueche theory is applied to an undiluted triblock copolymer to extract ζ0 from the linear viscoelastic behavior in the rubber-glass transition region, and to estimate the size of Gaussian submolecules. At iso-free volume conditions, the so-obtained matrix monomeric friction factor is consistent with the corresponding value for the homopolymer melt. In addition, the characteristic Rouse dimensions are in good agreement with independent estimates based on the Kratky-Porod worm-like chain model. These results seem to validate the proposed approach for estimating ζ0 in a filled system. Although preliminarily tested on a thermoplastic elastomer of the A-B-A type, the method may be extended and applied to filled homopolymers as well.

  12. Sufficient Forecasting Using Factor Models

    PubMed Central

    Fan, Jianqing; Xue, Lingzhou; Yao, Jiawei

    2017-01-01

    We consider forecasting a single time series when there is a large number of predictors and a possible nonlinear effect. The dimensionality was first reduced via a high-dimensional (approximate) factor model implemented by the principal component analysis. Using the extracted factors, we develop a novel forecasting method called the sufficient forecasting, which provides a set of sufficient predictive indices, inferred from high-dimensional predictors, to deliver additional predictive power. The projected principal component analysis will be employed to enhance the accuracy of inferred factors when a semi-parametric (approximate) factor model is assumed. Our method is also applicable to cross-sectional sufficient regression using extracted factors. The connection between the sufficient forecasting and the deep learning architecture is explicitly stated. The sufficient forecasting correctly estimates projection indices of the underlying factors even in the presence of a nonparametric forecasting function. The proposed method extends the sufficient dimension reduction to high-dimensional regimes by condensing the cross-sectional information through factor models. We derive asymptotic properties for the estimate of the central subspace spanned by these projection directions as well as the estimates of the sufficient predictive indices. We further show that the natural method of running multiple regression of target on estimated factors yields a linear estimate that actually falls into this central subspace. Our method and theory allow the number of predictors to be larger than the number of observations. We finally demonstrate that the sufficient forecasting improves upon the linear forecasting in both simulation studies and an empirical study of forecasting macroeconomic variables. PMID:29731537
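The linear special case of this pipeline, PCA factor extraction from the predictor panel followed by OLS of the target on the estimated factors, can be sketched directly; the sufficient-forecasting projection indices in the paper generalize this beyond a single linear regression:

```python
import numpy as np

def factor_forecast(X, y, k):
    """Linear special case of factor-based forecasting: extract k
    principal-component factors from the T x p predictor panel X via
    the SVD, then regress y on the estimated factors by OLS.
    Returns (fitted values, regression coefficients)."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    F = U[:, :k] * S[:k]                      # estimated factors, T x k
    A = np.column_stack([np.ones(len(y)), F]) # add intercept
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return A @ beta, beta
```

When the factors are strong (loadings dominate the idiosyncratic noise), the principal components span the latent factor space, and this regression recovers the linear forecast that the sufficient-forecasting theory shows lies in the central subspace.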

  13. Lessons to be learned from a contentious challenge to mainstream radiobiological science (the linear no-threshold theory of genetic mutations)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beyea, Jan, E-mail: jbeyea@cipi.com

    There are both statistically valid and invalid reasons why scientists with differing default hypotheses can disagree in high-profile situations. Examples can be found in recent correspondence in this journal, which may offer lessons for resolving challenges to mainstream science, particularly when adherents of a minority view attempt to elevate the status of outlier studies and/or claim that self-interest explains the acceptance of the dominant theory. Edward J. Calabrese and I have been debating the historical origins of the linear no-threshold theory (LNT) of carcinogenesis and its use in the regulation of ionizing radiation. Professor Calabrese, a supporter of hormesis, has charged a committee of scientists with misconduct in their preparation of a 1956 report on the genetic effects of atomic radiation. Specifically he argues that the report mischaracterized the LNT research record and suppressed calculations of some committee members. After reviewing the available scientific literature, I found that the contemporaneous evidence overwhelmingly favored a (genetics) LNT and that no calculations were suppressed. Calabrese's claims about the scientific record do not hold up, primarily because of lack of attention to statistical analysis. Ironically, outlier studies were more likely to favor supra-linearity, not sub-linearity. Finally, the claim of investigator bias, which underlies Calabrese's accusations about key studies, is based on misreading of text. Attention to ethics charges, early on, may help seed a counter-narrative explaining the community's adoption of a default hypothesis and may help focus attention on valid evidence and any real weaknesses in the dominant paradigm. - Highlights: • Edward J. Calabrese has made a contentious challenge to mainstream radiobiological science. • Such challenges should not be neglected, lest they enter the political arena without review. • Key genetic studies from the 1940s, challenged by Calabrese, were found consistent and unbiased. • A 1956 genetics report did not hide estimates and does not need investigation for misconduct. • The scientific record was strong for a no-threshold, linear genetic response to radiation.

  14. Estimating permeability from quasi-static deformation: Temporal variations and arrival time inversion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vasco, D.W.; Ferretti, Alessandro; Novali, Fabrizio

    2008-05-01

    Transient pressure variations within a reservoir can be treated as a propagating front and analyzed using an asymptotic formulation. From this perspective one can define a pressure 'arrival time' and formulate solutions along trajectories, in the manner of ray theory. We combine this methodology and a technique for mapping overburden deformation into reservoir volume change as a means to estimate reservoir flow properties, such as permeability. Given the entire 'travel time' or phase field, obtained from the deformation data, we can construct the trajectories directly, thereby linearizing the inverse problem. A numerical study indicates that, using this approach, we can infer large-scale variations in flow properties. In an application to Interferometric Synthetic Aperture Radar (InSAR) observations associated with a CO2 injection at the Krechba field, Algeria, we image pressure propagation to the northwest. An inversion for flow properties indicates a linear trend of high permeability. The high permeability correlates with a northwest-trending fault on the flank of the anticline which defines the field.

  15. On the tsunami wave-submerged breakwater interaction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Filianoti, P.; Piscopo, R.

    Tsunami wave loads on a submerged rigid breakwater are inertial: this is the result of the simple calculation method proposed here, and it is confirmed by comparison with results obtained by other researchers. The method is based on an estimate of the speed drop of the tsunami wave passing over the breakwater. The calculation is rigorous for a sinusoidal wave interacting with a rigid submerged obstacle, in the framework of linear wave theory. This new approach gives a useful and simple tool for estimating tsunami loads on submerged breakwaters. An unexpected novelty comes out of a worked example: assuming the same wave height, storm waves are more dangerous than tsunami waves for the safety against sliding of submerged breakwaters.
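The speed drop invoked by the method comes from the linear-theory dispersion relation ω² = gk tanh(kh). A minimal sketch (illustrative only; the paper's load calculation builds further on this) solves for the wavenumber by Newton iteration and compares phase speeds over the breakwater and in open water:

```python
import math

def wavenumber(T, h, g=9.81):
    """Solve the linear-theory dispersion relation w^2 = g k tanh(k h)
    for the wavenumber k by Newton iteration, given period T, depth h."""
    w = 2.0 * math.pi / T
    k = w / math.sqrt(g * h)           # shallow-water starting guess
    for _ in range(50):
        f = g * k * math.tanh(k * h) - w * w
        df = g * math.tanh(k * h) + g * k * h / math.cosh(k * h) ** 2
        k -= f / df
    return k

def phase_speed_drop(T, h, h_over):
    """Fractional drop in phase speed c = w/k when a wave passes from
    depth h onto a submerged obstacle with reduced depth h_over."""
    w = 2.0 * math.pi / T
    c1 = w / wavenumber(T, h)
    c2 = w / wavenumber(T, h_over)
    return (c1 - c2) / c1
```

For a long tsunami wave (kh « 1) this reduces to the shallow-water result c ≈ √(gh), so halving the depth over the breakwater drops the speed by about 1 − √(1/2) ≈ 29%.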

  16. A control-theory model for human decision-making

    NASA Technical Reports Server (NTRS)

    Levison, W. H.; Tanner, R. B.

    1971-01-01

    A model for human decision making is an adaptation of an optimal control model for pilot/vehicle systems. The models for decision and control both contain concepts of time delay, observation noise, optimal prediction, and optimal estimation. The decision making model was intended for situations in which the human bases his decision on his estimate of the state of a linear plant. Experiments are described for the following task situations: (a) single decision tasks, (b) two-decision tasks, and (c) simultaneous manual control and decision making. Using fixed values for model parameters, single-task and two-task decision performance can be predicted to within an accuracy of 10 percent. Agreement is less good for the simultaneous decision and control situation.

  17. Use of digital control theory state space formalism for feedback at SLC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Himel, T.; Hendrickson, L.; Rouse, F.

    The algorithms used in the database-driven SLC fast-feedback system are based on the state-space formalism of digital control theory. These are implemented as a set of matrix equations which use a Kalman filter to estimate a vector of states from a vector of measurements, and then apply a gain matrix to determine the actuator settings from the state vector. The matrices used in the calculation are derived offline using Linear Quadratic Gaussian minimization. For a given noise spectrum, this procedure minimizes the rms of the states (e.g., the position or energy of the beam). The offline program also allows simulation of the loop's response to arbitrary inputs, and calculates its frequency response. 3 refs., 3 figs.
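One cycle of such a loop can be sketched generically: predict the state from the model, correct the prediction with a precomputed Kalman gain using the new measurement, then command the actuators through the LQG gain matrix. The matrices here are illustrative placeholders, not the SLC database values, and both gains K and G are assumed derived offline as described:

```python
import numpy as np

def feedback_step(x_hat, z, A, B, u_prev, H, K, G):
    """One cycle of a state-space feedback loop: model-based prediction,
    Kalman measurement update with precomputed gain K, then LQG-style
    actuator command u = -G x. All matrices are placeholders, assumed
    derived offline (e.g. by LQG minimization for a given noise
    spectrum, as in the system described above)."""
    x_pred = A @ x_hat + B @ u_prev          # predict state from model
    x_new = x_pred + K @ (z - H @ x_pred)    # correct with measurement z
    u = -G @ x_new                           # actuator settings from state
    return x_new, u
```

Iterating this step closes the loop: each pulse's measurement vector updates the state estimate, and the state estimate sets the actuators for the next pulse.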

  18. Aging selectively impairs recollection in recognition memory for pictures: Evidence from modeling and ROC curves

    PubMed Central

    Howard, Marc W.; Bessette-Symons, Brandy; Zhang, Yaofei; Hoyer, William J.

    2006-01-01

    Younger and older adults were tested on recognition memory for pictures. The Yonelinas high threshold (YHT) model, a formal implementation of two-process theory, fit the response distribution data of both younger and older adults significantly better than a normal unequal variance signal detection model. Consistent with this finding, non-linear zROC curves were obtained for both groups. Estimates of recollection from the YHT model were significantly higher for younger than older adults. This deficit was not a consequence of a general decline in memory; older adults showed comparable overall accuracy and in fact a non-significant increase in their familiarity scores. Implications of these results for theories of recognition memory and the mnemonic deficit associated with aging are discussed. PMID:16594795

  19. An adaptive Cartesian control scheme for manipulators

    NASA Technical Reports Server (NTRS)

    Seraji, H.

    1987-01-01

    An adaptive control scheme for direct control of manipulator end-effectors to achieve trajectory tracking in Cartesian space is developed. The control structure is obtained from linear multivariable theory and is composed of simple feedforward and feedback controllers and an auxiliary input. The direct adaptation laws are derived from model reference adaptive control theory and are not based on parameter estimation of the robot model. The utilization of feedforward control and the inclusion of the auxiliary input are novel features of the present scheme and result in improved dynamic performance over existing adaptive control schemes. The adaptive controller does not require the complex mathematical model of the robot dynamics or any knowledge of the robot parameters or the payload, and is computationally fast for online implementation with high sampling rates.

  20. A view on thermodynamics of concentrated electrolytes: Modification necessity for electrostatic contribution of osmotic coefficient

    NASA Astrophysics Data System (ADS)

    Sahu, Jyoti; Juvekar, Vinay A.

    2018-05-01

    Prediction of the osmotic coefficient of concentrated electrolytes is needed in a wide variety of industrial applications. There is a need to correctly segregate the electrostatic contribution to the osmotic coefficient from the nonelectrostatic contribution. This is achieved in a rational way in this work. Using the Robinson-Stokes-Glueckauf hydrated ion model to predict the non-electrostatic contribution to the osmotic coefficient, it is shown that the hydration number must be independent of concentration in order to predict the observed linear dependence of the osmotic coefficient on electrolyte concentration in the high-concentration range. The hydration number of several electrolytes (LiCl, NaCl, KCl, MgCl2, and MgSO4) has been estimated by this method. The hydration number predicted by this model shows the correct dependence on temperature. It is also shown that the electrostatic contribution to the osmotic coefficient is underpredicted by the Debye-Hückel theory at concentrations beyond 0.1 m. The Debye-Hückel theory is modified by introducing a concentration-dependent hydrated ionic size. Using the present analysis, it is possible to correctly estimate the electrostatic contribution to the osmotic coefficient beyond the range of validity of the D-H theory. This would allow development of a more fundamental model for electrostatic interaction at high electrolyte concentrations.

  1. Implementation of a channelized Hotelling observer model to assess image quality of x-ray angiography systems

    PubMed Central

    Favazza, Christopher P.; Fetterly, Kenneth A.; Hangiandreou, Nicholas J.; Leng, Shuai; Schueler, Beth A.

    2015-01-01

    Evaluation of flat-panel angiography equipment through conventional image quality metrics is limited by the scope of standard spatial-domain image quality metrics, such as contrast-to-noise ratio and spatial resolution, or by restricted access to the data needed to calculate Fourier-domain measurements, such as the modulation transfer function, noise power spectrum, and detective quantum efficiency. Observer models have been shown capable of overcoming these limitations and are able to comprehensively evaluate medical-imaging systems. We present a spatial-domain channelized Hotelling observer model to calculate the detectability index (DI) of different-sized disk objects and compare the performance of different imaging conditions and angiography systems. When appropriate, changes in DIs were compared to expectations based on the classical Rose model of signal detection to assess linearity of the model with quantum signal-to-noise ratio (SNR) theory. For these experiments, the estimated uncertainty of the DIs was less than 3%, allowing for precise comparison of imaging systems or conditions. For most experimental variables, DI changes were linear with expectations based on quantum SNR theory. DIs calculated for the smallest objects demonstrated nonlinearity with quantum SNR theory due to system blur. Two angiography systems with different detector element sizes were shown to perform similarly across the majority of the detection tasks. PMID:26158086
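
    A channelized Hotelling observer reduces each image to a few channel outputs and computes the detectability index from their class means and covariance. The sketch below uses random placeholder channels and synthetic images (real implementations use, e.g., Gabor or Laguerre-Gauss channels and measured angiography frames); only the final statistic, DI² = Δv̄ᵀ S⁻¹ Δv̄, is the standard CHO formula.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pix, n_ch, n_img = 64, 4, 200

# Placeholder channel templates (real CHOs use e.g. Gabor channels).
U = rng.standard_normal((n_pix, n_ch))

# Synthetic image ensembles: background only, and background plus a weak signal.
signal = np.zeros(n_pix)
signal[:8] = 0.5                      # invented signal profile
img_absent  = rng.standard_normal((n_img, n_pix))
img_present = rng.standard_normal((n_img, n_pix)) + signal

v0 = img_absent @ U                   # channel outputs, signal absent
v1 = img_present @ U                  # channel outputs, signal present

dv = v1.mean(axis=0) - v0.mean(axis=0)      # mean channel-output difference
S = 0.5 * (np.cov(v0.T) + np.cov(v1.T))     # average intra-class covariance

# Channelized Hotelling detectability index: DI^2 = dv^T S^-1 dv
DI = float(np.sqrt(dv @ np.linalg.solve(S, dv)))
```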

  2. Validation of Bayesian analysis of compartmental kinetic models in medical imaging.

    PubMed

    Sitek, Arkadiusz; Li, Quanzheng; El Fakhri, Georges; Alpert, Nathaniel M

    2016-10-01

    Kinetic compartmental analysis is frequently used to compute physiologically relevant quantitative values from time series of images. In this paper, a new approach based on Bayesian analysis to obtain information about these parameters is presented and validated. The closed form of the posterior distribution of kinetic parameters is derived with a hierarchical prior to model the standard deviation of normally distributed noise. Markov chain Monte Carlo methods are used for numerical estimation of the posterior distribution. Computer simulations of the kinetics of F18-fluorodeoxyglucose (FDG) are used to demonstrate drawing statistical inferences about kinetic parameters and to validate the theory and implementation. Additionally, point estimates of kinetic parameters and the covariance of those estimates are determined using the classical non-linear least-squares approach. Posteriors obtained using the methods proposed in this work are accurate, as no significant deviation from the expected shape of the posterior was found (one-sided P>0.08). It is demonstrated that the results obtained by the standard non-linear least-squares methods fail to provide accurate estimation of uncertainty for the same data set (P<0.0001). The results of this work validate the new methods using computer simulations of FDG kinetics. Results show that in situations where the classical approach fails in accurate estimation of uncertainty, Bayesian estimation provides accurate information about the uncertainties in the parameters. Although a particular example of FDG kinetics was used in the paper, the methods can be extended to different pharmaceuticals and imaging modalities. Copyright © 2016 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
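
    The contrast between the two approaches can be illustrated on a toy problem. The sketch below draws posterior samples for the single rate constant of a mono-exponential model with a Metropolis random-walk sampler; it is a stand-in for the paper's FDG compartment model and hierarchical prior, with the noise level treated as known.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy one-parameter kinetic model (stand-in for a compartment model):
# y(t) = exp(-k t), observed with Gaussian noise of known sigma.
t = np.linspace(0.0, 5.0, 30)
k_true, sigma = 0.8, 0.05
y_obs = np.exp(-k_true * t) + sigma * rng.standard_normal(t.size)

def log_post(k):
    if k <= 0:
        return -np.inf                       # flat prior on k > 0
    r = y_obs - np.exp(-k * t)
    return -0.5 * np.sum(r**2) / sigma**2

# Metropolis random-walk sampler for the posterior of k.
samples, k, lp = [], 1.0, log_post(1.0)
for _ in range(5000):
    k_prop = k + 0.1 * rng.standard_normal()
    lp_prop = log_post(k_prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        k, lp = k_prop, lp_prop
    samples.append(k)

post = np.array(samples[1000:])              # discard burn-in
k_mean, k_sd = post.mean(), post.std()       # point estimate and credible spread
```

    The posterior spread `k_sd` is the uncertainty estimate that, per the abstract, a non-linear least-squares covariance can fail to reproduce.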

  3. Modelling non-linear effects of dark energy

    NASA Astrophysics Data System (ADS)

    Bose, Benjamin; Baldi, Marco; Pourtsidou, Alkistis

    2018-04-01

    We investigate the capabilities of perturbation theory in capturing non-linear effects of dark energy. We test constant and evolving w models, as well as models involving momentum exchange between dark energy and dark matter. Specifically, we compare perturbative predictions at 1-loop level against N-body results for four non-standard equations of state as well as varying degrees of momentum exchange between dark energy and dark matter. The interaction is modelled phenomenologically using a time-dependent drag term in the Euler equation. We make comparisons at the level of the matter power spectrum and the redshift space monopole and quadrupole. The multipoles are modelled using the Taruya, Nishimichi and Saito (TNS) redshift space spectrum. We find that perturbation theory does very well in capturing non-linear effects coming from the dark sector interaction. We isolate and quantify the 1-loop contribution coming from the interaction and from the non-standard equation of state. We find the interaction parameter ξ amplifies scale-dependent signatures in the range of scales considered. Non-standard equations of state also give scale-dependent signatures within this same regime. In redshift space the match with N-body is improved at smaller scales by the addition of the TNS free parameter σv. To quantify the importance of modelling the interaction, we create mock data sets for varying values of ξ using perturbation theory. These mock data are given errors typical of Stage IV surveys. We then perform a likelihood analysis using the first two multipoles on these sets and a ξ=0 modelling, ignoring the interaction. We find the fiducial growth parameter f is generally recovered even for very large values of ξ both at z=0.5 and z=1. The ξ=0 modelling is most biased in its estimation of f for the phantom w=‑1.1 case.

  4. A Quasi-Steady Lifting Line Theory for Insect-Like Hovering Flight

    PubMed Central

    Nabawy, Mostafa R. A.; Crowther, William J.

    2015-01-01

    A novel lifting line formulation is presented for the quasi-steady aerodynamic evaluation of insect-like wings in hovering flight. The approach allows accurate estimation of aerodynamic forces from geometry and kinematic information alone and provides for the first time quantitative information on the relative contribution of induced and profile drag associated with lift production for insect-like wings in hover. The main adaptation to the existing lifting line theory is the use of an equivalent angle of attack, which enables capture of the steady non-linear aerodynamics at high angles of attack. A simple methodology to include non-ideal induced effects due to wake periodicity and effective actuator disc area within the lifting line theory is included in the model. Low Reynolds number effects as well as the edge velocity correction required to account for different wing planform shapes are incorporated through appropriate modification of the wing section lift curve slope. The model has been successfully validated against measurements from revolving wing experiments and high order computational fluid dynamics simulations. Model predicted mean lift to weight ratio results have an average error of 4% compared to values from computational fluid dynamics for eight different insect cases. Application of an unmodified linear lifting line approach leads on average to a 60% overestimation in the mean lift force required for weight support, with most of the discrepancy due to use of linear aerodynamics. It is shown that on average for the eight insects considered, the induced drag contributes 22% of the total drag based on the mean cycle values and 29% of the total drag based on the mid half-stroke values. PMID:26252657
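
    The quasi-steady flavor of such a model can be sketched with a plain blade-element integration for a revolving wing. All numbers below are invented, and the paper's equivalent-angle-of-attack and induced-effect corrections are replaced by a simple placeholder lift curve CL(α) = 1.7 sin 2α.

```python
import numpy as np

# Invented wing geometry and kinematics (not an insect dataset).
rho   = 1.225                 # air density, kg m^-3
R     = 0.05                  # wing length, m
c     = 0.015                 # chord, m (taken constant; real planforms vary c(r))
omega = 300.0                 # mid-stroke angular velocity, rad s^-1
alpha = np.deg2rad(45.0)      # geometric angle of attack

def lift_coeff(a):
    """Placeholder high-angle lift model, CL(a) = 1.7 sin(2a)."""
    return 1.7 * np.sin(2.0 * a)

# Blade-element lift: dL = 0.5 * rho * CL * (omega*r)^2 * c * dr
r = np.linspace(0.0, R, 201)
dL = 0.5 * rho * lift_coeff(alpha) * (omega * r) ** 2 * c
L = float(np.sum((dL[1:] + dL[:-1]) * np.diff(r)) / 2.0)   # trapezoid rule
```

    For a constant chord this reduces to the closed form L = ρ CL c ω² R³ / 6, which is a convenient check on the numerical integration.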

  5. An Introduction to Multilinear Formula Score Theory. Measurement Series 84-4.

    ERIC Educational Resources Information Center

    Levine, Michael V.

    Formula score theory (FST) associates each multiple choice test with a linear operator and expresses all of the real functions of item response theory as linear combinations of the operator's eigenfunctions. Hard measurement problems can then often be reformulated as easier, standard mathematical problems. For example, the problem of estimating…

  6. Lack of Set Theory Relevant Prerequisite Knowledge

    ERIC Educational Resources Information Center

    Dogan-Dunlap, Hamide

    2006-01-01

    Many students struggle with college mathematics topics due to a lack of mastery of prerequisite knowledge. Set theory language is one such prerequisite for linear algebra courses. Many students' mistakes on linear algebra questions reveal a lack of mastery of set theory knowledge. This paper reports the findings of a qualitative analysis of a…

  7. A python framework for environmental model uncertainty analysis

    USGS Publications Warehouse

    White, Jeremy; Fienen, Michael N.; Doherty, John E.

    2016-01-01

    We have developed pyEMU, a python framework for Environmental Modeling Uncertainty analyses: an open-source tool that is non-intrusive, easy to use, computationally efficient, and scalable to highly parameterized inverse problems. The framework implements several types of linear (first-order, second-moment (FOSM)) and non-linear uncertainty analyses. The FOSM-based analyses can also be completed prior to parameter estimation to help inform important modeling decisions, such as parameterization and objective function formulation. Complete workflows for several types of FOSM-based and non-linear analyses are documented in example notebooks implemented using Jupyter that are available in the online pyEMU repository. Example workflows include basic parameter and forecast analyses, data worth analyses, and error-variance analyses, as well as usage of parameter ensemble generation and management capabilities. These workflows document the necessary steps and provide insights into the results, with the goal of educating users not only in how to apply pyEMU, but also in the underlying theory of applied uncertainty quantification.
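
    FOSM-based uncertainty analysis of the kind pyEMU implements rests on a linear (Schur-complement) conditioning of a prior parameter covariance on observations. The sketch below shows the underlying algebra in plain NumPy with an invented Jacobian and covariances; it is not the pyEMU API.

```python
import numpy as np

rng = np.random.default_rng(2)

# Invented linearized problem: 3 observations, 4 parameters.
J   = rng.standard_normal((3, 4))   # Jacobian of observations w.r.t. parameters
C_p = np.eye(4)                     # prior parameter covariance
C_o = 0.1 * np.eye(3)               # observation-noise covariance

# FOSM / Schur-complement conditioning of the prior on the observations:
# C_post = C_p - C_p J^T (J C_p J^T + C_o)^-1 J C_p
S = J @ C_p @ J.T + C_o
C_post = C_p - C_p @ J.T @ np.linalg.solve(S, J @ C_p)

# Forecast uncertainty before and after "calibration" for a linear forecast
# y = f^T theta (f is an invented forecast sensitivity vector).
f = np.array([1.0, 0.5, 0.0, -1.0])
var_prior = float(f @ C_p @ f)
var_post  = float(f @ C_post @ f)
```

    The reduction from `var_prior` to `var_post` is the "data worth" of the observations for this forecast; repeating the calculation with rows of `J` removed ranks the observations.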

  8. Identifiability Results for Several Classes of Linear Compartment Models.

    PubMed

    Meshkat, Nicolette; Sullivant, Seth; Eisenberg, Marisa

    2015-08-01

    Identifiability concerns finding which unknown parameters of a model can be estimated, uniquely or otherwise, from given input-output data. If some subset of the parameters of a model cannot be determined given input-output data, then we say the model is unidentifiable. In this work, we study linear compartment models, which are a class of biological models commonly used in pharmacokinetics, physiology, and ecology. In past work, we used commutative algebra and graph theory to identify a class of linear compartment models that we call identifiable cycle models, which are unidentifiable but have the simplest possible identifiable functions (so-called monomial cycles). Here we show how to modify identifiable cycle models by adding inputs, adding outputs, or removing leaks, in such a way that we obtain an identifiable model. We also prove a constructive result on how to combine identifiable models, each corresponding to strongly connected graphs, into a larger identifiable model. We apply these theoretical results to several real-world biological models from physiology, cell biology, and ecology.

  9. Estimation and Control with Relative Measurements: Algorithms and Scaling Laws

    DTIC Science & Technology

    2007-09-01

    eigenvector of L−1 corresponding to its largest eigenvalue. Since L−1 is a positive matrix, Perron-Frobenius theory tells us that |u1| := {|u11... the Frobenius norm of a matrix, and a linear vector space SV as the space of all bounded node-functions with respect to the above-defined norm... ‖e‖²F where Eu is the set of edges in E that are incident on u. It can be shown from the relationship between the Frobenius norm and the singular

  10. Electronic transport coefficients from ab initio simulations and application to dense liquid hydrogen

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Holst, Bastian; French, Martin; Redmer, Ronald

    2011-06-15

    Using Kubo's linear response theory, we derive expressions for the frequency-dependent electrical conductivity (Kubo-Greenwood formula), thermopower, and thermal conductivity in a strongly correlated electron system. These are evaluated within ab initio molecular dynamics simulations in order to study the thermoelectric transport coefficients in dense liquid hydrogen, especially near the nonmetal-to-metal transition region. We also observe significant deviations from the widely used Wiedemann-Franz law, which is strictly valid only for degenerate systems, and give an estimate for its valid scope of application toward lower densities.
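
    Deviations from the Wiedemann-Franz law are usually quantified by comparing the Lorenz number L = κ/(σT) with the Sommerfeld value L0 = (π²/3)(kB/e)². A minimal sketch, with illustrative transport values that are not taken from the simulations in the abstract:

```python
import numpy as np

k_B = 1.380649e-23        # Boltzmann constant, J/K
e   = 1.602176634e-19     # elementary charge, C
L0  = (np.pi**2 / 3.0) * (k_B / e) ** 2   # Sommerfeld value, ~2.44e-8 W Ohm K^-2

def lorenz_number(kappa, sigma, T):
    """Lorenz number from thermal conductivity kappa (W m^-1 K^-1),
    electrical conductivity sigma (S/m) and temperature T (K)."""
    return kappa / (sigma * T)

# Illustrative values only (invented, not simulation results):
T, sigma, kappa = 2000.0, 2.0e5, 10.0
ratio = lorenz_number(kappa, sigma, T) / L0   # equals 1 for a degenerate gas
```

    A `ratio` far from 1 signals a departure from the Wiedemann-Franz law, which the abstract notes is expected once the system is no longer degenerate.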

  11. Plunge waveforms from inspiralling binary black holes.

    PubMed

    Baker, J; Brügmann, B; Campanelli, M; Lousto, C O; Takahashi, R

    2001-09-17

    We study the coalescence of nonspinning binary black holes from near the innermost stable circular orbit down to the final single rotating black hole. We use a technique that combines the full numerical approach to solve the Einstein equations, applied in the truly nonlinear regime, and linearized perturbation theory around the final distorted single black hole at later times. We compute the plunge waveforms, which present a non-negligible signal lasting for t approximately 100M showing early nonlinear ringing, and we obtain estimates for the total gravitational energy and angular momentum radiated.

  12. On Compressible Vortex Sheets

    NASA Astrophysics Data System (ADS)

    Secchi, Paolo

    2005-05-01

    We introduce the main known results of the theory of incompressible and compressible vortex sheets. Moreover, we present recent results obtained by the author with J. F. Coulombel about supersonic compressible vortex sheets in two space dimensions. The problem is a nonlinear free boundary hyperbolic problem with two difficulties: the free boundary is characteristic and the Lopatinski condition holds only in a weak sense, yielding losses of derivatives. Under a supersonic condition that precludes violent instabilities, we prove an energy estimate for the boundary value problem obtained by linearization around an unsteady piecewise solution.

  13. On the existence of mosaic-skeleton approximations for discrete analogues of integral operators

    NASA Astrophysics Data System (ADS)

    Kashirin, A. A.; Taltykina, M. Yu.

    2017-09-01

    Exterior three-dimensional Dirichlet problems for the Laplace and Helmholtz equations are considered. By applying methods of potential theory, they are reduced to equivalent Fredholm boundary integral equations of the first kind, for which discrete analogues, i.e., systems of linear algebraic equations (SLAEs) are constructed. The existence of mosaic-skeleton approximations for the matrices of the indicated systems is proved. These approximations make it possible to reduce the computational complexity of an iterative solution of the SLAEs. Numerical experiments estimating the capabilities of the proposed approach are described.

  14. Consensus for linear multi-agent system with intermittent information transmissions using the time-scale theory

    NASA Astrophysics Data System (ADS)

    Taousser, Fatima; Defoort, Michael; Djemai, Mohamed

    2016-01-01

    This paper investigates the consensus problem for linear multi-agent system with fixed communication topology in the presence of intermittent communication using the time-scale theory. Since each agent can only obtain relative local information intermittently, the proposed consensus algorithm is based on a discontinuous local interaction rule. The interaction among agents happens at a disjoint set of continuous-time intervals. The closed-loop multi-agent system can be represented using mixed linear continuous-time and linear discrete-time models due to intermittent information transmissions. The time-scale theory provides a powerful tool to combine continuous-time and discrete-time cases and study the consensus protocol under a unified framework. Using this theory, some conditions are derived to achieve exponential consensus under intermittent information transmissions. Simulations are performed to validate the theoretical results.
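
    The intermittent-communication setting can be illustrated with a simple discrete-time simulation: agents apply the consensus update x ← x − εLx only during communication windows and hold their states otherwise. The graph, gains, and window lengths below are invented for the example.

```python
import numpy as np

# Fixed 4-agent path-graph topology; L is its graph Laplacian.
L = np.array([[ 1., -1.,  0.,  0.],
              [-1.,  2., -1.,  0.],
              [ 0., -1.,  2., -1.],
              [ 0.,  0., -1.,  1.]])

x = np.array([4.0, 1.0, -2.0, 5.0])   # initial agent states
eps = 0.1                             # step size (must satisfy eps < 2/lambda_max)

# Alternate communication windows (consensus update applied) with silent
# windows (states held), mimicking intermittent information transmission.
for step in range(400):
    communicating = (step // 20) % 2 == 0   # 20 steps on, 20 steps off
    if communicating:
        x = x - eps * (L @ x)               # Euler step of x' = -L x

disagreement = float(np.max(x) - np.min(x))
average = float(x.mean())                   # the consensus value (preserved)
```

    Because the Laplacian has zero row and column sums, the average state is invariant, so the agents converge to the mean of the initial conditions despite the communication gaps.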

  15. A study of the limitations of linear theory methods as applied to sonic boom calculations

    NASA Technical Reports Server (NTRS)

    Darden, Christine M.

    1990-01-01

    Current sonic boom minimization theories have been reviewed to emphasize the capabilities and flexibilities of the methods. Flexibility is important because it is necessary for the designer to meet optimized area constraints while reducing the impact on vehicle aerodynamic performance. Preliminary comparisons of sonic booms predicted for two Mach 3 concepts illustrate the benefits of shaping. Finally, for very simple bodies of revolution, sonic boom predictions were made using two methods - a modified linear theory method and a nonlinear method - for signature shapes which were both farfield N-waves and midfield waves. Preliminary analysis on these simple bodies verified that current modified linear theory prediction methods become inadequate for predicting midfield signatures for Mach numbers above 3. The importance of impulse is sonic boom disturbance and the importance of three-dimensional effects which could not be simulated with the bodies of revolution will determine the validity of current modified linear theory methods in predicting midfield signatures at lower Mach numbers.

  16. Inference with minimal Gibbs free energy in information field theory.

    PubMed

    Ensslin, Torsten A; Weig, Cornelius

    2010-11-01

    Non-linear and non-Gaussian signal inference problems are difficult to tackle. Renormalization techniques permit us to construct good estimators for the posterior signal mean within information field theory (IFT), but the approximations and assumptions made are not very obvious. Here we introduce the simple concept of minimal Gibbs free energy to IFT and show that previous renormalization results emerge naturally. They can be understood as the Gaussian approximation to the full posterior probability which has maximal cross information with it. We derive optimized estimators for three applications to illustrate the usage of the framework: (i) reconstruction of a log-normal signal from Poissonian data with background counts and a point spread function, as needed for gamma-ray astronomy and for cosmography using photometric galaxy redshifts; (ii) inference of a Gaussian signal with unknown spectrum; and (iii) inference of a Poissonian log-normal signal with unknown spectrum, the combination of (i) and (ii). Finally, we explain how Gaussian knowledge states constructed by the minimal Gibbs free energy principle at different temperatures can be combined into a more accurate surrogate of the non-Gaussian posterior.

  17. Quantum Theory of Superresolution for Incoherent Optical Imaging

    NASA Astrophysics Data System (ADS)

    Tsang, Mankei

    Rayleigh's criterion for resolving two incoherent point sources has been the most influential measure of optical imaging resolution for over a century. In the context of statistical image processing, violation of the criterion is especially detrimental to the estimation of the separation between the sources, and modern far-field superresolution techniques rely on suppressing the emission of close sources to enhance the localization precision. Using quantum optics, quantum metrology, and statistical analysis, here we show that, even if two close incoherent sources emit simultaneously, measurements with linear optics and photon counting can estimate their separation from the far field almost as precisely as conventional methods do for isolated sources, rendering Rayleigh's criterion irrelevant to the problem. Our results demonstrate that superresolution can be achieved not only for fluorophores but also for stars. Recent progress in generalizing our theory for multiple sources and spectroscopy will also be discussed. This work is supported by the Singapore National Research Foundation under NRF Grant No. NRF-NRFF2011-07 and the Singapore Ministry of Education Academic Research Fund Tier 1 Project R-263-000-C06-112.
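
    "Rayleigh's curse" for direct imaging can be reproduced numerically: the per-photon Fisher information for the separation of two incoherent point sources collapses as the separation shrinks, while the quantum limit for a Gaussian point-spread function stays at 1/(4σ²) per photon. A sketch, with grid resolution and separations chosen arbitrarily:

```python
import numpy as np

# Per-photon Fisher information for the separation s of two equal incoherent
# point sources under direct imaging; Gaussian PSF with sigma = 1.
x = np.linspace(-20.0, 20.0, 4001)
dx = x[1] - x[0]

def psf(u):
    return np.exp(-u**2 / 2.0) / np.sqrt(2.0 * np.pi)

def intensity(s):
    """Image-plane photon density for two sources separated by s."""
    return 0.5 * (psf(x - s / 2.0) + psf(x + s / 2.0))

def fisher_direct(s, h=1e-4):
    I = intensity(s)
    dI = (intensity(s + h) - intensity(s - h)) / (2.0 * h)  # dI/ds
    return float(np.sum(dI**2 / I) * dx)

F_small = fisher_direct(0.1)   # nearly unresolved pair: information collapses
F_large = fisher_direct(4.0)   # well-resolved pair: approaches 1/(4 sigma^2)
```

    The proposed linear-optics measurements recover (close to) the constant quantum value 1/(4σ²) even at small separations, which is the sense in which Rayleigh's criterion becomes irrelevant.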

  18. Deformation of a helical filament by flow and electric or magnetic fields

    NASA Astrophysics Data System (ADS)

    Kim, Munju; Powers, Thomas R.

    2005-02-01

    Motivated by recent advances in the real-time imaging of fluorescent flagellar filaments in living bacteria [Turner, Ryu, and Berg, J. Bacteriol. 182, 2793 (2000)], we compute the deformation of a helical elastic filament due to flow and external magnetic or high-frequency electric fields. Two cases of deformation due to hydrodynamic drag are considered: the compression of a filament rotated by a stationary motor and the extension of a stationary filament due to flow along the helical axis. We use Kirchhoff rod theory for the filament, and work to linear order in the deflection. Hydrodynamic forces are described first by resistive-force theory, and then for comparison by the more accurate slender-body theory. For helices with a short pitch, the deflection in axial flow predicted by slender-body theory is significantly smaller than that computed with resistive-force theory. Therefore, our estimate of the bending stiffness of a flagellar filament is smaller than that of previous workers. In our calculation of the deformation of a polarizable helix in an external field, we show that the problem is equivalent to the classical case of a helix deformed by forces applied only at the ends.

  19. A general theory of intertemporal decision-making and the perception of time

    PubMed Central

    Namboodiri, Vijay M. K.; Mihalas, Stefan; Marton, Tanya M.; Hussain Shuler, Marshall G.

    2014-01-01

    Animals and humans make decisions based on their expected outcomes. Since relevant outcomes are often delayed, perceiving delays and choosing between earlier vs. later rewards (intertemporal decision-making) is an essential component of animal behavior. The myriad observations made in experiments studying intertemporal decision-making and time perception have not yet been rationalized within a single theory. Here we present a theory—Training-Integrated Maximized Estimation of Reinforcement Rate (TIMERR)—that explains a wide variety of behavioral observations made in intertemporal decision-making and the perception of time. Our theory postulates that animals make intertemporal choices to optimize expected reward rates over a limited temporal window which includes a past integration interval—over which experienced reward rate is estimated—as well as the expected delay to future reward. Using this theory, we derive mathematical expressions for both the subjective value of a delayed reward and the subjective representation of the delay. A unique contribution of our work is in finding that the past integration interval directly determines the steepness of temporal discounting and the non-linearity of time perception. In so doing, our theory provides a single framework to understand both intertemporal decision-making and time perception. PMID:24616677

  20. Einstein’s quadrupole formula from the kinetic-conformal Hořava theory

    NASA Astrophysics Data System (ADS)

    Bellorín, Jorge; Restuccia, Alvaro

    We analyze the radiative and nonradiative linearized variables in a gravity theory within the family of the nonprojectable Hořava theories, the Hořava theory at the kinetic-conformal point. There is no extra mode in this formulation; the theory shares the same number of degrees of freedom with general relativity. The large-distance effective action, which is the one we consider, can be given in a generally covariant form under asymptotically flat boundary conditions: the Einstein-aether theory under the condition of hypersurface orthogonality on the aether vector. In the linearized theory, we find that only the transverse-traceless tensorial modes obey a sourced wave equation, as in general relativity. The rest of the variables are nonradiative. The result is gauge independent at the level of the linearized theory. For the case of a weak source, we find that the leading mode in the far zone is exactly Einstein’s quadrupole formula of general relativity, if some coupling constants are properly identified. There are no monopoles nor dipoles in this formulation, in distinction to the nonprojectable Hořava theory outside the kinetic-conformal point. We also discuss some constraints on the theory arising from the observational bounds on Lorentz-violating theories.

  1. Applying a probabilistic seismic-petrophysical inversion and two different rock-physics models for reservoir characterization in offshore Nile Delta

    NASA Astrophysics Data System (ADS)

    Aleardi, Mattia

    2018-01-01

    We apply a two-step probabilistic seismic-petrophysical inversion for the characterization of a clastic, gas-saturated, reservoir located in offshore Nile Delta. In particular, we discuss and compare the results obtained when two different rock-physics models (RPMs) are employed in the inversion. The first RPM is an empirical, linear model directly derived from the available well log data by means of an optimization procedure. The second RPM is a theoretical, non-linear model based on the Hertz-Mindlin contact theory. The first step of the inversion procedure is a Bayesian linearized amplitude versus angle (AVA) inversion in which the elastic properties, and the associated uncertainties, are inferred from pre-stack seismic data. The estimated elastic properties constitute the input to the second step that is a probabilistic petrophysical inversion in which we account for the noise contaminating the recorded seismic data and the uncertainties affecting both the derived rock-physics models and the estimated elastic parameters. In particular, a Gaussian mixture a-priori distribution is used to properly take into account the facies-dependent behavior of petrophysical properties, related to the different fluid and rock properties of the different litho-fluid classes. In the synthetic and in the field data tests, the very minor differences between the results obtained by employing the two RPMs, and the good match between the estimated properties and well log information, confirm the applicability of the inversion approach and the suitability of the two different RPMs for reservoir characterization in the investigated area.

  2. Scalar-tensor linear inflation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Artymowski, Michał; Racioppi, Antonio, E-mail: Michal.Artymowski@uj.edu.pl, E-mail: Antonio.Racioppi@kbfi.ee

    2017-04-01

    We investigate two approaches to non-minimally coupled gravity theories which present linear inflation as attractor solution: a) the scalar-tensor theory approach, where we look for a scalar-tensor theory that would restore results of linear inflation in the strong coupling limit for a non-minimal coupling to gravity of the form of f (φ) R /2; b) the particle physics approach, where we motivate the form of the Jordan frame potential by loop corrections to the inflaton field. In both cases the Jordan frame potentials are modifications of the induced gravity inflationary scenario, but instead of the Starobinsky attractor they lead to linear inflation in the strong coupling limit.

  3. CO2 flux determination by closed-chamber methods can be seriously biased by inappropriate application of linear regression

    NASA Astrophysics Data System (ADS)

    Kutzbach, L.; Schneider, J.; Sachs, T.; Giebels, M.; Nykänen, H.; Shurpali, N. J.; Martikainen, P. J.; Alm, J.; Wilmking, M.

    2007-11-01

    Closed (non-steady state) chambers are widely used for quantifying carbon dioxide (CO2) fluxes between soils or low-stature canopies and the atmosphere. It is well recognised that covering a soil or vegetation by a closed chamber inherently disturbs the natural CO2 fluxes by altering the concentration gradients between the soil, the vegetation and the overlying air. Thus, the driving factors of CO2 fluxes are not constant during the closed chamber experiment, and no linear increase or decrease of CO2 concentration over time within the chamber headspace can be expected. Nevertheless, linear regression has been applied for calculating CO2 fluxes in many recent, partly influential, studies. This approach has been justified by keeping the closure time short and assuming the concentration change over time to be in the linear range. Here, we test if the application of linear regression is really appropriate for estimating CO2 fluxes using closed chambers over short closure times and if the application of nonlinear regression is necessary. We developed a nonlinear exponential regression model from diffusion and photosynthesis theory. This exponential model was tested with four different datasets of CO2 flux measurements (total number: 1764) conducted at three peatlands sites in Finland and a tundra site in Siberia. Thorough analyses of residuals demonstrated that linear regression was frequently not appropriate for the determination of CO2 fluxes by closed-chamber methods, even if closure times were kept short. The developed exponential model was well suited for nonlinear regression of the concentration over time c(t) evolution in the chamber headspace and estimation of the initial CO2 fluxes at closure time for the majority of experiments. However, a rather large percentage of the exponential regression functions showed curvatures not consistent with the theoretical model which is considered to be caused by violations of the underlying model assumptions. 
In particular, turbulence and pressure disturbances caused by chamber deployment are suspected of producing these unexplained curvatures. CO2 flux estimates by linear regression can be as low as 40% of the flux estimates of exponential regression for closure times of only two minutes. The degree of underestimation increased with increasing CO2 flux strength and was dependent on soil and vegetation conditions, which can disturb not only the quantitative but also the qualitative evaluation of CO2 flux dynamics. The underestimation effect of linear regression was observed to differ between CO2 uptake and release situations, which can lead to stronger bias in the daily, seasonal and annual CO2 balances than in the individual fluxes. To avoid serious bias of CO2 flux estimates based on closed chamber experiments, we suggest further tests using published datasets and recommend the use of nonlinear regression models for future closed chamber studies.
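The underestimation effect can be reproduced with a toy calculation. The sketch below is not the authors' exact model; it assumes an illustrative exponential headspace model c(t) = c_s - (c_s - c0)·exp(-k·t), whose initial flux is f0 = k·(c_s - c0), and compares a linear fit over a two-minute closure against the exponential fit (all parameter values are made up):

```python
import numpy as np

# Simulated closed-chamber CO2 concentration record (illustrative values).
rng = np.random.default_rng(0)
c0, c_s, k = 400.0, 650.0, 0.01          # ppm, ppm, 1/s (assumed)
t = np.linspace(0.0, 120.0, 25)          # two-minute closure time
c = c_s - (c_s - c0) * np.exp(-k * t) + rng.normal(0.0, 0.3, t.size)
true_f0 = k * (c_s - c0)                 # true initial flux, ppm/s

# Linear regression over the whole closure underestimates the initial flux.
lin_slope = np.polyfit(t, c, 1)[0]

# Exponential fit: for fixed k the model is linear in (c0, c_s), so a crude
# grid search over k with linear least squares is enough for this sketch.
best = None
for k_try in np.logspace(-4, 0, 400):
    X = np.column_stack([np.exp(-k_try * t), 1.0 - np.exp(-k_try * t)])
    coef = np.linalg.lstsq(X, c, rcond=None)[0]
    sse = np.sum((X @ coef - c) ** 2)
    if best is None or sse < best[0]:
        best = (sse, k_try, coef)
_, k_hat, (c0_hat, cs_hat) = best
exp_f0 = k_hat * (cs_hat - c0_hat)       # recovered initial flux
```

The linear slope lands well below the true initial flux even for this short closure, while the exponential fit recovers it.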

  4. An outflow boundary condition for aeroacoustic computations

    NASA Technical Reports Server (NTRS)

    Hayder, M. Ehtesham; Hagstrom, Thomas

    1995-01-01

    A formulation of boundary condition for flows with small disturbances is presented. The authors test their methodology in an axisymmetric jet flow calculation, using both the Navier-Stokes and Euler equations. Solutions in the far field are assumed to be oscillatory. If the oscillatory disturbances are small, the growth of the solution variables can be predicted by linear theory. Eigenfunctions of the linear theory are used explicitly in the formulation of the boundary conditions. This guarantees correct solutions at the boundary in the limit where the predictions of linear theory are valid.

  5. Dynamic Characteristics of Regional Flows around the Pyrénées in View of the PYREX Experiment. Part I: Analysis of the Pressure and Wind Fields and Experimental Assessment of the Applicability of the Linear Theory.

    NASA Astrophysics Data System (ADS)

    Bénech, B.; Koffi, E.; Druilhet, A.; Durand, P.; Bessemoulin, P.; Campins, J.; Jansa, A.; Terliuc, B.

    1998-01-01

    regarding (a) the perturbation of the surface pressure field, which resembles the predicted bipolar distribution; (b) the dependence of the drag on Fr1, which enables the assessment of the linear theory and the definition of the conditions of applicability of two models [(i) a two-dimensional model, for which it was possible to define quantitatively the effective blocked area, and (ii) a three-dimensional model, for which a scaling function that combines the direction of incidence, the mountain shape, and the Coriolis effect was found almost constant, with an average value of 0.2 for all the cases under study]; (c) the extension of the area affected by the blocking effect, estimated to be 4.5-5 times the width of the barrier and the drift of the strong deceleration point due to the Coriolis effect; (d) the dependence of the wind velocities on Fr1 at the edges of the barrier; and (e) the asymmetric flow deviation induced by the Coriolis effect and biased by the departure of the flow from normal incidence.

  6. A Doppler centroid estimation algorithm for SAR systems optimized for the quasi-homogeneous source

    NASA Technical Reports Server (NTRS)

    Jin, Michael Y.

    1989-01-01

    Radar signal processing applications frequently require an estimate of the Doppler centroid of a received signal. The Doppler centroid estimate is required for synthetic aperture radar (SAR) processing. It is also required for some applications involving target motion estimation and antenna pointing direction estimation. In some cases, the Doppler centroid can be accurately estimated based on available information regarding the terrain topography, the relative motion between the sensor and the terrain, and the antenna pointing direction. Often, the accuracy of the Doppler centroid estimate can be improved by analyzing the characteristics of the received SAR signal. This kind of signal processing is also referred to as clutterlock processing. A Doppler centroid estimation (DCE) algorithm is described which contains a linear estimator optimized for the type of terrain surface that can be modeled by a quasi-homogeneous source (QHS). Information on the following topics is presented: (1) an introduction to the theory of Doppler centroid estimation; (2) analysis of the performance characteristics of previously reported DCE algorithms; (3) comparison of these analysis results with experimental results; (4) a description and performance analysis of a Doppler centroid estimator which is optimized for a QHS; and (5) comparison of the performance of the optimal QHS Doppler centroid estimator with that of previously reported methods.
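The abstract's optimal QHS estimator is not reproduced here, but the basic clutterlock idea can be illustrated with a simple correlation Doppler estimator (an assumption of this sketch, not the paper's algorithm): the Doppler centroid is read off the phase of the azimuth lag-one autocorrelation of the received signal.

```python
import numpy as np

rng = np.random.default_rng(1)
prf = 1000.0                              # pulse repetition frequency, Hz (assumed)
f_dc = 123.0                              # true Doppler centroid, Hz (assumed)
n = np.arange(4096)

# Correlated complex clutter (moving average of white noise), Doppler-shifted.
w = rng.normal(size=n.size + 7) + 1j * rng.normal(size=n.size + 7)
clutter = np.convolve(w, np.ones(8) / 8.0, mode="valid")
sig = clutter * np.exp(2j * np.pi * f_dc * n / prf)

# Lag-one autocorrelation; its phase encodes the centroid (mod the PRF).
r1 = np.vdot(sig[:-1], sig[1:])           # sum of conj(s[n]) * s[n+1]
f_est = prf / (2.0 * np.pi) * np.angle(r1)
```

For |f_dc| < PRF/2 the phase is unambiguous and f_est tracks the true centroid closely.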

  7. Estimation of ion competition via correlated responsivity offset in linear ion trap mass spectrometry analysis: theory and practical use in the analysis of cyanobacterial hepatotoxin microcystin-LR in extracts of food additives.

    PubMed

    Urban, Jan; Hrouzek, Pavel; Stys, Dalibor; Martens, Harald

    2013-01-01

Responsivity is a conversion qualification of a measurement device given by the functional dependence between the input and output quantities. A concentration-response calibration curve represents the simplest experiment for the measurement of responsivity in mass spectrometry. The cyanobacterial hepatotoxin microcystin-LR content in complex biological matrices of food additives was chosen as a model example of a typical problem. The calibration curves for pure microcystin and its mixtures with extracts of green alga and fish meat were reconstructed from the series of measurements. A novel approach for the quantitative estimation of ion competition in ESI is proposed in this paper. We define the correlated responsivity offset in the intensity values using the approximation of minimal correlation given by the matrix to the target mass values of the analyte. The estimation of the matrix influence enables the approximation of the position of a priori unknown responsivity and was easily evaluated using a simple algorithm. The method itself is directly derived from the basic attributes of the theory of measurements. There is sufficient agreement between the theoretical and experimental values. However, some theoretical issues are discussed to avoid misinterpretations and excessive expectations.

  8. Estimation of Ion Competition via Correlated Responsivity Offset in Linear Ion Trap Mass Spectrometry Analysis: Theory and Practical Use in the Analysis of Cyanobacterial Hepatotoxin Microcystin-LR in Extracts of Food Additives

    PubMed Central

    Hrouzek, Pavel; Štys, Dalibor; Martens, Harald

    2013-01-01

Responsivity is a conversion qualification of a measurement device given by the functional dependence between the input and output quantities. A concentration-response calibration curve represents the simplest experiment for the measurement of responsivity in mass spectrometry. The cyanobacterial hepatotoxin microcystin-LR content in complex biological matrices of food additives was chosen as a model example of a typical problem. The calibration curves for pure microcystin and its mixtures with extracts of green alga and fish meat were reconstructed from the series of measurements. A novel approach for the quantitative estimation of ion competition in ESI is proposed in this paper. We define the correlated responsivity offset in the intensity values using the approximation of minimal correlation given by the matrix to the target mass values of the analyte. The estimation of the matrix influence enables the approximation of the position of a priori unknown responsivity and was easily evaluated using a simple algorithm. The method itself is directly derived from the basic attributes of the theory of measurements. There is sufficient agreement between the theoretical and experimental values. However, some theoretical issues are discussed to avoid misinterpretations and excessive expectations. PMID:23586036

  9. ORACLS: A system for linear-quadratic-Gaussian control law design

    NASA Technical Reports Server (NTRS)

    Armstrong, E. S.

    1978-01-01

    A modern control theory design package (ORACLS) for constructing controllers and optimal filters for systems modeled by linear time-invariant differential or difference equations is described. Numerical linear-algebra procedures are used to implement the linear-quadratic-Gaussian (LQG) methodology of modern control theory. Algorithms are included for computing eigensystems of real matrices, the relative stability of a matrix, factored forms for nonnegative definite matrices, the solutions and least squares approximations to the solutions of certain linear matrix algebraic equations, the controllability properties of a linear time-invariant system, and the steady state covariance matrix of an open-loop stable system forced by white noise. Subroutines are provided for solving both the continuous and discrete optimal linear regulator problems with noise free measurements and the sampled-data optimal linear regulator problem. For measurement noise, duality theory and the optimal regulator algorithms are used to solve the continuous and discrete Kalman-Bucy filter problems. Subroutines are also included which give control laws causing the output of a system to track the output of a prescribed model.
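ORACLS itself uses carefully implemented numerical linear algebra; as a minimal sketch of the LQG recipe it wraps, the block below iterates the discrete Riccati equation for the regulator gain and then reuses the same routine, via duality, for the Kalman filter gain (system matrices and iteration count are illustrative assumptions):

```python
import numpy as np

# Discrete LQR gain by fixed-point iteration of the Riccati equation
# (a simple stand-in for a production Riccati solver).
def dlqr(A, B, Q, R, iters=500):
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K, P

A = np.array([[1.0, 0.1], [0.0, 1.0]])   # discretized double integrator (toy)
B = np.array([[0.005], [0.1]])
Q, R = np.eye(2), np.array([[1.0]])
K, _ = dlqr(A, B, Q, R)                  # state-feedback gain

# Duality: solving the LQR problem for (A', C') yields the Kalman filter gain.
C = np.array([[1.0, 0.0]])
W = 0.01 * np.eye(2)                     # process noise covariance (assumed)
V = np.array([[0.1]])                    # measurement noise covariance (assumed)
Kf_T, _ = dlqr(A.T, C.T, W, V)
L = Kf_T.T                               # filter gain; A - L C is stable
```

Both the regulator loop A - BK and the observer loop A - LC come out stable, which is the property the ORACLS subroutines certify numerically.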

  10. Decoherence estimation in quantum theory and beyond

    NASA Astrophysics Data System (ADS)

    Pfister, Corsin

    The quantum physics literature provides many different characterizations of decoherence. Most of them have in common that they describe decoherence as a kind of influence on a quantum system upon interacting with another system. In the spirit of quantum information theory, we adopt a particular viewpoint on decoherence which describes it as the loss of information into a system that is possibly controlled by an adversary. We use a quantitative framework for decoherence that builds on operational characterizations of the min-entropy that have been developed in the quantum information literature. It characterizes decoherence as an influence on quantum channels that reduces their suitability for a variety of quantifiable tasks such as the distribution of secret cryptographic keys of a certain length or the distribution of a certain number of maximally entangled qubit pairs. This allows for a quantitative and operational characterization of decoherence via operational characterizations of the min-entropy. In this thesis, we present a series of results about the estimation of the min-entropy, subdivided into three parts. The first part concerns the estimation of a quantum adversary's uncertainty about classical information--expressed by the smooth min-entropy--as it is done in protocols for quantum key distribution (QKD). We analyze this form of min-entropy estimation in detail and find that some of the more recently suggested QKD protocols have previously unnoticed security loopholes. We show that the specifics of the sifting subroutine of a QKD protocol are crucial for security by pointing out mistakes in the security analysis in the literature and by presenting eavesdropping attacks on those problematic protocols. We provide solutions to the identified problems and present a formalized analysis of the min-entropy estimate that incorporates the sifting stage of QKD protocols. 
In the second part, we extend ideas from QKD to a protocol that allows one to estimate an adversary's uncertainty about quantum information, expressed by the fully quantum smooth min-entropy. Roughly speaking, we show that a protocol that resembles the parallel execution of two QKD protocols can be used to lower bound the min-entropy of some unmeasured qubits. We explain how this result may influence the ongoing search for protocols for entanglement distribution. The third part is dedicated to the development of a framework that allows the estimation of decoherence even in experiments that cannot be correctly described by quantum theory. Inspired by an equivalent formulation of the min-entropy that relates it to the fidelity with a maximally entangled state, we define a decoherence quantity for a very general class of probabilistic theories that reduces to the min-entropy in the special case of quantum theory. This entails a definition of maximal entanglement for generalized probabilistic theories. Using techniques from semidefinite and linear programming, we show how bounds on this quantity can be estimated through Bell-type experiments. This makes it possible to test models for decoherence that cannot be described by quantum theory. As an example application, we devise an experimental test of a model for gravitational decoherence that has been suggested in the literature.
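In the classical special case, the min-entropy the thesis estimates has a very concrete operational reading: H_min(X) = -log2(max_x p(x)), the negative logarithm of the adversary's optimal guessing probability. A minimal numerical sketch of this classical case (not the quantum or smooth variants treated in the thesis):

```python
import numpy as np

# Classical min-entropy: -log2 of the largest probability, i.e. the number of
# bits of uncertainty an optimal guesser faces.
def min_entropy(p):
    p = np.asarray(p, dtype=float)
    assert np.isclose(p.sum(), 1.0) and (p >= 0).all()
    return -np.log2(p.max())

# A uniform distribution on 4 symbols carries 2 bits of min-entropy ...
h_uniform = min_entropy([0.25, 0.25, 0.25, 0.25])
# ... while a biased one is easier to guess and carries less.
h_biased = min_entropy([0.7, 0.1, 0.1, 0.1])
```

The smooth and fully quantum versions used in QKD security proofs generalize exactly this guessing-probability picture.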

  11. Frequency domain system identification of helicopter rotor dynamics incorporating models with time periodic coefficients

    NASA Astrophysics Data System (ADS)

    Hwang, Sunghwan

    1997-08-01

    One of the most prominent features of helicopter rotor dynamics in forward flight is the periodic coefficients in the equations of motion introduced by the rotor rotation. The frequency response characteristics of such a linear time periodic system exhibit sideband behavior, which is not the case for linear time invariant systems. Therefore, a frequency domain identification methodology for linear systems with time periodic coefficients was developed, because the linear time invariant theory cannot account for sideband behavior. The modulated complex Fourier series was introduced to eliminate the smearing effect of Fourier series expansions of exponentially modulated periodic signals. A system identification theory was then developed using modulated complex Fourier series expansion. Correlation and spectral density functions were derived using the modulated complex Fourier series expansion for linear time periodic systems. Expressions of the identified harmonic transfer function were then formulated using the spectral density functions both with and without additive noise processes at input and/or output. A procedure was developed to identify parameters of a model to match the frequency response characteristics between measured and estimated harmonic transfer functions by minimizing an objective function defined in terms of the trace of the squared frequency response error matrix. Feasibility was demonstrated by the identification of the harmonic transfer function and parameters for helicopter rigid blade flapping dynamics in forward flight. This technique is envisioned to satisfy the needs of system identification in the rotating frame, especially in the context of individual blade control. The technique was applied to the coupled flap-lag-inflow dynamics of a rigid blade excited by an active pitch link. The linear time periodic technique results were compared with the linear time invariant technique results. 
Also, the effects of noise processes and of the initial parameter guess on the identification procedure were investigated. To study the effect of elastic modes, a rigid blade with a trailing-edge flap excited by a smart actuator was selected, and the system parameters were successfully identified, though at some expense in computational storage and time. In conclusion, the linear time periodic technique substantially improved the identified parameter accuracy compared to the linear time invariant technique, and it was robust to noise and to the initial parameter guess. However, an elastic mode of higher frequency relative to the system pumping frequency tends to increase the computer storage requirement and computing time.
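The spectral-density route to a transfer function is easiest to see in the time-invariant special case, where the harmonic transfer function collapses to an ordinary one: H(f) ≈ S_yu(f)/S_uu(f) from averaged FFT segments. The sketch below illustrates that special case only (the FIR system and segment sizes are illustrative assumptions, not from the thesis):

```python
import numpy as np

rng = np.random.default_rng(2)
h = np.array([0.5, 0.3, 0.2])            # "true" FIR system (toy)
u = rng.normal(size=65536)               # white input
y = np.convolve(u, h, mode="full")[:u.size]

# Averaged auto- and cross-spectral densities over non-overlapping segments.
nfft = 256
U = np.fft.rfft(u.reshape(-1, nfft), axis=1)
Y = np.fft.rfft(y.reshape(-1, nfft), axis=1)
S_uu = np.mean(np.abs(U) ** 2, axis=0)
S_yu = np.mean(np.conj(U) * Y, axis=0)

H_est = S_yu / S_uu                      # empirical transfer function
H_true = np.fft.rfft(h, nfft)            # exact frequency response
```

Segment-boundary leakage introduces a small bias, but with 256 averaged segments the estimate sits close to the exact response; the time-periodic case in the thesis replaces U and Y with exponentially modulated versions to capture the sidebands.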

  12. H∞ filtering for discrete-time systems subject to stochastic missing measurements: a decomposition approach

    NASA Astrophysics Data System (ADS)

    Gu, Zhou; Fei, Shumin; Yue, Dong; Tian, Engang

    2014-07-01

    This paper deals with the problem of H∞ filtering for discrete-time systems with stochastic missing measurements. A new missing measurement model is developed by decomposing the interval of the missing rate into several segments. The probability of the missing rate in each subsegment is governed by its corresponding random variables. We aim to design a linear full-order filter such that the estimation error converges to zero exponentially in the mean square with less conservatism, while the disturbance rejection attenuation is constrained to a given level by means of an H∞ performance index. Based on Lyapunov theory, the reliable filter parameters are characterised in terms of the feasibility of a set of linear matrix inequalities. Finally, a numerical example is provided to demonstrate the effectiveness and applicability of the proposed design approach.
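The paper's filter is designed via LMIs; as a hedged stand-in that only illustrates the missing-measurement setting, the sketch below runs a standard time-varying Kalman-style filter where measurements arrive according to a Bernoulli variable (all matrices and the 70% arrival rate are assumptions of this sketch):

```python
import numpy as np

rng = np.random.default_rng(3)
A = np.array([[0.9, 0.1], [0.0, 0.8]])   # stable plant (toy)
C = np.array([[1.0, 0.0]])
W, V = 0.01 * np.eye(2), np.array([[0.04]])

x, xh, P = np.zeros(2), np.zeros(2), np.eye(2)
sq_errs = []
for _ in range(2000):
    x = A @ x + rng.normal(0.0, 0.1, 2)          # process noise, cov W
    xh, P = A @ xh, A @ P @ A.T + W              # predict
    if rng.random() < 0.7:                       # measurement arrives (assumed rate)
        y = C @ x + rng.normal(0.0, 0.2, 1)      # measurement noise, cov V
        K = P @ C.T @ np.linalg.inv(C @ P @ C.T + V)
        xh = xh + K @ (y - C @ xh)               # update only when data exist
        P = (np.eye(2) - K @ C) @ P
    sq_errs.append(np.sum((x - xh) ** 2))
rmse = float(np.sqrt(np.mean(sq_errs)))
```

Despite the random dropouts the estimation error stays bounded for this stable plant; the LMI design in the paper additionally guarantees the prescribed H∞ attenuation level.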

  13. A general theory of linear cosmological perturbations: bimetric theories

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lagos, Macarena; Ferreira, Pedro G., E-mail: m.lagos13@imperial.ac.uk, E-mail: p.ferreira1@physics.ox.ac.uk

    2017-01-01

    We implement the method developed in [1] to construct the most general parametrised action for linear cosmological perturbations of bimetric theories of gravity. Specifically, we consider perturbations around a homogeneous and isotropic background, and identify the complete form of the action invariant under diffeomorphism transformations, as well as the number of free parameters characterising this cosmological class of theories. We discuss, in detail, the case without derivative interactions, and compare our results with those found in massive bigravity.

  14. Graph-based linear scaling electronic structure theory.

    PubMed

    Niklasson, Anders M N; Mniszewski, Susan M; Negre, Christian F A; Cawkwell, Marc J; Swart, Pieter J; Mohd-Yusof, Jamal; Germann, Timothy C; Wall, Michael E; Bock, Nicolas; Rubensson, Emanuel H; Djidjev, Hristo

    2016-06-21

    We show how graph theory can be combined with quantum theory to calculate the electronic structure of large complex systems. The graph formalism is general and applicable to a broad range of electronic structure methods and materials, including challenging systems such as biomolecules. The methodology combines well-controlled accuracy, low computational cost, and natural low-communication parallelism. This combination addresses substantial shortcomings of linear scaling electronic structure theory, in particular with respect to quantum-based molecular dynamics simulations.
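A toy sketch of the graph idea (not the authors' algorithm): when the thresholded Hamiltonian's connectivity graph splits into disconnected subgraphs, the density matrix can be assembled from independent subproblems solved in parallel. The 4-orbital Hamiltonian and per-component occupations below are illustrative assumptions:

```python
import numpy as np

# Connected components of an adjacency matrix via depth-first search.
def components(adj):
    n = adj.shape[0]
    seen, comps = [False] * n, []
    for s in range(n):
        if seen[s]:
            continue
        stack, comp = [s], []
        seen[s] = True
        while stack:
            i = stack.pop()
            comp.append(i)
            for j in np.nonzero(adj[i])[0]:
                if not seen[j]:
                    seen[j] = True
                    stack.append(j)
        comps.append(sorted(comp))
    return comps

# Block-diagonal Hamiltonian: two uncoupled dimers (toy example).
H = np.array([[0.0, -1.0, 0.0, 0.0],
              [-1.0, 0.0, 0.0, 0.0],
              [0.0, 0.0, 0.5, -0.3],
              [0.0, 0.0, -0.3, 0.5]])
adj = np.abs(H) > 1e-12                  # graph of non-negligible couplings
n_occ = {0: 1, 1: 1}                     # occupied orbitals per component (assumed)

# Diagonalize each subgraph independently and assemble the density matrix.
D = np.zeros_like(H)
for ci, comp in enumerate(components(adj)):
    idx = np.ix_(comp, comp)
    _, U = np.linalg.eigh(H[idx])
    occ = U[:, : n_occ[ci]]
    D[idx] = occ @ occ.T
```

For this block structure, the assembled D coincides with the density matrix from a full diagonalization, while each subproblem scales only with its subgraph size.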

  15. Graph-based linear scaling electronic structure theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Niklasson, Anders M. N., E-mail: amn@lanl.gov; Negre, Christian F. A.; Cawkwell, Marc J.

    2016-06-21

    We show how graph theory can be combined with quantum theory to calculate the electronic structure of large complex systems. The graph formalism is general and applicable to a broad range of electronic structure methods and materials, including challenging systems such as biomolecules. The methodology combines well-controlled accuracy, low computational cost, and natural low-communication parallelism. This combination addresses substantial shortcomings of linear scaling electronic structure theory, in particular with respect to quantum-based molecular dynamics simulations.

  16. Adaptive Error Estimation in Linearized Ocean General Circulation Models

    NASA Technical Reports Server (NTRS)

    Chechelnitsky, Michael Y.

    1999-01-01

    Data assimilation methods are routinely used in oceanography. The statistics of the model and measurement errors need to be specified a priori. This study addresses the problem of estimating model and measurement error statistics from observations. We start by applying innovation-based methods of adaptive error estimation with low-dimensional models in the North Pacific (5-60 deg N, 132-252 deg E) to TOPEX/POSEIDON (T/P) sea level anomaly data, acoustic tomography data from the ATOC project, and the MIT General Circulation Model (GCM). A reduced state linear model that describes large scale internal (baroclinic) error dynamics is used. The methods are shown to be sensitive to the initial guess for the error statistics and the type of observations. A new off-line approach is developed, the covariance matching approach (CMA), where covariance matrices of model-data residuals are "matched" to their theoretical expectations using familiar least squares methods. This method uses observations directly instead of the innovations sequence and is shown to be related to the MT method and the method of Fu et al. (1993). Twin experiments using the same linearized MIT GCM suggest that altimetric data are ill-suited to the estimation of internal GCM errors, but that such estimates can in theory be obtained using acoustic data. The CMA is then applied to T/P sea level anomaly data and a linearization of a global GFDL GCM which uses two vertical modes. We show that the CMA method can be used with a global model and a global data set, and that the estimates of the error statistics are robust. We show that the fraction of the GCM-T/P residual variance explained by the model error is larger than that derived in Fukumori et al. (1999) with the method of Fu et al. (1993). Most of the model error is explained by the barotropic mode. However, we find that the impact of the change in the error statistics on the data assimilation estimates is very small. 
This is explained by the large representation error, i.e. the dominance of the mesoscale eddies in the T/P signal, which are not resolved by the coarse-resolution (2° by 1°) GCM. Therefore, the impact of the observations on the assimilation is very small even after the adjustment of the error statistics. This work demonstrates that simultaneous estimation of the model and measurement error statistics for data assimilation with global ocean data sets and linearized GCMs is possible. However, the error covariance estimation problem is in general highly underdetermined, much more so than the state estimation problem. In other words, there exists a very large number of statistical models that can be made consistent with the available data. Therefore, methods for obtaining quantitative error estimates, powerful though they may be, cannot replace physical insight. Used in the right context, as a tool for guiding the choice of a small number of model error parameters, covariance matching can be a useful addition to the repertory of tools available to oceanographers.
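The covariance matching idea admits a very small illustration. In the hedged sketch below, the model-error covariance P is assumed known and only the measurement-error covariance R is recovered, by matching the sample covariance of model-data residuals to its expectation H P Hᵀ + R (the matrices are toy assumptions, not from the thesis):

```python
import numpy as np

rng = np.random.default_rng(5)
H = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])   # observation operator (toy)
P_true = np.array([[0.5, 0.1], [0.1, 0.3]])          # model-error covariance
R_true = np.diag([0.2, 0.1, 0.05])                   # measurement-error covariance

# Simulated model-data residuals: projected model error plus measurement noise.
n = 200000
x_err = np.linalg.cholesky(P_true) @ rng.normal(size=(2, n))
v = np.sqrt(np.diag(R_true))[:, None] * rng.normal(size=(3, n))
resid = H @ x_err + v

# Match the sample covariance to H P H' + R and solve for R.
S = resid @ resid.T / n
R_est = S - H @ P_true @ H.T
```

In practice both covariances are parameterized and fitted jointly by least squares; this is where the underdetermination discussed above enters.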

  17. Analysis of tracer transit in rat brain after carotid artery and femoral vein administrations using linear system theory.

    PubMed

    Rudin, M; Beckmann, N; Sauter, A

    1997-01-01

    Determination of tissue perfusion rates by MRI bolus tracking methods relies on the central volume principle, which states that tissue blood flow is given by the tissue blood volume divided by the mean tracer transit time (MTT). Accurate determination of the MTT requires knowledge of the arterial input function, which in MRI experiments is usually not known, especially when using small animals. The problem of unknown arterial input can be circumvented in animal experiments by directly injecting the contrast agent into a feeding artery of the tissue of interest. In the present article the passage of magnetite nanoparticles through the rat cerebral cortex is analyzed after injection into the internal carotid artery. The results are discussed in the framework of linear system theory using a one-compartment model for brain tissue and by using the well-characterized gamma-variate function to describe the tissue concentration profile of the contrast agent. The results obtained from the intra-arterial tracer administration experiments are then compared with the commonly used intravenous injection of the contrast agent in order to estimate the contribution of the peripheral circulation to the MTT values in the latter case. The experiments were analyzed using a two-compartment model and the gamma-variate function. As an application, perfusion rates in normal and ischemic cerebral cortex of hypertensive rats were estimated in a model of focal cerebral ischemia. The results indicate that peripheral circulation has a significant influence on the MTT values and thus on the perfusion rates, which cannot be neglected.
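The gamma-variate route to the MTT can be shown in a few lines. Assuming the common form c(t) = K·t^α·exp(-t/β) (parameter values below are illustrative, not from the study), the MTT is the normalized first moment of the concentration curve, which analytically equals β(α + 1):

```python
import numpy as np

# Gamma-variate tissue concentration curve (illustrative parameters).
alpha, beta, K = 3.0, 2.0, 1.0
t, dt = np.linspace(0.0, 120.0, 120001, retstep=True)
c = K * t**alpha * np.exp(-t / beta)

# MTT as the normalized first moment of c(t).
area = np.sum(c) * dt
mtt = np.sum(t * c) * dt / area
mtt_analytic = beta * (alpha + 1.0)      # closed form for this curve

# Central volume principle: flow = tissue blood volume / MTT.
```

The numerical moment matches the closed form; in the bolus-tracking analysis the curve is first fitted to the measured concentration profile before the moment is taken.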

  18. Colombeau algebra as a mathematical tool for investigating step load and step deformation of systems of nonlinear springs and dashpots

    NASA Astrophysics Data System (ADS)

    Průša, Vít; Řehoř, Martin; Tůma, Karel

    2017-02-01

    The response of mechanical systems composed of springs and dashpots to a step input is of eminent interest in the applications. If the system is formed by linear elements, then its response is governed by a system of linear ordinary differential equations. In the linear case, the mathematical method of choice for the analysis of the response is the classical theory of distributions. However, if the system contains nonlinear elements, then the classical theory of distributions is of no use, since it is strictly limited to the linear setting. Consequently, a question arises whether it is even possible or reasonable to study the response of nonlinear systems to step inputs. The answer is positive. A mathematical theory that can handle the challenge is the so-called Colombeau algebra. Building on the abstract result by Průša and Rajagopal (Int J Non-Linear Mech 81:207-221, 2016), we show how to use the theory in the analysis of response of nonlinear spring-dashpot and spring-dashpot-mass systems.

  19. Trends in modern system theory

    NASA Technical Reports Server (NTRS)

    Athans, M.

    1976-01-01

    The topics considered are related to linear control system design, adaptive control, failure detection, control under failure, system reliability, and large-scale systems and decentralized control. It is pointed out that the design of a linear feedback control system which regulates a process about a desirable set point or steady-state condition in the presence of disturbances is a very important problem. The linearized dynamics of the process are used for design purposes. The typical linear-quadratic design involving the solution of the optimal control problem of a linear time-invariant system with respect to a quadratic performance criterion is considered along with gain reduction theorems and the multivariable phase margin theorem. The stumbling block in many adaptive design methodologies is associated with the amount of real time computation which is necessary. Attention is also given to the desperate need to develop good theories for large-scale systems, the beginning of a microprocessor revolution, the translation of the Wiener-Hopf theory into the time domain, and advances made in dynamic team theory, dynamic stochastic games, and finite memory stochastic control.

  20. Fatigue life estimation program for Part 23 airplanes, 'AFS.FOR'

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kaul, S.K.

    1993-12-31

    The purpose of this paper is to introduce to the general aviation industry a computer program which estimates the safe fatigue life of any Federal Aviation Regulation (FAR) Part 23 airplane. The algorithm uses the methodology (Miner's Linear Cumulative Damage Theory) and the various data presented in the Federal Aviation Administration (FAA) Report No. AFS-120-73-2, dated May 1973. The program is written in FORTRAN 77 and is executable on a desktop personal computer. The program prompts the user for the input data needed and provides a variety of options for its intended use. The program is envisaged to be released through issuance of a FAA report, which will contain the appropriate comments, instructions, warnings and limitations.
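Miner's linear cumulative damage rule, the methodology named above, sums the fraction of life consumed at each stress level: D = Σ nᵢ/Nᵢ, with failure predicted when D reaches 1. A minimal sketch with a hypothetical load spectrum (the cycle counts and S-N lives below are made-up illustration values, not from the FAA report):

```python
# Miner's rule: damage is the sum of applied cycles over allowable cycles.
def miner_damage(cycles, sn_life):
    return sum(n / N for n, N in zip(cycles, sn_life))

# Hypothetical spectrum: applied cycles n_i at each stress level, and the
# S-N curve life N_i at that level.
cycles  = [1.0e4, 5.0e3, 2.0e2]
sn_life = [1.0e6, 2.0e5, 1.0e4]

D = miner_damage(cycles, sn_life)        # 0.01 + 0.025 + 0.02 = 0.055
spectra_to_failure = 1.0 / D             # repeats of this spectrum until D = 1
```

A safe-life program then divides the predicted life by an appropriate scatter factor before quoting an inspection or retirement time.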

  1. Surface plasmon enhanced cell microscopy with blocked random spatial activation

    NASA Astrophysics Data System (ADS)

    Son, Taehwang; Oh, Youngjin; Lee, Wonju; Yang, Heejin; Kim, Donghyun

    2016-03-01

    We present surface plasmon enhanced fluorescence microscopy with random spatial sampling using patterned block of silver nanoislands. Rigorous coupled wave analysis was performed to confirm near-field localization on nanoislands. Random nanoislands were fabricated in silver by temperature annealing. By analyzing random near-field distribution, average size of localized fields was found to be on the order of 135 nm. Randomly localized near-fields were used to spatially sample F-actin of J774 cells (mouse macrophage cell-line). Image deconvolution algorithm based on linear imaging theory was established for stochastic estimation of fluorescent molecular distribution. The alignment between near-field distribution and raw image was performed by the patterned block. The achieved resolution is dependent upon factors including the size of localized fields and estimated to be 100-150 nm.

  2. Integrated detection, estimation, and guidance in pursuit of a maneuvering target

    NASA Astrophysics Data System (ADS)

    Dionne, Dany

    The thesis focuses on efficient solutions of non-cooperative pursuit-evasion games with imperfect information on the state of the system. This problem is important in the context of interception of future maneuverable ballistic missiles. However, the theoretical developments are expected to find application to a broad class of hybrid control and estimation problems in industry. The validity of the results is nevertheless confirmed using a benchmark problem in the area of terminal guidance. A specific interception scenario between an incoming target with no information and a single interceptor missile with noisy measurements is analyzed in the form of a linear hybrid system subject to additive abrupt changes. The general research is aimed at achieving improved homing accuracy by integrating ideas from detection theory, state estimation theory and guidance. The results achieved can be summarized as follows. (i) Two novel maneuver detectors are developed to diagnose abrupt changes in a class of hybrid systems (detection and isolation of evasive maneuvers): a new implementation of the GLR detector and the novel adaptive-H0 GLR detector. (ii) Two novel state estimators for target tracking are derived using the novel maneuver detectors. The state estimators employ a parameterized family of functions to describe possible evasive maneuvers. (iii) A novel adaptive Bayesian multiple-model predictor of the ballistic miss is developed which employs semi-Markov models and ideas from detection theory. (iv) A novel integrated estimation and guidance scheme that significantly improves the homing accuracy is also presented. The integrated scheme employs banks of estimators and guidance laws, a maneuver detector, and an on-line governor; the scheme is adaptive with respect to the uncertainty affecting the probability density function of the filtered state. (v) A novel discretization technique for the family of continuous-time, game theoretic, bang-bang guidance laws is introduced. 
The performance of the novel algorithms is assessed for the scenario of a pursuit-evasion engagement between a randomly maneuvering ballistic missile and an interceptor. Extensive Monte Carlo simulations are employed to evaluate the main statistical properties of the algorithms. (Abstract shortened by UMI.)
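The GLR detector at the heart of items (i)-(ii) can be sketched in its simplest form. Assuming a maneuver shows up as an abrupt mean jump in otherwise white Gaussian innovations (a simplification of the thesis setting; the jump size, onset and thresholds below are illustrative), the GLR statistic maximizes, over candidate onset times, the normalized squared sum of post-onset innovations:

```python
import numpy as np

# GLR statistic for a mean jump of unknown onset in white Gaussian innovations.
def glr_stat(nu, sigma2):
    n = nu.size
    tail_sums = np.cumsum(nu[::-1])[::-1]     # tail_sums[k] = sum(nu[k:])
    lengths = np.arange(n, 0, -1)             # number of terms in each tail
    return np.max(tail_sums**2 / (sigma2 * lengths))

rng = np.random.default_rng(4)
quiet = rng.normal(0.0, 1.0, 200)             # no maneuver
jump = quiet.copy()
jump[120:] += 1.5                             # maneuver-induced bias from k = 120

g_quiet = glr_stat(quiet, 1.0)
g_jump = glr_stat(jump, 1.0)
```

A detection is declared when the statistic crosses a threshold chosen for a target false-alarm rate; the maximizing onset time then serves as the maneuver-isolation estimate.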

  3. Excitation and trapping of lower hybrid waves in striations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Borisov, N.; Institute of Terrestrial Magnetism, Ionosphere and Radio Waves Propagation; Honary, F.

    2008-12-15

    The theory of lower hybrid (LH) waves trapped in striations in warm ionospheric plasma in the three-dimensional case is presented. A specific mechanism of trapping associated with the linear transformation of waves is discussed. It is shown analytically that such trapping can take place in elongated plasma depletions with frequencies below and above the lower hybrid resonance frequency of the ambient plasma. The theory is applied mainly to striations generated artificially in ionospheric modification experiments and partly to natural plasma depletions in the auroral upper ionosphere. Typical amplitudes and transverse scales of the trapped LH waves excited in ionospheric modification experiments are estimated. It is shown that such waves can possibly be detected by backscattering at oblique sounding in the very high frequency (VHF) and ultra high frequency (UHF) ranges.

  4. A statistical rain attenuation prediction model with application to the advanced communication technology satellite project. 3: A stochastic rain fade control algorithm for satellite link power via nonlinear Markov filtering theory

    NASA Technical Reports Server (NTRS)

    Manning, Robert M.

    1991-01-01

    The dynamic and composite nature of propagation impairments that are incurred on Earth-space communications links at frequencies in and above the 30/20 GHz Ka band, i.e., rain attenuation, cloud and/or clear-air scintillation, etc., combined with the need to counter such degradations after the small link margins have been exceeded, necessitates the use of dynamic statistical identification and prediction processing of the fading signal in order to optimally estimate and predict the levels of each of the deleterious attenuation components. Such requirements are being met in NASA's Advanced Communications Technology Satellite (ACTS) Project by the implementation of optimal processing schemes derived through the use of the Rain Attenuation Prediction Model and nonlinear Markov filtering theory.

  5. Nonlinear stability of solar type 3 radio bursts. 1: Theory

    NASA Technical Reports Server (NTRS)

    Smith, R. A.; Goldstein, M. L.; Papadopoulos, K.

    1978-01-01

    A theory of the excitation of solar type 3 bursts is presented. Electrons initially unstable to the linear bump-in-tail instability are shown to rapidly amplify Langmuir waves to energy densities characteristic of strong turbulence. The three-dimensional equations which describe the strong coupling (wave-wave) interactions are derived. For parameters characteristic of the interplanetary medium the equations reduce to one dimension. In this case, the oscillating two-stream instability (OTSI) is the dominant nonlinear instability, and it is stabilized through the production of nonlinear ion density fluctuations that efficiently scatter Langmuir waves out of resonance with the electron beam. An analytical model of the electron distribution function is also developed which is used to estimate the total energy losses suffered by the electron beam as it propagates from the solar corona to 1 AU and beyond.

  6. The role of modern control theory in the design of controls for aircraft turbine engines

    NASA Technical Reports Server (NTRS)

    Zeller, J.; Lehtinen, B.; Merrill, W.

    1982-01-01

    The development, applications, and current research in modern control theory (MCT) are reviewed, noting the importance for fuel-efficient operation of turbines with variable inlet guide vanes, compressor stators, and exhaust nozzle area. The evolution of multivariable propulsion control design is examined, noting a basis in a matrix formulation of the differential equations defining the process, leading to state space formulations. Reports and papers which appeared from 1970-1982 dealing with problems in MCT applications to turbine engine control design are outlined, including works on linear quadratic regulator methods; frequency domain methods; identification, estimation, and model reduction; detection, isolation, and accommodation; and state space control, adaptive control, and optimization approaches. Finally, NASA programs in frequency domain design, sensor failure detection, computer-aided control design, and plant modeling are explored.

  7. Linear ordinary differential equations with constant coefficients. Revisiting the impulsive response method using factorization

    NASA Astrophysics Data System (ADS)

    Camporesi, Roberto

    2011-06-01

    We present an approach to the impulsive response method for solving linear constant-coefficient ordinary differential equations based on the factorization of the differential operator. The approach is elementary: we assume only a basic knowledge of calculus and linear algebra. In particular, we avoid the use of distribution theory, as well as the other more advanced approaches: the Laplace transform, linear systems, the general theory of linear equations with variable coefficients, and the variation of constants method. The approach presented here can be used in a first course on differential equations for science and engineering majors.

  8. A generalized Lyapunov theory for robust root clustering of linear state space models with real parameter uncertainty

    NASA Technical Reports Server (NTRS)

    Yedavalli, R. K.

    1992-01-01

    The problem of analyzing and designing controllers for linear systems subject to real parameter uncertainty is considered. An elegant, unified theory for robust eigenvalue placement is presented for a class of D-regions defined by algebraic inequalities by extending the nominal matrix root clustering theory of Gutman and Jury (1981) to linear uncertain time systems. The author presents explicit conditions for matrix root clustering for different D-regions and establishes the relationship between the eigenvalue migration range and the parameter range. The bounds are all obtained by one-shot computation in the matrix domain and do not need any frequency sweeping or parameter gridding. The method uses the generalized Lyapunov theory for getting the bounds.

  9. Gravitons as Embroidery on the Weave

    NASA Astrophysics Data System (ADS)

    Iwasaki, Junichi; Rovelli, Carlo

    We investigate the physical interpretation of the loop states that appear in the loop representation of quantum gravity. By utilizing the “weave” state, which has been recently introduced as a quantum description of the microstructure of flat space, we analyze the relation between loop states and graviton states. This relation determines a linear map M from the state-space of the nonperturbative theory (loop space) into the state-space of the linearized theory (Fock space). We present an explicit form of this map, and a preliminary investigation of its properties. The existence of such a map indicates that the full nonperturbative quantum theory includes a sector that describes the same physics as (the low energy regimes of) the linearized theory, namely gravitons on flat space.

  10. Weighted linear least squares estimation of diffusion MRI parameters: strengths, limitations, and pitfalls.

    PubMed

    Veraart, Jelle; Sijbers, Jan; Sunaert, Stefan; Leemans, Alexander; Jeurissen, Ben

    2013-11-01

    Linear least squares estimators are widely used in diffusion MRI for the estimation of diffusion parameters. Although adding proper weights is necessary to increase the precision of these linear estimators, there is no consensus on how to practically define them. In this study, the impact of the commonly used weighting strategies on the accuracy and precision of linear diffusion parameter estimators is evaluated and compared with the nonlinear least squares estimation approach. Simulation and real data experiments were done to study the performance of the weighted linear least squares estimators with weights defined by (a) the squares of the respective noisy diffusion-weighted signals; and (b) the squares of the predicted signals, which are reconstructed from a previous estimate of the diffusion model parameters. The negative effect of weighting strategy (a) on the accuracy of the estimator was surprisingly high. Multi-step weighting strategies yield better performance and, in some cases, even outperformed the nonlinear least squares estimator. If proper weighting strategies are applied, the weighted linear least squares approach shows high performance characteristics in terms of accuracy/precision and may even be preferred over nonlinear estimation methods. Copyright © 2013 Elsevier Inc. All rights reserved.
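    The two weighting strategies the abstract compares can be sketched on a deliberately simplified mono-exponential diffusion model (all parameter values here are hypothetical toy choices, not the paper's multi-parameter diffusion models): after log-linearization, weights are taken either from the squared noisy signals or, in a multi-step scheme, from the squared predicted signals.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy mono-exponential diffusion model: S(b) = S0 * exp(-b * D)
S0_true, D_true = 1000.0, 1.5e-3            # a.u., mm^2/s (hypothetical)
b = np.linspace(0, 2000, 30)                # b-values, s/mm^2
noisy = S0_true * np.exp(-b * D_true) + rng.normal(0, 5.0, b.size)

# Log-linearization: ln S = ln S0 - b * D, so the design matrix is [1, -b]
X = np.column_stack([np.ones_like(b), -b])
y = np.log(noisy)

def wlls(w):
    # Weighted linear least squares: solve (X^T W X) beta = X^T W y
    W = np.diag(w)
    return np.linalg.solve(X.T @ W @ X, X.T @ W @ y)

# Strategy (a): weights from the squared *noisy* signals
beta_a = wlls(noisy ** 2)

# Strategy (b): multi-step, weights from the squared *predicted* signals,
# reconstructed from the previous parameter estimate
beta_b = beta_a
for _ in range(3):
    beta_b = wlls(np.exp(X @ beta_b) ** 2)

print("D estimate, strategy (a):", beta_a[1])
print("D estimate, strategy (b):", beta_b[1])
```

    The iteration in strategy (b) decouples the weights from the noise realization, which is the mechanism behind the multi-step estimators favored in the study.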

  11. Variational Theory of Motion of Curved, Twisted and Extensible Elastic Rods

    DTIC Science & Technology

    1993-01-18

    nonlinear theory such as questions of existence of solutions and global behavior have been carried out by Antman (1976). His basic work entitled "The... REFERENCES: Antman, S.S., "Ordinary Differential Equations of Non-Linear Elasticity I: Foundations of the Theories of Non-Linearly Elastic Rods and Shells," A.R.M.A. 61 (1976), 307-351. Antman, S.S., "The Theory of Rods," Handbuch der Physik, Vol. VIa/2, Springer-Verlag, Berlin

  12. Highly Accurate Quartic Force Fields, Vibrational Frequencies, and Spectroscopic Constants for Cyclic and Linear C3H3(+)

    NASA Technical Reports Server (NTRS)

    Huang, Xinchuan; Taylor, Peter R.; Lee, Timothy J.

    2011-01-01

    High levels of theory have been used to compute quartic force fields (QFFs) for the cyclic and linear forms of the C3H3(+) molecular cation, referred to as c-C3H3(+) and l-C3H3(+). Specifically, the singles and doubles coupled-cluster method that includes a perturbational estimate of connected triple excitations, CCSD(T), has been used in conjunction with extrapolation to the one-particle basis set limit, and corrections for scalar relativity and core correlation have been included. The QFFs have been used to compute highly accurate fundamental vibrational frequencies and other spectroscopic constants using both vibrational second-order perturbation theory and variational methods to solve the nuclear Schroedinger equation. Agreement between our best computed fundamental vibrational frequencies and recent infrared photodissociation experiments is reasonable for most bands, but there are a few exceptions. Possible sources for the discrepancies are discussed. We determine the energy difference between the cyclic and linear forms of C3H3(+), obtaining 27.9 kcal/mol at 0 K, which should be the most reliable value available. It is expected that the fundamental vibrational frequencies and spectroscopic constants presented here for c-C3H3(+) and l-C3H3(+) are the most reliable available for the free gas-phase species, and it is hoped that these will be useful in the assignment of future high-resolution laboratory experiments or astronomical observations.

  13. Structural and vibrational characteristics of a non-linear optical material 3-(4-nitrophenyl)-1-(pyridine-3-yl) prop-2-en-1-one probed by quantum chemical computation and spectroscopic techniques

    NASA Astrophysics Data System (ADS)

    Kumar, Ram; Karthick, T.; Tandon, Poonam; Agarwal, Parag; Menezes, Anthoni Praveen; Jayarama, A.

    2018-07-01

    Chalcone and its derivatives are well known for their high non-linear optical response and charge transfer characteristics. The effectiveness of charge transfer via the ethylenic group and the increase in NLO response of chalcone upon substitution are of great interest. The present study focuses on the structural, charge transfer and non-linear optical properties of a new chalcone derivative, "3-(4-nitrophenyl)-1-(pyridine-3-yl) prop-2-en-1-one" (hereafter abbreviated as 4 NP3AP). To accomplish this task, we have combined experimental FT-IR, FT-Raman and UV-vis spectroscopic studies with quantum chemical calculations. The frequency assignments of peaks in the IR and Raman spectra have been made on the basis of the potential energy distribution, and the results were compared with earlier reports on similar molecules. To obtain the electronic transition details of 4 NP3AP, the UV-vis spectrum has been simulated in both the gaseous and solvent phases using time-dependent density functional theory (TD-DFT). The HOMO-LUMO energy gap, the most important factor to consider in studying the charge transfer properties of the molecule, has been calculated. The electron density surface map corresponding to the net electrostatic point charges has been generated to obtain the electrophilic and nucleophilic sites. The charge transfer originating from the occupied (donor) and unoccupied (acceptor) molecular orbitals has been analyzed with the help of natural bond orbital theory. Moreover, the estimation of the second hyperpolarizability confirms the non-linear optical behavior of the molecule.

  14. A non-axisymmetric linearized supersonic wave drag analysis: Mathematical theory

    NASA Technical Reports Server (NTRS)

    Barnhart, Paul J.

    1996-01-01

    A Mathematical theory is developed to perform the calculations necessary to determine the wave drag for slender bodies of non-circular cross section. The derivations presented in this report are based on extensions to supersonic linearized small perturbation theory. A numerical scheme is presented utilizing Fourier decomposition to compute the pressure coefficient on and about a slender body of arbitrary cross section.

  15. On the Validity of the Streaming Model for the Redshift-Space Correlation Function in the Linear Regime

    NASA Astrophysics Data System (ADS)

    Fisher, Karl B.

    1995-08-01

    The relation between the galaxy correlation functions in real-space and redshift-space is derived in the linear regime by an appropriate averaging of the joint probability distribution of density and velocity. The derivation recovers the familiar linear theory result on large scales but has the advantage of clearly revealing the dependence of the redshift distortions on the underlying peculiar velocity field; streaming motions give rise to distortions of O(Ω^0.6/b) while variations in the anisotropic velocity dispersion yield terms of order O(Ω^1.2/b^2). This probabilistic derivation of the redshift-space correlation function is similar in spirit to the derivation of the commonly used "streaming" model, in which the distortions are given by a convolution of the real-space correlation function with a velocity distribution function. The streaming model is often used to model the redshift-space correlation function on small, highly nonlinear, scales. There have been claims in the literature, however, that the streaming model is not valid in the linear regime. Our analysis confirms this claim, but we show that the streaming model can be made consistent with linear theory provided that the model for the streaming has the functional form predicted by linear theory and that the velocity distribution is chosen to be a Gaussian with the correct linear theory dispersion.
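    The streaming-model convolution discussed above can be illustrated with a deliberately simplified one-dimensional sketch (the Gaussian stand-in for the correlation function and the dispersion value are hypothetical, not the paper's):

```python
import numpy as np

# Hypothetical real-space correlation function on a 1D line-of-sight grid
r = np.linspace(-100.0, 100.0, 2001)        # separation, h^-1 Mpc
dr = r[1] - r[0]
xi_real = np.exp(-0.5 * (r / 8.0) ** 2)     # stand-in for xi(r)

# Gaussian pairwise velocity distribution, expressed in displacement units
sigma = 4.0                                  # line-of-sight dispersion
v = np.arange(-5 * sigma, 5 * sigma + dr, dr)
f_v = np.exp(-0.5 * (v / sigma) ** 2)
f_v /= f_v.sum() * dr                        # normalize to unit integral

# Streaming model: xi_s(s) = integral of xi_real(s - v) f(v) dv
xi_s = np.convolve(xi_real, f_v, mode="same") * dr

print(xi_real.sum() * dr, xi_s.sum() * dr)   # integral is conserved
print(xi_real.max(), xi_s.max())             # peak is broadened and lowered
```

    The convolution redistributes correlation along the line of sight without changing its integral, which is the qualitative content of the redshift-space distortion in this model.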

  16. Index of NACA Technical Publications, 1915 - 1949

    DTIC Science & Technology

    1950-03-31

    Swanson, Robert S. and Gillis, Clarence L.: Wind-Tunnel Calibration and Correction... in Linearized Supersonic Wing Theory. TN 1767, April 1949. Heaslet, Max A.; Lomax, Harvard and Spreiter, John R.: Linearized Compressible-Flow Theory for Sonic Flight... Symmetrical Joukowski Profiles. Rept. 621, 1938. Stack, John; Lindsey, W. F. and Littell...: The Application of Green's Theorem to the Solution of Boundary-Value Problems in Linearized... Rept. 624, 1938.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Durrer, Ruth; Tansella, Vittorio, E-mail: ruth.durrer@unige.ch, E-mail: vittorio.tansella@unige.ch

    We derive the contribution to relativistic galaxy number count fluctuations from vector and tensor perturbations within linear perturbation theory. Our result is consistent with the relativistic corrections to number counts due to scalar perturbations, where the Bardeen potentials are replaced with line-of-sight projections of vector and tensor quantities. Since vector and tensor perturbations do not lead to density fluctuations, the standard density term in the number counts is absent. We apply our results to vector perturbations which are induced from scalar perturbations at second order and give numerical estimates of their contributions to the power spectrum of relativistic galaxy number counts.

  18. Boltzmann equation and hydrodynamics beyond Navier-Stokes.

    PubMed

    Bobylev, A V

    2018-04-28

    We consider in this paper the problem of derivation and regularization of higher (in Knudsen number) equations of hydrodynamics. The author's approach based on successive changes of hydrodynamic variables is presented in more detail for the Burnett level. The complete theory is briefly discussed for the linearized Boltzmann equation. It is shown that the best results in this case can be obtained by using the 'diagonal' equations of hydrodynamics. Rigorous estimates of accuracy of the Navier-Stokes and Burnett approximations are also presented. This article is part of the theme issue 'Hilbert's sixth problem'. © 2018 The Author(s).

  19. Recent developments in learning control and system identification for robots and structures

    NASA Technical Reports Server (NTRS)

    Phan, M.; Juang, J.-N.; Longman, R. W.

    1990-01-01

    This paper reviews recent results in learning control and learning system identification, with particular emphasis on discrete-time formulations, and their relation to adaptive theory. Related continuous-time results are also discussed. Among the topics presented are proportional, derivative, and integral learning controllers and the time-domain formulation of discrete learning algorithms. Newly developed techniques are described, including the concept of the repetition domain; the repetition-domain formulation of learning control by linear feedback; model reference learning control; indirect learning control with parameter estimation; and related basic concepts, together with recursive and non-recursive methods for learning identification.

  20. The application of signal detection theory to optics

    NASA Technical Reports Server (NTRS)

    Helstrom, C. W.

    1971-01-01

    The restoration of images focused on a photosensitive surface is treated from the standpoint of maximum likelihood estimation, taking into account the Poisson distributions of the observed data, which are the numbers of photoelectrons from various elements of the surface. A detector of an image focused on such a surface utilizes a certain linear combination of those numbers as the optimum detection statistic. Methods for calculating the false alarm and detection probabilities are proposed. It is shown that measuring noncommuting observables in an ideal quantum receiver cannot yield a lower Bayes cost than that attainable by a system measuring only commuting observables.
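    The optimum detection statistic described above, a linear combination of the photoelectron counts, follows directly from the Poisson likelihood ratio. A Monte Carlo sketch under assumed toy intensities (the footprint and rates below are hypothetical, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical photoelectron rates per surface element: background b_i,
# plus signal s_i when the image is present
b_rates = np.full(64, 5.0)
s_rates = np.zeros(64)
s_rates[20:30] = 4.0                        # assumed image footprint

# For Poisson counts the log-likelihood ratio is LINEAR in the counts n_i:
#   log LR = sum_i n_i * ln(1 + s_i / b_i) - sum_i s_i
weights = np.log1p(s_rates / b_rates)

trials = 20000
noise_only = rng.poisson(b_rates, (trials, 64))
signal_plus = rng.poisson(b_rates + s_rates, (trials, 64))
t0 = noise_only @ weights                   # statistic under "no image"
t1 = signal_plus @ weights                  # statistic under "image present"

threshold = np.quantile(t0, 0.99)           # fix false-alarm probability at 1%
p_detect = (t1 > threshold).mean()
print("detection probability at 1% false-alarm rate:", p_detect)
```

    Setting the threshold from the empirical noise-only distribution mirrors the false alarm / detection probability calculation the abstract refers to.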

  1. Theory-Based Parameterization of Semiotics for Measuring Pre-literacy Development

    NASA Astrophysics Data System (ADS)

    Bezruczko, N.

    2013-09-01

    A probabilistic model was applied to the problem of measuring pre-literacy in young children. First, semiotic philosophy and contemporary cognition research were conceptually integrated to establish theoretical foundations for rating 14 characteristics of children's drawings and narratives (N = 120). Then ratings were transformed with a Rasch model, which estimated linear item parameter values that accounted for 79 percent of rater variance. Principal Components Analysis of the item residual matrix confirmed that the variance remaining after item calibration was largely unsystematic. Validation analyses found positive correlations between semiotic measures and preschool literacy outcomes. Practical implications of a semiotics dimension for preschool practice were discussed.

  2. Chandrasekhar-type algorithms for fast recursive estimation in linear systems with constant parameters

    NASA Technical Reports Server (NTRS)

    Choudhury, A. K.; Djalali, M.

    1975-01-01

    In the proposed recursive method, the gain matrix for the Kalman filter and the covariance of the state vector are computed not via the Riccati equation but from certain other differential equations, which are of Chandrasekhar type. The 'invariant imbedding' idea resulted in the reduction of the basic boundary value problem of transport theory to an equivalent initial value system, a significant computational advance. Initial experience showed that the method offers some computational savings and is less vulnerable to loss of positive definiteness of the covariance matrix.
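    For reference, the Chandrasekhar-type equations propagate low-rank increments of the covariance rather than the covariance itself. A minimal sketch of the conventional discrete-time Riccati iteration for the Kalman gain, i.e. the computation those recursions are designed to avoid (the system matrices below are arbitrary toy values, not from the thesis):

```python
import numpy as np

# Time-invariant system x_{k+1} = A x_k + w_k,  y_k = C x_k + v_k
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
C = np.array([[1.0, 0.0]])
Q = 0.01 * np.eye(2)                      # process noise covariance
R = np.array([[0.1]])                     # measurement noise covariance

# Conventional route: iterate the Riccati equation for the predicted
# covariance P, then read off the (predictor-form) Kalman gain K.
P = np.zeros((2, 2))
for _ in range(500):
    S = C @ P @ C.T + R                   # innovation covariance
    K = A @ P @ C.T @ np.linalg.inv(S)    # Kalman gain
    P = A @ P @ A.T + Q - K @ S @ K.T     # Riccati update

print("steady-state gain:\n", K)
```

    Because the system is time-invariant, the iteration converges to the steady-state gain; the Chandrasekhar approach reaches the same gain while updating only factors of the covariance increments.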

  3. Voluntary EMG-to-force estimation with a multi-scale physiological muscle model

    PubMed Central

    2013-01-01

    Background EMG-to-force estimation based on muscle models for voluntary contraction has many applications in human motion analysis. The so-called Hill model is recognized as a standard model for this practical use. However, it is a phenomenological model whereby muscle activation, force-length and force-velocity properties are considered independently. Perreault reported that Hill modeling errors were large for different firing frequencies, levels of activation and speeds of contraction. This may be due to the lack of coupling between activation and force-velocity properties. In this paper, we discuss EMG-force estimation with a multi-scale physiology-based model, which has a link to underlying crossbridge dynamics. Unlike the Hill model, the proposed method provides dual dynamics of recruitment and calcium activation. Methods The ankle torque was measured for the plantar flexion along with EMG measurements of the medial gastrocnemius (GAS) and soleus (SOL). In addition to the Hill representation of the passive elements, three models of the contractile parts have been compared. Using common EMG signals during isometric contraction in four able-bodied subjects, torque was estimated by the linear Hill model, the nonlinear Hill model and the multi-scale physiological model that refers to Huxley theory. The comparison was made on a normalized scale relative to maximum voluntary contraction. Results The estimation results obtained with the multi-scale model showed the best performance in both fast-short and slow-long term contractions in randomized tests for all four subjects. The RMS errors were improved with the nonlinear Hill model compared to the linear Hill model; however, it showed limitations in accounting for different speeds of contraction. Average error was 16.9% with the linear Hill model and 9.3% with the modified Hill model.
In contrast, the error with the multi-scale model was 6.1%, while maintaining uniform estimation performance in both fast and slow contraction schemes. Conclusions We introduced a novel approach that allows EMG-force estimation based on a multi-scale physiological model integrating the Hill approach for the passive elements and microscopic cross-bridge representations for the contractile element. The experimental evaluation highlights estimation improvements, especially over a larger range of contraction conditions, through integration of the neural activation frequency property and the force-velocity relationship via cross-bridge dynamics. PMID:24007560
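    The phenomenological Hill model contrasted above centers on the classic force-velocity hyperbola; a minimal sketch with hypothetical normalized constants (a/F0 = 0.25 is a typical textbook value, not taken from this study):

```python
# Classic Hill force-velocity hyperbola: (F + a) * (v + b) = (F0 + a) * b
F0 = 1.0                  # maximum isometric force (normalized)
a, b = 0.25 * F0, 0.25    # hypothetical Hill constants

def hill_force(v):
    """Concentric force at shortening velocity v >= 0."""
    return (F0 + a) * b / (v + b) - a

v_max = b * F0 / a        # shortening velocity at which force falls to zero
print(hill_force(0.0))    # isometric limit equals F0
print(hill_force(v_max))  # force vanishes at v_max
```

    In the multi-scale model this static relationship is replaced by cross-bridge dynamics, which is what couples activation to the force-velocity behavior.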

  4. Estimating linear effects in ANOVA designs: the easy way.

    PubMed

    Pinhas, Michal; Tzelgov, Joseph; Ganor-Stern, Dana

    2012-09-01

    Research in cognitive science has documented numerous phenomena that are approximated by linear relationships. In the domain of numerical cognition, the use of linear regression for estimating linear effects (e.g., distance and SNARC effects) became common following Fias, Brysbaert, Geypens, and d'Ydewalle's (1996) study on the SNARC effect. While their work has become the model for analyzing linear effects in the field, it requires statistical analysis of individual participants and does not provide measures of the proportions of variability accounted for (cf. Lorch & Myers, 1990). In the present methodological note, using both the distance and SNARC effects as examples, we demonstrate how linear effects can be estimated in a simple way within the framework of repeated measures analysis of variance. This method allows for estimating effect sizes in terms of both slope and proportions of variability accounted for. Finally, we show that our method can easily be extended to estimate linear interaction effects, not just linear effects calculated as main effects.
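    The contrast-weight idea behind the method can be sketched numerically on simulated repeated measures data (the design, slope, and noise values below are hypothetical): applying centered level values as contrast weights yields a per-subject slope, which is then tested across subjects.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy within-subject design: responses at 4 equally spaced levels
levels = np.array([1.0, 2.0, 3.0, 4.0])
n_subj, true_slope = 24, 10.0
data = (300 + true_slope * levels               # linear effect
        + rng.normal(0, 15, (n_subj, 4)))       # subject noise

# Linear contrast weights: centered level values
w = levels - levels.mean()                      # [-1.5, -0.5, 0.5, 1.5]
scores = data @ w / (w @ w)                     # per-subject slope estimates

slope = scores.mean()
t = slope / (scores.std(ddof=1) / np.sqrt(n_subj))
print("mean slope:", slope, " t:", t)
```

    The per-subject scores are ordinary least squares slopes (for equally spaced levels), so the group test on them is the linear-trend contrast of the repeated measures ANOVA.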

  5. The spectral applications of Beer-Lambert law for some biological and dosimetric materials

    NASA Astrophysics Data System (ADS)

    Içelli, Orhan; Yalçin, Zeynel; Karakaya, Vatan; Ilgaz, Işıl P.

    2014-08-01

    The aim of this study is to conduct quantitative and qualitative analysis of biological and dosimetric materials, which contain organic and inorganic components, and to carry out this determination by treating the Beer-Lambert law within the framework of the spectral theorem. From the viewpoint of spectral theory, the Beer-Lambert law defines a system of linear equations. Such a system can be solved when the determinant of its coefficient matrix is non-zero. In spectral theory, the characteristic matrix of a linear system with zero determinant corresponds to the point spectrum.
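    The linear system the abstract refers to can be sketched for a two-component mixture measured at two wavelengths (all absorptivities, path length and concentrations below are hypothetical): absorbances are linear in the concentrations, and the system is solvable exactly when the coefficient determinant is non-zero.

```python
import numpy as np

# Beer-Lambert for a mixture: A(lambda) = sum_j eps_j(lambda) * l * c_j
# With as many wavelengths as components this is a square linear system.
eps = np.array([[4500.0,  300.0],     # molar absorptivities (hypothetical);
                [ 800.0, 5200.0]])    # rows = wavelengths, cols = species
l = 1.0                               # path length, cm
c_true = np.array([2.0e-4, 1.0e-4])   # concentrations, mol/L

absorbance = eps @ c_true * l         # simulated measurements

# Solvable only when det(eps * l) != 0, as the abstract notes
assert np.linalg.det(eps * l) != 0
c = np.linalg.solve(eps * l, absorbance)
print(c)
```

    With more wavelengths than components, the same model is solved in the least squares sense instead of exactly.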

  6. Parasitic chytrids sustain zooplankton growth during inedible algal bloom

    PubMed Central

    Rasconi, Serena; Grami, Boutheina; Niquil, Nathalie; Jobard, Marlène; Sime-Ngando, Télesphore

    2014-01-01

    This study assesses the quantitative impact of parasitic chytrids on the planktonic food web of two contrasting freshwater lakes during different algal bloom situations. Carbon-based food web models were used to investigate the effects of chytrids during the spring diatom bloom in Lake Pavin (oligo-mesotrophic) and the autumn cyanobacteria bloom in Lake Aydat (eutrophic). Linear inverse modeling was employed to estimate undetermined flows in both lakes. The Monte Carlo Markov chain linear inverse modeling procedure provided estimates of the ranges of model-derived fluxes. Model results confirm recent theories on the impact of parasites on food web function through grazers and recyclers. During blooms of "inedible" algae (unexploited by planktonic herbivores), the epidemic growth of chytrids channeled 19–20% of the primary production in both lakes through the production of grazer-exploitable zoospores. The parasitic throughput represented 50% and 57% of the zooplankton diet in the oligo-mesotrophic and the eutrophic lake, respectively. Parasites also affected ecological network properties, such as longer carbon path lengths and loop strength, and contributed to increasing the stability of the aquatic food web, notably in the oligo-mesotrophic Lake Pavin. PMID:24904543

  7. Sliding mode based trajectory linearization control for hypersonic reentry vehicle via extended disturbance observer.

    PubMed

    Xingling, Shao; Honglun, Wang

    2014-11-01

    This paper proposes a novel hybrid control framework combining observer-based sliding mode control (SMC) with trajectory linearization control (TLC) for the hypersonic reentry vehicle (HRV) attitude tracking problem. First, lower control consumption is achieved using a nonlinear tracking differentiator (TD) in the attitude loop. Second, a novel SMC that employs an extended disturbance observer (EDO) to counteract the effect of uncertainties, using a new sliding surface which includes the estimation error, is integrated to address the tracking-error stabilization issues in the attitude and angular rate loops, respectively. In addition, new results associated with the EDO are examined in terms of dynamic response and noise-tolerant performance, as well as estimation accuracy. The key feature of the proposed compound control approach is that chattering-free tracking performance with high accuracy can be ensured for the HRV in the presence of multiple uncertainties under control constraints. Based on finite-time convergence stability theory, the stability of the resulting closed-loop system is well established. Also, comparisons and extensive simulation results are presented to demonstrate the effectiveness of the control strategy. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.

  8. Waves and instabilities in plasmas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, L.

    1987-01-01

    The contents of this book are: Plasma as a Dielectric Medium; Nyquist Technique; Absolute and Convective Instabilities; Landau Damping and Phase Mixing; Particle Trapping and Breakdown of Linear Theory; Solution of the Vlasov Equation via Guiding-Center Transformation; Kinetic Theory of Magnetohydrodynamic Waves; Geometric Optics; Wave-Kinetic Equation; Cutoff and Resonance; Resonant Absorption; Mode Conversion; Gyrokinetic Equation; Drift Waves; Quasi-Linear Theory; Ponderomotive Force; Parametric Instabilities; Problem Sets for Homework, Midterm and Final Examinations.

  9. The Importance of Why: An Intelligence Approach for a Multi-Polar World

    DTIC Science & Technology

    2016-04-04

    (accessed December 27, 2015). Jupiter Scientific, "Definitions of Important Terms in Chaos Theory," Jupiter Scientific website, http... Linearizing a system is approximating a nonlinear system through the application of a linear system model. ..."Complexity Theory to Anticipate Strategic Surprise," 24. M. Mitchell Waldrop, Complexity: The Emerging Science at the Edge of Order and Chaos (New

  10. Robust global identifiability theory using potentials--Application to compartmental models.

    PubMed

    Wongvanich, N; Hann, C E; Sirisena, H R

    2015-04-01

    This paper presents a global practical identifiability theory for analyzing and identifying linear and nonlinear compartmental models. The compartmental system is prolonged onto the potential jet space to formulate a set of input-output equations that are integrals in terms of the measured data, which allows for robust identification of parameters without requiring any simulation of the model differential equations. Two classes of linear and non-linear compartmental models are considered. The theory is first applied to analyze the linear nitrous oxide (N2O) uptake model. The fitting accuracy of the identified models from the differential jet space and potential jet space identifiability theories is compared with a realistic noise level of 3%, which is derived from sensor noise data in the literature. The potential jet space approach gave a match that was well within the coefficient of variation. The differential jet space formulation was unstable and not suitable for parameter identification. The proposed theory is then applied to a nonlinear immunological model for mastitis in cows. In addition, the model formulation is extended to include an iterative method which allows initial conditions to be accurately identified. With up to 10% noise, the potential jet space theory predicts the normalized population concentration infected with pathogens to within 9% of the true curve. Copyright © 2015 Elsevier Inc. All rights reserved.
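    The integral reformulation that avoids simulating the model ODEs can be sketched for the simplest one-compartment case (a toy model with hypothetical parameters, not the paper's N2O or mastitis models): integrating the state equation once yields a relation that is linear in the unknowns and involves only integrals of the measured data.

```python
import numpy as np

rng = np.random.default_rng(3)

# One-compartment model dx/dt = -k*x.  Integrating once gives
#   x(t) = x(0) - k * integral_0^t x(s) ds,
# which is linear in (k, x(0)) and uses only integrals of measured data,
# so no simulation of the ODE is required to identify k.
k_true, x0 = 0.8, 10.0
t = np.linspace(0.0, 5.0, 200)
x_noisy = x0 * np.exp(-k_true * t) + rng.normal(0, 0.03 * x0, t.size)  # ~3% noise

# Cumulative trapezoidal integral of the measured signal
dt = t[1] - t[0]
cum = np.concatenate([[0.0],
                      np.cumsum((x_noisy[1:] + x_noisy[:-1]) / 2.0 * dt)])

# Ordinary least squares for the unknowns [-k, x(0)]
design = np.column_stack([cum, np.ones_like(cum)])
coef, *_ = np.linalg.lstsq(design, x_noisy, rcond=None)
k_hat = -coef[0]
print("identified k:", k_hat)
```

    Because the integral smooths the measurement noise, the regression stays well conditioned at realistic noise levels, which is the robustness property the abstract emphasizes.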

  11. Numerical Test of Analytical Theories for Perpendicular Diffusion in Small Kubo Number Turbulence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heusen, M.; Shalchi, A., E-mail: husseinm@myumanitoba.ca, E-mail: andreasm4@yahoo.com

    In the literature, one can find various analytical theories for perpendicular diffusion of energetic particles interacting with magnetic turbulence. Besides quasi-linear theory, there are different versions of the nonlinear guiding center (NLGC) theory and the unified nonlinear transport (UNLT) theory. For turbulence with high Kubo numbers, such as two-dimensional turbulence or noisy reduced magnetohydrodynamic turbulence, the aforementioned nonlinear theories provide similar results. For slab and small Kubo number turbulence, however, this is not the case. In the current paper, we compare different linear and nonlinear theories with each other and with test-particle simulations for a noisy slab model corresponding to small Kubo number turbulence. We show that UNLT theory agrees very well with all performed test-particle simulations. In the limit of long parallel mean free paths, the perpendicular mean free path approaches asymptotically the quasi-linear limit, as predicted by the UNLT theory. For short parallel mean free paths we find a Rechester and Rosenbluth type of scaling, as predicted by UNLT theory as well. The original NLGC theory disagrees with all performed simulations regardless of the parallel mean free path. The random ballistic interpretation of the NLGC theory agrees much better with the simulations, but compared to UNLT theory the agreement is inferior. We conclude that for this type of small Kubo number turbulence, only the latter theory allows for an accurate description of perpendicular diffusion.

  12. Is the time-dependent behaviour of the aortic valve intrinsically quasi-linear?

    NASA Astrophysics Data System (ADS)

    Anssari-Benam, Afshin

    2014-05-01

    The widely popular quasi-linear viscoelasticity (QLV) theory has been employed extensively in the literature for characterising the time-dependent behaviour of many biological tissues, including the aortic valve (AV). However, in contrast to other tissues, the application of QLV to AV data has been met with varying success, with studies reporting discrepancies in the values of the associated quantified parameters for data collected over different timescales in experiments. Furthermore, some studies investigating the stress-relaxation phenomenon in valvular tissues have suggested discrete relaxation spectra, as an alternative to the continuous spectrum proposed by the QLV theory. These indications put forward a more fundamental question: Is the time-dependent behaviour of the aortic valve intrinsically quasi-linear? In other words, can the inherent characteristics of the tissue that govern its biomechanical behaviour facilitate a quasi-linear time-dependent behaviour? This paper attempts to address these questions by presenting a mathematical analysis to derive the expressions for the stress-relaxation G(t) and creep J(t) functions for the AV tissue within the QLV theory. The principal inherent characteristic of the tissue is incorporated into the QLV formulation in the form of the well-established gradual fibre recruitment model, and the corresponding expressions for G(t) and J(t) are derived. The outcomes indicate that the resulting stress-relaxation and creep functions do not naturally follow the experimental trends reported in previous studies. These results highlight that the time-dependent behaviour of the AV may not be quasi-linear, and more suitable theoretical criteria and models may be required to explain the phenomenon based on the tissue's microstructure, and for more accurate estimation of the associated material parameters. In general, these results may further be applicable to other planar soft tissues of the same class, i.e. those with the same representation of the fibre recruitment mechanism and discrete time-dependent spectra.
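    For reference, the QLV formulation that the analysis above builds on is Fung's convolution form (this is the standard textbook statement with assumed notation, not the paper's derived expressions):

    ```latex
    % Quasi-linear viscoelasticity (Fung): stress is a convolution of a
    % reduced relaxation function G(t), with G(0) = 1, and the rate of the
    % instantaneous elastic response sigma^e(lambda):
    \sigma(t) \;=\; \int_{-\infty}^{t} G(t-\tau)\,
      \frac{\partial \sigma^{e}\!\left[\lambda(\tau)\right]}{\partial \lambda}\,
      \frac{\mathrm{d}\lambda}{\mathrm{d}\tau}\,\mathrm{d}\tau ,
    \qquad G(0) = 1 .
    ```

    The paper's contribution is to derive G(t) and the creep function J(t) once the gradual fibre recruitment model is built into the elastic response.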

  13. A fresh look at linear ordinary differential equations with constant coefficients. Revisiting the impulsive response method using factorization

    NASA Astrophysics Data System (ADS)

    Camporesi, Roberto

    2016-01-01

    We present an approach to the impulsive response method for solving linear constant-coefficient ordinary differential equations of any order, based on the factorization of the differential operator. The approach is elementary: we assume only a basic knowledge of calculus and linear algebra. In particular, we avoid the use of distribution theory, as well as the other more advanced approaches: the Laplace transform, linear systems, the general theory of linear equations with variable coefficients, and variation of parameters. The approach presented here can be used in a first course on differential equations for science and engineering majors.
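    The factorization idea can be sketched numerically: writing p(D) = (D - r_1)...(D - r_n) reduces the problem to a cascade of first-order equations, each solved with an integrating factor. A minimal Python illustration, assuming zero initial conditions (the function names and the test equation are ours, not the paper's):

    ```python
    import numpy as np

    def cumtrapz(y, t):
        # Cumulative trapezoid-rule integral of samples y over grid t.
        seg = 0.5 * (y[1:] + y[:-1]) * np.diff(t)
        return np.concatenate([[0.0], np.cumsum(seg)])

    def solve_factored(f, roots, t):
        # (D - r) y = g  =>  y(t) = exp(r t) * int_0^t exp(-r s) g(s) ds.
        # Apply one first-order solve per factor of the operator.
        g = f(t)
        for r in roots:
            g = np.exp(r * t) * cumtrapz(np.exp(-r * t) * g, t)
        return g

    # Example: y'' - 3y' + 2y = e^{3t}, y(0) = y'(0) = 0,
    # factored as (D - 1)(D - 2) y = e^{3t}.
    t = np.linspace(0.0, 1.0, 4001)
    y = solve_factored(lambda s: np.exp(3.0 * s), (1.0, 2.0), t)
    exact = 0.5 * np.exp(3.0 * t) - np.exp(2.0 * t) + 0.5 * np.exp(t)
    print(np.max(np.abs(y - exact)))  # small discretization error
    ```

    Because the factors commute for constant coefficients, the order in which the roots are processed does not matter.
    
    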

  14. A Refined Zigzag Beam Theory for Composite and Sandwich Beams

    NASA Technical Reports Server (NTRS)

    Tessler, Alexander; Sciuva, Marco Di; Gherlone, Marco

    2009-01-01

    A new refined theory for laminated composite and sandwich beams that contains the kinematics of the Timoshenko Beam Theory as a proper baseline subset is presented. This variationally consistent theory is derived from the virtual work principle and employs a novel piecewise linear zigzag function that provides a more realistic representation of the deformation states of transverse-shear flexible beams than other similar theories. This new zigzag function is unique in that it vanishes at the top and bottom bounding surfaces of a beam. The formulation does not enforce continuity of the transverse shear stress across the beam's cross-section, yet is robust. Two major shortcomings that are inherent in the previous zigzag theories, shear-force inconsistency and difficulties in simulating clamped boundary conditions, and that have greatly limited the utility of these previous theories are discussed in detail. An approach that has successfully resolved these shortcomings is presented herein. Exact solutions for simply supported and cantilevered beams subjected to static loads are derived and the improved modelling capability of the new zigzag beam theory is demonstrated. In particular, extensive results for thick beams with highly heterogeneous material lay-ups are discussed and compared with corresponding results obtained from elasticity solutions, two other zigzag theories, and high-fidelity finite element analyses. Comparisons with the baseline Timoshenko Beam Theory are also presented. The comparisons clearly show the improved accuracy of the new, refined zigzag theory presented herein over similar existing theories. This new theory can be readily extended to plate and shell structures, and should be useful for obtaining relatively low-cost, accurate estimates of structural response needed to design an important class of high-performance aerospace structures.

  15. A comparative numerical analysis of linear and nonlinear aerodynamic sound generation by vortex disturbances in homentropic constant shear flows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hau, Jan-Niklas, E-mail: hau@fdy.tu-darmstadt.de; Oberlack, Martin; GSC CE, Technische Universität Darmstadt, Dolivostraße 15, 64293 Darmstadt

    2015-12-15

    Aerodynamic sound generation in shear flows is investigated in the light of the breakthrough in hydrodynamic stability theory in the 1990s, when generic phenomena of non-normal shear flow systems were understood. By applying the short-time/non-modal approach that emerged from that work, the sole linear mechanism of wave generation by vortices in shear flows was captured [G. D. Chagelishvili, A. Tevzadze, G. Bodo, and S. S. Moiseev, "Linear mechanism of wave emergence from vortices in smooth shear flows," Phys. Rev. Lett. 79, 3178-3181 (1997); B. F. Farrell and P. J. Ioannou, "Transient and asymptotic growth of two-dimensional perturbations in viscous compressible shear flow," Phys. Fluids 12, 3021-3028 (2000); N. A. Bakas, "Mechanism underlying transient growth of planar perturbations in unbounded compressible shear flow," J. Fluid Mech. 639, 479-507 (2009); and G. Favraud and V. Pagneux, "Superadiabatic evolution of acoustic and vorticity perturbations in Couette flow," Phys. Rev. E 89, 033012 (2014)]. Its source is the non-normality-induced linear mode coupling, which becomes efficient at moderate Mach numbers, where the Mach number is defined for each perturbation harmonic as the ratio of the shear rate to its characteristic frequency. Based on the results of the non-modal approach, we investigate a two-dimensional homentropic constant shear flow and focus on the dynamical characteristics in the wavenumber plane. This allows us to separate from each other the participants of the dynamical processes (vortex and wave modes) and to estimate the efficacy of the process of linear wave generation. This process is analyzed and visualized using the example of a packet of vortex modes, localized in both the spectral and physical planes. Further, by employing direct numerical simulations, the wave generation by chaotically distributed vortex modes is analyzed and the involved linear and nonlinear processes are identified. The generated acoustic field is anisotropic in the wavenumber plane, which results in highly directional linear sound radiation, whereas the nonlinearly generated waves are almost omni-directional. As part of this analysis, we compare the effectiveness of the linear and nonlinear mechanisms of wave generation within the range of validity of rapid distortion theory and show the dominance of linear aerodynamic sound generation. Finally, topological differences between the linear source term of the acoustic analogy equation and the anisotropic non-normality-induced linear mechanism of wave generation are found.

  16. The convergence rate of approximate solutions for nonlinear scalar conservation laws

    NASA Technical Reports Server (NTRS)

    Nessyahu, Haim; Tadmor, Eitan

    1991-01-01

    The convergence rate of approximate solutions for the nonlinear scalar conservation law is discussed. The linear convergence theory is extended into a weak regime. The extension is based on the usual two ingredients of stability and consistency. On the one hand, counterexamples show that one must strengthen the linearized L(sup 2)-stability requirement. It is assumed that the approximate solutions are Lip(sup +)-stable in the sense that they satisfy a one-sided Lipschitz condition, in agreement with Oleinik's E-condition for the entropy solution. On the other hand, the lack of smoothness requires weakening the consistency requirement, which is measured in the Lip'-(semi)norm. It is proved for Lip(sup +)-stable approximate solutions that their Lip'-convergence rate to the entropy solution is of the same order as their Lip'-consistency. The Lip'-convergence rate is then converted into stronger L(sup p) convergence rate estimates.
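    For concreteness, the one-sided Lipschitz (Lip+) stability condition referred to above can be written as follows (standard form; the constant and notation are assumed):

    ```latex
    % Lip^+ stability: the positive part of the difference quotients of the
    % approximate solution v is uniformly bounded,
    \|v(\cdot,t)\|_{\mathrm{Lip}^{+}}
      := \operatorname*{ess\,sup}_{x \neq y}
         \left[ \frac{v(x,t) - v(y,t)}{x - y} \right]^{+}
      \;\le\; \mathrm{Const}.
    ```

    This agrees with Oleinik's E-condition, which bounds the slopes of the entropy solution from above and thereby rules out nonphysical expanding shocks.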

  17. Copula Entropy coupled with Wavelet Neural Network Model for Hydrological Prediction

    NASA Astrophysics Data System (ADS)

    Wang, Yin; Yue, JiGuang; Liu, ShuGuang; Wang, Li

    2018-02-01

    Artificial neural networks (ANNs) have been widely used in hydrological forecasting. In this paper an attempt has been made to find an alternative method for hydrological prediction by combining Copula Entropy (CE) with a Wavelet Neural Network (WNN). CE theory permits the calculation of mutual information (MI) to select input variables, which avoids the limitations of traditional linear correlation coefficient (LCC) analysis. Wavelet analysis can provide the exact locality of any changes in the dynamical patterns of the sequence, and coupled with the strong non-linear fitting ability of ANNs, the WNN model was able to provide a good fit to the hydrological data. Finally, the hybrid model (CE+WNN) was applied to daily water levels of the Taihu Lake Basin, and compared with CE+ANN, LCC+WNN and LCC+ANN models. Results showed that the hybrid model produced better results in estimating the hydrograph properties than the latter models.
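    The motivation for the MI-based input screen can be illustrated with a toy example (a plain histogram MI estimate, not the copula-entropy estimator of the paper): a nonlinearly related input scores high on MI even when its linear correlation coefficient is near zero.

    ```python
    import numpy as np

    def hist_mi(x, y, bins=30):
        # Histogram estimate of mutual information I(X;Y) in nats.
        pxy, _, _ = np.histogram2d(x, y, bins=bins)
        pxy /= pxy.sum()
        px = pxy.sum(axis=1, keepdims=True)   # marginal of X
        py = pxy.sum(axis=0, keepdims=True)   # marginal of Y
        nz = pxy > 0
        return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

    rng = np.random.default_rng(5)
    x = rng.normal(size=50_000)
    y_nonlin = x ** 2 + 0.1 * rng.normal(size=x.size)  # related, but corr ~ 0
    z = rng.normal(size=x.size)                        # unrelated control
    corr = float(np.corrcoef(x, y_nonlin)[0, 1])
    mi_rel, mi_null = hist_mi(x, y_nonlin), hist_mi(x, z)
    print(corr, mi_rel, mi_null)  # near-zero corr; MI separates the inputs
    ```

    An LCC screen would discard `y_nonlin` despite its strong dependence on `x`; the MI screen keeps it.
    
    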

  18. Structural study of gold clusters.

    PubMed

    Xiao, Li; Tollberg, Bethany; Hu, Xiankui; Wang, Lichang

    2006-03-21

    Density functional theory (DFT) calculations were carried out to study gold clusters of up to 55 atoms. Between the linear and zigzag monoatomic Au nanowires, the zigzag nanowires were found to be more stable. Furthermore, the linear Au nanowires of up to 2 nm are formed by slightly stretched Au dimers. These results suggest that a substantial Peierls distortion exists in those structures. Planar geometries of Au clusters were found to be the global minima up to a cluster size of 13. A quantitative correlation is provided between various properties of Au clusters and their structure and size. The relative stability of selected clusters was also estimated by the Sutton-Chen potential, and the result disagrees with that obtained from the DFT calculations. This suggests that a modification of the Sutton-Chen potential has to be made, such as obtaining new parameters, in order to use it to search for the global minima of bigger Au clusters.

  19. The Growing Importance of Linear Algebra in Undergraduate Mathematics.

    ERIC Educational Resources Information Center

    Tucker, Alan

    1993-01-01

    Discusses the theoretical and practical importance of linear algebra. Presents a brief history of linear algebra and matrix theory and describes the place of linear algebra in the undergraduate curriculum. (MDH)

  20. Experimental cosmic statistics - I. Variance

    NASA Astrophysics Data System (ADS)

    Colombi, Stéphane; Szapudi, István; Jenkins, Adrian; Colberg, Jörg

    2000-04-01

    Counts-in-cells are measured in the τCDM Virgo Hubble Volume simulation. This large N-body experiment has 10^9 particles in a cubic box of size 2000 h^-1 Mpc. The unprecedented combination of size and resolution allows, for the first time, a realistic numerical analysis of the cosmic errors and cosmic correlations of statistics related to counts-in-cells measurements, such as the probability distribution function P_N itself, its factorial moments F_k and the related cumulants ψ and S_N. These statistics are extracted from the whole simulation cube, as well as from 4096 subcubes of size 125 h^-1 Mpc, each representing a virtual random realization of the local universe. The measurements and their scatter over the subvolumes are compared to the theoretical predictions of Colombi, Bouchet & Schaeffer for P_0, and of Szapudi & Colombi and Szapudi, Colombi & Bernardeau for the factorial moments and the cumulants. The general behaviour of the experimental variance and cross-correlations as functions of scale and order is well described by the theoretical predictions, with a few per cent accuracy in the weakly non-linear regime for the cosmic error on factorial moments. On highly non-linear scales, however, all variants of the hierarchical model used by SC and SCB to describe clustering appear to become increasingly approximate, which leads to a slight overestimation of the error, by about a factor of two in the worst case. Because of the needed supplementary perturbative approach, the theory is less accurate for non-linear estimators, such as cumulants, than for factorial moments. The cosmic bias is evaluated as well and, in agreement with SCB, is found to be insignificant compared with the cosmic variance in all regimes investigated. While higher order statistics were previously evaluated in several simulations, this work presents textbook-quality measurements of S_N, 3 <= N <= 10, in an unprecedented dynamic range of 0.05 <~ ψ <~ 50. In the weakly non-linear regime the results confirm previous findings and agree remarkably well with perturbation theory predictions including the one-loop corrections based on spherical collapse by Fosalba & Gaztañaga. Extended perturbation theory is confirmed on all scales.
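    The factorial-moment estimator underlying these measurements can be sketched in a few lines (notation assumed: F_k = <N(N-1)...(N-k+1)> averaged over cells). For an unclustered Poisson field with mean lam the exact answer is F_k = lam**k, which gives a self-contained sanity check:

    ```python
    import numpy as np

    def factorial_moment(N, k):
        # k-th factorial moment <N(N-1)...(N-k+1)> over an array of cell counts.
        prod = np.ones_like(N, dtype=float)
        for j in range(k):
            prod *= N - j
        return float(prod.mean())

    rng = np.random.default_rng(1)
    lam = 3.0
    counts = rng.poisson(lam, size=500_000)  # counts in 5e5 virtual cells
    F = {k: factorial_moment(counts, k) for k in (1, 2, 3)}
    print(F)  # close to {1: 3.0, 2: 9.0, 3: 27.0}
    ```

    For a clustered field the excess of F_k over the Poisson value is what carries the correlation information.
    
    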

  1. Unification theory of optimal life histories and linear demographic models in internal stochasticity.

    PubMed

    Oizumi, Ryo

    2014-01-01

    The life history of organisms is exposed to uncertainty generated by internal and external stochasticities. Internal stochasticity is generated by randomness in each individual life history, such as randomness in food intake, genetic character and size growth rate, whereas external stochasticity is due to the environment. For instance, it is known that external stochasticity tends to affect the population growth rate negatively. A recent theoretical study using a path-integral formulation of structured linear demographic models has shown that internal stochasticity can affect the population growth rate positively or negatively. However, internal stochasticity has not been the main subject of research. Taking account of the effect of internal stochasticity on the population growth rate, the fittest organism has the optimal control of life history affected by the stochasticity in the habitat. The study of this control is known as the optimal life schedule problem. In order to analyze the optimal control under internal stochasticity, we need to make use of stochastic control theory in the optimal life schedule problem. There is, however, no theory unifying optimal life history and internal stochasticity. This study focuses on an extension of optimal life schedule problems that unifies the control theory of internal stochasticity with linear demographic models. First, we show the relationship between general age-state linear demographic models and stochastic control theory via several mathematical formulations, such as the path integral, integral equation, and transition matrix. Secondly, we apply our theory to a two-resource utilization model for two different breeding systems: semelparity and iteroparity. Finally, we show that the diversity of resources is important for species in a specific case. Our study shows that this unification theory can address risk hedging of life history in general age-state linear demographic models.

  2. Modules as Learning Tools in Linear Algebra

    ERIC Educational Resources Information Center

    Cooley, Laurel; Vidakovic, Draga; Martin, William O.; Dexter, Scott; Suzuki, Jeff; Loch, Sergio

    2014-01-01

    This paper reports on the experience of STEM and mathematics faculty at four different institutions working collaboratively to integrate learning theory with curriculum development in a core undergraduate linear algebra context. The faculty formed a Professional Learning Community (PLC) with a focus on learning theories in mathematics and…

  3. Model-Based Battery Management Systems: From Theory to Practice

    NASA Astrophysics Data System (ADS)

    Pathak, Manan

    Lithium-ion batteries are now extensively being used as a primary storage source. Capacity and power fade and slow recharging times are key issues that restrict their use in many applications. Battery management systems are critical to address these issues, along with ensuring safety. This dissertation focuses on exploring various control strategies using detailed physics-based electrochemical models developed previously for lithium-ion batteries, which could be used in advanced battery management systems. Optimal charging profiles for minimizing capacity fade based on SEI-layer formation are derived, and the benefits of using such control strategies are shown by experimentally testing them on a 16 Ah NMC-based pouch cell. This dissertation also explores different time-discretization strategies for non-linear models, which give an improved order of convergence for optimal control problems. Lastly, this dissertation explores a physics-based model for predicting the linear impedance of a battery, and develops a freeware tool that is extremely robust and computationally fast. Such a code could be used for estimating transport, kinetic and material properties of the battery based on the linear impedance spectra.

  4. Signal Prediction With Input Identification

    NASA Technical Reports Server (NTRS)

    Juang, Jer-Nan; Chen, Ya-Chin

    1999-01-01

    A novel coding technique is presented for signal prediction with applications including speech coding, system identification, and estimation of input excitation. The approach is based on the blind equalization method for speech signal processing in conjunction with the geometric subspace projection theory to formulate the basic prediction equation. The speech-coding problem is often divided into two parts, a linear prediction model and excitation input. The parameter coefficients of the linear predictor and the input excitation are solved simultaneously and recursively by a conventional recursive least-squares algorithm. The excitation input is computed by coding all possible outcomes into a binary codebook. The coefficients of the linear predictor and excitation, and the index of the codebook can then be used to represent the signal. In addition, a variable-frame concept is proposed to block the same excitation signal in sequence in order to reduce the storage size and increase the transmission rate. The results of this work can be easily extended to the problem of disturbance identification. The basic principles are outlined in this report and differences from other existing methods are discussed. Simulations are included to demonstrate the proposed method.
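    The recursive least-squares step in the predictor can be sketched as follows (standard RLS only; the codebook coding of the excitation and the variable-frame scheme are not shown, and all names here are ours):

    ```python
    import numpy as np

    def rls_fit(x, order, lam=1.0, delta=100.0):
        # Recursive least squares for a linear predictor of the given order.
        w = np.zeros(order)        # predictor coefficients
        P = delta * np.eye(order)  # inverse-correlation matrix estimate
        for t in range(order, len(x)):
            phi = x[t - order:t][::-1]            # most recent samples first
            k = P @ phi / (lam + phi @ P @ phi)   # gain vector
            e = x[t] - w @ phi                    # a priori prediction error
            w = w + k * e                         # coefficient update
            P = (P - np.outer(k, phi @ P)) / lam  # covariance update
        return w

    # Sanity check on a signal from a known AR(2) model: RLS should recover
    # coefficients close to [0.6, -0.2].
    rng = np.random.default_rng(2)
    x = np.zeros(5000)
    for t in range(2, 5000):
        x[t] = 0.6 * x[t - 1] - 0.2 * x[t - 2] + 0.1 * rng.normal()
    w = rls_fit(x, 2)
    print(w)
    ```

    With the forgetting factor lam below 1, the same update tracks slowly time-varying coefficients instead of a fixed model.
    
    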

  5. Nonlinear problems in data-assimilation : Can synchronization help?

    NASA Astrophysics Data System (ADS)

    Tribbia, J. J.; Duane, G. S.

    2009-12-01

    Over the past several years, operational weather centers have initiated ensemble prediction and assimilation techniques to estimate the error covariance of forecasts in the short and the medium range. The ensemble techniques used are based on linear methods. This technique has been shown to be a useful indicator of skill in the linear range, where forecast errors are small relative to climatological variance. While this advance has been impressive, there are still ad hoc aspects of its use in practice, such as the need for covariance inflation, which are troubling. Furthermore, to be of utility in the nonlinear range, an ensemble assimilation and prediction method must be capable of giving probabilistic information for the situation where a probability density forecast becomes multi-modal. A prototypical, simplest example of such a situation is the planetary-wave regime transition, where the pdf is bimodal. Our recent research shows how the inconsistencies and extensions of linear methodology can be consistently treated using the paradigm of synchronization, which views the problems of assimilation and forecasting as that of optimizing the forecast model state with respect to the future evolution of the atmosphere.

  6. Calculation of the distributed loads on the blades of individual multiblade propellers in axial flow using linear and nonlinear lifting surface theories

    NASA Technical Reports Server (NTRS)

    Pesetskaya, N. N.; Timofeev, I. YA.; Shipilov, S. D.

    1988-01-01

    In recent years much attention has been given to the development of methods and programs for the calculation of the aerodynamic characteristics of multiblade, saber-shaped air propellers. Most existing methods are based on the theory of lifting lines. Elsewhere, the theory of a lifting surface is used to calculate screw and lifting propellers. In this work, the method of discrete vortices is described for the calculation of the aerodynamic characteristics of propellers using the linear and nonlinear theories of lifting surfaces.

  7. Estimation of group means when adjusting for covariates in generalized linear models.

    PubMed

    Qu, Yongming; Luo, Junxiang

    2015-01-01

    Generalized linear models are commonly used to analyze categorical data such as binary, count, and ordinal outcomes. Adjusting for important prognostic factors or baseline covariates in generalized linear models may improve the estimation efficiency. The model-based mean for a treatment group produced by most software packages estimates the response at the mean covariate, not the mean response for this treatment group in the studied population. Although this is not an issue for linear models, the model-based group mean estimates in generalized linear models could be seriously biased relative to the true group means. We propose a new method to estimate the group means consistently, with the corresponding variance estimation. Simulations showed that the proposed method produces an unbiased estimator of the group means and provides the correct coverage probability. The proposed method was applied to analyze hypoglycemia data from clinical trials in diabetes. Copyright © 2014 John Wiley & Sons, Ltd.
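    The bias described above is easy to see numerically (this is our illustration, not the paper's estimator): in a logistic model, the response evaluated at the mean covariate differs from the mean response over the covariate distribution, because the inverse link is nonlinear (Jensen's inequality).

    ```python
    import numpy as np

    def expit(z):
        # Inverse logit link.
        return 1.0 / (1.0 + np.exp(-z))

    rng = np.random.default_rng(0)
    x = rng.normal(0.0, 2.0, size=100_000)  # baseline covariate
    beta0, beta1 = 0.5, 1.2                 # assumed true coefficients

    # "Model-based" group mean: response at the mean covariate.
    mean_at_mean_x = expit(beta0 + beta1 * x.mean())
    # Population group mean: average of the predicted responses.
    true_group_mean = expit(beta0 + beta1 * x).mean()

    print(mean_at_mean_x, true_group_mean)  # noticeably different
    ```

    For an identity link the two quantities coincide, which is why the issue does not arise in linear models.
    
    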

  8. A quantum description of linear, and non-linear optical interactions in arrays of plasmonic nanoparticles

    NASA Astrophysics Data System (ADS)

    Arabahmadi, Ehsan; Ahmadi, Zabihollah; Rashidian, Bizhan

    2018-06-01

    A quantum theory for describing the interaction of photons and plasmons in one- and two-dimensional arrays is presented. Ohmic losses and inter-band transitions are not considered. We use a macroscopic approach and quantum field theory methods, including the S-matrix expansion and Feynman diagrams, for this purpose. Non-linear interactions are also studied, and ways of increasing the probability of such interactions, together with their applications, are discussed.

  9. Estimating monotonic rates from biological data using local linear regression.

    PubMed

    Olito, Colin; White, Craig R; Marshall, Dustin J; Barneche, Diego R

    2017-03-01

    Accessing many fundamental questions in biology begins with empirical estimation of simple monotonic rates of underlying biological processes. Across a variety of disciplines, ranging from physiology to biogeochemistry, these rates are routinely estimated from non-linear and noisy time series data using linear regression and ad hoc manual truncation of non-linearities. Here, we introduce the R package LoLinR, a flexible toolkit to implement local linear regression techniques to objectively and reproducibly estimate monotonic biological rates from non-linear time series data, and demonstrate possible applications using metabolic rate data. LoLinR provides methods to easily and reliably estimate monotonic rates from time series data in a way that is statistically robust, facilitates reproducible research and is applicable to a wide variety of research disciplines in the biological sciences. © 2017. Published by The Company of Biologists Ltd.
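    The core of the approach can be sketched as a weighted straight-line fit around a point of interest (a generic tricube-weighted local fit in the spirit of the method; LoLinR's actual window selection and diagnostics are richer, and the names here are ours):

    ```python
    import numpy as np

    def local_slope(t, y, t0, h):
        # Weighted least-squares line fit around t0; the slope estimates
        # the instantaneous rate dy/dt at t0.
        u = (t - t0) / h
        w = np.where(np.abs(u) < 1, (1 - np.abs(u) ** 3) ** 3, 0.0)  # tricube
        X = np.column_stack([np.ones_like(t), t - t0])
        beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
        return float(beta[1])

    # Noisy data with a known constant rate of 2.0 per unit time.
    rng = np.random.default_rng(3)
    t = np.linspace(0.0, 10.0, 200)
    y = 2.0 * t + rng.normal(0.0, 0.2, t.size)
    rate = local_slope(t, y, t0=5.0, h=2.0)
    print(rate)  # close to 2
    ```

    On genuinely non-linear time series, restricting the fit to a local window is what replaces the ad hoc manual truncation criticized above.
    
    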

  10. Adaptive bearing estimation and tracking of multiple targets in a realistic passive sonar scenario

    NASA Astrophysics Data System (ADS)

    Rajagopal, R.; Challa, Subhash; Faruqi, Farhan A.; Rao, P. R.

    1997-06-01

    In a realistic passive sonar environment, the received signal consists of multipath arrivals from closely separated moving targets. The signals are contaminated by spatially correlated noise. Differential MUSIC has been proposed to estimate the DOAs in such a scenario. This method estimates the 'noise subspace' in order to estimate the DOAs. However, the 'noise subspace' estimate has to be updated as and when new data become available. In order to save computational costs, a new adaptive noise subspace estimation algorithm is proposed in this paper. The salient features of the proposed algorithm are: (1) noise subspace estimation is done by QR decomposition of the difference matrix, which is formed from the data covariance matrix; thus, compared to standard eigen-decomposition-based methods, which require O(N^3) computations, the proposed method requires only O(N^2) computations; (2) the noise subspace is updated by updating the QR decomposition; (3) the proposed algorithm works in a realistic sonar environment. In the second part of the paper, the estimated bearing values are used to track multiple targets. In order to achieve this, the previously proposed nonlinear-system/linear-measurement extended Kalman filter is applied. Computer simulation results are also presented to support the theory.
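    For orientation, here is the baseline that the paper's cheaper QR-based update improves upon: classical noise-subspace (MUSIC) DOA estimation with a full eigendecomposition of the covariance matrix (a minimal sketch with our own test scenario, not the paper's adaptive algorithm):

    ```python
    import numpy as np

    def music_spectrum(R, n_src, angles):
        # Classical MUSIC pseudo-spectrum from a covariance matrix R.
        m = R.shape[0]
        _, V = np.linalg.eigh(R)    # eigenvalues ascending
        En = V[:, : m - n_src]      # noise subspace (smallest eigenvalues)
        idx = np.arange(m)
        spec = []
        for th in angles:
            a = np.exp(1j * np.pi * idx * np.sin(th))  # half-wavelength ULA
            spec.append(1.0 / np.linalg.norm(En.conj().T @ a) ** 2)
        return np.array(spec)

    # One source at 20 degrees, 8-element array, 500 snapshots.
    rng = np.random.default_rng(4)
    M, T, doa = 8, 500, np.deg2rad(20.0)
    a = np.exp(1j * np.pi * np.arange(M) * np.sin(doa))
    s = rng.normal(size=T) + 1j * rng.normal(size=T)
    noise = 0.1 * (rng.normal(size=(M, T)) + 1j * rng.normal(size=(M, T)))
    snap = a[:, None] * s + noise
    R = snap @ snap.conj().T / T
    grid = np.deg2rad(np.linspace(-90.0, 90.0, 721))
    est = float(np.rad2deg(grid[np.argmax(music_spectrum(R, 1, grid))]))
    print(est)  # near 20 degrees
    ```

    The eigendecomposition here is the O(N^3) step that the paper replaces with an O(N^2) QR update of the difference matrix.
    
    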

  11. Anomaly Detection in Test Equipment via Sliding Mode Observers

    NASA Technical Reports Server (NTRS)

    Solano, Wanda M.; Drakunov, Sergey V.

    2012-01-01

    Nonlinear observers were originally developed based on the ideas of variable structure control, and for the purpose of detecting disturbances in complex systems. In this anomaly detection application, these observers were designed for estimating the distributed state of fluid flow in a pipe described by a class of advection equations. The observer algorithm uses collected data in a piping system to estimate the distributed system state (pressure and velocity along a pipe containing liquid gas propellant flow) using only boundary measurements. These estimates are then used to further estimate and localize possible anomalies such as leaks or foreign objects, and instrumentation metering problems such as incorrect flow meter orifice plate size. The observer algorithm has the following parts: a mathematical model of the fluid flow, observer control algorithm, and an anomaly identification algorithm. The main functional operation of the algorithm is in creating the sliding mode in the observer system implemented as software. Once the sliding mode starts in the system, the equivalent value of the discontinuous function in sliding mode can be obtained by filtering out the high-frequency chattering component. In control theory, "observers" are dynamic algorithms for the online estimation of the current state of a dynamic system by measurements of an output of the system. Classical linear observers can provide optimal estimates of a system state in case of uncertainty modeled by white noise. For nonlinear cases, the theory of nonlinear observers has been developed and its success is mainly due to the sliding mode approach. Using the mathematical theory of variable structure systems with sliding modes, the observer algorithm is designed in such a way that it steers the output of the model to the output of the system obtained via a variety of sensors, in spite of possible mismatches between the assumed model and actual system. 
The unique properties of sliding mode control allow not only driving the model's internal states to the states of the real-life system, but also identifying any disturbance or anomaly that may occur.
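    The "equivalent value" idea can be demonstrated on a scalar toy system x' = u + d(t) (the pipe-flow PDE observer itself is far more involved; the gains and names here are our assumptions). Low-pass filtering the discontinuous injection recovers the unknown disturbance d once sliding starts:

    ```python
    import numpy as np

    dt, T = 1e-4, 2.0
    n_steps = int(T / dt)
    x, xh, d_est = 0.0, 0.0, 0.0
    tau, L = 0.01, 5.0            # filter time constant; gain L > max |d|
    errs = []
    for i in range(n_steps):
        t = i * dt
        d = 2.0 * np.sin(3.0 * t)        # unknown disturbance (simulated here)
        u = -x                           # known input
        v = L * np.sign(x - xh)          # discontinuous sliding-mode injection
        x += dt * (u + d)                # plant
        xh += dt * (u + v)               # observer copy
        d_est += dt * (v - d_est) / tau  # low-pass filter: equivalent control
        errs.append(abs(d_est - d))
    final_err = float(np.mean(errs[-2000:]))
    print(final_err)  # small once sliding mode is reached
    ```

    The filtered switching signal is exactly the "equivalent value of the discontinuous function" described above: its average over the chattering equals the disturbance driving the estimation error.
    
    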

  12. Decentralization, stabilization, and estimation of large-scale linear systems

    NASA Technical Reports Server (NTRS)

    Siljak, D. D.; Vukcevic, M. B.

    1976-01-01

    In this short paper we consider three closely related aspects of large-scale systems: decentralization, stabilization, and estimation. A method is proposed to decompose a large linear system into a number of interconnected subsystems with decentralized (scalar) inputs or outputs. The procedure is preliminary to the hierarchic stabilization and estimation of linear systems and is performed on the subsystem level. A multilevel control scheme based upon the decomposition-aggregation method is developed for stabilization of input-decentralized linear systems. Local linear feedback controllers are used to stabilize each decoupled subsystem, while global linear feedback controllers are utilized to minimize the coupling effect among the subsystems. Systems stabilized by the method have a tolerance to a wide class of nonlinearities in subsystem coupling and high reliability with respect to structural perturbations. The proposed output-decentralization and stabilization schemes can be used directly to construct asymptotic state estimators for large linear systems on the subsystem level. The problem of dimensionality is resolved by constructing a number of low-order estimators, thus avoiding the design of a single estimator for the overall system.

  13. Correlation, evaluation, and extension of linearized theories for tire motion and wheel shimmy

    NASA Technical Reports Server (NTRS)

    Smiley, Robert F

    1957-01-01

    An evaluation is made of the existing theories of linearized tire motion and wheel shimmy. It is demonstrated that most of the previously published theories represent varying degrees of approximation to a summary theory developed in this report, which is a minor modification of the basic theory of Von Schlippe and Dietrich. In most cases where strong differences exist between the previously published theories and the summary theory, the previously published theories are shown to possess certain deficiencies. A series of systematic approximations to the summary theory is developed for the treatment of problems too simple to merit the use of the complete summary theory, and procedures are discussed for applying the summary theory and its systematic approximations to the shimmy of more complex landing-gear structures than have previously been considered. Comparisons of the existing experimental data with the predictions of the summary theory and the systematic approximations provide a fair substantiation of the more detailed approximate theories.

  14. Neural network approach to quantum-chemistry data: accurate prediction of density functional theory energies.

    PubMed

    Balabin, Roman M; Lomakina, Ekaterina I

    2009-08-21

    An artificial neural network (ANN) approach has been applied to estimate the density functional theory (DFT) energy with a large basis set using lower-level energy values and molecular descriptors. A total of 208 different molecules were used for the ANN training, cross validation, and testing by applying the BLYP, B3LYP, and BMK density functionals. Hartree-Fock results were reported for comparison. Furthermore, constitutional molecular descriptors (CD) and quantum-chemical molecular descriptors (QD) were used for building the calibration model. The neural network structure optimization, leading to four to five hidden neurons, was also carried out. The use of several low-level energy values was found to greatly reduce the prediction error. The expected error, the mean absolute deviation, for the ANN approximation to DFT energies was 0.6+/-0.2 kcal mol(-1). In addition, a comparison of the different density functionals with the basis sets and a comparison with multiple linear regression results were also provided. The CDs were found to overcome the limitations of the QDs. Furthermore, an effective ANN model for DFT/6-311G(3df,3pd) and DFT/6-311G(2df,2pd) energy estimation was developed, and benchmark results were provided.

  15. A Genomic Selection Index Applied to Simulated and Real Data

    PubMed Central

    Ceron-Rojas, J. Jesus; Crossa, José; Arief, Vivi N.; Basford, Kaye; Rutkoski, Jessica; Jarquín, Diego; Alvarado, Gregorio; Beyene, Yoseph; Semagn, Kassa; DeLacy, Ian

    2015-01-01

    A genomic selection index (GSI) is a linear combination of genomic estimated breeding values that uses genomic markers to predict the net genetic merit and select parents from a nonphenotyped testing population. Some authors have proposed a GSI; however, they have not used simulated or real data to validate the GSI theory and have not explained how to estimate the GSI selection response and the GSI expected genetic gain per selection cycle for the unobserved traits after the first selection cycle to obtain information about the genetic gains in each subsequent selection cycle. In this paper, we develop the theory of a GSI and apply it to two simulated and four real data sets with four traits. Also, we numerically compare its efficiency with that of the phenotypic selection index (PSI) by using the ratio of the GSI response over the PSI response, and the PSI and GSI expected genetic gain per selection cycle for observed and unobserved traits, respectively. In addition, we used the Technow inequality to compare GSI vs. PSI efficiency. Results from the simulated data were confirmed by the real data, indicating that GSI was more efficient than PSI per unit of time. PMID:26290571

  16. Fractional representation theory - Robustness results with applications to finite dimensional control of a class of linear distributed systems

    NASA Technical Reports Server (NTRS)

    Nett, C. N.; Jacobson, C. A.; Balas, M. J.

    1983-01-01

    This paper reviews and extends the fractional representation theory. In particular, new and powerful robustness results are presented. This new theory is utilized to develop a preliminary design methodology for finite dimensional control of a class of linear evolution equations on a Banach space. The design is for stability in an input-output sense, but particular attention is paid to internal stability as well.

  17. When is quasi-linear theory exact. [particle acceleration]

    NASA Technical Reports Server (NTRS)

    Jones, F. C.; Birmingham, T. J.

    1975-01-01

    We use the cumulant expansion technique of Kubo (1962, 1963) to derive an integrodifferential equation for the average one-particle distribution function for particles being accelerated by electric and magnetic fluctuations of a general nature. For a very restricted class of fluctuations, the equation for this function degenerates exactly to a differential equation of Fokker-Planck type. Quasi-linear theory, including the adiabatic assumption, is an exact theory only for this limited class of fluctuations.

  18. Parameter estimation of Monod model by the Least-Squares method for microalgae Botryococcus Braunii sp

    NASA Astrophysics Data System (ADS)

    See, J. J.; Jamaian, S. S.; Salleh, R. M.; Nor, M. E.; Aman, F.

    2018-04-01

    This research aims to estimate the parameters of the Monod model for growth of the microalga Botryococcus braunii sp. by the Least-Squares method. The Monod equation is a non-linear equation which can be transformed into a linear form and then solved by the Least-Squares linear regression method. Alternatively, the Gauss-Newton method solves the non-linear Least-Squares problem directly, obtaining the Monod model parameter values by minimizing the sum of squared errors (SSE). As the result, the parameters of the Monod model for the microalga Botryococcus braunii sp. can be estimated by the Least-Squares method. However, the parameter values estimated by the non-linear Least-Squares method are more accurate than those from the linear Least-Squares method, since the SSE of the non-linear Least-Squares method is lower than that of the linear Least-Squares method.
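
    The two estimation routes the abstract compares can be sketched in a few lines. Everything below is a hypothetical illustration with made-up data, not the authors' code: the Monod model mu(S) = mu_max*S/(Ks + S) is first linearized (a Lineweaver-Burk-style transform of 1/mu against 1/S) and fitted by ordinary least squares, and that estimate then seeds a Gauss-Newton iteration on the untransformed model.

```python
import numpy as np

# Synthetic Monod growth data (hypothetical values, illustration only)
mu_max_true, Ks_true = 0.8, 2.5
S = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0])      # substrate concentrations
rng = np.random.default_rng(0)
mu = mu_max_true * S / (Ks_true + S) * (1 + 0.01 * rng.standard_normal(S.size))

# --- Linear Least Squares on the transformed model:
#     1/mu = (Ks/mu_max)*(1/S) + 1/mu_max
A = np.column_stack([1.0 / S, np.ones_like(S)])
slope, intercept = np.linalg.lstsq(A, 1.0 / mu, rcond=None)[0]
mu_max_lin, Ks_lin = 1.0 / intercept, slope / intercept

# --- Gauss-Newton on the untransformed (non-linear) model, minimizing SSE
theta = np.array([mu_max_lin, Ks_lin])             # start from linearized fit
for _ in range(50):
    mu_max, Ks = theta
    pred = mu_max * S / (Ks + S)
    r = mu - pred                                  # residuals
    J = np.column_stack([S / (Ks + S),             # d pred / d mu_max
                         -mu_max * S / (Ks + S) ** 2])  # d pred / d Ks
    step = np.linalg.solve(J.T @ J, J.T @ r)
    theta += step
    if np.max(np.abs(step)) < 1e-10:
        break

sse_lin = np.sum((mu - mu_max_lin * S / (Ks_lin + S)) ** 2)
sse_nl = np.sum((mu - theta[0] * S / (theta[1] + S)) ** 2)
```

    As the abstract reports, the SSE of the non-linear fit is no larger than that of the linearized fit, because the transform distorts the error structure while Gauss-Newton minimizes the SSE in the original space.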

  19. A Thermodynamic Theory Of Solid Viscoelasticity. Part 1: Linear Viscoelasticity.

    NASA Technical Reports Server (NTRS)

    Freed, Alan D.; Leonov, Arkady I.

    2002-01-01

    The present series of three consecutive papers develops a general theory for linear and finite solid viscoelasticity. Because the most important objects for nonlinear studies are rubber-like materials, the general approach is specified in a form convenient for solving problems important for the many industries that involve rubber-like materials. General linear and nonlinear theories for non-isothermal deformations of viscoelastic solids are developed based on the quasi-linear approach of non-equilibrium thermodynamics. In this, the first paper of the series, we analyze non-isothermal linear viscoelasticity, which is applicable in a range of small strains not only to all synthetic polymers and bio-polymers but also to some non-polymeric materials. Although the linear case seems to be well developed, there are still reasons to implement a thermodynamic derivation of constitutive equations for solid-like, non-isothermal, linear viscoelasticity. The most important is the thermodynamic modeling of thermo-rheological complexity, i.e. different temperature dependences of relaxation parameters in various parts of the relaxation spectrum. A special structure of interaction matrices is established for the different physical mechanisms contributing to the normal relaxation modes. This structure seems to be in accord with observations, and creates a simple mathematical framework for both continuum and molecular theories of thermo-rheologically complex relaxation phenomena. Finally, a unified approach is briefly discussed that, in principle, allows combining both the long time (discrete) and short time (continuous) descriptions of relaxation behavior for polymers in the rubbery and glassy regions.

  20. Linear dependence between the wavefront gradient and the masked intensity for the point source with a CCD sensor

    NASA Astrophysics Data System (ADS)

    Yang, Huizhen; Ma, Liang; Wang, Bin

    2018-01-01

    In contrast to the conventional adaptive optics (AO) system, the wavefront sensorless (WFSless) AO system does not need a WFS to measure wavefront aberrations. It is simpler than conventional AO in system architecture and can be applied under complex conditions. The model-based WFSless system has great potential for real-time correction applications because of its fast convergence. The control algorithm of the model-based WFSless system rests on an important theoretical result: the linear relation between the Mean-Square Gradient (MSG) magnitude of the wavefront aberration and the second moment of the masked intensity distribution in the focal plane (also called the Masked Detector Signal, MDS). The linear dependence between MSG and MDS for point-source imaging with a CCD sensor is discussed through theory and simulation in this paper. The theoretical relationship between MSG and MDS is given based on our previous work. To verify the linear relation for the point source, we set up an imaging model under atmospheric turbulence. Additionally, the value of the MDS will deviate from that of theory because of detector noise, and this deviation will affect the correction performance. The theoretical results under noise are obtained through derivation, and then the linear relation between MSG and MDS under noise is examined through the imaging model. Results show that the linear relation between MSG and MDS under noise is also maintained well, which provides theoretical support for applications of the model-based WFSless system.

  1. The "Chaos" Pattern in Piaget's Theory of Cognitive Development.

    ERIC Educational Resources Information Center

    Lindsay, Jean S.

    Piaget's theory of the cognitive development of the child is related to the recently developed non-linear "chaos" model. The term "chaos" refers to the tendency of dynamical, non-linear systems toward irregular, sometimes unpredictable, deterministic behavior. Piaget identified this same pattern in his model of cognitive…

  2. Are All Non-Linear Systems (Approx.) Bilinear,

    DTIC Science & Technology

    1977-06-01

    There is a rumor going around in mathematical system theory circles that all non-linear systems are bilinear or nearly so. This note examines the case for such an assertion and finds it wanting and, en passant, offers some comments on the current proliferation of mathematical literature on system theory.

  3. Linear kinetic theory and particle transport in stochastic mixtures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pomraning, G.C.

    We consider the formulation of linear transport and kinetic theory describing energy and particle flow in a random mixture of two or more immiscible materials. Following an introduction, we summarize early and fundamental work in this area, and we conclude with a brief discussion of recent results.

  4. A prediction model for cognitive performance in health ageing using diffusion tensor imaging with graph theory.

    PubMed

    Yun, Ruijuan; Lin, Chung-Chih; Wu, Shuicai; Huang, Chu-Chung; Lin, Ching-Po; Chao, Yi-Ping

    2013-01-01

    In this study, we employed diffusion tensor imaging (DTI) to construct brain structural networks and derive the connection matrices from 96 healthy elderly subjects. Correlation analysis between the graph-theoretic topological properties of these networks and the Cognitive Abilities Screening Instrument (CASI) index was performed to extract the significant network characteristics. These characteristics were then integrated into models estimated by various machine-learning algorithms to predict a user's cognitive performance. From the results, the linear regression model and the Gaussian processes model showed better predictive ability, with lower mean absolute errors of 5.8120 and 6.25, respectively. Moreover, these extracted topological properties of the brain structural network derived from DTI could also be regarded as bio-signatures for further evaluation of brain degeneration in healthy ageing and early diagnosis of mild cognitive impairment (MCI).

  5. Chaos control in delayed phase space constructed by the Takens embedding theory

    NASA Astrophysics Data System (ADS)

    Hajiloo, R.; Salarieh, H.; Alasty, A.

    2018-01-01

    In this paper, the problem of chaos control in discrete-time chaotic systems with unknown governing equations and limited measurable states is investigated. Using the time-series of only one measurable state, an algorithm is proposed to stabilize unstable fixed points. The approach consists of three steps: first, using Takens embedding theory, a delayed phase space preserving the topological characteristics of the unknown system is reconstructed. Second, a dynamic model is identified by recursive least squares method to estimate the time-series data in the delayed phase space. Finally, based on the reconstructed model, an appropriate linear delayed feedback controller is obtained for stabilizing unstable fixed points of the system. Controller gains are computed using a systematic approach. The effectiveness of the proposed algorithm is examined by applying it to the generalized hyperchaotic Henon system, prey-predator population map, and the discrete-time Lorenz system.
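
    The second step of the algorithm (identifying a model of the measured time-series by recursive least squares) can be sketched on a toy system. The snippet below is an illustrative assumption, not the paper's implementation: it identifies the logistic map from its own time-series, where a one-dimensional delay vector already suffices, whereas the paper's unknown systems would need longer Takens delay vectors.

```python
import numpy as np

# Time-series from an "unknown" discrete-time chaotic system (logistic map)
r = 3.9
x = np.empty(500)
x[0] = 0.3
for k in range(499):
    x[k + 1] = r * x[k] * (1.0 - x[k])

# Recursive least squares (RLS) fit of the one-step model
#   x[k+1] = a*x[k] + b*x[k]**2 + c
theta = np.zeros(3)        # parameter estimate [a, b, c]
P = np.eye(3) * 1e6        # large initial covariance (weak prior)
for k in range(499):
    phi = np.array([x[k], x[k] ** 2, 1.0])          # regressor
    gain = P @ phi / (1.0 + phi @ P @ phi)
    theta = theta + gain * (x[k + 1] - phi @ theta)  # update with prediction error
    P = P - np.outer(gain, phi) @ P
```

    With noiseless data the estimate converges to the true map coefficients (a = 3.9, b = -3.9, c = 0); the identified model could then be used, as in the paper, to design a delayed feedback controller.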

  6. A computational approach to the relationship between radiation induced double strand breaks and translocations

    NASA Technical Reports Server (NTRS)

    Holley, W. R.; Chatterjee, A.

    1994-01-01

    A theoretical framework is presented which provides a quantitative analysis of radiation-induced translocations between the abl oncogene on CH9q34 and a breakpoint cluster region, bcr, on CH22q11. Such translocations are frequently associated with chronic myelogenous leukemia. The theory is based on the assumption that initial double strand breaks produced concurrently within the 200 kbp intron region upstream of the second abl exon and within the 16.5 kbp region between bcr exon 2 and exon 6 can rejoin incorrectly or unfaithfully, resulting in a fusion gene. For an x-ray dose of 100 Gy, there is good agreement between the theoretical estimate and the one available experimental result. The theory has been extended to provide dose response curves for these types of translocations. These curves are quadratic at low doses and become linear at high doses.

  7. Radiative transfer modelling inside thermal protection system using hybrid homogenization method for a backward Monte Carlo method coupled with Mie theory

    NASA Astrophysics Data System (ADS)

    Le Foll, S.; André, F.; Delmas, A.; Bouilly, J. M.; Aspa, Y.

    2012-06-01

    A backward Monte Carlo method for modelling the spectral directional emittance of fibrous media has been developed. It uses Mie theory to calculate the radiative properties of single fibres, modelled as infinite cylinders, and the complex refractive index is computed by a Drude-Lorenz model for the dielectric function. The absorption and scattering coefficients are homogenised over several fibres, but the scattering phase function of a single fibre is used to determine the scattering direction of energy inside the medium. Sensitivity analysis based on several Monte Carlo results has been performed to estimate coefficients for a Multi-Linear Model (MLM) developed specifically for inverse analysis of experimental data. This model agrees with the Monte Carlo method and is highly computationally efficient. In contrast, the surface emissivity model, which assumes an opaque medium, shows poor agreement with the reference Monte Carlo calculations.

  8. Incommensurate phase of a triangular frustrated Heisenberg model studied via Schwinger-boson mean-field theory

    NASA Astrophysics Data System (ADS)

    Li, Peng; Su, Haibin; Dong, Hui-Ning; Shen, Shun-Qing

    2009-08-01

    We study a triangular frustrated antiferromagnetic Heisenberg model with nearest-neighbor interactions J1 and third-nearest-neighbor interactions J3 by means of Schwinger-boson mean-field theory. By setting an antiferromagnetic J3 and varying J1 from positive to negative values, we disclose the low-temperature features of its interesting incommensurate phase. The gapless dispersion of quasiparticles leads to the intrinsic T2 law of specific heat. The magnetic susceptibility is linear in temperature. The local magnetization is significantly reduced by quantum fluctuations. We address possible relevance of these results to the low-temperature properties of NiGa2S4. From a careful analysis of the incommensurate spin wavevector, the interaction parameters are estimated as J1≈-3.8755 K and J3≈14.0628 K, in order to account for the experimental data.

  9. Direct adaptive control of manipulators in Cartesian space

    NASA Technical Reports Server (NTRS)

    Seraji, H.

    1987-01-01

    A new adaptive-control scheme for direct control of manipulator end effector to achieve trajectory tracking in Cartesian space is developed in this article. The control structure is obtained from linear multivariable theory and is composed of simple feedforward and feedback controllers and an auxiliary input. The direct adaptation laws are derived from model reference adaptive control theory and are not based on parameter estimation of the robot model. The utilization of adaptive feedforward control and the inclusion of auxiliary input are novel features of the present scheme and result in improved dynamic performance over existing adaptive control schemes. The adaptive controller does not require the complex mathematical model of the robot dynamics or any knowledge of the robot parameters or the payload, and is computationally fast for on-line implementation with high sampling rates. The control scheme is applied to a two-link manipulator for illustration.

  10. The application of an atomistic J-integral to a ductile crack.

    PubMed

    Zimmerman, Jonathan A; Jones, Reese E

    2013-04-17

    In this work we apply a Lagrangian kernel-based estimator of continuum fields to atomic data to estimate the J-integral for the emission of dislocations from a crack tip. Face-centered cubic (fcc) gold and body-centered cubic (bcc) iron modeled with embedded atom method (EAM) potentials are used as example systems. The results for a single crack under K-loading compare well to an analytical solution from anisotropic linear elastic fracture mechanics. We also discovered that, after the emission of dislocations from the crack tip, there is a loop-size-dependent contribution to the J-integral. For a system with a finite width crack loaded in simple tension, the finite size effects for the systems that were feasible to compute prevented precise agreement with theory. However, our results indicate a trend towards convergence.

  11. Path Integral Computation of Quantum Free Energy Differences Due to Alchemical Transformations Involving Mass and Potential.

    PubMed

    Pérez, Alejandro; von Lilienfeld, O Anatole

    2011-08-09

    Thermodynamic integration, perturbation theory, and λ-dynamics methods were applied to path integral molecular dynamics calculations to investigate free energy differences due to "alchemical" transformations. Several estimators were formulated to compute free energy differences in solvable model systems undergoing changes in mass and/or potential. Linear and nonlinear alchemical interpolations were used for the thermodynamic integration. We find improved convergence for the virial estimators, as well as for the thermodynamic integration over nonlinear interpolation paths. Numerical results for the perturbative treatment of changes in mass and electric field strength in model systems are presented. We used thermodynamic integration in ab initio path integral molecular dynamics to compute the quantum free energy difference of the isotope transformation in the Zundel cation. The performance of different free energy methods is discussed.

  12. Estimating the mutual information of an EEG-based Brain-Computer Interface.

    PubMed

    Schlögl, A; Neuper, C; Pfurtscheller, G

    2002-01-01

    An EEG-based Brain-Computer Interface (BCI) could be used as an additional communication channel between human thoughts and the environment. The efficacy of such a BCI depends mainly on the transmitted information rate. Shannon's communication theory was used to quantify the information rate of BCI data. For this purpose, experimental EEG data from four BCI experiments were analyzed off-line. Subjects imagined left and right hand movements during EEG recording from the sensorimotor area. Adaptive autoregressive (AAR) parameters were used as features of single-trial EEG and classified with linear discriminant analysis. The intra-trial variation as well as the inter-trial variability, the signal-to-noise ratio, the entropy of information, and the information rate were estimated. The entropy difference was used as a measure of the separability of two classes of EEG patterns.
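
    The entropy-difference idea can be illustrated under a Gaussian assumption: if the classifier output for the two imagery classes is Gaussian with equal within-class variance, the entropy difference per trial reduces to I = 0.5*log2(1 + SNR). The data below are simulated, not the experimental EEG of the paper:

```python
import numpy as np

# Simulated one-dimensional classifier outputs (e.g., an LDA score) per trial
rng = np.random.default_rng(3)
left = rng.normal(-1.0, 1.0, 5000)   # imagined left-hand trials
right = rng.normal(+1.0, 1.0, 5000)  # imagined right-hand trials

x = np.concatenate([left, right])
noise_var = 0.5 * (np.var(left) + np.var(right))  # within-class variance
signal_var = np.var(x) - noise_var                # between-class variance
snr = signal_var / noise_var
mi_bits = 0.5 * np.log2(1.0 + snr)  # entropy difference, bits per trial
```

    Multiplying bits per trial by the trial rate gives the information rate of the interface; here the class means of +/-1 with unit variance correspond to an SNR of 1, i.e. about half a bit per trial.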

  13. Reachability Analysis Applied to Space Situational Awareness

    NASA Astrophysics Data System (ADS)

    Holzinger, M.; Scheeres, D.

    Several existing and emerging applications of Space Situational Awareness (SSA) relate directly to spacecraft Rendezvous, Proximity Operations, and Docking (RPOD) and Formation / Cluster Flight (FCF). When multiple Resident Space Objects (RSOs) are in the vicinity of one another with appreciable periods between observations, correlating new RSO tracks to previously known objects becomes a non-trivial problem. A particularly difficult sub-problem arises when long breaks in observations are coupled with continuous, low-thrust maneuvers. Reachability theory, directly related to optimal control theory, can compute contiguous reachability sets for known or estimated control authority and can support such RSO search and correlation efforts in both ground and on-board settings. Reachability analysis can also directly estimate the minimum control authority of a given RSO. For RPOD and FCF applications, emerging mission concepts such as fractionation drastically increase the system complexity of on-board autonomous fault management systems. Reachability theory, as applied to SSA in RPOD and FCF applications, can involve correlation of nearby RSO observations, control authority estimation, and sensor track re-acquisition. Additional uses of reachability analysis are formation reconfiguration, worst-case passive safety, and propulsion failure modes such as a "stuck" thruster. Existing reachability theory is applied to RPOD and FCF regimes. An optimal control policy is developed to maximize the reachability set and optimal control law discontinuities (switching) are examined. The Clohessy-Wiltshire linearized equations of motion are normalized to accentuate relative control authority for spacecraft propulsion systems at both Low Earth Orbit (LEO) and Geostationary Earth Orbit (GEO). Several examples with traditional and low thrust propulsion systems in LEO and GEO are explored to illustrate the effects of relative control authority on the time-varying reachability set surface.
Both monopropellant spacecraft at LEO and Hall thruster spacecraft at GEO are shown to be strongly actuated while Hall thruster spacecraft at LEO are found to be weakly actuated. Weaknesses with the current implementation are discussed and future numerical improvements and analytical efforts are discussed.

  14. Structure and superconductivity in the ternary silicide CaAlSi

    NASA Astrophysics Data System (ADS)

    Ma, Rong; Huang, Gui-Qin; Liu, Mei

    2007-06-01

    Using the linear response-linearized Muffin-tin orbital (LR-LMTO) method, we study the electronic band structure, phonon spectra, electron-phonon coupling and superconductivity for c-axis ferromagnetic-like (F-like) and antiferromagnetic-like (AF-like) structures in the ternary silicide CaAlSi. The following conclusions are drawn from our calculations. If Al and Si atoms are assumed to arrange along the c axis in an F-like long-range ordering (-Al-Al-Al- and -Si-Si-Si-), one obtains the ultrasoft B1g phonon mode and thus very strong electron-phonon coupling in CaAlSi. However, the appearance of imaginary-frequency phonon modes indicates the instability of such a structure. For Al and Si atoms arranging along the c axis in an AF-like long-range ordering (-Al-Si-Al-), the calculated electron-phonon coupling constant is equal to 0.8 and the logarithmically averaged frequency is 146.8 K. This calculated result correctly yields the superconducting transition temperature of CaAlSi by the standard BCS theory in the moderate electron-phonon coupling regime. We propose that an AF-like superlattice model for Al (or Si) atoms along the c direction may reconcile the inconsistency between theoretical and experimental estimates, and explain the anomalous superconductivity in CaAlSi.

  15. Practical methods for dealing with 'not applicable' item responses in the AMC Linear Disability Score project

    PubMed Central

    Holman, Rebecca; Glas, Cees AW; Lindeboom, Robert; Zwinderman, Aeilko H; de Haan, Rob J

    2004-01-01

    Background Whenever questionnaires are used to collect data on constructs, such as functional status or health related quality of life, it is unlikely that all respondents will respond to all items. This paper examines ways of dealing with responses in a 'not applicable' category to items included in the AMC Linear Disability Score (ALDS) project item bank. Methods The data examined in this paper come from the responses of 392 respondents to 32 items and form part of the calibration sample for the ALDS item bank. The data are analysed using the one-parameter logistic item response theory model. The four practical strategies for dealing with this type of response are: cold deck imputation; hot deck imputation; treating the missing responses as if these items had never been offered to those individual patients; and using a model which takes account of the 'tendency to respond to items'. Results The item and respondent population parameter estimates were very similar for the strategies involving hot deck imputation; treating the missing responses as if these items had never been offered to those individual patients; and using a model which takes account of the 'tendency to respond to items'. The estimates obtained using the cold deck imputation method were substantially different. Conclusions The cold deck imputation method was not considered suitable for use in the ALDS item bank. The other three methods described can be usefully implemented in the ALDS item bank, depending on the purpose of the data analysis to be carried out. These three methods may be useful for other data sets examining similar constructs, when item response theory based methods are used. PMID:15200681

  16. Resource Theory of Superposition

    NASA Astrophysics Data System (ADS)

    Theurer, T.; Killoran, N.; Egloff, D.; Plenio, M. B.

    2017-12-01

    The superposition principle lies at the heart of many nonclassical properties of quantum mechanics. Motivated by this, we introduce a rigorous resource theory framework for the quantification of superposition of a finite number of linearly independent states. This theory is a generalization of resource theories of coherence. We determine the general structure of operations which do not create superposition, find a fundamental connection to unambiguous state discrimination, and propose several quantitative superposition measures. Using this theory, we show that trace decreasing operations can be completed for free which, when specialized to the theory of coherence, resolves an outstanding open question and is used to address the free probabilistic transformation between pure states. Finally, we prove that linearly independent superposition is a necessary and sufficient condition for the faithful creation of entanglement in discrete settings, establishing a strong structural connection between our theory of superposition and entanglement theory.

  17. Linear network representation of multistate models of transport.

    PubMed Central

    Sandblom, J; Ring, A; Eisenman, G

    1982-01-01

    By introducing external driving forces in rate-theory models of transport we show how the Eyring rate equations can be transformed into Ohm's law with potentials that obey Kirchhoff's second law. With such a formalism the state diagram of a multioccupancy, multicomponent system can be directly converted into a linear network with resistors connecting nodal (branch) points and with capacitances connecting each nodal point to a reference point. The external forces appear as emf or current generators in the network. This theory allows the algebraic methods of linear network theory to be used in solving the flux equations for multistate models and is particularly useful for making proper simplifying approximations in models of complex membrane structure. Some general properties of the linear network representation are also deduced. It is shown, for instance, that Maxwell's reciprocity relationships for linear networks lead directly to Onsager's relationships in the near-equilibrium region. Finally, as an example of the procedure, the equivalent circuit method is used to solve the equations for a few transport models. PMID:7093425
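
    The nodal-analysis machinery the abstract invokes is standard linear network theory: Kirchhoff's current law at each non-reference node yields a linear system G v = i in the node potentials. As a generic illustration (the conductance values are arbitrary, not taken from the paper):

```python
import numpy as np

# Three-node resistive network: node 0 is the reference (ground).
# Conductances between node pairs (arbitrary illustrative values):
g01, g12, g02 = 1.0, 2.0, 0.5

# Nodal-analysis "stamp" for the unknowns v1, v2:
#   diagonal = sum of conductances incident on the node,
#   off-diagonal = -(conductance between the two nodes).
G = np.array([[g01 + g12, -g12],
              [-g12, g12 + g02]])
i = np.array([1.0, 0.0])   # 1 A injected at node 1
v = np.linalg.solve(G, i)  # node potentials relative to ground
```

    In the transport setting the resistors encode transition rates between states of the diagram and the injected currents encode the external driving forces; solving the same linear system gives the steady-state fluxes.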

  18. On some problems in a theory of thermally and mechanically interacting continuous media. Ph.D. Thesis; [linearized theory of interacting mixture of elastic solid and viscous fluid]

    NASA Technical Reports Server (NTRS)

    Lee, Y. M.

    1971-01-01

    Using a linearized theory of thermally and mechanically interacting mixture of linear elastic solid and viscous fluid, we derive a fundamental relation in an integral form called a reciprocity relation. This reciprocity relation relates the solution of one initial-boundary value problem with a given set of initial and boundary data to the solution of a second initial-boundary value problem corresponding to a different initial and boundary data for a given interacting mixture. From this general integral relation, reciprocity relations are derived for a heat-conducting linear elastic solid, and for a heat-conducting viscous fluid. An initial-boundary value problem is posed and solved for the mixture of linear elastic solid and viscous fluid. With the aid of the Laplace transform and the contour integration, a real integral representation for the displacement of the solid constituent is obtained as one of the principal results of the analysis.

  19. Estimating the remaining useful life of bearings using a neuro-local linear estimator-based method.

    PubMed

    Ahmad, Wasim; Ali Khan, Sheraz; Kim, Jong-Myon

    2017-05-01

    Estimating the remaining useful life (RUL) of a bearing is required for maintenance scheduling. While the degradation behavior of a bearing changes during its lifetime, it is usually assumed to follow a single model. In this letter, bearing degradation is modeled by a monotonically increasing function that is globally non-linear and locally linearized. The model is generated using historical data that is smoothed with a local linear estimator. A neural network learns this model and then predicts future levels of vibration acceleration to estimate the RUL of a bearing. The proposed method yields reasonably accurate estimates of the RUL of a bearing at different points during its operational life.
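
    A local linear estimator of the kind used to smooth the degradation history fits a weighted straight line around each query point. In the sketch below the degradation data, bandwidth, and failure threshold are invented for illustration, and a naive extrapolation of the terminal slope stands in for the paper's neural-network predictor:

```python
import numpy as np

def local_linear(x, y, x0, h):
    """Local linear estimate of E[y | x = x0] with a Gaussian kernel, bandwidth h."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)            # kernel weights
    X = np.column_stack([np.ones_like(x), x - x0])    # local linear design
    XtW = X.T * w
    beta = np.linalg.solve(XtW @ X, XtW @ y)          # weighted least squares
    return beta[0]                                    # intercept = fit at x0

# Invented degradation history: vibration amplitude rising monotonically with run time
t = np.linspace(0.0, 100.0, 200)
rng = np.random.default_rng(2)
vib = 0.1 + 0.002 * t ** 1.5 + rng.normal(0.0, 0.05, t.size)

smooth = np.array([local_linear(t, vib, ti, h=5.0) for ti in t])

# Naive RUL estimate: extrapolate the terminal slope of the smoothed curve
# to a failure threshold (a stand-in for the paper's neural-network step)
slope = (smooth[-1] - smooth[-10]) / (t[-1] - t[-10])
threshold = 5.0
rul = (threshold - smooth[-1]) / slope
```

    The smoothing matters because the raw vibration signal is noisy: predicting from the smoothed, locally linearized trend is far more stable than extrapolating raw measurements.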

  20. Estimating linear temporal trends from aggregated environmental monitoring data

    USGS Publications Warehouse

    Erickson, Richard A.; Gray, Brian R.; Eager, Eric A.

    2017-01-01

    Trend estimates are often used as part of environmental monitoring programs. These trends inform managers (e.g., are desired species increasing or undesired species decreasing?). Data collected from environmental monitoring programs are often aggregated (i.e., averaged), which confounds sampling and process variation. State-space models allow sampling variation and process variation to be separated. We used simulated time-series to compare linear trend estimates from three state-space models, a simple linear regression model, and an auto-regressive model. We also compared the performance of these five models in estimating trends from a long-term monitoring program. Specifically, we estimated trends for two species of fish and four species of aquatic vegetation from the Upper Mississippi River system. We found that simple linear regression performed best of the models considered because it was best able to recover parameters and converged consistently. Conversely, simple linear regression did the worst job of estimating populations in a given year. The state-space models did not estimate trends well, but estimated population sizes best when they converged. Overall, we found that a simple linear regression performed better than the more complex autoregressive and state-space models when used to analyze aggregated environmental monitoring data.
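
    For the best-performing model in the comparison (simple linear regression on the aggregated series), the trend and its uncertainty take only a few lines. The yearly data below are simulated purely for illustration, not the Upper Mississippi monitoring data:

```python
import numpy as np

# Hypothetical aggregated series: one mean abundance value per year
years = np.arange(2000, 2015)
rng = np.random.default_rng(1)
y = 10.0 + 0.4 * (years - 2000) + rng.normal(0.0, 1.0, years.size)

# Simple linear regression y = b0 + b1*t (t centered so slope and intercept decouple)
t = years - years.mean()
X = np.column_stack([np.ones_like(t), t])
b, *_ = np.linalg.lstsq(X, y, rcond=None)

resid = y - X @ b
sigma2 = resid @ resid / (y.size - 2)          # residual variance
se_slope = np.sqrt(sigma2 / np.sum(t ** 2))    # standard error of the trend b[1]
ci = (b[1] - 1.96 * se_slope, b[1] + 1.96 * se_slope)
```

    Note the caveat the abstract raises: because each yearly value is an average, the residual variance mixes sampling and process variation, so this interval should be read as an approximation rather than a decomposition of the two sources.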

  1. Linear-response time-dependent density-functional theory with pairing fields.

    PubMed

    Peng, Degao; van Aggelen, Helen; Yang, Yang; Yang, Weitao

    2014-05-14

    Recent development in particle-particle random phase approximation (pp-RPA) broadens the perspective on ground state correlation energies [H. van Aggelen, Y. Yang, and W. Yang, Phys. Rev. A 88, 030501 (2013), Y. Yang, H. van Aggelen, S. N. Steinmann, D. Peng, and W. Yang, J. Chem. Phys. 139, 174110 (2013); D. Peng, S. N. Steinmann, H. van Aggelen, and W. Yang, J. Chem. Phys. 139, 104112 (2013)] and N ± 2 excitation energies [Y. Yang, H. van Aggelen, and W. Yang, J. Chem. Phys. 139, 224105 (2013)]. So far Hartree-Fock and approximated density-functional orbitals have been utilized to evaluate the pp-RPA equation. In this paper, to further explore the fundamentals and the potential use of pairing matrix dependent functionals, we present the linear-response time-dependent density-functional theory with pairing fields with both adiabatic and frequency-dependent kernels. This theory is related to the density-functional theory and time-dependent density-functional theory for superconductors, but is applied to normal non-superconducting systems for our purpose. Due to the lack of the proof of the one-to-one mapping between the pairing matrix and the pairing field for time-dependent systems, the linear-response theory is established based on the representability assumption of the pairing matrix. The linear response theory justifies the use of approximated density-functionals in the pp-RPA equation. This work sets the fundamentals for future density-functional development to enhance the description of ground state correlation energies and N ± 2 excitation energies.

  2. Estimating net joint torques from kinesiological data using optimal linear system theory.

    PubMed

    Runge, C F; Zajac, F E; Allum, J H; Risher, D W; Bryson, A E; Honegger, F

    1995-12-01

    Net joint torques (NJT) are frequently computed to provide insights into the motor control of dynamic biomechanical systems. An inverse dynamics approach is almost always used, whereby the NJT are computed from 1) kinematic measurements (e.g., position of the segments), 2) kinetic measurements (e.g., ground reaction forces) that are, in effect, constraints defining unmeasured kinematic quantities based on a dynamic segmental model, and 3) numerical differentiation of the measured kinematics to estimate velocities and accelerations that are, in effect, additional constraints. Due to errors in the measurements, the segmental model, and the differentiation process, estimated NJT rarely produce the observed movement in a forward simulation when the dynamics of the segmental system are inherently unstable (e.g., human walking). Forward dynamic simulations are, however, essential to studies of muscle coordination. We have developed an alternative approach, using the linear quadratic follower (LQF) algorithm, which computes the NJT such that a stable simulation of the observed movement is produced and the measurements are replicated as well as possible. The LQF algorithm does not employ constraints depending on explicit differentiation of the kinematic data, but rather employs those depending on specification of a cost function, based on quantitative assumptions about data confidence. We illustrate the usefulness of the LQF approach by using it to estimate NJT exerted by standing humans perturbed by support-surface movements. We show that unless the number of kinematic and force variables recorded is sufficiently high, the confidence that can be placed in the estimates of the NJT, obtained by any method (e.g., LQF, or the inverse dynamics approach), may be unsatisfactorily low.

  3. Linear models: permutation methods

    USGS Publications Warehouse

    Cade, B.S.; Everitt, B.S.; Howell, D.C.

    2005-01-01

Permutation tests (see Permutation Based Inference) for the linear model have applications in behavioral studies when traditional parametric assumptions about the error term in a linear model are not tenable. Improved validity of Type I error rates can be achieved with properly constructed permutation tests. Perhaps more importantly, increased statistical power, improved robustness to the effects of outliers, and detection of alternative distributional differences can be achieved by coupling permutation inference with alternative linear model estimators. For example, it is well known that estimates of the mean in a linear model are extremely sensitive to even a single outlying value of the dependent variable, compared to estimates of the median [7, 19]. Traditionally, linear modeling focused on estimating changes in the center of distributions (means or medians). Quantile regression, however, allows distributional changes to be estimated in all or any selected part of a distribution of responses, providing a more complete statistical picture that has relevance to many biological questions [6]...
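    A generic permutation test for a linear-model slope can be sketched as follows (an illustration of the idea, not the specific estimators of [6, 7, 19]): permuting the response breaks any association with the predictor, giving the null distribution of the test statistic without parametric assumptions on the errors.

```python
import numpy as np

def perm_test_slope(x, y, n_perm=2000, seed=0):
    """Two-sided permutation test for the OLS slope of y on x."""
    rng = np.random.default_rng(seed)
    observed = np.polyfit(x, y, 1)[0]
    perm_slopes = np.array([np.polyfit(x, rng.permutation(y), 1)[0]
                            for _ in range(n_perm)])
    # the +1 correction keeps the test valid (p can never be exactly 0)
    p = (np.sum(np.abs(perm_slopes) >= abs(observed)) + 1) / (n_perm + 1)
    return observed, p

rng = np.random.default_rng(1)
x = np.arange(40, dtype=float)
y_null = rng.normal(size=40)           # no relationship with x
y_alt = 0.3 * x + rng.normal(size=40)  # strong positive slope
_, p_null = perm_test_slope(x, y_null)
_, p_alt = perm_test_slope(x, y_alt)
```

    Swapping the OLS slope for a median-regression or quantile-regression statistic inside the same loop is what couples permutation inference with the alternative estimators discussed above.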

  4. Ocean mixing beneath Pine Island Glacier ice shelf, West Antarctica

    NASA Astrophysics Data System (ADS)

    Kimura, Satoshi; Jenkins, Adrian; Dutrieux, Pierre; Forryan, Alexander; Naveira Garabato, Alberto C.; Firing, Yvonne

    2016-12-01

Ice shelves around Antarctica are vulnerable to an increase in ocean-driven melting, with the melt rate depending on ocean temperature and the strength of flow inside the ice-shelf cavities. We present measurements of velocity, temperature, salinity, turbulent kinetic energy dissipation rate, and thermal variance dissipation rate beneath Pine Island Glacier ice shelf, West Antarctica. These measurements were obtained by CTD, ADCP, and turbulence sensors mounted on an Autonomous Underwater Vehicle (AUV). The highest turbulent kinetic energy dissipation rate is found near the grounding line. The thermal variance dissipation rate increases closer to the ice-shelf base, with a maximum value found ~0.5 m away from the ice. The measurements of turbulent kinetic energy dissipation rate near the ice are used to estimate basal melting of the ice shelf. The dissipation-rate-based melt rate estimate is sensitive to the stability correction parameter in the linear approximation of the universal function of Monin-Obukhov similarity theory for stratified boundary layers. We argue that our estimates of basal melting from dissipation rates are within the range of previous estimates of basal melting.

  5. Improving Forecasts Through Realistic Uncertainty Estimates: A Novel Data Driven Method for Model Uncertainty Quantification in Data Assimilation

    NASA Astrophysics Data System (ADS)

    Pathiraja, S. D.; Moradkhani, H.; Marshall, L. A.; Sharma, A.; Geenens, G.

    2016-12-01

    Effective combination of model simulations and observations through Data Assimilation (DA) depends heavily on uncertainty characterisation. Many traditional methods for quantifying model uncertainty in DA require some level of subjectivity (by way of tuning parameters or by assuming Gaussian statistics). Furthermore, the focus is typically on only estimating the first and second moments. We propose a data-driven methodology to estimate the full distributional form of model uncertainty, i.e. the transition density p(xt|xt-1). All sources of uncertainty associated with the model simulations are considered collectively, without needing to devise stochastic perturbations for individual components (such as model input, parameter and structural uncertainty). A training period is used to derive the distribution of errors in observed variables conditioned on hidden states. Errors in hidden states are estimated from the conditional distribution of observed variables using non-linear optimization. The theory behind the framework and case study applications are discussed in detail. Results demonstrate improved predictions and more realistic uncertainty bounds compared to a standard perturbation approach.

  6. Estimation and Partitioning of Heritability in Human Populations using Whole Genome Analysis Methods

    PubMed Central

    Vinkhuyzen, Anna AE; Wray, Naomi R; Yang, Jian; Goddard, Michael E; Visscher, Peter M

    2014-01-01

    Understanding genetic variation of complex traits in human populations has moved from the quantification of the resemblance between close relatives to the dissection of genetic variation into the contributions of individual genomic loci. But major questions remain unanswered: how much phenotypic variation is genetic, how much of the genetic variation is additive and what is the joint distribution of effect size and allele frequency at causal variants? We review and compare three whole-genome analysis methods that use mixed linear models (MLM) to estimate genetic variation, using the relationship between close or distant relatives based on pedigree or SNPs. We discuss theory, estimation procedures, bias and precision of each method and review recent advances in the dissection of additive genetic variation of complex traits in human populations that are based upon the application of MLM. Using genome wide data, SNPs account for far more of the genetic variation than the highly significant SNPs associated with a trait, but they do not account for all of the genetic variance estimated by pedigree based methods. We explain possible reasons for this ‘missing’ heritability. PMID:23988118

  7. Supernovae as probes of cosmic parameters: estimating the bias from under-dense lines of sight

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Busti, V.C.; Clarkson, C.; Holanda, R.F.L., E-mail: vinicius.busti@uct.ac.za, E-mail: holanda@uepb.edu.br, E-mail: chris.clarkson@uct.ac.za

    2013-11-01

Correctly interpreting observations of sources such as type Ia supernovae (SNe Ia) requires knowledge of the power spectrum of matter on AU scales, which is very hard to model accurately. Because under-dense regions account for much of the volume of the universe, light from a typical source probes a mean density significantly below the cosmic mean. The relative sparsity of sources implies that there could be a significant bias when inferring distances of SNe Ia, and consequently a bias in cosmological parameter estimation. While the weak lensing approximation should in principle give the correct prediction for this, linear perturbation theory predicts an effectively infinite variance in the convergence for ultra-narrow beams. We attempt to quantify the effect typically under-dense lines of sight might have on parameter estimation by considering three alternative methods for estimating distances, in addition to the usual weak lensing approximation. We find that in each case this not only increases the errors in the inferred density parameters, but also introduces a bias in the posterior value.

  8. Estimating the electron energy distribution during ionospheric modification from spectrographic airglow measurements

    NASA Astrophysics Data System (ADS)

    Hysell, D. L.; Varney, R. H.; Vlasov, M. N.; Nossa, E.; Watkins, B.; Pedersen, T.; Huba, J. D.

    2012-02-01

    The electron energy distribution during an F region ionospheric modification experiment at the HAARP facility near Gakona, Alaska, is inferred from spectrographic airglow emission data. Emission lines at 630.0, 557.7, and 844.6 nm are considered along with the absence of detectable emissions at 427.8 nm. Estimating the electron energy distribution function from the airglow data is a problem in classical linear inverse theory. We describe an augmented version of the method of Backus and Gilbert which we use to invert the data. The method optimizes the model resolution, the precision of the mapping between the actual electron energy distribution and its estimate. Here, the method has also been augmented so as to limit the model prediction error. Model estimates of the suprathermal electron energy distribution versus energy and altitude are incorporated in the inverse problem formulation as representer functions. Our methodology indicates a heater-induced electron energy distribution with a broad peak near 5 eV that decreases approximately exponentially by 30 dB between 5-50 eV.

  9. Optimal designs based on the maximum quasi-likelihood estimator

    PubMed Central

    Shen, Gang; Hyun, Seung Won; Wong, Weng Kee

    2016-01-01

    We use optimal design theory and construct locally optimal designs based on the maximum quasi-likelihood estimator (MqLE), which is derived under less stringent conditions than those required for the MLE method. We show that the proposed locally optimal designs are asymptotically as efficient as those based on the MLE when the error distribution is from an exponential family, and they perform just as well or better than optimal designs based on any other asymptotically linear unbiased estimators such as the least square estimator (LSE). In addition, we show current algorithms for finding optimal designs can be directly used to find optimal designs based on the MqLE. As an illustrative application, we construct a variety of locally optimal designs based on the MqLE for the 4-parameter logistic (4PL) model and study their robustness properties to misspecifications in the model using asymptotic relative efficiency. The results suggest that optimal designs based on the MqLE can be easily generated and they are quite robust to mis-specification in the probability distribution of the responses. PMID:28163359

  10. Statistical analysis of the calibration procedure for personnel radiation measurement instruments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bush, W.J.; Bengston, S.J.; Kalbeitzer, F.L.

    1980-11-01

Thermoluminescent analyzer (TLA) calibration procedures were used to estimate personnel radiation exposure levels at the Idaho National Engineering Laboratory (INEL). A statistical analysis is presented herein based on data collected over a six-month period in 1979 on four TLA's located in the Department of Energy (DOE) Radiological and Environmental Sciences Laboratory at the INEL. The data were collected according to the day-to-day procedure in effect at that time. Both gamma and beta radiation models are developed. Observed TLA readings of thermoluminescent dosimeters are correlated with known radiation levels. This correlation is then used to predict unknown radiation doses from future analyzer readings of personnel thermoluminescent dosimeters. The statistical techniques applied in this analysis include weighted linear regression, estimation of systematic and random error variances, prediction interval estimation using Scheffe's theory of calibration, the estimation of the ratio of the means of two normal bivariate distributed random variables and their corresponding confidence limits according to Kendall and Stuart, tests of normality, experimental design, a comparison between instruments, and quality control.
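    The weighted-linear-regression step of such a calibration can be sketched as follows (illustrative only; the INEL procedure, error models, and Scheffe-style prediction intervals are more involved). Fit reading = a + b * dose on standards with known doses, then invert the line to predict an unknown dose from a future reading:

```python
import numpy as np

def weighted_calibration(doses, readings, weights):
    """Weighted least-squares fit of the calibration line reading = a + b*dose;
    `weights` are inverse standard deviations of the readings (np.polyfit
    multiplies each residual by its weight before squaring)."""
    b, a = np.polyfit(doses, readings, 1, w=weights)
    return a, b

def predict_dose(reading, a, b):
    """Invert the calibration line to estimate an unknown dose."""
    return (reading - a) / b

# noiseless toy calibration: reading = 2 + 3 * dose
doses = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
readings = 2.0 + 3.0 * doses
a, b = weighted_calibration(doses, readings, np.ones_like(doses))
dose_hat = predict_dose(14.0, a, b)
```

    Down-weighting noisier standards (larger reading variance, smaller weight) is what distinguishes this from ordinary least squares.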

  11. Improving stochastic estimates with inference methods: calculating matrix diagonals.

    PubMed

    Selig, Marco; Oppermann, Niels; Ensslin, Torsten A

    2012-02-01

Estimating the diagonal entries of a matrix that is not directly accessible but only available as a linear operator in the form of a computer routine is a common necessity in many computational applications, especially in image reconstruction and statistical inference. Here, methods of statistical inference are used to improve the accuracy or reduce the computational cost of matrix probing methods for estimating matrix diagonals. In particular, the generalized Wiener filter methodology, as developed within information field theory, is shown to significantly improve estimates based on only a few sampling probes in cases where some form of continuity of the solution can be assumed. The strength, length scale, and precise functional form of the exploited autocorrelation function of the matrix diagonal are determined from the probes themselves. The developed algorithm is successfully applied to mock and real-world problems. These performance tests show that, in situations where a matrix diagonal has to be calculated from only a small number of computationally expensive probes, a speedup by a factor of 2 to 10 is possible with the proposed method. © 2012 American Physical Society
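    The raw probing estimate that such inference methods refine can be sketched with Rademacher probes (a standard stochastic diagonal estimator; the information-field-theory refinement itself is not reproduced here): with random ±1 vectors z, the entrywise mean of z * (A z) converges to diag(A) using only matrix-vector products.

```python
import numpy as np

def estimate_diagonal(apply_A, n, n_probes, seed=0):
    """Stochastic diagonal estimate from matrix-vector products only:
    with Rademacher probes z, the mean of z * (A z) is diag(A)."""
    rng = np.random.default_rng(seed)
    acc = np.zeros(n)
    for _ in range(n_probes):
        z = rng.choice([-1.0, 1.0], size=n)
        acc += z * apply_A(z)
    return acc / n_probes

# toy check against an explicit symmetric matrix
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 0.5],
              [0.0, 0.5, 2.0]])
diag_est = estimate_diagonal(lambda v: A @ v, n=3, n_probes=5000)
```

    The per-entry variance is set by the off-diagonal mass of the corresponding row, which is why exploiting smoothness of the diagonal, as the paper does, pays off when only a few probes are affordable.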

  12. Variable Selection for Support Vector Machines in Moderately High Dimensions

    PubMed Central

    Zhang, Xiang; Wu, Yichao; Wang, Lan; Li, Runze

    2015-01-01

    Summary The support vector machine (SVM) is a powerful binary classification tool with high accuracy and great flexibility. It has achieved great success, but its performance can be seriously impaired if many redundant covariates are included. Some efforts have been devoted to studying variable selection for SVMs, but asymptotic properties, such as variable selection consistency, are largely unknown when the number of predictors diverges to infinity. In this work, we establish a unified theory for a general class of nonconvex penalized SVMs. We first prove that in ultra-high dimensions, there exists one local minimizer to the objective function of nonconvex penalized SVMs possessing the desired oracle property. We further address the problem of nonunique local minimizers by showing that the local linear approximation algorithm is guaranteed to converge to the oracle estimator even in the ultra-high dimensional setting if an appropriate initial estimator is available. This condition on initial estimator is verified to be automatically valid as long as the dimensions are moderately high. Numerical examples provide supportive evidence. PMID:26778916

  13. Quantile Regression Models for Current Status Data

    PubMed Central

    Ou, Fang-Shu; Zeng, Donglin; Cai, Jianwen

    2016-01-01

    Current status data arise frequently in demography, epidemiology, and econometrics where the exact failure time cannot be determined but is only known to have occurred before or after a known observation time. We propose a quantile regression model to analyze current status data, because it does not require distributional assumptions and the coefficients can be interpreted as direct regression effects on the distribution of failure time in the original time scale. Our model assumes that the conditional quantile of failure time is a linear function of covariates. We assume conditional independence between the failure time and observation time. An M-estimator is developed for parameter estimation which is computed using the concave-convex procedure and its confidence intervals are constructed using a subsampling method. Asymptotic properties for the estimator are derived and proven using modern empirical process theory. The small sample performance of the proposed method is demonstrated via simulation studies. Finally, we apply the proposed method to analyze data from the Mayo Clinic Study of Aging. PMID:27994307
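    The linear conditional-quantile model underlying this approach can be illustrated with a simple check-loss fit on complete data (an iteratively reweighted least-squares approximation, not the paper's M-estimator for current status data):

```python
import numpy as np

def quantile_regression_irls(X, y, tau, n_iter=50, eps=1e-4):
    """Approximate linear quantile regression by iteratively reweighted
    least squares on the Koenker-Bassett check loss (a sketch; production
    code would use a linear-programming or interior-point solver)."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]   # OLS start
    for _ in range(n_iter):
        r = y - X @ beta
        w = np.where(r >= 0, tau, 1.0 - tau) / np.maximum(np.abs(r), eps)
        beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
    return beta

rng = np.random.default_rng(0)
n = 500
x = rng.uniform(0.0, 10.0, n)
y = 1.0 + 2.0 * x + rng.normal(0.0, 1.0, n)   # conditional median: 1 + 2x
X = np.column_stack([np.ones(n), x])
beta_med = quantile_regression_irls(X, y, tau=0.5)
```

    Changing tau traces out different parts of the conditional distribution, which is what makes the coefficients interpretable as direct regression effects on the failure-time distribution in the original time scale.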

  14. Estimation of images degraded by film-grain noise.

    PubMed

    Naderi, F; Sawchuk, A A

    1978-04-15

Film-grain noise describes the intrinsic noise produced by a photographic emulsion during the process of image recording and reproduction. In this paper we consider the restoration of images degraded by film-grain noise. First a detailed model for the overall photographic imaging system is presented. The model includes linear blurring effects and the signal-dependent effect of film-grain noise. The accuracy of this model is tested by simulating images according to it and comparing the results to images of similar targets that were actually recorded on film. The restoration of images degraded by film-grain noise is then considered in the context of estimation theory. A discrete Wiener filter is developed which explicitly allows for the signal dependence of the noise. The filter adaptively alters its characteristics based on the nonstationary first-order statistics of an image and is shown to have advantages over the conventional Wiener filter. Experimental results for the model and the adaptive estimation filter are presented.
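    The adaptive, locally varying shrinkage idea can be sketched with a Lee-style pixelwise Wiener filter (a simplification; the paper's filter models the signal-dependent film-grain noise explicitly): each pixel is pulled toward its local mean by a gain set by the local signal-to-noise ratio.

```python
import numpy as np

def adaptive_wiener(img, noise_var, k=3):
    """Locally adaptive (Lee-style) Wiener shrinkage over k-by-k windows."""
    pad = k // 2
    padded = np.pad(img, pad, mode="reflect")
    out = np.empty_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            win = padded[i:i + k, j:j + k]
            mu, var = win.mean(), win.var()
            # gain -> 0 in flat regions (all noise), -> 1 near strong edges
            gain = max(var - noise_var, 0.0) / var if var > 0 else 0.0
            out[i, j] = mu + gain * (img[i, j] - mu)
    return out

rng = np.random.default_rng(0)
img = 5.0 + rng.normal(0.0, 1.0, (32, 32))   # flat scene + unit-variance noise
den = adaptive_wiener(img, noise_var=1.0)
```

    On a flat noisy patch the local gain stays near zero and the filter averages aggressively; near edges the gain rises and detail is preserved, which is the advantage over a single global Wiener filter.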

  15. Lenses in the forest: cross correlation of the Lyman-alpha flux with cosmic microwave background lensing.

    PubMed

    Vallinotto, Alberto; Das, Sudeep; Spergel, David N; Viel, Matteo

    2009-08-28

    We present a theoretical estimate for a new observable: the cross correlation between the Lyman-alpha flux fluctuations in quasar spectra and the convergence of the cosmic microwave background as measured along the same line of sight. As a first step toward the assessment of its detectability, we estimate the signal-to-noise ratio using linear theory. Although the signal-to-noise is small for a single line of sight and peaks at somewhat smaller redshifts than those probed by the Lyman-alpha forest, we estimate a total signal-to-noise of 9 for cross correlating quasar spectra of SDSS-III with Planck and 20 for cross correlating with a future polarization based cosmic microwave background experiment. The detection of this effect would be a direct measure of the neutral hydrogen-matter cross correlation and could provide important information on the growth of structures at large scales in a redshift range which is still poorly probed.

  16. Treatment of systematic errors in land data assimilation systems

    NASA Astrophysics Data System (ADS)

    Crow, W. T.; Yilmaz, M.

    2012-12-01

    Data assimilation systems are generally designed to minimize the influence of random error on the estimation of system states. Yet, experience with land data assimilation systems has also revealed the presence of large systematic differences between model-derived and remotely-sensed estimates of land surface states. Such differences are commonly resolved prior to data assimilation through implementation of a pre-processing rescaling step whereby observations are scaled (or non-linearly transformed) to somehow "match" comparable predictions made by an assimilation model. While the rationale for removing systematic differences in means (i.e., bias) between models and observations is well-established, relatively little theoretical guidance is currently available to determine the appropriate treatment of higher-order moments during rescaling. This talk presents a simple analytical argument to define an optimal linear-rescaling strategy for observations prior to their assimilation into a land surface model. While a technique based on triple collocation theory is shown to replicate this optimal strategy, commonly-applied rescaling techniques (e.g., so called "least-squares regression" and "variance matching" approaches) are shown to represent only sub-optimal approximations to it. Since the triple collocation approach is likely infeasible in many real-world circumstances, general advice for deciding between various feasible (yet sub-optimal) rescaling approaches will be presented with an emphasis of the implications of this work for the case of directly assimilating satellite radiances. While the bulk of the analysis will deal with linear rescaling techniques, its extension to nonlinear cases will also be discussed.
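    The "variance matching" rescaling mentioned above can be sketched in a few lines (a generic illustration of the pre-processing step, not the optimal linear rescaling derived in the talk): observations are shifted and scaled so their first two sample moments match the model climatology.

```python
import numpy as np

def variance_match(obs, model):
    """Rescale observations so their sample mean and standard deviation
    match the model climatology (the common 'variance matching' step)."""
    return (obs - obs.mean()) * (model.std() / obs.std()) + model.mean()

rng = np.random.default_rng(0)
obs = rng.normal(5.0, 2.0, 1000)     # biased, over-dispersed "observations"
model = rng.normal(0.0, 1.0, 1000)   # "model" climatology
rescaled = variance_match(obs, model)
```

    As the abstract argues, matching moments this way is only a sub-optimal approximation to the optimal linear rescaling, which additionally depends on the relative error levels of model and observations (recoverable via triple collocation when a third independent estimate exists).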

  17. THE SUCCESSIVE LINEAR ESTIMATOR: A REVISIT. (R827114)

    EPA Science Inventory

    This paper examines the theoretical basis of the successive linear estimator (SLE) that has been developed for the inverse problem in subsurface hydrology. We show that the SLE algorithm is a non-linear iterative estimator to the inverse problem. The weights used in the SLE al...

  18. Direct perturbation theory for the dark soliton solution to the nonlinear Schroedinger equation with normal dispersion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu Jialu; Yang Chunnuan; Cai Hao

    2007-04-15

    After finding the basic solutions of the linearized nonlinear Schroedinger equation by the method of separation of variables, the perturbation theory for the dark soliton solution is constructed by linear Green's function theory. In application to the self-induced Raman scattering, the adiabatic corrections to the soliton's parameters are obtained and the remaining correction term is given as a pure integral with respect to the continuous spectral parameter.

  19. Adaptive Control of Linear Modal Systems Using Residual Mode Filters and a Simple Disturbance Estimator

    NASA Technical Reports Server (NTRS)

    Balas, Mark; Frost, Susan

    2012-01-01

Flexible structures containing a large number of modes can benefit from adaptive control techniques, which are well suited to applications with unknown modeling parameters and poorly known operating conditions. In this paper, we focus on a direct adaptive control approach that has been extended to handle adaptive rejection of persistent disturbances. We extend our adaptive control theory to accommodate troublesome modal subsystems of a plant that might inhibit the adaptive controller. In some cases the plant does not satisfy the requirements of Almost Strict Positive Realness; instead, there may be a modal subsystem that inhibits this property. We present new results for our adaptive control theory, modifying the adaptive controller with a Residual Mode Filter (RMF) to compensate for the troublesome modal subsystem, or Q modes. Here we present the theory for adaptive controllers modified by RMFs, with attention to the issue of disturbances propagating through the Q modes. We apply the theoretical results to a flexible structure example to illustrate the behavior with and without the residual mode filter.

  20. Laboratory testing the Anaconda.

    PubMed

    Chaplin, J R; Heller, V; Farley, F J M; Hearn, G E; Rainey, R C T

    2012-01-28

    Laboratory measurements of the performance of the Anaconda are presented, a wave energy converter comprising a submerged water-filled distensible tube aligned with the incident waves. Experiments were carried out at a scale of around 1:25 with a 250 mm diameter and 7 m long tube, constructed of rubber and fabric, terminating in a linear power take-off of adjustable impedance. The paper presents some basic theory that leads to predictions of distensibility and bulge wave speed in a pressurized compound rubber and fabric tube, including the effects of inelastic sectors in the circumference, longitudinal tension and the surrounding fluid. Results are shown to agree closely with measurements in still water. The theory is developed further to provide a model for the propagation of bulges and power conversion in the Anaconda. In the presence of external water waves, the theory identifies three distinct internal wave components and provides theoretical estimates of power capture. For the first time, these and other predictions of the behaviour of the Anaconda, a device unlike almost all other marine systems, are shown to be in remarkably close agreement with measurements.

  1. Dark energy, non-minimal couplings and the origin of cosmic magnetic fields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiménez, Jose Beltrán; Maroto, Antonio L., E-mail: jobeltra@fis.ucm.es, E-mail: maroto@fis.ucm.es

    2010-12-01

In this work we consider the most general electromagnetic theory in curved space-time leading to linear second-order differential equations, including non-minimal couplings to the space-time curvature. We assume the presence of a temporal electromagnetic background whose energy density plays the role of dark energy, as has been recently suggested. Imposing the consistency of the theory in the weak-field limit, we show that it reduces to standard electromagnetism in the presence of an effective electromagnetic current which is generated by the momentum density of the matter/energy distribution, even for neutral sources. This implies that in the presence of dark energy, the motion of large-scale structures generates magnetic fields. Estimates of the present amplitude of the generated seed fields for typical spiral galaxies could reach 10⁻⁹ G without any amplification. In the case of compact rotating objects, the theory predicts their magnetic moments to be related to their angular momenta in the way suggested by the so-called Schuster-Blackett conjecture.

  2. On the use of a physically-based baseflow timescale in land surface models.

    NASA Astrophysics Data System (ADS)

    Jost, A.; Schneider, A. C.; Oudin, L.; Ducharne, A.

    2017-12-01

Groundwater discharge is an important component of streamflow, and estimating its spatio-temporal variation in response to changes in recharge is of great value to water resource planning and essential for accurately modelling the large-scale water balance in land surface models (LSMs). A first-order representation of groundwater as a single linear storage element is frequently used in LSMs for the sake of simplicity, but it requires a suitable parametrization of the aquifer hydraulic behaviour in the form of the baseflow characteristic timescale (τ). Such a modelling approach can be hampered by the lack of calibration data available at global scale. Hydraulic groundwater theory provides an analytical framework for relating baseflow characteristics to catchment descriptors. In this study, we use the long-time solution of the linearized Boussinesq equation to estimate τ at global scale, as a function of groundwater flow length and aquifer hydraulic diffusivity. Our goal is to evaluate the use of this spatially variable and physically based τ in the ORCHIDEE land surface model in terms of simulated river discharges across large catchments. Aquifer transmissivity and drainable porosity stem from the GLHYMPS high-resolution datasets, whereas flow length is derived from an estimate of drainage density using the GRIN global river network. ORCHIDEE is run in offline mode and its results are compared to a reference simulation using an almost spatially constant, topography-dependent τ. We discuss the limits of our approach in terms of both the relevance and accuracy of global estimates of aquifer hydraulic properties and the extent to which the underlying assumptions of the analytical method are valid.
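    One common form of the long-time linearised-Boussinesq timescale can be sketched as follows, where the constant 4/π² comes from the slowest decay mode of a linear aquifer of flow length L drained at one end (the study's exact formulation and constants may differ):

```python
import math

def baseflow_timescale(flow_length, transmissivity, drainable_porosity):
    """tau = 4 L^2 f / (pi^2 T): e-folding time of the slowest mode of the
    linearised Boussinesq equation for an aquifer of flow length L,
    transmissivity T, and drainable porosity f, drained at one end.
    Equivalently tau = 4 L^2 / (pi^2 D) with hydraulic diffusivity D = T/f."""
    return (4.0 * flow_length ** 2 * drainable_porosity
            / (math.pi ** 2 * transmissivity))

# doubling the flow length quadruples the timescale
tau_1 = baseflow_timescale(1000.0, 1e-2, 0.05)   # L in m, T in m^2/s
tau_2 = baseflow_timescale(2000.0, 1e-2, 0.05)
```

    The quadratic dependence on flow length is why a drainage-density-derived L (as from the GRIN network) dominates the spatial variability of τ relative to the hydraulic properties.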

  3. Discrete-time neural network for fast solving large linear L1 estimation problems and its application to image restoration.

    PubMed

    Xia, Youshen; Sun, Changyin; Zheng, Wei Xing

    2012-05-01

There is growing interest in solving linear L1 estimation problems because of the sparsity of the solution and robustness against non-Gaussian noise. This paper proposes a discrete-time neural network which can solve large linear L1 estimation problems quickly. The proposed neural network has a fixed computational step length and is proved to be globally convergent to an optimal solution. It is then efficiently applied to image restoration. Numerical results show that the proposed neural network is not only efficient in solving the degenerate problems that result from nonunique solutions of linear L1 estimation problems, but also needs much less computational time than related algorithms for both linear L1 estimation and image restoration.

  4. Near real-time estimation of the seismic source parameters in a compressed domain

    NASA Astrophysics Data System (ADS)

    Rodriguez, Ismael A. Vera

Seismic events can be characterized by their origin time, location, and moment tensor. Fast estimation of these source parameters is important in areas of geophysics such as earthquake seismology and the monitoring of seismic activity produced by volcanoes, mining operations, and hydraulic injections in geothermal and oil and gas reservoirs. Most available monitoring systems estimate the source parameters sequentially: first determining origin time and location (e.g., epicentre, hypocentre, or centroid of the stress glut density), and then using this information to initialize the evaluation of the moment tensor. A more efficient estimation of the source parameters requires a concurrent evaluation of the three variables. The main objective of this thesis is to address the simultaneous estimation of origin time, location, and moment tensor of seismic events. The proposed method has the benefits of being 1) automatic, 2) continuous and, depending on the scale of application, 3) able to provide results in real time or near real time. The inversion algorithm is based on theoretical results from sparse representation theory and compressive sensing. The feasibility of implementation is assessed through the analysis of synthetic and real data examples. The numerical experiments focus on the microseismic monitoring of hydraulic fractures in oil and gas wells, but an example using real earthquake data is also presented for validation. The thesis is complemented by a resolvability analysis of the moment tensor, targeting monitoring geometries commonly employed in hydraulic fracturing in oil wells. Additionally, an application of sparse representation theory to the denoising of one-component and three-component microseismicity records is presented, along with an algorithm for improved automatic time picking using non-linear inversion constraints.

  5. Time-Dependent Thermal Transport Theory.

    PubMed

    Biele, Robert; D'Agosta, Roberto; Rubio, Angel

    2015-07-31

Understanding thermal transport in nanoscale systems presents important challenges to both theory and experiment. In particular, the concept of local temperature at the nanoscale appears difficult to justify. Here, we propose a theoretical approach in which we replace the temperature gradient with controllable external blackbody radiation. The theory recovers known physical results, for example, the linear relation between the thermal current and the temperature difference of two blackbodies. Furthermore, our theory is not limited to the linear regime: it goes beyond it, accounting for nonlinear effects and transient phenomena. Since the present theory is general and can be adapted to describe both electron and phonon dynamics, it provides a first step toward a unified formalism for investigating thermal and electronic transport.

  6. Diffusion by one wave and by many waves

    NASA Astrophysics Data System (ADS)

    Albert, J. M.

    2010-03-01

    Radiation belt electrons and chorus waves are an outstanding instance of the important role cyclotron resonant wave-particle interactions play in the magnetosphere. Chorus waves are particularly complex, often occurring with large amplitude, narrowband but drifting frequency and fine structure. Nevertheless, modeling their effect on radiation belt electrons with bounce-averaged broadband quasi-linear theory seems to yield reasonable results. It is known that coherent interactions with monochromatic waves can cause particle diffusion, as well as radically different phase bunching and phase trapping behavior. Here the two formulations of diffusion, while conceptually different, are shown to give identical diffusion coefficients, in the narrowband limit of quasi-linear theory. It is further shown that suitably averaging the monochromatic diffusion coefficients over frequency and wave normal angle parameters reproduces the full broadband quasi-linear results. This may account for the rather surprising success of quasi-linear theory in modeling radiation belt electrons undergoing diffusion by chorus waves.

  7. Structural Equation Models in a Redundancy Analysis Framework With Covariates.

    PubMed

    Lovaglio, Pietro Giorgio; Vittadini, Giorgio

    2014-01-01

    A recent method to specify and fit structural equation models in the redundancy analysis framework, based on so-called Extended Redundancy Analysis (ERA), has been proposed in the literature. In this approach, the relationships between the observed exogenous variables and the observed endogenous variables are moderated by the presence of unobservable composites, estimated as linear combinations of exogenous variables. However, in the presence of direct effects linking exogenous and endogenous variables, or of concomitant indicators, the composite scores are estimated by ignoring the specified direct effects. To fit structural equation models, we propose a new specification and estimation method, called Generalized Redundancy Analysis (GRA), allowing us to specify and fit a variety of relationships among composites, endogenous variables, and external covariates. The proposed methodology extends the ERA method with a more suitable specification and estimation algorithm, allowing for covariates that affect endogenous indicators indirectly through the composites and/or directly. To illustrate the advantages of GRA over ERA we present a small-sample simulation study. Moreover, we present an application aimed at estimating the impact of formal human capital on the initial earnings of graduates of an Italian university, utilizing a structural model consistent with well-established economic theory.

  8. A GPS Phase-Locked Loop Performance Metric Based on the Phase Discriminator Output

    PubMed Central

    Stevanovic, Stefan; Pervan, Boris

    2018-01-01

    We propose a novel GPS phase-lock loop (PLL) performance metric based on the standard deviation of tracking error (defined as the discriminator’s estimate of the true phase error), and explain its advantages over the popular phase jitter metric using theory, numerical simulation, and experimental results. We derive an augmented GPS phase-lock loop (PLL) linear model, which includes the effect of coherent averaging, to be used in conjunction with this proposed metric. The augmented linear model allows more accurate calculation of tracking error standard deviation in the presence of additive white Gaussian noise (AWGN) as compared to traditional linear models. The standard deviation of tracking error, with a threshold corresponding to half of the arctangent discriminator pull-in region, is shown to be a more reliable/robust measure of PLL performance under interference conditions than the phase jitter metric. In addition, the augmented linear model is shown to be valid up until this threshold, which facilitates efficient performance prediction, so that time-consuming direct simulations and costly experimental testing can be reserved for PLL designs that are much more likely to be successful. The effect of varying receiver reference oscillator quality on the tracking error metric is also considered. PMID:29351250

  9. Density-dependent host choice by disease vectors: epidemiological implications of the ideal free distribution.

    PubMed

    Basáñez, María-Gloria; Razali, Karina; Renz, Alfons; Kelly, David

    2007-03-01

    The proportion of vector blood meals taken on humans (the human blood index, h) appears as a squared term in classical expressions of the basic reproduction ratio (R0) for vector-borne infections. Consequently, R0 varies non-linearly with h. Estimates of h, however, constitute mere snapshots of a parameter that is predicted, from evolutionary theory, to vary with vector and host abundance. We test this prediction using a population dynamics model of river blindness, assuming that, before initiation of vector control or chemotherapy, recorded measures of vector density and human infection accurately represent endemic equilibrium. We obtain values of h that satisfy the condition that the effective reproduction ratio (Re) must equal 1 at equilibrium. Values of h thus obtained decrease with vector density, decrease with the vector:human ratio and make R0 respond non-linearly rather than increase linearly with vector density. We conclude that if vectors are less able to obtain human blood meals as their density increases, antivectorial measures may not lead to proportional reductions in R0 until very low vector levels are achieved. Density dependence in the contact rate of infectious diseases transmitted by insects may be an important non-linear process with implications for their epidemiology and control.
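
The squared dependence on h is easy to see in a Ross-Macdonald-style expression for R0. Below is a minimal numerical sketch in which every parameter value (vector:human ratio m, biting rate a, transmission probabilities b and c, mortality mu, recovery gamma) is a hypothetical placeholder, not an estimate from the paper:

```python
def r0(h, m=10.0, a=0.3, b=0.5, c=0.5, mu=0.1, gamma=0.05):
    """Illustrative Ross-Macdonald-style basic reproduction ratio.

    h enters squared because the human biting rate a*h appears once for
    acquiring and once for transmitting infection; every parameter value
    here is a hypothetical placeholder, not an estimate from the paper.
    """
    return (m * (a * h) ** 2 * b * c) / (mu * gamma)

ratio = r0(0.6) / r0(0.3)
print(f"R0(h=0.6) / R0(h=0.3) = {ratio:.1f}")   # doubling h quadruples R0
```

Because R0 scales with h squared, snapshot estimates of h propagate non-linearly into R0, which is the point the abstract makes about density-dependent host choice.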

  10. Testing higher-order Lagrangian perturbation theory against numerical simulation. 1: Pancake models

    NASA Technical Reports Server (NTRS)

    Buchert, T.; Melott, A. L.; Weiss, A. G.

    1993-01-01

    We present results showing an improvement of the accuracy of perturbation theory as applied to cosmological structure formation for a useful range of quasi-linear scales. The Lagrangian theory of gravitational instability of an Einstein-de Sitter dust cosmogony, investigated and solved up to the third order, is compared with numerical simulations. In this paper we study the dynamics of pancake models as a first step. In previous work the accuracy of several analytical approximations for the modeling of large-scale structure in the mildly non-linear regime was analyzed in the same way, allowing for direct comparison of the accuracy of various approximations. In particular, the Zel'dovich approximation (hereafter ZA), as a subclass of the first-order Lagrangian perturbation solutions, was found to provide an excellent approximation to the density field in the mildly non-linear regime (i.e. up to a linear r.m.s. density contrast of sigma approximately 2). The performance of ZA in hierarchical clustering models can be greatly improved by truncating the initial power spectrum (smoothing the initial data). We here explore whether this approximation can be further improved with higher-order corrections in the displacement mapping from homogeneity. We study a single pancake model (truncated power spectrum with power index n = -1) using cross-correlation statistics employed in previous work. We found that for all statistical methods used the higher-order corrections improve the results obtained for the first-order solution up to the stage when sigma (linear theory) is approximately 1. While this improvement can be seen for all spatial scales, later stages retain this feature only above a certain scale which increases with time. However, the third-order solution offers little improvement over the second-order one at any stage. The total breakdown of the perturbation approach is observed at the stage where sigma (linear theory) is approximately 2, which corresponds to the onset of hierarchical clustering. This success is found at a considerably higher non-linearity than is usual for perturbation theory. Whether a truncation of the initial power spectrum in hierarchical models retains this improvement will be analyzed in forthcoming work.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bose, Benjamin; Koyama, Kazuya, E-mail: benjamin.bose@port.ac.uk, E-mail: kazuya.koyama@port.ac.uk

    We develop a code to produce the power spectrum in redshift space based on standard perturbation theory (SPT) at 1-loop order. The code can be applied to a wide range of modified gravity and dark energy models using a recently proposed numerical method by A. Taruya to find the SPT kernels. This includes Horndeski's theory with a general potential, which accommodates both chameleon and Vainshtein screening mechanisms and provides a non-linear extension of the effective theory of dark energy up to the third order. Focus is on a recent non-linear model of the redshift space power spectrum which has been shown to model the anisotropy very well at relevant scales for the SPT framework, as well as capturing relevant non-linear effects typical of modified gravity theories. We provide consistency checks of the code against established results and elucidate its application within the light of upcoming high precision RSD data.

  12. On the interaction of small-scale linear waves with nonlinear solitary waves

    NASA Astrophysics Data System (ADS)

    Xu, Chengzhu; Stastna, Marek

    2017-04-01

    In the study of environmental and geophysical fluid flows, linear wave theory is well developed and its application has been considered for phenomena of various length and time scales. However, due to the nonlinear nature of fluid flows, in many cases results predicted by linear theory do not agree with observations. One such case is internal wave dynamics. While small-amplitude wave motion may be approximated by linear theory, large-amplitude waves tend to be solitary-like. In some cases, when the wave is highly nonlinear, even weakly nonlinear theories fail to predict the wave properties correctly. We study the interaction of small-scale linear waves with nonlinear solitary waves using highly accurate pseudospectral simulations that begin with a fully nonlinear solitary wave and a train of small-amplitude waves initialized from linear waves. The solitary wave then interacts with the linear waves through either an overtaking collision or a head-on collision. During the collision, there is a net energy transfer from the linear wave train to the solitary wave, resulting in an increase in the kinetic energy carried by the solitary wave and a phase shift of the solitary wave with respect to a freely propagating solitary wave. At the same time the linear waves are greatly reduced in amplitude. The percentage of energy transferred depends primarily on the wavelength of the linear waves. We found that after one full collision cycle, the longest waves may retain as much as 90% of the kinetic energy they had initially, while the shortest waves lose almost all of their initial energy. We also found that a head-on collision is more efficient in destroying the linear waves than an overtaking collision. On the other hand, the initial amplitude of the linear waves has very little impact on the percentage of energy that can be transferred to the solitary wave. Because of the nonlinearity of the solitary wave, these results provide some insight into wave-mean flow interaction in a fully nonlinear framework.

  13. State Estimation for Humanoid Robots

    DTIC Science & Technology

    2015-07-01

    [Fragment of the report's table of contents and acronym list] 2.2.1 Linear Inverted Pendulum Model; 2.2.2 Planar Five-link Model; … Linear Inverted Pendulum Model; LVDT: Linear Variable Differential Transformers; MEMS: Microelectromechanical Systems; MHE: Moving Horizon Estimator; QP…

  14. Direct perturbation theory for the dark soliton solution to the nonlinear Schrödinger equation with normal dispersion.

    PubMed

    Yu, Jia-Lu; Yang, Chun-Nuan; Cai, Hao; Huang, Nian-Ning

    2007-04-01

    After finding the basic solutions of the linearized nonlinear Schrödinger equation by the method of separation of variables, the perturbation theory for the dark soliton solution is constructed by linear Green's function theory. In application to the self-induced Raman scattering, the adiabatic corrections to the soliton's parameters are obtained and the remaining correction term is given as a pure integral with respect to the continuous spectral parameter.

  15. Unification Theory of Optimal Life Histories and Linear Demographic Models in Internal Stochasticity

    PubMed Central

    Oizumi, Ryo

    2014-01-01

    Life history of organisms is exposed to uncertainty generated by internal and external stochasticities. Internal stochasticity is generated by the randomness in each individual life history, such as randomness in food intake, genetic character and size growth rate, whereas external stochasticity is due to the environment. For instance, it is known that external stochasticity tends to affect population growth rate negatively. A recent theoretical study using a path-integral formulation in structured linear demographic models has shown that internal stochasticity can affect population growth rate positively or negatively. However, internal stochasticity has not been the main subject of research. Taking into account the effect of internal stochasticity on the population growth rate, the fittest organism has the optimal control of life history affected by the stochasticity in the habitat. The study of this control is known as the optimal life schedule problem. In order to analyze the optimal control under internal stochasticity, we need to make use of stochastic control theory in the optimal life schedule problem. There is, however, no theory unifying optimal life history and internal stochasticity. This study focuses on an extension of optimal life schedule problems to unify the control theory of internal stochasticity with linear demographic models. First, we show the relationship between general age-states linear demographic models and stochastic control theory via several mathematical formulations, such as path integral, integral equation, and transition matrix. Secondly, we apply our theory to a two-resource utilization model for two different breeding systems: semelparity and iteroparity. Finally, we show that the diversity of resources is important for species in one of the cases considered. Our study shows that this unification theory can address risk hedges of life history in general age-states linear demographic models. PMID:24945258

  16. Excited states with internally contracted multireference coupled-cluster linear response theory.

    PubMed

    Samanta, Pradipta Kumar; Mukherjee, Debashis; Hanauer, Matthias; Köhn, Andreas

    2014-04-07

    In this paper, the linear response (LR) theory for the variant of internally contracted multireference coupled cluster (ic-MRCC) theory described by Hanauer and Köhn [J. Chem. Phys. 134, 204211 (2011)] has been formulated and implemented for the computation of the excitation energies relative to a ground state of pronounced multireference character. We find that straightforward application of the linear-response formalism to the time-averaged ic-MRCC Lagrangian leads to unphysical second-order poles. However, the coupling matrix elements that cause this behavior are shown to be negligible whenever the internally contracted approximation as such is justified. Hence, for the numerical implementation of the method, we adopt a Tamm-Dancoff-type approximation and neglect these couplings. This approximation is also consistent with an equation-of-motion based derivation, which neglects these couplings right from the start. We have implemented the linear-response approach in the ic-MRCC singles-and-doubles framework and applied our method to calculate excitation energies for a number of molecules ranging from CH2 to p-benzyne and conjugated polyenes (up to octatetraene). The computed excitation energies are found to be very accurate, even for the notoriously difficult case of doubly excited states. The ic-MRCC-LR theory is also applicable to systems with open-shell ground-state wavefunctions and is by construction not biased towards a particular reference determinant. We have also compared the linear-response approach to the computation of energy differences by direct state-specific ic-MRCC calculations. We finally compare to Mk-MRCC-LR theory for which spurious roots have been reported [T.-C. Jagau and J. Gauss, J. Chem. Phys. 137, 044116 (2012)], being due to the use of sufficiency conditions to solve the Mk-MRCC equations. No such problem is present in ic-MRCC-LR theory.

  17. Analysis of Particle Image Velocimetry (PIV) Data for Acoustic Velocity Measurements

    NASA Technical Reports Server (NTRS)

    Blackshire, James L.

    1997-01-01

    Acoustic velocity measurements were taken using Particle Image Velocimetry (PIV) in a Normal Incidence Tube configuration at various frequency, phase, and amplitude levels. This report presents the results of the PIV analysis and data reduction portions of the test and details the processing that was done. Estimates of lower measurement sensitivity levels were determined based on the PIV image quality, correlation, and noise level parameters used in the test. A comparison of measurements with linear acoustic theory is presented. The onset of nonlinear, harmonic-frequency acoustic levels was also studied for various decibel and frequency levels ranging from 90 to 132 dB and 500 to 3000 Hz, respectively.

  18. Regioselective electrochemical reduction of 2,4-dichlorobiphenyl - Distinct standard reduction potentials for carbon-chlorine bonds using convolution potential sweep voltammetry

    NASA Astrophysics Data System (ADS)

    Muthukrishnan, A.; Sangaranarayanan, M. V.; Boyarskiy, V. P.; Boyarskaya, I. A.

    2010-04-01

    The reductive cleavage of carbon-chlorine bonds in 2,4-dichlorobiphenyl (PCB-7) is investigated using convolution potential sweep voltammetry and quantum chemical calculations. The potential dependence of the logarithmic rate constant is non-linear, which indicates the validity of the Marcus-Hush theory of a quadratic activation-driving force relationship. The ortho-chlorine of the 2,4-dichlorobiphenyl is reduced first, as inferred from the quantum chemical calculations and bulk electrolysis. The standard reduction potentials pertaining to the ortho-chlorine of 2,4-dichlorobiphenyl and to the para-chlorine of 4-chlorobiphenyl have been estimated.
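
The quadratic activation-driving force relationship invoked above can be sketched numerically via the Marcus expression ΔG‡ = (λ/4)(1 + ΔG0/λ)². The reorganization energy and prefactor below are illustrative assumptions, not values fitted to the PCB-7 measurements:

```python
import numpy as np

def ln_rate(dG0, lam=1.0, lnk0=0.0, kBT=0.0257):
    """Marcus quadratic activation-driving force relation (sketch).

    dG0 and the reorganization energy lam are in eV; all values are
    illustrative and not fitted to the PCB-7 data.
    """
    dG_act = (lam / 4.0) * (1.0 + dG0 / lam) ** 2
    return lnk0 - dG_act / kBT

# ln k versus driving force is curved (quadratic), not a straight line:
dG0 = np.array([-0.2, -0.1, 0.0])
y = ln_rate(dG0)
second_diff = y[2] - 2.0 * y[1] + y[0]   # non-zero for a quadratic
print(f"second difference of ln k: {second_diff:.3f}")
```

The non-zero second difference is the curvature signature that distinguishes the quadratic Marcus-Hush relation from a linear (Butler-Volmer-type) potential dependence of ln k.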

  19. A linear quadratic regulator approach to the stabilization of uncertain linear systems

    NASA Technical Reports Server (NTRS)

    Shieh, L. S.; Sunkel, J. W.; Wang, Y. J.

    1990-01-01

    This paper presents a linear quadratic regulator approach to the stabilization of uncertain linear systems. The uncertain systems under consideration are described by state equations with the presence of time-varying unknown-but-bounded uncertainty matrices. The method is based on linear quadratic regulator (LQR) theory and Liapunov stability theory. The robust stabilizing control law for a given uncertain system can be easily constructed from the symmetric positive-definite solution of the associated augmented Riccati equation. The proposed approach can be applied to matched and/or mismatched systems with uncertainty matrices in which only their matrix norms are bounded by some prescribed values and/or their entries are bounded by some prescribed constraint sets. Several numerical examples are presented to illustrate the results.
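
For the nominal (uncertainty-free) part of such a design, the standard LQR construction from the Riccati solution can be sketched as follows. The system matrices here are hypothetical, and the paper's augmented Riccati equation, which additionally absorbs the uncertainty bounds, is not reproduced:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical nominal 2-state system; the paper's augmented Riccati
# equation additionally accounts for the bounded uncertainty matrices.
A = np.array([[0.0, 1.0], [-2.0, -0.5]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)                  # state weighting
R = np.array([[1.0]])          # control weighting

P = solve_continuous_are(A, B, Q, R)   # symmetric positive-definite solution
K = np.linalg.solve(R, B.T @ P)        # feedback gain, u = -K x

closed_loop = A - B @ K
print("closed-loop eigenvalues:", np.linalg.eigvals(closed_loop))
```

The closed-loop eigenvalues all have negative real parts, reflecting the guaranteed stability that the paper extends to the uncertain case via Liapunov arguments.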

  20. Manipulator control by exact linearization

    NASA Technical Reports Server (NTRS)

    Kruetz, K.

    1987-01-01

    Comments on the application to rigid link manipulators of geometric control theory, resolved acceleration control, operational space control, and nonlinear decoupling theory are given, and the essential unity of these techniques for externally linearizing and decoupling end effector dynamics is discussed. Exploiting the fact that the mass matrix of a rigid link manipulator is positive definite, a consequence of rigid link manipulators belonging to the class of natural physical systems, it is shown that a necessary and sufficient condition for a locally externally linearizing and output decoupling feedback law to exist is that the end effector Jacobian matrix be nonsingular. Furthermore, this linearizing feedback is easy to produce.
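
The linearizing feedback itself is the familiar computed-torque construction: choosing u = M(q)v + c(q, q') cancels the non-linear dynamics exactly, leaving linear error dynamics driven by the outer-loop variable v. A toy 1-DOF sketch with hypothetical dynamics (the paper works in end-effector coordinates via the Jacobian, which is omitted in this joint-space illustration):

```python
import math

# Toy 1-DOF "manipulator": M(q) q'' + c(q, q') = u, with M(q) > 0
# (positive definiteness in the scalar case). Dynamics are hypothetical.
def M(q):
    return 2.0 + math.cos(q)          # always >= 1, hence invertible

def c(q, qd):
    return 0.5 * qd + 3.0 * math.sin(q)

kp, kd = 25.0, 10.0                   # critically damped error dynamics
q, qd, q_des = 0.0, 0.0, 1.0
dt = 1e-3
for _ in range(5000):                 # simulate 5 s
    v = kp * (q_des - q) - kd * qd    # linear outer-loop control
    u = M(q) * v + c(q, qd)           # exact linearizing feedback
    qdd = (u - c(q, qd)) / M(q)       # plant response: qdd == v exactly
    qd += dt * qdd                    # semi-implicit Euler step
    q += dt * qd
print(f"tracking error after 5 s: {abs(q_des - q):.2e}")
```

Because the feedback cancels M and c exactly, the tracking error obeys the linear equation e'' + kd e' + kp e = 0 and decays to essentially zero.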

  1. On the impact of relatedness on SNP association analysis.

    PubMed

    Gross, Arnd; Tönjes, Anke; Scholz, Markus

    2017-12-06

    When testing for SNP (single nucleotide polymorphism) associations in related individuals, observations are not independent. Simple linear regression assuming independent normally distributed residuals results in an increased type I error, and the power of the test is also affected in a more complicated manner. Inflation of type I error is often successfully corrected by genomic control. However, this reduces the power of the test when relatedness is of concern. In the present paper, we derive explicit formulae to investigate how heritability and strength of relatedness contribute to variance inflation of the effect estimate of the linear model. Further, we study the consequences of variance inflation on hypothesis testing and compare the results with those of genomic control correction. We apply the developed theory to the publicly available HapMap trio data (N=129), the Sorbs (a self-contained population with N=977 characterised by a cryptic relatedness structure) and synthetic family studies with different sample sizes (ranging from N=129 to N=999) and different degrees of relatedness. We derive explicit, easy-to-apply approximation formulae to estimate the impact of relatedness on the variance of the effect estimate of the linear regression model. Variance inflation increases with increasing heritability. Relatedness structure also impacts the degree of variance inflation, as shown for the example family structures. Variance inflation is smallest for the HapMap trios, followed by a synthetic family study corresponding to the trio data but with larger sample size than HapMap. The next strongest inflation is observed for the Sorbs, and finally for a synthetic family study with a more extreme relatedness structure but similar sample size to the Sorbs. Type I error increases rapidly with increasing inflation. However, for smaller significance levels, power increases with increasing inflation, while the opposite holds for larger significance levels. 
When genomic control is applied, type I error is preserved while power decreases rapidly with increasing variance inflation. Stronger relatedness as well as higher heritability result in increased variance of the effect estimate of simple linear regression analysis. While type I error rates are generally inflated, the behaviour of power is more complex, since power can be increased or reduced depending on relatedness and the heritability of the phenotype. Genomic control cannot be recommended to deal with inflation due to relatedness. Although it preserves type I error, the loss in power can be considerable. We provide a simple formula for estimating variance inflation given the relatedness structure and the heritability of a trait of interest. As a rule of thumb, variance inflation below 1.05 does not require correction, and simple linear regression analysis is still appropriate.
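
The mechanism behind variance inflation can be illustrated with a deliberately simplified simulation: sibling pairs share correlation in both the "genotype" and the residual, which inflates the variance of the null-model regression slope relative to independent data. The within-pair correlations below are hypothetical stand-ins for the kinship-based covariance used in the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
n_fam, reps = 200, 2000
rho_x, rho_e = 0.9, 0.9   # hypothetical within-pair correlations

def correlated_pairs(rho):
    """Return 2*n_fam values: sibling pairs with correlation rho."""
    a = rng.standard_normal(n_fam)
    b = rho * a + np.sqrt(1.0 - rho**2) * rng.standard_normal(n_fam)
    return np.concatenate([a, b])

def null_slope(x, e):
    """OLS slope when the phenotype is pure (possibly correlated) noise."""
    return np.polyfit(x, e, 1)[0]

x = correlated_pairs(rho_x)   # "genotype", shared within families
b_rel = [null_slope(x, correlated_pairs(rho_e)) for _ in range(reps)]
b_ind = [null_slope(x, rng.standard_normal(2 * n_fam)) for _ in range(reps)]

inflation = np.var(b_rel) / np.var(b_ind)
print(f"empirical variance inflation: {inflation:.2f}")  # roughly 1 + rho_x*rho_e
```

With both correlations at 0.9 the empirical inflation lands near 1.8, well above the rule-of-thumb 1.05 threshold mentioned above; weaker relatedness or lower heritability shrinks it toward 1.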

  2. A Comparison of Measurement Equivalence Methods Based on Confirmatory Factor Analysis and Item Response Theory.

    ERIC Educational Resources Information Center

    Flowers, Claudia P.; Raju, Nambury S.; Oshima, T. C.

    Current interest in the assessment of measurement equivalence emphasizes two methods of analysis: linear and nonlinear procedures. This study simulated data using the graded response model to examine the performance of linear (confirmatory factor analysis or CFA) and nonlinear (item-response-theory-based differential item function or IRT-Based…

  3. Constructive Processes in Linear Order Problems Revealed by Sentence Study Times

    ERIC Educational Resources Information Center

    Mynatt, Barbee T.; Smith, Kirk H.

    1977-01-01

    This research was a further test of the theory of constructive processes proposed by Foos, Smith, Sabol, and Mynatt (1976) to account for differences among presentation orders in the construction of linear orders. This theory is composed of different series of mental operations that must be performed when an order relationship is integrated with…

  4. The application of Green's theorem to the solution of boundary-value problems in linearized supersonic wing theory

    NASA Technical Reports Server (NTRS)

    Heaslet, Max A; Lomax, Harvard

    1950-01-01

    Following the introduction of the linearized partial differential equation for nonsteady three-dimensional compressible flow, general methods of solution are given for the two- and three-dimensional steady-state and two-dimensional unsteady-state equations. It is also pointed out that, in the absence of thickness effects, linear theory yields solutions consistent with the assumptions made when applied to lifting-surface problems for swept-back plan forms at sonic speeds. The solutions of the particular equations are determined in all cases by means of Green's theorem, and thus depend on the use of Green's equivalent layer of sources, sinks, and doublets. Improper integrals in the supersonic theory are treated by means of Hadamard's "finite part" technique.

  5. Non-linear Frequency Shifts, Mode Couplings, and Decay Instability of Plasma Waves

    NASA Astrophysics Data System (ADS)

    Affolter, Mathew; Anderegg, F.; Driscoll, C. F.; Valentini, F.

    2015-11-01

    We present experiments and theory for non-linear plasma wave decay to longer wavelengths, in both the oscillatory coupling and exponential decay regimes. The experiments are conducted on non-neutral plasmas in cylindrical Penning-Malmberg traps, where θ-symmetric standing plasma waves have near-acoustic dispersion ω(kz) ~ kz − αkz², discretized by kz = mz(π/Lp). Large-amplitude waves exhibit non-linear frequency shifts δf/f ~ A² and Fourier harmonic content, both of which increase as the plasma dispersion is reduced. Non-linear coupling rates are measured between large-amplitude mz = 2 waves and small-amplitude mz = 1 waves, which have a small detuning Δω = 2ω1 − ω2. At small excitation amplitudes, this detuning causes the mz = 1 mode amplitude to ``bounce'' at rate Δω, with amplitude excursions ΔA1 ~ δn2/n0 consistent with cold-fluid theory and Vlasov simulations. At larger excitation amplitudes, where the non-linear coupling exceeds the dispersion, phase-locked exponential growth of the mz = 1 mode is observed, in qualitative agreement with simple three-wave instability theory. However, significant variations are observed experimentally, and N-wave theory gives stunningly divergent predictions that depend sensitively on the dispersion-moderated harmonic content. Measurements on higher-temperature Langmuir waves and the unusual ``EAW'' (KEEN) waves are being conducted to investigate the effects of wave-particle kinetics on the non-linear coupling rates. Department of Energy Grants DE-SC0002451 and DE-SC0008693.

  6. On the design of classifiers for crop inventories

    NASA Technical Reports Server (NTRS)

    Heydorn, R. P.; Takacs, H. C.

    1986-01-01

    Crop proportion estimators that use classifications of satellite data to correct, in an additive way, a given estimate acquired from ground observations are discussed. A linear version of these estimators is optimal, in terms of minimum variance, when the regression of the ground observations onto the satellite observations is linear. When this regression is not linear, but the reverse regression (satellite observations onto ground observations) is linear, the estimator is suboptimal but still has certain appealing variance properties. In this paper expressions are derived for those regressions which relate the intercepts and slopes to conditional classification probabilities. These expressions are then used to discuss the question of classifier designs that can lead to low-variance crop proportion estimates. Variance expressions for these estimates in terms of classifier omission and commission errors are also derived.
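
The additive (regression-type) correction described above can be sketched in a few lines. The segment data and the full-scene classifier mean below are synthetic placeholders, and the linear-regression assumption matches the optimality condition stated in the abstract:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical training segments: ground-observed crop proportions y and
# classifier-derived proportions z from the satellite data (synthetic).
y = rng.uniform(0.2, 0.8, 50)
z = 0.9 * y + 0.05 + 0.02 * rng.standard_normal(50)

# Additive correction: adjust the ground-segment mean by the regression
# of y on z, applied to the difference between the classifier's
# full-scene mean and its mean over the ground segments.
slope = np.polyfit(z, y, 1)[0]
z_scene_mean = 0.55            # hypothetical classifier mean over the scene
y_hat = y.mean() + slope * (z_scene_mean - z.mean())
print(f"corrected crop proportion estimate: {y_hat:.3f}")
```

The classifier output never replaces the ground estimate; it only supplies an additive adjustment, which is what keeps the variance properties discussed above tractable.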

  7. Cost Estimation of Naval Ship Acquisition.

    DTIC Science & Technology

    1983-12-01

    …one a 9-subsystem model, the other a single total cost model. The models were developed using the linear least squares regression technique with… to Linear Statistical Models, McGraw-Hill, 1961. 11. Helmer, F. T., Bibliography on Pricing Methodology and Cost Estimating, Dept. of Economics and… Keywords: Cost estimation; Acquisition; Parametric cost estimate; linear

  8. Long-term variations of the upper atmosphere parameters on Rome ionosonde observations and their interpretation

    NASA Astrophysics Data System (ADS)

    Perrone, Loredana; Mikhailov, Andrey; Cesaroni, Claudio; Alfonsi, Lucilla; Santis, Angelo De; Pezzopane, Michael; Scotto, Carlo

    2017-09-01

    A recently proposed self-consistent approach to the analysis of thermospheric and ionospheric long-term trends has been applied to Rome ionosonde summer noontime observations for the (1957-2015) period. This approach includes: (i) a method to extract ionospheric parameter long-term variations; (ii) a method to retrieve, from observed foF1, the neutral composition (O, O2, N2), the exospheric temperature Tex, and the total solar EUV flux with λ < 1050 Å; and (iii) a combined analysis of the ionospheric and thermospheric parameter long-term variations using the theory of ionospheric F-layer formation. Atomic oxygen [O] and the [O]/[N2] ratio control foF1 and foF2, while the neutral temperature Tex controls hmF2 long-term variations. Noontime foF2 and foF1 long-term variations demonstrate a negative linear trend estimated over the (1962-2010) period, which is mainly due to the atomic oxygen decrease after ˜1990. A linear trend in (δhmF2)11y estimated over the (1962-2010) period is very small and insignificant, reflecting the absence of any significant trend in neutral temperature. The retrieved long-term variations of neutral gas density ρ, atomic oxygen [O] and exospheric temperature Tex are controlled by solar and geomagnetic activity, i.e. they have a natural origin. The residual trends estimated over the period of ˜5 solar cycles (1957-2015) are very small (<0.5% per decade) and statistically insignificant.

  9. Modelling hydrological extremes under non-stationary conditions using climate covariates

    NASA Astrophysics Data System (ADS)

    Vasiliades, Lampros; Galiatsatou, Panagiota; Loukas, Athanasios

    2013-04-01

    Extreme value theory is a probabilistic theory that can interpret the future probabilities of occurrence of extreme events (e.g. extreme precipitation and streamflow) using past observed records. Traditionally, extreme value theory requires the assumption of temporal stationarity, which implies that the historical patterns of recurrence of extreme events are static over time. However, the hydroclimatic system is nonstationary on time scales relevant to extreme value analysis, due to human-mediated and natural environmental change. In this study the generalized extreme value (GEV) distribution is used to assess nonstationarity in annual maximum daily rainfall and streamflow time series at selected meteorological and hydrometric stations in Greece and Cyprus. The GEV distribution parameters (location, scale, and shape) are specified as functions of time-varying covariates and estimated using the conditional density network (CDN) proposed by Cannon (2010), a probabilistic extension of the multilayer perceptron neural network. Model parameters are estimated via the generalized maximum likelihood (GML) approach using the quasi-Newton BFGS optimization algorithm, and the appropriate GEV-CDN model architecture for each station is selected by fitting increasingly complicated models and choosing the one that minimizes the Akaike information criterion with small-sample-size correction. For all case studies in Greece and Cyprus, different formulations are tested, combining stationary and nonstationary GEV parameters, linear and non-linear CDN architectures, and different input climatic covariates. Climatic indices are used to express the GEV parameters as functions of the covariates: the Southern Oscillation Index (SOI), which describes atmospheric circulation in the eastern tropical Pacific related to the El Niño Southern Oscillation (ENSO); the Pacific Decadal Oscillation (PDO) index, which varies on an interdecadal rather than interannual time scale; and the North Atlantic Oscillation (NAO) index, which expresses atmospheric circulation patterns. Results show that the nonstationary GEV model can be an efficient tool for taking into account the dependencies between extreme value random variables and the temporal evolution of the climate.
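
    The core idea of letting a GEV parameter depend on a covariate can be illustrated without the CDN machinery. Below is a minimal maximum-likelihood sketch in Python (not the authors' GEV-CDN code) in which the location parameter is a linear function of a synthetic climate index; the data, index, and all parameter values are invented for illustration. Note that scipy's genextreme parametrization uses the shape convention c = -ξ.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import genextreme

rng = np.random.default_rng(0)
n = 80
year = np.arange(n)
index = np.sin(2 * np.pi * year / 11.0)       # synthetic climate covariate

# Synthetic annual maxima whose GEV location drifts with the covariate.
true_mu = 30.0 + 5.0 * index
x = genextreme.rvs(c=-0.1, loc=true_mu, scale=4.0, random_state=rng)

def neg_log_lik(params):
    mu0, mu1, log_sigma, xi = params
    mu = mu0 + mu1 * index                    # nonstationary location mu(t)
    return -genextreme.logpdf(x, c=-xi, loc=mu, scale=np.exp(log_sigma)).sum()

res = minimize(neg_log_lik,
               x0=[x.mean(), 0.0, np.log(x.std()), 0.1],
               method="Nelder-Mead", options={"maxiter": 5000})
mu0_hat, mu1_hat, log_sigma_hat, xi_hat = res.x
```

    In the paper's actual framework the parameters come from a neural network and model selection uses the corrected Akaike criterion; the linear-in-covariate form above corresponds to the simplest (linear) architecture.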

  10. Linear-scaling method for calculating nuclear magnetic resonance chemical shifts using gauge-including atomic orbitals within Hartree-Fock and density-functional theory.

    PubMed

    Kussmann, Jörg; Ochsenfeld, Christian

    2007-08-07

    Details of a new density matrix-based formulation for calculating nuclear magnetic resonance chemical shifts at both Hartree-Fock and density functional theory levels are presented. For systems with a nonvanishing highest occupied molecular orbital-lowest unoccupied molecular orbital gap, the method allows us to reduce the asymptotic scaling order of the computational effort from cubic to linear, so that molecular systems with 1000 and more atoms can be tackled with today's computers. The key feature is a reformulation of the coupled-perturbed self-consistent field (CPSCF) theory in terms of the one-particle density matrix (D-CPSCF), which avoids entirely the use of canonical MOs. By means of a direct solution for the required perturbed density matrices and the adaptation of linear-scaling integral contraction schemes, the overall scaling of the computational effort is reduced to linear. A particular focus of our formulation is to ensure numerical stability when sparse-algebra routines are used to obtain an overall linear-scaling behavior.

  11. Linear {GLP}-algebras and their elementary theories

    NASA Astrophysics Data System (ADS)

    Pakhomov, F. N.

    2016-12-01

    The polymodal provability logic {GLP} was introduced by Japaridze in 1986. It is the provability logic of certain chains of provability predicates of increasing strength. Every polymodal logic corresponds to a variety of polymodal algebras. Beklemishev and Visser asked whether the elementary theory of the free {GLP}-algebra generated by the constants \mathbf{0}, \mathbf{1} is decidable [1]. For every positive integer n we solve the corresponding question for the logics {GLP}_n that are the fragments of {GLP} with n modalities. We prove that the elementary theory of the free {GLP}_n-algebra generated by the constants \mathbf{0}, \mathbf{1} is decidable for all n. We introduce the notion of a linear {GLP}_n-algebra and prove that all free {GLP}_n-algebras generated by the constants \mathbf{0}, \mathbf{1} are linear. We also consider the more general case of the logics {GLP}_α whose modalities are indexed by the elements of a linearly ordered set α: we define the notion of a linear algebra and prove the latter result in this case.

  12. Linearized-moment analysis of the temperature jump and temperature defect in the Knudsen layer of a rarefied gas.

    PubMed

    Gu, Xiao-Jun; Emerson, David R

    2014-06-01

    Understanding the thermal behavior of a rarefied gas remains a fundamental problem. In the present study, we investigate the predictive capabilities of the regularized 13 and 26 moment equations. In this paper, we consider low-speed problems with small gradients, and to simplify the analysis, a linearized set of moment equations is derived to explore a classic temperature problem. Analytical solutions obtained for the linearized 26 moment equations are compared with available kinetic models and can reliably capture all qualitative trends for the temperature-jump coefficient and the associated temperature defect in the thermal Knudsen layer. In contrast, the linearized 13 moment equations lack the necessary physics to capture these effects and consistently underpredict kinetic theory. The deviation from kinetic theory for the 13 moment equations increases significantly for specular reflection of gas molecules, whereas the 26 moment equations compare well with results from kinetic theory. To improve engineering analyses, expressions for the effective thermal conductivity and Prandtl number in the Knudsen layer are derived with the linearized 26 moment equations.

  13. An ultrasound-guided fluorescence tomography system: design and specification

    NASA Astrophysics Data System (ADS)

    D'Souza, Alisha V.; Flynn, Brendan P.; Kanick, Stephen C.; Torosean, Sason; Davis, Scott C.; Maytin, Edward V.; Hasan, Tayyaba; Pogue, Brian W.

    2013-03-01

    An ultrasound-guided fluorescence molecular tomography system is under development for in vivo quantification of Protoporphyrin IX (PpIX) during Aminolevulinic Acid - Photodynamic Therapy (ALA-PDT) of Basal Cell Carcinoma. The system is designed to combine fiber-based spectral sampling of PpIX fluorescence emission with co-registered ultrasound images to quantify local fluorophore concentration. A single white light source is used to provide an estimate of the bulk optical properties of tissue. Optical data are obtained by sequential illumination with a 633 nm laser source at 4 linear locations, with parallel detection at 5 locations interspersed between the sources. Tissue regions from segmented ultrasound images, optical boundary data, white-light-informed optical properties and diffusion theory are used to estimate the fluorophore concentration in these regions. Our system and methods allow interrogation of both superficial and deep tissue locations at PpIX concentrations up to 0.025 µg/ml.

  14. A joint tracking method for NSCC based on WLS algorithm

    NASA Astrophysics Data System (ADS)

    Luo, Ruidan; Xu, Ying; Yuan, Hong

    2017-12-01

    Navigation signal based on compound carrier (NSCC) has a flexible multi-carrier scheme and various configurable scheme parameters, which enable significant navigation augmentation efficiency in terms of spectral efficiency, tracking accuracy, multipath mitigation and anti-jamming capability compared with legacy navigation signals. Meanwhile, the typical scheme characteristics can provide auxiliary information for signal synchronization algorithm design. Based on the characteristics of NSCC, this paper proposes a joint tracking method utilizing the Weighted Least Squares (WLS) algorithm. In this method, the WLS algorithm is employed to jointly estimate each sub-carrier frequency shift, exploiting the linear frequency-Doppler relationship and the known sub-carrier frequencies. The weighting matrix is set adaptively according to the sub-carrier powers to ensure estimation accuracy. Both theoretical analysis and simulation results illustrate that the tracking accuracy and sensitivity of this method outperform those of the single-carrier algorithm at low SNR.
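
    The frequency-Doppler relation mentioned above is linear in the line-of-sight velocity: each sub-carrier's Doppler shift is its carrier frequency scaled by v/c. The following is a minimal weighted-least-squares sketch of that joint estimate; all sub-carrier frequencies, powers, and noise levels are hypothetical and not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
c = 299_792_458.0                              # speed of light, m/s

# Hypothetical sub-carrier frequencies (Hz) and relative powers.
f_sub = np.array([1561.098e6, 1575.42e6, 1589.742e6, 1602.0e6])
power = np.array([1.0, 2.0, 2.0, 1.0])

v_true = 800.0                                 # line-of-sight velocity, m/s
# Measured Doppler shift per sub-carrier; noisier where power is lower.
doppler = f_sub * v_true / c + rng.normal(0.0, 0.05 / np.sqrt(power))

# WLS estimate: the design matrix encodes the linear frequency-Doppler
# relation, and the weights follow the sub-carrier powers.
H = (f_sub / c)[:, None]
W = np.diag(power)
v_hat = float(np.linalg.solve(H.T @ W @ H, H.T @ W @ doppler))
```

    Weighting by power down-weights the noisier sub-carriers, which is the mechanism the abstract credits for the accuracy gain over single-carrier tracking.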

  15. Tidal capture of stars by a massive black hole

    NASA Technical Reports Server (NTRS)

    Novikov, I. D.; Pethick, C. J.; Polnarev, A. G.

    1992-01-01

    The processes leading to tidal capture of stars by a massive black hole and the consequences of these processes in a dense stellar cluster are discussed in detail. When the amplitude of a tide and the subsequent oscillations are sufficiently large, the energy deposited in a star after periastron passage and formation of a bound orbit cannot be estimated directly using the linear theory of oscillations of a spherical star, but rather numerical estimates must be used. The evolution of a star after tidal capture is discussed. The maximum ratio R of the cross-section for tidal capture to that for tidal disruption is about 3 for real systems. For the case of a stellar system with an empty capture loss cone, even in the case when the impact parameter for tidal capture only slightly exceeds the impact parameter for direct tidal disruption, tidal capture would be much more important than tidal disruption.

  16. Inverse obstacle problem for the scalar Helmholtz equation

    NASA Astrophysics Data System (ADS)

    Crosta, Giovanni F.

    1994-07-01

    The method presented is aimed at identifying the shape of an axially symmetric, sound-soft acoustic scatterer from knowledge of the incident plane wave and of the scattering amplitude. The method relies on the approximate back propagation (ABP) of the estimated far-field coefficients to the obstacle boundary and iteratively minimizes a boundary defect, without the addition of any penalty term. The ABP operator owes its structure to the properties of complete families of linearly independent solutions of the Helmholtz equation. If the obstacle is known, as is the case in simulations, the theory also provides independent means of predicting the performance of the ABP method. The ABP algorithm and the related computer code are outlined. Several reconstruction examples are considered, where noise is added to the estimated far-field coefficients and other errors are deliberately introduced in the data. Many numerical and graphical results are provided.

  17. Accelerated Path-following Iterative Shrinkage Thresholding Algorithm with Application to Semiparametric Graph Estimation

    PubMed Central

    Zhao, Tuo; Liu, Han

    2016-01-01

    We propose an accelerated path-following iterative shrinkage thresholding algorithm (APISTA) for solving high dimensional sparse nonconvex learning problems. The main difference between APISTA and the path-following iterative shrinkage thresholding algorithm (PISTA) is that APISTA exploits an additional coordinate descent subroutine to boost the computational performance. Such a modification, though simple, has profound impact: APISTA not only enjoys the same theoretical guarantee as that of PISTA, i.e., APISTA attains a linear rate of convergence to a unique sparse local optimum with good statistical properties, but also significantly outperforms PISTA in empirical benchmarks. As an application, we apply APISTA to solve a family of nonconvex optimization problems motivated by estimating sparse semiparametric graphical models. APISTA allows us to obtain new statistical recovery results which do not exist in the existing literature. Thorough numerical results are provided to back up our theory. PMID:28133430
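
    For intuition, the basic iterative shrinkage thresholding step that PISTA/APISTA build on can be sketched for an l1-penalized least-squares problem. This toy convex example (synthetic data, invented penalty weight) omits the path-following scheme, nonconvex penalties, and the coordinate-descent subroutine that distinguish APISTA:

```python
import numpy as np

rng = np.random.default_rng(5)
n, p = 100, 50
X = rng.normal(size=(n, p))
beta_true = np.zeros(p)
beta_true[:3] = [2.0, -1.5, 1.0]               # sparse ground truth
y = X @ beta_true + rng.normal(0.0, 0.1, n)

def soft_threshold(z, t):
    """Proximal operator of the l1 norm."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

lam = 5.0                                      # l1 penalty weight
L = np.linalg.norm(X, 2) ** 2                  # Lipschitz constant of the gradient

# ISTA: gradient step on the smooth loss, then soft thresholding.
beta = np.zeros(p)
for _ in range(1000):
    grad = X.T @ (X @ beta - y)
    beta = soft_threshold(beta - grad / L, lam / L)
```

    Each iteration costs two matrix-vector products, and the thresholding keeps the iterates sparse, which is what makes this family attractive for high-dimensional problems.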

  18. Smooth empirical Bayes estimation of observation error variances in linear systems

    NASA Technical Reports Server (NTRS)

    Martz, H. F., Jr.; Lian, M. W.

    1972-01-01

    A smooth empirical Bayes estimator was developed for estimating the unknown random scale component of each of a set of observation error variances. It is shown that the estimator possesses a smaller average squared error loss than other estimators for a discrete time linear system.

  19. Cosmic-ray streaming perpendicular to the mean magnetic field. II - The gyrophase distribution function

    NASA Technical Reports Server (NTRS)

    Forman, M. A.; Jokipii, J. R.

    1978-01-01

    The distribution function of cosmic rays streaming perpendicular to the mean magnetic field in a turbulent medium is reexamined. Urch's (1977) discovery that in quasi-linear theory, the flux is due to particles at 90 deg pitch angle is discussed and shown to be consistent with previous formulations of the theory. It is pointed out that this flux of particles at 90 deg cannot be arbitrarily set equal to zero, and hence the alternative theory which proceeds from this premise is dismissed. A further, basic inconsistency in Urch's transport equation is demonstrated, and the connection between quasi-linear theory and compound diffusion is discussed.

  20. Linear-scaling implementation of molecular response theory in self-consistent field electronic-structure theory.

    PubMed

    Coriani, Sonia; Høst, Stinne; Jansík, Branislav; Thøgersen, Lea; Olsen, Jeppe; Jørgensen, Poul; Reine, Simen; Pawłowski, Filip; Helgaker, Trygve; Sałek, Paweł

    2007-04-21

    A linear-scaling implementation of Hartree-Fock and Kohn-Sham self-consistent field theories for the calculation of frequency-dependent molecular response properties and excitation energies is presented, based on a nonredundant exponential parametrization of the one-electron density matrix in the atomic-orbital basis, avoiding the use of canonical orbitals. The response equations are solved iteratively, by an atomic-orbital subspace method equivalent to that of molecular-orbital theory. Important features of the subspace method are the use of paired trial vectors (to preserve the algebraic structure of the response equations), a nondiagonal preconditioner (for rapid convergence), and the generation of good initial guesses (for robust solution). As a result, the performance of the iterative method is the same as in canonical molecular-orbital theory, with five to ten iterations needed for convergence. As in traditional direct Hartree-Fock and Kohn-Sham theories, the calculations are dominated by the construction of the effective Fock/Kohn-Sham matrix, once in each iteration. Linear complexity is achieved by using sparse-matrix algebra, as illustrated in calculations of excitation energies and frequency-dependent polarizabilities of polyalanine peptides containing up to 1400 atoms.

  1. Optimal joint detection and estimation that maximizes ROC-type curves

    PubMed Central

    Wunderlich, Adam; Goossens, Bart; Abbey, Craig K.

    2017-01-01

    Combined detection-estimation tasks are frequently encountered in medical imaging. Optimal methods for joint detection and estimation are of interest because they provide upper bounds on observer performance, and can potentially be utilized for imaging system optimization, evaluation of observer efficiency, and development of image formation algorithms. We present a unified Bayesian framework for decision rules that maximize receiver operating characteristic (ROC)-type summary curves, including ROC, localization ROC (LROC), estimation ROC (EROC), free-response ROC (FROC), alternative free-response ROC (AFROC), and exponentially-transformed FROC (EFROC) curves, succinctly summarizing previous results. The approach relies on an interpretation of ROC-type summary curves as plots of an expected utility versus an expected disutility (or penalty) for signal-present decisions. We propose a general utility structure that is flexible enough to encompass many ROC variants and yet sufficiently constrained to allow derivation of a linear expected utility equation that is similar to that for simple binary detection. We illustrate our theory with an example comparing decision strategies for joint detection-estimation of a known signal with unknown amplitude. In addition, building on insights from our utility framework, we propose new ROC-type summary curves and associated optimal decision rules for joint detection-estimation tasks with an unknown, potentially-multiple, number of signals in each observation. PMID:27093544

  2. Optimal Joint Detection and Estimation That Maximizes ROC-Type Curves.

    PubMed

    Wunderlich, Adam; Goossens, Bart; Abbey, Craig K

    2016-09-01

    Combined detection-estimation tasks are frequently encountered in medical imaging. Optimal methods for joint detection and estimation are of interest because they provide upper bounds on observer performance, and can potentially be utilized for imaging system optimization, evaluation of observer efficiency, and development of image formation algorithms. We present a unified Bayesian framework for decision rules that maximize receiver operating characteristic (ROC)-type summary curves, including ROC, localization ROC (LROC), estimation ROC (EROC), free-response ROC (FROC), alternative free-response ROC (AFROC), and exponentially-transformed FROC (EFROC) curves, succinctly summarizing previous results. The approach relies on an interpretation of ROC-type summary curves as plots of an expected utility versus an expected disutility (or penalty) for signal-present decisions. We propose a general utility structure that is flexible enough to encompass many ROC variants and yet sufficiently constrained to allow derivation of a linear expected utility equation that is similar to that for simple binary detection. We illustrate our theory with an example comparing decision strategies for joint detection-estimation of a known signal with unknown amplitude. In addition, building on insights from our utility framework, we propose new ROC-type summary curves and associated optimal decision rules for joint detection-estimation tasks with an unknown, potentially-multiple, number of signals in each observation.

  3. Molecular Static Third-Order Polarizabilities of Carbon-Cage Fullerene and Their Correlation with Three Geometric Properties: Symmetry, Aromaticity, and Size

    NASA Technical Reports Server (NTRS)

    Moore, C. E.; Cardelino, B. H.; Frazier, D. O.; Niles, J.; Wang, X.-Q.

    1998-01-01

    The static third-order polarizabilities (gamma) of C60, C70, five isomers of C78 and two isomers of C84 were analyzed in terms of three properties, from a geometric point of view: symmetry, aromaticity and size. The polarizability values were based on the finite field approximation using a semiempirical Hamiltonian (AM1) and applied to molecular structures obtained from density functional theory calculations. Symmetry was characterized by the molecular group order. The selection of 6-member rings as aromatic was determined from an analysis of bond lengths. Maximum interatomic distance and surface area were the parameters considered with respect to size. Based on triple linear regression analysis, it was found that the static linear polarizability (alpha) and gamma in these molecules respond differently to geometrical properties: alpha depends almost exclusively on surface area while gamma is affected by a combination of number of aromatic rings, length and group order, in decreasing importance. In the case of alpha, valence electron contributions provide the same information as all-electron estimates. For gamma, the best correlation coefficients are obtained when all-electron estimates are used and when the dependent parameter is ln(gamma) instead of gamma.
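
    The triple linear regression described above is an ordinary least-squares fit of ln(gamma) on three geometric descriptors. Here is a self-contained sketch with synthetic data; the coefficients, descriptor ranges, and noise level are invented and are not the paper's values:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 40
# Hypothetical geometric descriptors: aromatic-ring count, maximum
# interatomic distance, and molecular group order.
rings = rng.integers(10, 30, n).astype(float)
length = rng.uniform(7.0, 12.0, n)
order = rng.integers(1, 120, n).astype(float)

# Synthetic ln(gamma) with decreasing sensitivity to the three descriptors.
ln_gamma = 0.10 * rings + 0.05 * length + 0.01 * order + rng.normal(0.0, 0.05, n)

# Triple linear regression: OLS of ln(gamma) on the three descriptors.
X = np.column_stack([np.ones(n), rings, length, order])
coef, *_ = np.linalg.lstsq(X, ln_gamma, rcond=None)
resid = ln_gamma - X @ coef
tss = (ln_gamma - ln_gamma.mean()) @ (ln_gamma - ln_gamma.mean())
r2 = 1.0 - (resid @ resid) / tss
```

    Regressing the logarithm rather than gamma itself, as the abstract notes, is what yields the better correlation coefficients.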

  4. Couple stress theory of curved rods. 2-D, high order, Timoshenko's and Euler-Bernoulli models

    NASA Astrophysics Data System (ADS)

    Zozulya, V. V.

    2017-01-01

    New models for plane curved rods based on linear couple stress theory of elasticity have been developed. 2-D theory is developed from general 2-D equations of linear couple stress elasticity using a special curvilinear system of coordinates related to the middle line of the rod, as well as special hypotheses based on assumptions that take into account the fact that the rod is thin. High order theory is based on the expansion of the equations of the theory of elasticity into Fourier series in terms of Legendre polynomials. First, stress and strain tensors, vectors of displacements and rotation along with body forces have been expanded into Fourier series in terms of Legendre polynomials with respect to a thickness coordinate. Thereby, all equations of elasticity including Hooke's law have been transformed to the corresponding equations for Fourier coefficients. Then, in the same way as in the theory of elasticity, a system of differential equations in terms of displacements and boundary conditions for Fourier coefficients has been obtained. Timoshenko's and Euler-Bernoulli theories are based on the classical hypothesis and the 2-D equations of linear couple stress theory of elasticity in a special curvilinear system. The obtained equations can be used to calculate stress-strain states and to model thin-walled structures at macro, micro and nano scales while taking into account couple stress and rotation effects.

  5. Chaos in World Politics: A Reflection

    NASA Astrophysics Data System (ADS)

    Ferreira, Manuel Alberto Martins; Filipe, José António Candeias Bonito; Coelho, Manuel F. P.; Pedro, Isabel C.

    Chaos theory results from natural scientists' findings in the area of non-linear dynamics. The importance of related models has increased in recent decades through the study of the temporal evolution of non-linear systems, and as a consequence chaos has become one of the most rapidly expanding research topics. Considering that relationships in non-linear systems are unstable, chaos theory aims to understand and explain the unpredictable aspects of nature and social life, the uncertainties, the nonlinearities, the disorder and confusion; scientifically it represents disordered connections, but basically it involves much more than that. The close relationship between change and time seems essential to understanding the basics of chaos theory. In fact, this theory plays a crucial role in the explanation of many phenomena. Its relevance has been well recognized for explaining social phenomena and has permitted new advances in the study of social systems. Chaos theory has also been applied in the context of politics. The goal of this chapter is to reflect on chaos theory - and dynamical systems such as the theories of complexity - in the interpretation of political issues, considering certain events in the political context as well as the macro-strategic ideas of state positioning on the international stage.

  6. A Lagrangian effective field theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vlah, Zvonimir; White, Martin; Aviles, Alejandro

    We have continued the development of Lagrangian, cosmological perturbation theory for the low-order correlators of the matter density field. We provide a new route to understanding how the effective field theory (EFT) of large-scale structure can be formulated in the Lagrangian framework and a new resummation scheme, comparing our results to earlier work and to a series of high-resolution N-body simulations in both Fourier and configuration space. The `new' terms arising from EFT serve to tame the dependence of perturbation theory on small-scale physics and improve agreement with simulations (though with an additional free parameter). We find that all of our models fare well on scales larger than about two to three times the non-linear scale, but fail as the non-linear scale is approached. This is slightly less reach than has been seen previously. At low redshift the Lagrangian model fares as well as EFT in its Eulerian formulation, but at higher z the Eulerian EFT fits the data to smaller scales than resummed, Lagrangian EFT. Furthermore, all the perturbative models fare better than linear theory.

  7. A Lagrangian effective field theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vlah, Zvonimir; White, Martin; Aviles, Alejandro, E-mail: zvlah@stanford.edu, E-mail: mwhite@berkeley.edu, E-mail: aviles@berkeley.edu

    We have continued the development of Lagrangian, cosmological perturbation theory for the low-order correlators of the matter density field. We provide a new route to understanding how the effective field theory (EFT) of large-scale structure can be formulated in the Lagrangian framework and a new resummation scheme, comparing our results to earlier work and to a series of high-resolution N-body simulations in both Fourier and configuration space. The 'new' terms arising from EFT serve to tame the dependence of perturbation theory on small-scale physics and improve agreement with simulations (though with an additional free parameter). We find that all of our models fare well on scales larger than about two to three times the non-linear scale, but fail as the non-linear scale is approached. This is slightly less reach than has been seen previously. At low redshift the Lagrangian model fares as well as EFT in its Eulerian formulation, but at higher z the Eulerian EFT fits the data to smaller scales than resummed, Lagrangian EFT. All the perturbative models fare better than linear theory.

  8. A Lagrangian effective field theory

    DOE PAGES

    Vlah, Zvonimir; White, Martin; Aviles, Alejandro

    2015-09-02

    We have continued the development of Lagrangian, cosmological perturbation theory for the low-order correlators of the matter density field. We provide a new route to understanding how the effective field theory (EFT) of large-scale structure can be formulated in the Lagrangian framework and a new resummation scheme, comparing our results to earlier work and to a series of high-resolution N-body simulations in both Fourier and configuration space. The `new' terms arising from EFT serve to tame the dependence of perturbation theory on small-scale physics and improve agreement with simulations (though with an additional free parameter). We find that all of our models fare well on scales larger than about two to three times the non-linear scale, but fail as the non-linear scale is approached. This is slightly less reach than has been seen previously. At low redshift the Lagrangian model fares as well as EFT in its Eulerian formulation, but at higher z the Eulerian EFT fits the data to smaller scales than resummed, Lagrangian EFT. Furthermore, all the perturbative models fare better than linear theory.

  9. Bounded Linear Stability Analysis - A Time Delay Margin Estimation Approach for Adaptive Control

    NASA Technical Reports Server (NTRS)

    Nguyen, Nhan T.; Ishihara, Abraham K.; Krishnakumar, Kalmanje Srinlvas; Bakhtiari-Nejad, Maryam

    2009-01-01

    This paper presents a method for estimating time delay margin for model-reference adaptive control of systems with almost linear structured uncertainty. The bounded linear stability analysis method seeks to represent the conventional model-reference adaptive law by a locally bounded linear approximation within a small time window using the comparison lemma. The locally bounded linear approximation of the combined adaptive system is cast in a form of an input-time-delay differential equation over a small time window. The time delay margin of this system represents a local stability measure and is computed analytically by a matrix measure method, which provides a simple analytical technique for estimating an upper bound of time delay margin. Based on simulation results for a scalar model-reference adaptive control system, both the bounded linear stability method and the matrix measure method are seen to provide a reasonably accurate and yet not too conservative time delay margin estimation.
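
    The matrix measure (logarithmic norm) underlying the margin estimate is simple to compute: for the 2-norm it is the largest eigenvalue of the symmetric part of the matrix. The sketch below applies it to a hypothetical delayed linear system and checks a classical delay-independent sufficient condition, mu2(A) + ||B||2 < 0; this is a standard textbook criterion, not the paper's specific delay-margin formula, and the matrices are invented.

```python
import numpy as np

def matrix_measure_2(A):
    """Matrix measure (logarithmic norm) induced by the 2-norm:
    the largest eigenvalue of the symmetric part (A + A^T)/2."""
    return float(np.linalg.eigvalsh((A + A.T) / 2.0).max())

# Hypothetical delayed system  x'(t) = A x(t) + B x(t - tau).
A = np.array([[-3.0, 1.0],
              [0.0, -4.0]])
B = np.array([[0.5, 0.0],
              [0.2, 0.5]])

mu = matrix_measure_2(A)
b_norm = float(np.linalg.norm(B, 2))

# Classical delay-independent sufficient condition: if
# mu_2(A) + ||B||_2 < 0, the system is stable for every delay tau >= 0.
delay_independent_stable = bool(mu + b_norm < 0.0)
```

    The appeal of matrix-measure bounds, as in the paper, is that they are analytical and cheap, at the cost of some conservatism.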

  10. Difference-based ridge-type estimator of parameters in restricted partial linear model with correlated errors.

    PubMed

    Wu, Jibo

    2016-01-01

    In this article, a generalized difference-based ridge estimator is proposed for the vector parameter in a partial linear model when the errors are dependent. It is supposed that some additional linear constraints may hold to the whole parameter space. Its mean-squared error matrix is compared with the generalized restricted difference-based estimator. Finally, the performance of the new estimator is explained by a simulation study and a numerical example.
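
    A minimal illustration of the difference-based idea: in a partial linear model y = X·beta + f(t) + error with smooth f, first-differencing observations ordered by t nearly cancels f, after which a ridge estimator can be applied to the differenced data. Everything below (data, ridge parameter) is synthetic, and the sketch ignores the paper's correlated-error and linear-restriction structure:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200
t = np.linspace(0.0, 1.0, n)
X = rng.normal(size=(n, 3))
beta_true = np.array([1.0, -2.0, 0.5])
f = np.sin(2 * np.pi * t)                    # smooth nonparametric component
y = X @ beta_true + f + rng.normal(0.0, 0.1, n)

# First-differencing adjacent (t-ordered) observations nearly cancels the
# smooth component f, leaving an approximately linear model in beta.
Dy = np.diff(y)
DX = np.diff(X, axis=0)

k = 0.1                                      # ridge parameter (hypothetical choice)
beta_ridge = np.linalg.solve(DX.T @ DX + k * np.eye(3), DX.T @ Dy)
```

    Differencing removes the nuisance function without estimating it, and the ridge term stabilizes the inversion when the differenced design is ill-conditioned.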

  11. Geometric Theory of Reduction of Nonlinear Control Systems

    NASA Astrophysics Data System (ADS)

    Elkin, V. I.

    2018-02-01

    The foundations of a differential geometric theory of nonlinear control systems are described on the basis of categorical concepts (isomorphism, factorization, restrictions) by analogy with classical mathematical theories (of linear spaces, groups, etc.).

  12. Development as a Complex Process of Change: Conception and Analysis of Projects, Programs and Policies

    ERIC Educational Resources Information Center

    Nordtveit, Bjorn Harald

    2010-01-01

    Development is often understood as a linear process of change towards Western modernity, a vision that is challenged by this paper, arguing that development efforts should rather be connected to the local stakeholders' sense of their own development. Further, the paper contends that Complexity Theory is more effective than a linear theory of…

  13. The generic world-sheet action of irrational conformal field theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Clubok, K.; Halpern, M.B.

    1995-05-01

    We review developments in the world-sheet action formulation of the generic irrational conformal field theory, including the non-linear and the linearized forms of the action. These systems form a large class of spin-two gauged WZW actions which exhibit exotic gravitational couplings. Integrating out the gravitational field, we also speculate on a connection with sigma models.

  14. Linear dispersion relation for the mirror instability in context of the gyrokinetic theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Porazik, Peter; Johnson, Jay R.

    2013-10-15

    The linear dispersion relation for the mirror instability is discussed in context of the gyrokinetic theory. The objective is to provide a coherent view of different kinetic approaches used to derive the dispersion relation. The method based on gyrocenter phase space transformations is adopted in order to display the origin and ordering of various terms.

  15. Incompressible boundary-layer stability analysis of LFC experimental data for sub-critical Mach numbers. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Berry, S. A.

    1986-01-01

    An incompressible boundary-layer stability analysis of Laminar Flow Control (LFC) experimental data was completed and the results are presented. This analysis was undertaken for three reasons: to study laminar boundary-layer stability on a modern swept LFC airfoil; to calculate incompressible design limits of linear stability theory as applied to a modern airfoil at high subsonic speeds; and to verify the use of linear stability theory as a design tool. The experimental data were taken from the slotted LFC experiment recently completed in the NASA Langley 8-Foot Transonic Pressure Tunnel. Linear stability theory was applied and the results were compared with transition data to arrive at correlated n-factors. Results of the analysis showed that for the configuration and cases studied, Tollmien-Schlichting (TS) amplification was the dominating disturbance influencing transition. For these cases, incompressible linear stability theory correlated with an n-factor for TS waves of approximately 10 at transition. The n-factor method correlated rather consistently to this value despite a number of non-ideal conditions which indicates the method is useful as a design tool for advanced laminar flow airfoils.

  16. Computation of Nonlinear Hydrodynamic Loads on Floating Wind Turbines Using Fluid-Impulse Theory: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kok Yan Chan, G.; Sclavounos, P. D.; Jonkman, J.

    2015-04-02

    A hydrodynamics computer module was developed for the evaluation of the linear and nonlinear loads on floating wind turbines using a new fluid-impulse formulation, for coupling with the FAST program. The recently developed formulation allows the computation of linear and nonlinear loads on floating bodies in the time domain and avoids the computationally intensive evaluation of temporal and nonlinear free-surface problems; efficient methods are derived for its computation. The body's instantaneous wetted surface is approximated by a panel mesh, and the discretization of the free surface is circumvented by using the Green function. The evaluation of the nonlinear loads is based on explicit expressions derived by fluid-impulse theory, which can be computed efficiently. Computations are presented of the linear and nonlinear loads on the MIT/NREL tension-leg platform. Comparisons were carried out with frequency-domain linear and second-order methods. Emphasis was placed on the modeling accuracy of the magnitude of nonlinear low- and high-frequency wave loads in a sea state. Although fluid-impulse theory is applied to floating wind turbines in this paper, the theory is applicable to other offshore platforms as well.

  17. Black hole spectroscopy: Systematic errors and ringdown energy estimates

    NASA Astrophysics Data System (ADS)

    Baibhav, Vishal; Berti, Emanuele; Cardoso, Vitor; Khanna, Gaurav

    2018-02-01

    The relaxation of a distorted black hole to its final state provides important tests of general relativity within the reach of current and upcoming gravitational wave facilities. In black hole perturbation theory, this phase consists of a simple linear superposition of exponentially damped sinusoids (the quasinormal modes) and of a power-law tail. How many quasinormal modes are necessary to describe waveforms with a prescribed precision? What error do we incur by only including quasinormal modes, and not tails? What other systematic effects are present in current state-of-the-art numerical waveforms? These issues, which are basic to testing fundamental physics with distorted black holes, have hardly been addressed in the literature. We use numerical relativity waveforms and accurate evolutions within black hole perturbation theory to provide some answers. We show that (i) a determination of the fundamental l =m =2 quasinormal frequencies and damping times to within 1% or better requires the inclusion of at least the first overtone, and preferably of the first two or three overtones; (ii) a determination of the black hole mass and spin with precision better than 1% requires the inclusion of at least two quasinormal modes for any given angular harmonic mode (ℓ , m ). We also improve on previous estimates and fits for the ringdown energy radiated in the various multipoles. These results are important to quantify theoretical (as opposed to instrumental) limits in parameter estimation accuracy and tests of general relativity allowed by ringdown measurements with high signal-to-noise ratio gravitational wave detectors.
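
    The ringdown model described above, a linear superposition of exponentially damped sinusoids, can be sketched as follows. The mode amplitudes, frequencies, and damping times are illustrative placeholders, not fitted quasinormal-mode values.

```python
import numpy as np

def ringdown(t, modes):
    """Superposition of exponentially damped sinusoids (quasinormal modes):
       h(t) = sum_n A_n * exp(-t/tau_n) * cos(omega_n * t + phi_n)."""
    h = np.zeros_like(t)
    for A, omega, tau, phi in modes:
        h += A * np.exp(-t / tau) * np.cos(omega * t + phi)
    return h

# Illustrative fundamental mode plus one faster-damped overtone,
# (amplitude, omega, tau, phase) in geometric units; placeholders, not fits.
modes = [(1.00, 0.55, 11.7, 0.0),
         (0.40, 0.54,  3.9, 0.7)]
t = np.linspace(0.0, 80.0, 801)
h = ringdown(t, modes)
```

    Fitting such a template to a numerical waveform, with and without the overtone terms, is the kind of comparison the abstract uses to quantify how many modes a given precision requires.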

  18. Regularization of Instantaneous Frequency Attribute Computations

    NASA Astrophysics Data System (ADS)

    Yedlin, M. J.; Margrave, G. F.; Van Vorst, D. G.; Ben Horin, Y.

    2014-12-01

    We compare two different methods of computation of a temporally local frequency: (1) a stabilized instantaneous frequency using the theory of the analytic signal; (2) a temporally variant centroid (or dominant) frequency estimated from a time-frequency decomposition. The first method derives from Taner et al. (1979) as modified by Fomel (2007) and utilizes the derivative of the instantaneous phase of the analytic signal. The second method computes the power centroid (Cohen, 1995) of the time-frequency spectrum, obtained using either the Gabor or Stockwell transform. Common to both methods is the necessity of division by a diagonal matrix, which requires appropriate regularization. We modify Fomel's (2007) method by explicitly penalizing the roughness of the estimate. Following Farquharson and Oldenburg (2004), we employ both the L-curve and GCV methods to obtain the smoothest model that fits the data in the L2 norm. Using synthetic data, quarry blasts, earthquakes and the DPRK tests, our results suggest that the optimal method depends on the data. One of the main applications for this work is the discrimination between blast events and earthquakes.
    References: Fomel, Sergey. "Local seismic attributes." Geophysics 72.3 (2007): A29-A33. Cohen, Leon. Time-Frequency Analysis: Theory and Applications. USA: Prentice Hall, 1995. Farquharson, Colin G., and Douglas W. Oldenburg. "A comparison of automatic techniques for estimating the regularization parameter in non-linear inverse problems." Geophysical Journal International 156.3 (2004): 411-425. Taner, M. Turhan, Fulton Koehler, and R. E. Sheriff. "Complex seismic trace analysis." Geophysics 44.6 (1979): 1041-1063.
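
    Both methods can be sketched with standard tools: the instantaneous frequency is the time derivative of the unwrapped phase of the analytic signal, and the centroid is a power-weighted mean frequency of a spectrogram. This minimal sketch on a clean test tone omits the regularized division and roughness penalty that the abstract identifies as essential for real data.

```python
import numpy as np
from scipy.signal import hilbert, spectrogram

fs = 1000.0
t = np.arange(0, 2.0, 1.0 / fs)
x = np.cos(2.0 * np.pi * 50.0 * t)        # clean 50 Hz test tone

# Method 1: instantaneous frequency from the analytic-signal phase derivative.
phase = np.unwrap(np.angle(hilbert(x)))
f_inst = np.gradient(phase, t) / (2.0 * np.pi)

# Method 2: time-variant power-centroid frequency of a spectrogram.
f, tt, Sxx = spectrogram(x, fs=fs, nperseg=256)
f_centroid = (f[:, None] * Sxx).sum(axis=0) / Sxx.sum(axis=0)
```

    On noisy field data the division by `Sxx.sum(axis=0)` (and its analytic-signal counterpart) becomes ill-conditioned wherever the local power is small, which is exactly where the regularization discussed above is needed.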

  19. Weibull Modulus Estimated by the Non-linear Least Squares Method: A Solution to Deviation Occurring in Traditional Weibull Estimation

    NASA Astrophysics Data System (ADS)

    Li, T.; Griffiths, W. D.; Chen, J.

    2017-11-01

    The Maximum Likelihood method and the Linear Least Squares (LLS) method have been widely used to estimate Weibull parameters for the reliability analysis of brittle and metallic materials. In the last 30 years, many researchers have focused on the bias of Weibull modulus estimation, and some improvements have been achieved, especially in the case of the LLS method. However, these methods fall short on a specific type of data, where the lower tail deviates dramatically from the well-known linear fit in a classic LLS Weibull analysis. This deviation is commonly found in the measured properties of materials, and previous applications of the LLS method to this kind of dataset produce an unreliable linear regression. The deviation was previously attributed to physical flaws (i.e., defects) contained in materials. However, this paper demonstrates that it can also be caused by the linear transformation of the Weibull function that occurs in the traditional LLS method. Accordingly, it may not be appropriate to carry out a Weibull analysis on the linearized Weibull function, and the Non-linear Least Squares method (Non-LS) is instead recommended for Weibull modulus estimation of casting properties.
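
    The contrast between the traditional linearized fit and the recommended non-linear fit can be sketched on synthetic strength data; the true modulus m = 5 and scale eta = 100 are illustrative, and the median-rank plotting positions are one common convention.

```python
import numpy as np
from scipy.optimize import curve_fit

def weibull_cdf(x, m, eta):
    """Two-parameter Weibull CDF; m is the Weibull modulus."""
    return 1.0 - np.exp(-(x / eta) ** m)

# Synthetic strength data from an assumed true modulus m = 5, scale eta = 100.
rng = np.random.default_rng(0)
x = np.sort(100.0 * rng.weibull(5.0, size=50))
F = (np.arange(1, 51) - 0.5) / 50.0        # median-rank style plotting positions

# Traditional linearized LLS fit: ln(-ln(1 - F)) = m*ln(x) - m*ln(eta).
m_lls, _ = np.polyfit(np.log(x), np.log(-np.log(1.0 - F)), 1)

# Non-linear least squares directly on the untransformed CDF.
(m_nls, eta_nls), _ = curve_fit(weibull_cdf, x, F, p0=(2.0, float(np.median(x))))
```

    The linearization stretches the lower tail on the ln(-ln(1-F)) axis, which is how it can manufacture the deviation discussed above; fitting the untransformed CDF avoids that distortion.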

  20. Collective effect of personal behavior induced preventive measures and differential rate of transmission on spread of epidemics

    NASA Astrophysics Data System (ADS)

    Sagar, Vikram; Zhao, Yi

    2017-02-01

    In the present work, the effect of personal behavior induced preventive measures is studied on the spread of epidemics over scale free networks that are characterized by the differential rate of disease transmission. The role of personal behavior induced preventive measures is parameterized in terms of variable λ, which modulates the number of concurrent contacts a node makes with the fraction of its neighboring nodes. The dynamics of the disease is described by a non-linear Susceptible Infected Susceptible model based upon the discrete time Markov Chain method. The network mean field approach is generalized to account for the effect of non-linear coupling between the aforementioned factors on the collective dynamics of nodes. The upper bound estimates of the disease outbreak threshold obtained from the mean field theory are found to be in good agreement with the corresponding non-linear stochastic model. From the results of parametric study, it is shown that the epidemic size has inverse dependence on the preventive measures (λ). It has also been shown that the increase in the average degree of the nodes lowers the time of spread and enhances the size of epidemics.
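
    The discrete-time Markov-chain mean-field iteration underlying such SIS models can be sketched as below. This is the standard formulation; the paper's λ contact-modulation and differential transmission rates are not modeled here, and the complete graph and rates are illustrative only.

```python
import numpy as np

def sis_mean_field(A, beta, mu, p0, steps=200):
    """Discrete-time Markov-chain mean-field SIS: p[i] is the probability
    that node i is infected.  Per step, each infected neighbour j infects i
    with probability beta, and an infected node recovers with probability mu."""
    p = np.array(p0, dtype=float)
    for _ in range(steps):
        q = np.prod(1.0 - beta * A * p[None, :], axis=1)  # P(i not infected)
        p = (1.0 - mu) * p + (1.0 - p) * (1.0 - q)
    return p

# Illustrative complete graph of 10 nodes (largest eigenvalue 9).
A = np.ones((10, 10)) - np.eye(10)
p_endemic = sis_mean_field(A, beta=0.20, mu=0.3, p0=np.full(10, 0.1))  # above threshold
p_sub     = sis_mean_field(A, beta=0.01, mu=0.5, p0=np.full(10, 0.1))  # below threshold
```

    Above the mean-field threshold the iteration converges to a positive endemic fixed point; below it, the infection probabilities decay to zero, which is the kind of outbreak-threshold behavior the abstract bounds analytically.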

  1. Incorporating nonlinearity into mediation analyses.

    PubMed

    Knafl, George J; Knafl, Kathleen A; Grey, Margaret; Dixon, Jane; Deatrick, Janet A; Gallo, Agatha M

    2017-03-21

    Mediation is an important issue considered in the behavioral, medical, and social sciences. It addresses situations where the effect of a predictor variable X on an outcome variable Y is explained to some extent by an intervening, mediator variable M. Methods for addressing mediation have been available for some time. While these methods continue to undergo refinement, the relationships underlying mediation are commonly treated as linear in the outcome Y, the predictor X, and the mediator M. These relationships, however, can be nonlinear. Methods are needed for assessing when mediation relationships can be treated as linear and for estimating them when they are nonlinear. Existing adaptive regression methods based on fractional polynomials are extended here to address nonlinearity in mediation relationships, but assuming those relationships are monotonic as would be consistent with theories about directionality of such relationships. Example monotonic mediation analyses are provided assessing linear and monotonic mediation of the effect of family functioning (X) on a child's adaptation (Y) to a chronic condition by the difficulty (M) for the family in managing the child's condition. Example moderated monotonic mediation and simulation analyses are also presented. Adaptive methods provide an effective way to incorporate possibly nonlinear monotonicity into mediation relationships.
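
    For the purely linear case that the adaptive method generalizes, mediation is commonly summarized by the product-of-coefficients decomposition: regress M on X (a-path), then Y on M and X (b and c' paths), with indirect effect a*b. A sketch on synthetic data with assumed true coefficients (this is the standard linear decomposition, not the paper's fractional-polynomial method):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
X = rng.normal(size=n)
M = 0.5 * X + rng.normal(scale=0.5, size=n)            # a-path, true a = 0.5
Y = 0.7 * M + 0.2 * X + rng.normal(scale=0.5, size=n)  # true b = 0.7, direct c' = 0.2

def ols_slopes(y, *cols):
    """Least-squares slopes (an intercept is fitted and discarded)."""
    Z = np.column_stack([np.ones_like(y)] + list(cols))
    return np.linalg.lstsq(Z, y, rcond=None)[0][1:]

(a,) = ols_slopes(M, X)
b, c_prime = ols_slopes(Y, M, X)
indirect = a * b          # mediated (indirect) effect of X on Y through M
```

    When the X-M or M-Y relationship is nonlinear, these constant slopes misstate the mediated effect, which is the situation the adaptive monotonic models above are designed to handle.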

  2. A new polytopic approach for the unknown input functional observer design

    NASA Astrophysics Data System (ADS)

    Bezzaoucha, Souad; Voos, Holger; Darouach, Mohamed

    2018-03-01

    In this paper, a constructive procedure to design Functional Unknown Input Observers for nonlinear continuous-time systems is proposed under the Polytopic Takagi-Sugeno framework. An equivalent representation for the nonlinear model is achieved using the sector nonlinearity transformation. Applying the Lyapunov theory and the ? attenuation, linear matrix inequality (LMI) conditions are deduced and solved for feasibility to obtain the observer design matrices. To cope with the effect of unknown inputs, the classical approach of decoupling the unknown input used in the linear case is applied. Both algebraic and solver-based solutions are proposed (relaxed conditions). Necessary and sufficient conditions for the existence of the functional polytopic observer are given. For both approaches, the general and particular cases (measurable premise variables, full state estimation with full- and reduced-order cases) are considered, and it is shown that the proposed conditions correspond to those presented for the standard linear case. To illustrate the proposed theoretical results, detailed numerical simulations are presented for a Quadrotor Aerial Robot Landing and a Waste Water Treatment Plant. Both systems are highly nonlinear and represented in a T-S polytopic form with unmeasurable premise variables and unknown inputs.

  3. High Resolution, Large Deformation 3D Traction Force Microscopy

    PubMed Central

    López-Fagundo, Cristina; Reichner, Jonathan; Hoffman-Kim, Diane; Franck, Christian

    2014-01-01

    Traction Force Microscopy (TFM) is a powerful approach for quantifying cell-material interactions that over the last two decades has contributed significantly to our understanding of cellular mechanosensing and mechanotransduction. In addition, recent advances in three-dimensional (3D) imaging and traction force analysis (3D TFM) have highlighted the significance of the third dimension in influencing various cellular processes. Yet irrespective of dimensionality, almost all TFM approaches have relied on a linear elastic theory framework to calculate cell surface tractions. Here we present a new high resolution 3D TFM algorithm which utilizes a large deformation formulation to quantify cellular displacement fields with unprecedented resolution. The results feature some of the first experimental evidence that cells are indeed capable of exerting large material deformations, which require the formulation of a new theoretical TFM framework to accurately calculate the traction forces. Based on our previous 3D TFM technique, we reformulate our approach to accurately account for large material deformation and quantitatively contrast and compare both linear and large deformation frameworks as a function of the applied cell deformation. Particular attention is paid in estimating the accuracy penalty associated with utilizing a traditional linear elastic approach in the presence of large deformation gradients. PMID:24740435

  4. The influence of linear elements on plant species diversity of Mediterranean rural landscapes: assessment of different indices and statistical approaches.

    PubMed

    García del Barrio, J M; Ortega, M; Vázquez De la Cueva, A; Elena-Rosselló, R

    2006-08-01

    This paper mainly aims to study the influence of linear elements on the estimation of vascular plant species diversity in five Mediterranean landscapes modeled as land cover patch mosaics. These landscapes have several core habitats and a different set of linear elements (habitat edges or ecotones, roads or railways, rivers, streams and hedgerows on farm land) whose plant composition was examined. Secondly, it aims to check plant diversity estimation in Mediterranean landscapes using parametric and non-parametric procedures, with two indices: species richness and the Shannon index. Land cover types and landscape linear elements were identified from aerial photographs. Their spatial information was processed using GIS techniques. Field plots were selected using a stratified sampling design according to the relief and tree density of each habitat type. A 50 × 20 m² multi-scale sampling plot was designed for the core habitats and across the main landscape linear elements. Richness and diversity of plant species were estimated by comparing the observed field data to the ICE (Incidence-based Coverage Estimator) and ACE (Abundance-based Coverage Estimator) non-parametric estimators. The species density, percentage of unique species, and alpha diversity per plot were significantly higher (p < 0.05) in linear elements than in core habitats. The ICE estimate of the number of species was 32% higher than the ACE estimate, which did not differ significantly from the observed values. Accumulated species richness in core habitats together with linear elements was significantly higher than that recorded only in the core habitats in all the landscapes. Conversely, the Shannon diversity index did not show significant differences.
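
    The two indices compared in the study can be computed directly from abundance data; the sketch below uses hypothetical species counts, and the non-parametric ICE/ACE estimators are not reproduced here.

```python
import numpy as np

def shannon(counts):
    """Shannon diversity H' = -sum(p_i * ln p_i) over species proportions."""
    p = np.asarray(counts, dtype=float)
    p = p[p > 0] / p.sum()
    return float(-(p * np.log(p)).sum())

def richness(counts):
    """Species richness: number of species present."""
    return int(np.count_nonzero(counts))

core   = [40, 30, 20, 10]              # hypothetical abundances, core habitat
linear = [12, 10, 9, 8, 8, 7, 6, 5]    # hypothetical abundances, linear element

H_core, H_linear = shannon(core), shannon(linear)
```

    A richer and more even plot, as the linear elements were found to be, yields both a higher richness and a higher Shannon index in this toy comparison.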

  5. Robust root clustering for linear uncertain systems using generalized Lyapunov theory

    NASA Technical Reports Server (NTRS)

    Yedavalli, R. K.

    1993-01-01

    Consideration is given to the problem of matrix root clustering in subregions of a complex plane for linear state space models with real parameter uncertainty. The nominal matrix root clustering theory of Gutman & Jury (1981) using the generalized Liapunov equation is extended to the perturbed matrix case, and bounds are derived on the perturbation to maintain root clustering inside a given region. The theory makes it possible to obtain an explicit relationship between the parameters of the root clustering region and the uncertainty range of the parameter space.
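
    For the nominal (unperturbed) case, root clustering in the half-plane region Re(s) < -α can be tested with a shifted Lyapunov equation, a special case of the generalized Liapunov approach above: the clustering condition holds iff the solution P is positive definite. The matrix below is illustrative, and the paper's perturbation bounds are not reproduced.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def clustered_left_of(A, alpha):
    """Test whether every eigenvalue of A lies in Re(s) < -alpha via the
    shifted Lyapunov equation (A + alpha*I)^T P + P (A + alpha*I) = -I;
    the clustering condition holds iff the solution P is positive definite."""
    n = A.shape[0]
    B = A + alpha * np.eye(n)
    P = solve_continuous_lyapunov(B.T, -np.eye(n))
    return bool(np.all(np.linalg.eigvalsh((P + P.T) / 2.0) > 0.0))

# Illustrative system matrix with eigenvalues -3 and -2.
A = np.array([[-3.0, 1.0],
              [ 0.0, -2.0]])
```

    For this A, clustering holds for α = 1 (both roots lie left of -1) but fails for α = 2.2; the robust version in the abstract asks how large a perturbation of A can be before such a test fails.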

  6. Reconstruction of real-space linear matter power spectrum from multipoles of BOSS DR12 results

    NASA Astrophysics Data System (ADS)

    Lee, Seokcheon

    2018-02-01

    Recently, the power spectrum (PS) multipoles of the Baryon Oscillation Spectroscopic Survey (BOSS) Data Release 12 (DR12) sample were analyzed [1]. The model underlying the analysis is the so-called TNS quasi-linear model, and the analysis provides the multipoles up to the hexadecapole [2]. Thus, one might be able to recover the real-space linear matter PS by using combinations of multipoles to investigate the cosmology [3]. We provide the analytic form of the ratio of the quadrupole (hexadecapole) to the monopole moment of the quasi-linear PS, including the Fingers-of-God (FoG) effect, in order to recover the real-space PS in the linear regime. One expects the observed values of the multipole ratios to be consistent with those of linear theory at large scales. Thus, we compare the multipole ratios of linear theory, including the FoG effect, with the measured values. From these, we recover the linear matter power spectra in real space. The recovered power spectra are consistent with the linear matter power spectra.
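
    On purely linear scales and without FoG damping, the multipole-to-monopole ratios reduce to the classic Kaiser coefficients, which is the limit the recovered spectra are checked against. A sketch, with an illustrative β = f/b and hypothetical monopole values:

```python
import numpy as np

def kaiser_factors(beta):
    """Linear-theory (Kaiser) multipole coefficients of the redshift-space PS,
    neglecting the FoG damping treated in the paper:
        P0 = (1 + 2*beta/3 + beta**2/5) * P_lin
        P2 = (4*beta/3 + 4*beta**2/7) * P_lin
        P4 = (8*beta**2/35)           * P_lin
    """
    return (1.0 + 2.0 * beta / 3.0 + beta ** 2 / 5.0,
            4.0 * beta / 3.0 + 4.0 * beta ** 2 / 7.0,
            8.0 * beta ** 2 / 35.0)

beta = 0.4                        # illustrative growth-to-bias ratio f/b
c0, c2, c4 = kaiser_factors(beta)
quad_to_mono = c2 / c0            # scale-independent on linear scales

# Recovering the real-space linear spectrum from a measured monopole:
P0_measured = np.array([2.0e4, 1.2e4, 5.0e3])   # hypothetical values in (Mpc/h)^3
P_lin = P0_measured / c0
```

    The paper's FoG-corrected ratios reduce to `quad_to_mono` as the damping scale goes to zero, so consistency of the measured ratio with this constant is the large-scale check described above.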

  7. Current-wave spectra coupling project. Volume III. Cumulative distribution of forces on structures subjected to the combined action of currents and random waves for potential OTEC sites: (A) Keahole Point, Hawaii, 100 year hurricane; (B) Punta Tuna, Puerto Rico, 100 year hurricane; (C) New Orleans, Louisiana, 100 year hurricane; (D) West Coast of Florida, 100 year hurricane. [CUFOR code]

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Venezian, G.; Bretschneider, C.L.

    1980-08-01

    This volume details a new methodology to analyze statistically the forces experienced by a structure at sea. Conventionally a wave climate is defined using a spectral function. The wave climate is described using a joint distribution of wave heights and periods (wave lengths), characterizing actual sea conditions through some measured or estimated parameters like the significant wave height, maximum spectral density, etc. Random wave heights and periods satisfying the joint distribution are then generated. Wave kinematics are obtained using linear or non-linear theory. In the case of currents a linear wave-current interaction theory of Venezian (1979) is used. The peak force experienced by the structure for each individual wave is identified. Finally, the probability of exceedance of any given peak force on the structure may be obtained. A three-parameter Longuet-Higgins type joint distribution of wave heights and periods is discussed in detail. This joint distribution was used to model sea conditions at four potential OTEC locations. A uniform cylindrical pipe of 3 m diameter, extending to a depth of 550 m was used as a sample structure. Wave-current interactions were included and forces computed using Morison's equation. The drag and virtual mass coefficients were interpolated from published data. A Fortran program CUFOR was written to execute the above procedure. Tabulated and graphic results of peak forces experienced by the structure, for each location, are presented. A listing of CUFOR is included. Considerable flexibility of structural definition has been incorporated. The program can easily be modified in the case of an alternative joint distribution or for inclusion of effects like non-linearity of waves, transverse forces and diffraction.
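
    The per-wave force step, Morison's equation applied to linear-theory kinematics on the 3 m by 550 m cylinder, can be sketched for a single deterministic wave. The drag and inertia coefficients and the wave parameters below are assumed for illustration; the report's joint-distribution sampling and current interaction are omitted.

```python
import numpy as np

rho, g = 1025.0, 9.81                 # sea-water density (kg/m^3), gravity (m/s^2)
D, depth = 3.0, 550.0                 # cylinder diameter and wetted length (m), as in the study
Cd, Cm = 1.0, 2.0                     # assumed drag and virtual-mass coefficients
H, T = 10.0, 12.0                     # one illustrative wave: height (m) and period (s)

omega = 2.0 * np.pi / T
k = omega ** 2 / g                    # deep-water dispersion relation
z = -np.linspace(0.0, depth, 551)     # depth stations, z = 0 at the surface
t = np.linspace(0.0, T, 241)

# Linear (Airy) deep-water kinematics.
decay = np.exp(k * z)[:, None]
u    = 0.5 * H * omega * decay * np.cos(omega * t)[None, :]        # velocity (m/s)
dudt = -0.5 * H * omega ** 2 * decay * np.sin(omega * t)[None, :]  # acceleration (m/s^2)

# Morison's equation per unit length, then a strip sum over depth.
f = 0.5 * rho * Cd * D * u * np.abs(u) + rho * Cm * np.pi * D ** 2 / 4.0 * dudt
F = f.sum(axis=0) * (depth / 550.0)   # total in-line force time series (N)
peak = float(np.max(np.abs(F)))
```

    Repeating this peak-force extraction over wave heights and periods sampled from the joint distribution yields the cumulative distribution of peak forces that the report tabulates.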

  8. Embeddings of the "New Massive Gravity"

    NASA Astrophysics Data System (ADS)

    Dalmazi, D.; Mendonça, E. L.

    2016-07-01

    Here we apply different types of embeddings of the equations of motion of the linearized "New Massive Gravity" in order to generate alternative and even higher-order (in derivatives) massive gravity theories in D=2+1. In the first part of the work we use the Weyl symmetry as a guiding principle for the embeddings. First we show that a Noether gauge embedding of the Weyl symmetry leads to a sixth-order model in derivatives with either a massive or a massless ghost, according to the chosen overall sign of the theory. On the other hand, if the Weyl symmetry is implemented by means of a Stueckelberg field we obtain a new scalar-tensor model for massive gravitons. It is ghost-free and Weyl invariant at the linearized level around Minkowski space. The model can be nonlinearly completed into a scalar field coupled to the NMG theory. The elimination of the scalar field leads to a nonlocal modification of the NMG. In the second part of the work we prove to all orders in derivatives that there is no local, ghost-free embedding of the linearized NMG equations of motion around Minkowski space when written in terms of one symmetric tensor. Regarding that point, NMG differs from the Fierz-Pauli theory, since in the latter case we can replace the Einstein-Hilbert action by specific f(R, □R) generalizations and still keep the theory ghost-free at the linearized level.

  9. Linear spin-2 fields in most general backgrounds

    NASA Astrophysics Data System (ADS)

    Bernard, Laura; Deffayet, Cédric; Schmidt-May, Angnis; von Strauss, Mikael

    2016-04-01

    We derive the full perturbative equations of motion for the most general background solutions in ghost-free bimetric theory in its metric formulation. Clever field redefinitions at the level of fluctuations enable us to circumvent the problem of varying a square-root matrix appearing in the theory. This greatly simplifies the expressions for the linear variation of the bimetric interaction terms. We show that these field redefinitions exist and are uniquely invertible if and only if the variation of the square-root matrix itself has a unique solution, which is a requirement for the linearized theory to be well defined. As an application of our results we examine the constraint structure of ghost-free bimetric theory at the level of linear equations of motion for the first time. We identify a scalar combination of equations which is responsible for the absence of the Boulware-Deser ghost mode in the theory. The bimetric scalar constraint is in general not manifestly covariant in its nature. However, in the massive gravity limit the constraint assumes a covariant form when one of the interaction parameters is set to zero. For that case our analysis provides an alternative and almost trivial proof of the absence of the Boulware-Deser ghost. Our findings generalize previous results in the metric formulation of massive gravity and also agree with studies of its vielbein version.

  10. Evaluation of Uncertainty in Runoff Analysis Incorporating Theory of Stochastic Process

    NASA Astrophysics Data System (ADS)

    Yoshimi, Kazuhiro; Wang, Chao-Wen; Yamada, Tadashi

    2015-04-01

    The aim of this paper is to provide a theoretical framework for uncertainty estimation in rainfall-runoff analysis based on the theory of stochastic processes. Stochastic differential equations (SDEs) grounded in this theory have been widely used in mathematical finance to model stock price movements, and some researchers in civil engineering have applied them as well (e.g. Kurino et al., 1999; Higashino and Kanda, 2001). However, there have been no studies evaluating uncertainty in runoff phenomena through a comparison between SDEs and the Fokker-Planck equation. The Fokker-Planck equation is a partial differential equation that describes the temporal evolution of a probability density function (PDF), and it is mathematically equivalent to the corresponding SDE. In this paper, therefore, the uncertainty of discharge arising from the uncertainty of rainfall is explained theoretically by introducing the theory of stochastic processes. The lumped rainfall-runoff model is written as an SDE in difference form, because the temporal variation of rainfall is expressed as its average plus a deviation approximated by a Gaussian distribution; this representation is based on rainfall observed by rain-gauge stations and radar rain-gauge systems. As a result, this paper shows that it is possible to evaluate the uncertainty of discharge by using the relationship between the SDE and the Fokker-Planck equation. Moreover, the results show that the uncertainty of discharge increases as rainfall intensity rises and as the non-linearity of the resistance law grows stronger. These results are clarified by PDFs satisfying the Fokker-Planck equation for discharge. This means that a reasonable discharge estimate can be obtained from the theory of stochastic processes and applied to the probabilistic risk of flood management.
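
    The SDE/Fokker-Planck correspondence invoked above can be demonstrated on the simplest example, an Ornstein-Uhlenbeck process (not the paper's rainfall-runoff model): an Euler-Maruyama simulation should reproduce the stationary variance predicted by the Fokker-Planck equation.

```python
import numpy as np

# Euler-Maruyama integration of the Ornstein-Uhlenbeck SDE
#     dX = -theta * X dt + sigma dW.
# Its Fokker-Planck equation has a stationary Gaussian solution with
# variance sigma**2 / (2 * theta), which the simulated ensemble should match.
rng = np.random.default_rng(42)
theta, sigma = 1.0, 0.8
dt, n_steps, n_paths = 1.0e-3, 10_000, 500

x = np.zeros(n_paths)
for _ in range(n_steps):
    x += -theta * x * dt + sigma * np.sqrt(dt) * rng.normal(size=n_paths)

var_theory = sigma ** 2 / (2.0 * theta)   # 0.32
var_empirical = float(x.var())
```

    The same consistency check, sample paths of the runoff SDE against the PDF evolved by the Fokker-Planck equation, is what underlies the discharge-uncertainty evaluation described in the abstract.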

  11. Statistical methods for the analysis of climate extremes

    NASA Astrophysics Data System (ADS)

    Naveau, Philippe; Nogaj, Marta; Ammann, Caspar; Yiou, Pascal; Cooley, Daniel; Jomelli, Vincent

    2005-08-01

    Currently there is increasing research activity in the area of climate extremes because they represent a key manifestation of non-linear systems and have an enormous impact on economic and social human activities. Our understanding of the mean behavior of climate and its 'normal' variability has improved significantly during the last decades. In comparison, climate extreme events have been hard to study and even harder to predict because they are, by definition, rare and obey different statistical laws than averages. In this context, the motivation for this paper is twofold. Firstly, we recall the basic principles of Extreme Value Theory, which is used on a regular basis in finance and hydrology but does not yet enjoy the same success in climate studies. More precisely, the theoretical distributions of maxima and large peaks are recalled. The parameters of such distributions are estimated with the maximum likelihood estimation procedure, which offers the flexibility to take explanatory variables into account in our analysis. Secondly, we detail three case studies to show that this theory can provide a solid statistical foundation, especially when assessing the uncertainty associated with extreme events in a wide range of applications linked to the study of our climate. To cite this article: P. Naveau et al., C. R. Geoscience 337 (2005).
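
    The first ingredient recalled above, a maximum-likelihood fit of the generalized extreme value (GEV) distribution to block maxima, can be sketched with scipy. The sample below is synthetic, drawn from known parameters and refitted; note that scipy's sign convention for the shape parameter differs from some hydrology texts.

```python
import numpy as np
from scipy.stats import genextreme

# Synthetic "annual maxima" drawn from a known GEV, then refit by maximum
# likelihood.  In scipy's convention, shape c < 0 gives a heavy upper tail.
rng = np.random.default_rng(7)
data = genextreme.rvs(c=-0.1, loc=30.0, scale=5.0, size=400, random_state=rng)

c_hat, loc_hat, scale_hat = genextreme.fit(data)

# A 100-year return level under the fitted model.
rl_100 = genextreme.ppf(1.0 - 1.0 / 100.0, c_hat, loc=loc_hat, scale=scale_hat)
```

    Making `loc`, `scale`, or the shape depend on covariates, as the abstract's regression-style likelihood does, is the natural extension of this constant-parameter fit.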

  12. SUPERPOSITION OF POLYTROPES IN THE INNER HELIOSHEATH

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Livadiotis, G., E-mail: glivadiotis@swri.edu

    2016-03-15

    This paper presents a possible generalization of the equation of state and Bernoulli's integral when a superposition of polytropic processes applies in space and astrophysical plasmas. The theory of polytropic thermodynamic processes for a fixed polytropic index is extended for a superposition of polytropic indices. In general, the superposition may be described by any distribution of polytropic indices, but emphasis is placed on a Gaussian distribution. The polytropic density–temperature relation has been used in numerous analyses of space plasma data. This linear relation on a log–log scale is now generalized to a concave-downward parabola that is able to describe the observations better. The model of the Gaussian superposition of polytropes is successfully applied in the proton plasma of the inner heliosheath. The estimated mean polytropic index is near zero, indicating the dominance of isobaric thermodynamic processes in the sheath, similar to other previously published analyses. By computing Bernoulli's integral and applying its conservation along the equator of the inner heliosheath, the magnetic field in the inner heliosheath is estimated, B ∼ 2.29 ± 0.16 μG. The constructed normalized histogram of the values of the magnetic field is similar to that derived from a different method that uses the concept of large-scale quantization, bringing incredible insights to this novel theory.

  13. Asymmetric fluid criticality. II. Finite-size scaling for simulations.

    PubMed

    Kim, Young C; Fisher, Michael E

    2003-10-01

    The vapor-liquid critical behavior of intrinsically asymmetric fluids is studied in finite systems of linear dimensions L, focusing on periodic boundary conditions, as appropriate for simulations. The recently propounded "complete" thermodynamic (L → ∞) scaling theory incorporating pressure mixing in the scaling fields as well as corrections to scaling [Phys. Rev. E 67, 061506 (2003)] is extended to finite L, initially in a grand canonical representation. The theory allows for a Yang-Yang anomaly in which, when L → ∞, the second temperature derivative (d²μ_σ/dT²) of the chemical potential along the phase boundary μ_σ(T) diverges when T → T_c⁻. The finite-size behavior of various special critical loci in the temperature-density or (T, ρ) plane, in particular the k-inflection susceptibility loci and the Q-maximal loci, derived from Q_L(T, ⟨ρ⟩_L) ≡ ⟨m²⟩²_L / ⟨m⁴⟩_L where m ≡ ρ − ⟨ρ⟩_L, is carefully elucidated and shown to be of value in estimating T_c and ρ_c. Concrete illustrations are presented for the hard-core square-well fluid and for the restricted primitive model electrolyte, including an estimate of the correlation exponent ν that confirms Ising-type character. The treatment is extended to the canonical representation where further complications appear.
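
    The Q-maximal loci are built from a simple moment ratio of the density fluctuations, which can be sketched directly: for Gaussian (one-phase-like) fluctuations the ratio tends to 1/3, while a two-peaked (coexistence-like) distribution drives it toward 1. The sample distributions below are synthetic illustrations, not simulation data.

```python
import numpy as np

def Q_ratio(samples):
    """Moment ratio Q = <m^2>^2 / <m^4> with m = rho - <rho>, whose maxima
    define the Q-maximal loci used in estimating Tc and rho_c."""
    m = samples - samples.mean()
    return float((m ** 2).mean() ** 2 / (m ** 4).mean())

rng = np.random.default_rng(3)
gauss = rng.normal(size=200_000)                    # one-phase-like fluctuations
two_peak = np.concatenate([rng.normal(-1.0, 0.05, 100_000),
                           rng.normal(+1.0, 0.05, 100_000)])  # coexistence-like
```

    Scanning Q over (T, ⟨ρ⟩) and following its maxima is what produces the loci whose finite-size drift toward the critical point the paper analyzes.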

  14. Superposition of Polytropes in the Inner Heliosheath

    NASA Astrophysics Data System (ADS)

    Livadiotis, G.

    2016-03-01

    This paper presents a possible generalization of the equation of state and Bernoulli's integral when a superposition of polytropic processes applies in space and astrophysical plasmas. The theory of polytropic thermodynamic processes for a fixed polytropic index is extended for a superposition of polytropic indices. In general, the superposition may be described by any distribution of polytropic indices, but emphasis is placed on a Gaussian distribution. The polytropic density-temperature relation has been used in numerous analyses of space plasma data. This linear relation on a log-log scale is now generalized to a concave-downward parabola that is able to describe the observations better. The model of the Gaussian superposition of polytropes is successfully applied in the proton plasma of the inner heliosheath. The estimated mean polytropic index is near zero, indicating the dominance of isobaric thermodynamic processes in the sheath, similar to other previously published analyses. By computing Bernoulli's integral and applying its conservation along the equator of the inner heliosheath, the magnetic field in the inner heliosheath is estimated, B ∼ 2.29 ± 0.16 μG. The constructed normalized histogram of the values of the magnetic field is similar to that derived from a different method that uses the concept of large-scale quantization, bringing incredible insights to this novel theory.

  15. The large-scale three-point correlation function of the SDSS BOSS DR12 CMASS galaxies

    NASA Astrophysics Data System (ADS)

    Slepian, Zachary; Eisenstein, Daniel J.; Beutler, Florian; Chuang, Chia-Hsun; Cuesta, Antonio J.; Ge, Jian; Gil-Marín, Héctor; Ho, Shirley; Kitaura, Francisco-Shu; McBride, Cameron K.; Nichol, Robert C.; Percival, Will J.; Rodríguez-Torres, Sergio; Ross, Ashley J.; Scoccimarro, Román; Seo, Hee-Jong; Tinker, Jeremy; Tojeiro, Rita; Vargas-Magaña, Mariana

    2017-06-01

    We report a measurement of the large-scale three-point correlation function of galaxies using the largest data set for this purpose to date, 777 202 luminous red galaxies in the Sloan Digital Sky Survey Baryon Acoustic Oscillation Spectroscopic Survey (SDSS BOSS) DR12 CMASS sample. This work exploits the novel algorithm of Slepian & Eisenstein to compute the multipole moments of the 3PCF in O(N^2) time, with N the number of galaxies. Leading-order perturbation theory models the data well in a compressed basis where one triangle side is integrated out. We also present an accurate and computationally efficient means of estimating the covariance matrix. With these techniques, the redshift-space linear and non-linear bias are measured, with 2.6 per cent precision on the former if σ8 is fixed. The data also indicate a 2.8σ preference for the BAO, confirming the presence of BAO in the three-point function.

  16. Acceleration and Velocity Sensing from Measured Strain

    NASA Technical Reports Server (NTRS)

    Pak, Chan-Gi; Truax, Roger

    2015-01-01

    A simple approach for computing the acceleration and velocity of a structure from strain is proposed in this study. First, the deflection and slope of the structure are computed from the strain using a two-step theory. Frequencies of the structure are computed from the time histories of strain using a parameter estimation technique together with an autoregressive moving average model. From the deflection, slope, and frequencies of the structure, the acceleration and velocity of the structure can be obtained using the proposed approach. Simple harmonic motion is assumed for the acceleration computations, and the central difference equation with a linear autoregressive model is used for the computation of velocity. A cantilevered rectangular wing model is used to validate the simple approach. The quality of the computed deflection, acceleration, and velocity values is independent of the number of fibers. The central difference equation with a linear autoregressive model proposed in this study follows the target response with reasonable accuracy. Therefore, the phase-shift handicap of the backward difference equation is successfully overcome.
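
    The velocity step, a central difference on the reconstructed deflection history, can be sketched as follows; the single 5 Hz mode and sampling rate are illustrative, and the acceleration line uses the abstract's simple-harmonic assumption a = -ω²y.

```python
import numpy as np

def velocity_central(y, dt):
    """Central-difference velocity from a deflection time history,
    v[i] = (y[i+1] - y[i-1]) / (2*dt), which avoids the phase shift of the
    backward-difference formula; one-sided differences are used at the ends."""
    v = np.empty_like(y)
    v[1:-1] = (y[2:] - y[:-2]) / (2.0 * dt)
    v[0] = (y[1] - y[0]) / dt
    v[-1] = (y[-1] - y[-2]) / dt
    return v

# Illustrative single-mode deflection at an identified modal frequency.
omega, dt = 2.0 * np.pi * 5.0, 1.0e-3   # assumed 5 Hz mode, 1 kHz sampling
t = np.arange(0.0, 1.0, dt)
y = 0.01 * np.sin(omega * t)            # deflection (m)
v = velocity_central(y, dt)
a = -omega ** 2 * y                     # simple-harmonic-motion acceleration
```

    Because the central difference is symmetric about each sample, its phase error is zero to leading order, which is the advantage over the backward difference noted in the abstract.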

  17. Recent results on output feedback problems

    NASA Technical Reports Server (NTRS)

    Byrnes, C. I.

    1980-01-01

Given a real linear system sigma = (A, B, C) with m inputs, p outputs and degree n, the problem of generic pole placement by output feedback is studied: compute the constant C(m,p) such that the inequality C(m,p) ≥ n is necessary and sufficient for arbitrarily positioning the poles of the generic linear system by constant output feedback. A constant C′(m,p) is determined which gives a sufficient condition for generic pole placement and which, to the best of the author's knowledge, is at least as good an estimate of C(m,p) as any in the literature. Some results on the construction of solutions in the case mp = n are announced, based on the degree formula of Brockett and Byrnes and on Galois theory. In particular, a question raised by Anderson, Bose, and Jury on the existence of a rational procedure for computing the feedback law from the desired characteristic polynomial is answered.

  18. A study of attitude control concepts for precision-pointing non-rigid spacecraft

    NASA Technical Reports Server (NTRS)

    Likins, P. W.

    1975-01-01

Attitude control concepts for use onboard structurally nonrigid spacecraft that must be pointed with great precision are examined. The task of determining the eigenproperties of a system of linear time-invariant equations (in terms of hybrid coordinates) representing the attitude motion of a flexible spacecraft is discussed. Literal characteristics are developed for the associated eigenvalues and eigenvectors of the system. A method is presented for determining the poles and zeros of the transfer function describing the attitude dynamics of a flexible spacecraft characterized by hybrid coordinate equations. Alterations are made to linear regulator and observer theory to accommodate modeling errors. The results show that a model error vector, which evolves from an error system, can be added to a reduced system model, estimated by an observer, and used by the control law to render the system less sensitive to uncertain magnitudes and phase relations of truncated modes and external disturbance effects. A hybrid coordinate formulation based on assumed mode shapes, rather than the usual finite element approach, is also provided.
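The observer-based structure described here can be sketched, in its simplest form, as a discrete-time Luenberger observer: the state estimate is corrected by the output residual y − Cx̂. This is a minimal illustrative sketch under assumed matrices, not the paper's formulation; the model-error-vector augmentation is omitted.

```python
import numpy as np

def luenberger_step(xhat, u, y, A, B, C, L):
    """One discrete-time Luenberger observer update:
    xhat_next = A xhat + B u + L (y - C xhat).
    The estimation error then evolves as e_next = (A - L C) e,
    so the gain L is chosen to make A - L C stable."""
    return A @ xhat + B @ u + L @ (y - C @ xhat)

# Illustrative (assumed) plant and observer gain; A - L C has
# eigenvalues inside the unit circle, so the estimate converges.
A = np.array([[0.9, 0.1],
              [0.0, 0.9]])
B = np.array([[0.0],
              [0.1]])
C = np.array([[1.0, 0.0]])
L = np.array([[0.5],
              [0.1]])
```

With these values the error dynamics matrix A − LC has spectral radius about 0.88, so the estimation error decays geometrically regardless of the input.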

  19. Edge localized mode rotation and the nonlinear dynamics of filaments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morales, J. A.; Bécoulet, M.; Garbet, X.

    2016-04-15

Rotating Edge Localized Mode (ELM) precursors have been reported a few milliseconds before an ELM crash in several tokamak experiments, and a reversal of the filament rotation at the ELM crash is commonly observed. In this article, we present a mathematical model that reproduces both the rotation of the ELM precursors and the reversal of the filament rotation at the ELM crash. Linear ballooning theory is used to establish a formula estimating the rotation velocity of ELM precursors. The linear study, together with nonlinear magnetohydrodynamic simulations, explains the rotations observed experimentally. Unstable ballooning modes, localized at the pedestal, grow and rotate in the electron diamagnetic direction in the laboratory reference frame. Approaching the ELM crash, this rotation decreases, corresponding to the moment when magnetic reconnection occurs. During the highly nonlinear ELM crash, the ELM filaments are cut off from the main plasma by the strongly sheared mean flow that is nonlinearly generated via the Maxwell stress tensor.

  20. Fault-tolerant optimised tracking control for unknown discrete-time linear systems using a combined reinforcement learning and residual compensation methodology

    NASA Astrophysics Data System (ADS)

    Han, Ke-Zhen; Feng, Jian; Cui, Xiaohong

    2017-10-01

This paper considers the fault-tolerant optimised tracking control (FTOTC) problem for unknown discrete-time linear systems. A research scheme is proposed on the basis of data-based parity space identification, reinforcement learning and residual compensation techniques. The main characteristic of this scheme lies in the parity-space-identification-based simultaneous tracking control and residual compensation. The technical approach consists of four main components: a subspace-aided method is applied to design an observer-based residual generator; a reinforcement Q-learning approach is used to solve for the optimised tracking control policy; robust H∞ theory is relied upon to achieve noise attenuation; and fault estimation triggered by the residual generator is adopted to perform fault compensation. To clarify the design and implementation procedures, an integrated algorithm is further constructed to link these four functional units. Detailed analysis and proofs are subsequently given to establish the guaranteed FTOTC performance of the proposed scheme. Finally, a case simulation is provided to verify its effectiveness.
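For context, the optimal policy that Q-learning recovers model-free in the linear-quadratic setting coincides, when the model is known, with the solution of the discrete-time Riccati equation. Below is a minimal known-model baseline (value iteration on the Riccati recursion) under assumed matrices; the paper's tracking, residual-compensation, and H∞ elements are not represented.

```python
import numpy as np

def dlqr_value_iteration(A, B, Q, R, iters=500):
    """Value iteration on the discrete-time Riccati equation:
    K = (R + B' P B)^{-1} B' P A,  P <- Q + A' P (A - B K).
    Returns the cost matrix P and feedback gain K (u = -K x),
    the fixed point that LQ Q-learning converges to model-free."""
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return P, K

# Assumed double-integrator model for illustration only.
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.0],
              [0.1]])
Q = np.eye(2)
R = np.array([[1.0]])
P, K = dlqr_value_iteration(A, B, Q, R)
```

The resulting closed loop A − BK is stable, which is the property the learned policy must also satisfy.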
