Mining Distance Based Outliers in Near Linear Time with Randomization and a Simple Pruning Rule
NASA Technical Reports Server (NTRS)
Bay, Stephen D.; Schwabacher, Mark
2003-01-01
Defining outliers by their distance to neighboring examples is a popular approach to finding unusual examples in a data set. Recently, much work has been conducted with the goal of finding fast algorithms for this task. We show that a simple nested loop algorithm that in the worst case is quadratic can give near linear time performance when the data is in random order and a simple pruning rule is used. We test our algorithm on real high-dimensional data sets with millions of examples and show that the near linear scaling holds over several orders of magnitude. Our average case analysis suggests that much of the efficiency is because the time to process non-outliers, which are the majority of examples, does not depend on the size of the data set.
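The pruning rule described above is compact enough to sketch in a few lines. The following Python is a minimal illustration of the idea rather than the authors' implementation: examples are visited in random order, each example's outlier score is the distance to its k-th nearest neighbour, and the inner loop abandons an example as soon as that score can no longer exceed the weakest of the current top-n outliers. Function and parameter names are assumptions for illustration.

```python
import numpy as np

def top_n_outliers(X, k=5, n=10, seed=0):
    """Distance-based outlier search: nested loops, randomization, pruning."""
    rng = np.random.default_rng(seed)
    X = X[rng.permutation(len(X))]     # random order makes pruning effective
    cutoff = 0.0                       # score of the weakest outlier kept so far
    outliers = []                      # (score, index) pairs, score = k-NN distance
    for i in range(len(X)):
        knn = np.full(k, np.inf)       # k smallest distances found so far
        pruned = False
        for j in range(len(X)):
            if i == j:
                continue
            d = np.linalg.norm(X[i] - X[j])
            if d < knn.max():
                knn[knn.argmax()] = d
            if knn.max() < cutoff:     # cannot be a top-n outlier any more
                pruned = True
                break
        if not pruned:
            outliers.append((knn.max(), i))
            outliers = sorted(outliers, reverse=True)[:n]
            if len(outliers) == n:
                cutoff = outliers[-1][0]
    return outliers
```

For most non-outliers the inner loop terminates after a handful of comparisons once the cutoff is established, which is the intuition behind the near linear scaling reported above.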
Analytical Studies on the Synchronization of a Network of Linearly-Coupled Simple Chaotic Systems
NASA Astrophysics Data System (ADS)
Sivaganesh, G.; Arulgnanam, A.; Seethalakshmi, A. N.; Selvaraj, S.
2018-05-01
We present explicit generalized analytical solutions for a network of linearly-coupled simple chaotic systems. Analytical solutions are obtained for the normalized state equations of a network of linearly-coupled systems driven by a common chaotic drive system. Two-parameter bifurcation diagrams revealing the various hidden synchronization regions, such as complete, phase, and phase-lag synchronization, are identified using the analytical results. The synchronization dynamics and their stability are studied using phase portraits and the master stability function, respectively. Further, experimental results for linearly-coupled simple chaotic systems are presented to confirm the analytical results. To the best of our knowledge, the synchronization dynamics of a network of chaotic systems is studied analytically here for the first time.
Rasmussen, Patrick P.; Gray, John R.; Glysson, G. Douglas; Ziegler, Andrew C.
2009-01-01
In-stream continuous turbidity and streamflow data, calibrated with measured suspended-sediment concentration data, can be used to compute a time series of suspended-sediment concentration and load at a stream site. Development of a simple linear (ordinary least squares) regression model for computing suspended-sediment concentrations from instantaneous turbidity data is the first step in the computation process. If the model standard percentage error (MSPE) of the simple linear regression model meets a minimum criterion, this model should be used to compute a time series of suspended-sediment concentrations. Otherwise, a multiple linear regression model using paired instantaneous turbidity and streamflow data is developed and compared to the simple regression model. If the inclusion of the streamflow variable proves to be statistically significant and the uncertainty associated with the multiple regression model results in an improvement over that for the simple linear model, the turbidity-streamflow multiple linear regression model should be used to compute a suspended-sediment concentration time series. The computed concentration time series is subsequently used with its paired streamflow time series to compute suspended-sediment loads by standard U.S. Geological Survey techniques. Once an acceptable regression model is developed, it can be used to compute suspended-sediment concentration beyond the period of record used in model development with proper ongoing collection and analysis of calibration samples. Regression models to compute suspended-sediment concentrations are generally site specific and should never be considered static, but they represent a set period in a continually dynamic system in which additional data will help verify any change in sediment load, type, and source.
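As a rough sketch of the model-selection step described above (not the published USGS procedure), the function below fits a simple regression of log-SSC on log-turbidity, accepts it if an approximate model standard percentage error meets a criterion, and otherwise tries the turbidity-streamflow multiple regression. The log transforms, the MSPE formula, and the criterion value are illustrative assumptions.

```python
import numpy as np

def choose_ssc_model(turbidity, streamflow, ssc, mspe_criterion=20.0):
    """Pick simple vs. multiple linear regression for SSC (illustrative only)."""
    y = np.log(ssc)
    X_simple = np.column_stack([np.ones(len(y)), np.log(turbidity)])
    X_multi = np.column_stack([X_simple, np.log(streamflow)])

    def fit(X):
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        s = (y - X @ beta).std(ddof=X.shape[1])        # residual std in log units
        mspe = 100.0 * np.sqrt(np.exp(s**2) - 1.0)     # approximate percentage error
        return beta, mspe

    beta_s, mspe_s = fit(X_simple)
    if mspe_s <= mspe_criterion:                       # simple model is adequate
        return "simple", beta_s, mspe_s
    beta_m, mspe_m = fit(X_multi)                      # try adding streamflow
    return ("multiple", beta_m, mspe_m) if mspe_m < mspe_s else ("simple", beta_s, mspe_s)
```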
Consensus Algorithms for Networks of Systems with Second- and Higher-Order Dynamics
NASA Astrophysics Data System (ADS)
Fruhnert, Michael
This thesis considers homogeneous networks of linear systems. We consider linear feedback controllers and require that the directed graph associated with the network contains a spanning tree and systems are stabilizable. We show that, in continuous-time, consensus with a guaranteed rate of convergence can always be achieved using linear state feedback. For networks of continuous-time second-order systems, we provide a new and simple derivation of the conditions for a second-order polynomial with complex coefficients to be Hurwitz. We apply this result to obtain necessary and sufficient conditions to achieve consensus with networks whose graph Laplacian matrix may have complex eigenvalues. Based on the conditions found, methods to compute feedback gains are proposed. We show that gains can be chosen such that consensus is achieved robustly over a variety of communication structures and system dynamics. We also consider the use of static output feedback. For networks of discrete-time second-order systems, we provide a new and simple derivation of the conditions for a second-order polynomial with complex coefficients to be Schur. We apply this result to obtain necessary and sufficient conditions to achieve consensus with networks whose graph Laplacian matrix may have complex eigenvalues. We show that consensus can always be achieved for marginally stable systems and discretized systems. Simple conditions for consensus achieving controllers are obtained when the Laplacian eigenvalues are all real. For networks of continuous-time time-variant higher-order systems, we show that uniform consensus can always be achieved if systems are quadratically stabilizable. In this case, we provide a simple condition to obtain a linear feedback control. For networks of discrete-time higher-order systems, we show that constant gains can be chosen such that consensus is achieved for a variety of network topologies. First, we develop simple results for networks of time-invariant systems and networks of time-variant systems that are given in controllable canonical form. Second, we formulate the problem in terms of Linear Matrix Inequalities (LMIs). The condition found simplifies the design process and avoids the parallel solution of multiple LMIs. The result yields a modified Algebraic Riccati Equation (ARE) for which we present an equivalent LMI condition.
Action Centered Contextual Bandits.
Greenewald, Kristjan; Tewari, Ambuj; Klasnja, Predrag; Murphy, Susan
2017-12-01
Contextual bandits have become popular as they offer a middle ground between very simple approaches based on multi-armed bandits and very complex approaches using the full power of reinforcement learning. They have demonstrated success in web applications and have a rich body of associated theoretical guarantees. Linear models are well understood theoretically and preferred by practitioners because they are not only easily interpretable but also simple to implement and debug. Furthermore, if the linear model is true, we get very strong performance guarantees. Unfortunately, in emerging applications in mobile health, the time-invariant linear model assumption is untenable. We provide an extension of the linear model for contextual bandits that has two parts: baseline reward and treatment effect. We allow the former to be complex but keep the latter simple. We argue that this model is plausible for mobile health applications. At the same time, it leads to algorithms with strong performance guarantees as in the linear model setting, while still allowing for complex nonlinear baseline modeling. Our theory is supported by experiments on data gathered in a recently concluded mobile health study.
Short relaxation times but long transient times in both simple and complex reaction networks
Henry, Adrien; Martin, Olivier C.
2016-01-01
When relaxation towards an equilibrium or steady state is exponential at large times, one usually considers that the associated relaxation time τ, i.e. the inverse of the decay rate, is the longest characteristic time in the system. However, that need not be true; other times, such as the lifetime of an infinitesimal perturbation, can be much longer. In the present work, we demonstrate that this paradoxical property can arise even in quite simple systems such as a linear chain of reactions obeying mass action (MA) kinetics. By mathematical analysis of simple reaction networks, we pinpoint the reason why the standard relaxation time does not provide relevant information on the potentially long transient times of typical infinitesimal perturbations. Overall, we consider four characteristic times and study their behaviour in both simple linear chains and in more complex reaction networks taken from the publicly available database 'Biomodels'. In all these systems, whether involving MA rates, Michaelis–Menten reversible kinetics, or phenomenological laws for reaction rates, we find that the characteristic times corresponding to lifetimes of tracers and of concentration perturbations can be significantly longer than τ. PMID:27411726
Linear analysis of auto-organization in Hebbian neural networks.
Carlos Letelier, J; Mpodozis, J
1995-01-01
The self-organization of neurotopies where neural connections follow Hebbian dynamics is framed in terms of linear operator theory. A general and exact equation describing the time evolution of the overall synaptic strength connecting two neural laminae is derived. This linear matricial equation, which is similar to the equations used to describe oscillating systems in physics, is modified by the introduction of non-linear terms, in order to capture self-organizing (or auto-organizing) processes. The behavior of a simple and small system, that contains a non-linearity that mimics a metabolic constraint, is analyzed by computer simulations. The emergence of a simple "order" (or degree of organization) in this low-dimensionality model system is discussed.
Estimating linear temporal trends from aggregated environmental monitoring data
Erickson, Richard A.; Gray, Brian R.; Eager, Eric A.
2017-01-01
Trend estimates are often used as part of environmental monitoring programs. These trends inform managers (e.g., are desired species increasing or undesired species decreasing?). Data collected from environmental monitoring programs are often aggregated (i.e., averaged), which confounds sampling and process variation. State-space models allow sampling variation and process variation to be separated. We used simulated time series to compare linear trend estimates from three state-space models, a simple linear regression model, and an auto-regressive model. We also compared the performance of these five models in estimating trends from a long-term monitoring program. We specifically estimated trends for two species of fish and four species of aquatic vegetation from the Upper Mississippi River system. We found that the simple linear regression had the best performance of all the given models because it was best able to recover parameters and had consistent numerical convergence. Conversely, the simple linear regression did the worst job of estimating populations in a given year. The state-space models did not estimate trends well, but estimated population sizes best when the models converged. We found that a simple linear regression performed better than more complex autoregressive and state-space models when used to analyze aggregated environmental monitoring data.
Kinetics of DSB rejoining and formation of simple chromosome exchange aberrations
NASA Technical Reports Server (NTRS)
Cucinotta, F. A.; Nikjoo, H.; O'Neill, P.; Goodhead, D. T.
2000-01-01
PURPOSE: To investigate the role of kinetics in the processing of DNA double strand breaks (DSB), and the formation of simple chromosome exchange aberrations following X-ray exposures to mammalian cells based on an enzymatic approach. METHODS: Using computer simulations based on a biochemical approach, rate-equations that describe the processing of DSB through the formation of a DNA-enzyme complex were formulated. A second model that allows for competition between two processing pathways was also formulated. The formation of simple exchange aberrations was modelled as misrepair during the recombination of single DSB with undamaged DNA. Non-linear coupled differential equations corresponding to biochemical pathways were solved numerically by fitting to experimental data. RESULTS: When mediated by a DSB repair enzyme complex, the processing of single DSB showed a complex behaviour that gives the appearance of fast and slow components of rejoining. This is due to the time-delay caused by the action time of enzymes in biomolecular reactions. It is shown that the kinetic- and dose-responses of simple chromosome exchange aberrations are well described by a recombination model of DSB interacting with undamaged DNA when aberration formation increases with linear dose-dependence. Competition between two or more recombination processes is shown to lead to the formation of simple exchange aberrations with a dose-dependence similar to that of a linear quadratic model. CONCLUSIONS: Using a minimal number of assumptions, the kinetics and dose response observed experimentally for DSB rejoining and the formation of simple chromosome exchange aberrations are shown to be consistent with kinetic models based on enzymatic reaction approaches. A non-linear dose response for simple exchange aberrations is possible in a model of recombination of DNA containing a DSB with undamaged DNA when two or more pathways compete for DSB repair.
Bounded Linear Stability Analysis - A Time Delay Margin Estimation Approach for Adaptive Control
NASA Technical Reports Server (NTRS)
Nguyen, Nhan T.; Ishihara, Abraham K.; Krishnakumar, Kalmanje Srinivas; Bakhtiari-Nejad, Maryam
2009-01-01
This paper presents a method for estimating time delay margin for model-reference adaptive control of systems with almost linear structured uncertainty. The bounded linear stability analysis method seeks to represent the conventional model-reference adaptive law by a locally bounded linear approximation within a small time window using the comparison lemma. The locally bounded linear approximation of the combined adaptive system is cast in the form of an input-time-delay differential equation over a small time window. The time delay margin of this system represents a local stability measure and is computed analytically by a matrix measure method, which provides a simple analytical technique for estimating an upper bound of time delay margin. Based on simulation results for a scalar model-reference adaptive control system, both the bounded linear stability method and the matrix measure method are seen to provide a reasonably accurate and yet not too conservative time delay margin estimation.
Masurel, R J; Gelineau, P; Lequeux, F; Cantournet, S; Montes, H
2017-12-27
In this paper we focus on the role of dynamical heterogeneities in the non-linear response of polymers in the glass transition domain. We start from a simple coarse-grained model that assumes a random distribution of the initial local relaxation times and that quantitatively describes the linear viscoelasticity of a polymer in the glass transition regime. We extend this model to non-linear mechanics assuming a local Eyring stress dependence of the relaxation times. Implementing the model in a finite element mechanics code, we derive the mechanical properties and the local mechanical fields at the beginning of the non-linear regime. The model predicts a narrowing of the distribution of relaxation times and the storage of a part of the mechanical energy (internal stress) transferred to the material during stretching in this temperature range. We show that the stress field is not spatially correlated under and after loading and follows a Gaussian distribution. In addition, the strain field exhibits shear bands, but the strain distribution is narrow. Hence, most of the mechanical quantities can be calculated analytically, in a very good approximation, with the simple assumption that the strain rate is constant.
Continuous Quantitative Measurements on a Linear Air Track
ERIC Educational Resources Information Center
Vogel, Eric
1973-01-01
Describes the construction and operational procedures of a spark-timing apparatus which is designed to record the back and forth motion of one or two carts on linear air tracks. Applications to measurements of velocity, acceleration, simple harmonic motion, and collision problems are illustrated. (CC)
Impulse measurement using an Arduino
NASA Astrophysics Data System (ADS)
Espindola, P. R.; Cena, C. R.; Alves, D. C. B.; Bozano, D. F.; Goncalves, A. M. B.
2018-05-01
In this paper, we propose a simple experimental apparatus that can measure the force variation over time to study the impulse-momentum theorem. In this proposal, a body attached to a rubber string falls freely from rest until it stretches and changes the linear momentum. During that process the force due to the tension on the rubber string is measured with a load cell by using an Arduino board. We check the instrumental results with the basic concept of impulse, finding the area under the force versus time curve and comparing this with the linear momentum variation estimated from software analysis. The apparatus is presented as a simple and low cost alternative to mechanical physics laboratories.
Sun, Lei; Jin, Hong-Yu; Tian, Run-Tao; Wang, Ming-Juan; Liu, Li-Na; Ye, Liu-Ping; Zuo, Tian-Tian; Ma, Shuang-Cheng
2017-01-01
Analysis of related substances in pharmaceutical chemicals and multi-components in traditional Chinese medicines requires a large number of reference substances to identify the chromatographic peaks accurately, but the reference substances are costly. Thus, the relative retention (RR) method has been widely adopted in pharmacopoeias and the literature for characterizing the HPLC behavior of reference substances that are unavailable. The problem is that it is difficult to reproduce the RR on different columns, due to the error between measured retention time (tR) and predicted tR in some cases. Therefore, it is useful to develop an alternative and simple method for accurate prediction of tR. In the present study, based on the thermodynamic theory of HPLC, a method named linear calibration using two reference substances (LCTRS) was proposed. The method includes three steps: two-point prediction, validation by multiple-point regression, and sequential matching. The tR of compounds on an HPLC column can be calculated from standard retention times and a linear relationship. The method was validated with two medicines on 30 columns. It was demonstrated that the LCTRS method is simple, yet more accurate and more robust on different HPLC columns than the RR method. Hence, quality standards using the LCTRS method are easy to reproduce in different laboratories with a lower cost of reference substances.
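The two-point calibration at the heart of the method can be illustrated with a short function. This is a hypothetical sketch of the idea, not the published LCTRS procedure: the retention times of the two reference substances measured on the current column fix a straight line that maps standard retention times onto predicted retention times for that column.

```python
import numpy as np

def lctrs_predict(t_ref_standard, t_ref_measured, t_standard):
    """Two-point linear calibration of retention times (illustrative sketch).

    t_ref_standard : (s1, s2) retention times of the two references on the standard column
    t_ref_measured : (m1, m2) retention times of the same references on the current column
    t_standard     : standard retention times of the compounds to be predicted
    """
    (s1, s2), (m1, m2) = t_ref_standard, t_ref_measured
    slope = (m2 - m1) / (s2 - s1)
    intercept = m1 - slope * s1
    return slope * np.asarray(t_standard) + intercept
```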
Estimating monotonic rates from biological data using local linear regression.
Olito, Colin; White, Craig R; Marshall, Dustin J; Barneche, Diego R
2017-03-01
Accessing many fundamental questions in biology begins with empirical estimation of simple monotonic rates of underlying biological processes. Across a variety of disciplines, ranging from physiology to biogeochemistry, these rates are routinely estimated from non-linear and noisy time series data using linear regression and ad hoc manual truncation of non-linearities. Here, we introduce the R package LoLinR, a flexible toolkit to implement local linear regression techniques to objectively and reproducibly estimate monotonic biological rates from non-linear time series data, and demonstrate possible applications using metabolic rate data. LoLinR provides methods to easily and reliably estimate monotonic rates from time series data in a way that is statistically robust, facilitates reproducible research and is applicable to a wide variety of research disciplines in the biological sciences. © 2017. Published by The Company of Biologists Ltd.
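LoLinR itself is an R package; as a language-neutral sketch of the underlying idea (not the package's algorithm), the function below estimates a rate at each time point by fitting a kernel-weighted straight line around that point and taking its slope. The Gaussian kernel and bandwidth are arbitrary illustrative choices.

```python
import numpy as np

def local_linear_rates(t, y, bandwidth=5.0):
    """Local linear regression: return the local slope (rate) at each time point."""
    t, y = np.asarray(t, float), np.asarray(y, float)
    rates = np.empty_like(y)
    for i, t0 in enumerate(t):
        w = np.exp(-0.5 * ((t - t0) / bandwidth) ** 2)   # kernel weights
        W = np.diag(w)
        X = np.column_stack([np.ones_like(t), t - t0])   # centred design matrix
        beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y) # weighted least squares
        rates[i] = beta[1]                               # slope = rate at t0
    return rates
```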
NASA Astrophysics Data System (ADS)
Prathap Reddy, K.
2016-11-01
An ‘electrostatic bathtub potential’ is defined and analytical expressions for the time period and amplitude of charged particles in this potential are obtained and compared with simulations. These kinds of potentials are encountered in linear electrostatic ion traps, where the potential along the axis appears like a bathtub. Ion traps are used in basic physics research and mass spectrometry to store ions; these stored ions make oscillatory motion within the confined volume of the trap. Usually these traps are designed and studied using ion optical software, but in this work the bathtub potential is reproduced by making two simple modifications to the harmonic oscillator potential. The addition of a linear ‘k1|x|’ potential makes the simple harmonic potential curve steeper with a sharper turn at the origin, while the introduction of a finite-length zero potential region at the centre reproduces the flat region of the bathtub curve. This whole exercise of modelling a practical experimental situation in terms of a well-known simple physics problem may generate interest among readers.
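The modified potential described above can be explored numerically without ion-optics software. The sketch below assumes one plausible form of the bathtub potential (zero over a flat central region of length `flat`, harmonic plus linear beyond it); the exact functional form and parameters in the paper may differ, so this is only an illustration of the construction.

```python
import numpy as np
from scipy.integrate import solve_ivp

def bathtub_period(E=1.0, m=1.0, k=1.0, k1=0.5, flat=1.0, t_end=60.0):
    """Numerical oscillation period of a particle in an assumed bathtub potential."""
    def force(x):
        s = abs(x) - flat / 2.0
        if s <= 0:
            return 0.0                      # flat bottom of the bathtub
        return -np.sign(x) * (k * s + k1)   # harmonic + linear walls

    v0 = np.sqrt(2.0 * E / m)               # launch from the centre with energy E
    sol = solve_ivp(lambda t, y: [y[1], force(y[0]) / m],
                    (0.0, t_end), [0.0, v0], max_step=1e-3)
    x = sol.y[0]
    up = np.flatnonzero((x[:-1] < 0) & (x[1:] >= 0))   # upward zero crossings
    return np.diff(sol.t[up]).mean()                   # mean time between crossings
```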
NASA Astrophysics Data System (ADS)
Mädler, Thomas
2013-05-01
Perturbations of the linearized vacuum Einstein equations in the Bondi-Sachs formulation of general relativity can be derived from a single master function with spin weight two, which is related to the Weyl scalar Ψ0, and which is determined by a simple wave equation. By utilizing a standard spin representation of tensors on a sphere and two different approaches to solve the master equation, we are able to determine two simple and explicitly time-dependent solutions. Both solutions, of which one is asymptotically flat, comply with the regularity conditions at the vertex of the null cone. For the asymptotically flat solution we calculate the corresponding linearized perturbations, describing all multipoles of spin-2 waves that propagate on a Minkowskian background spacetime. We also analyze the asymptotic behavior of this solution at null infinity using a Penrose compactification and calculate the Weyl scalar Ψ4. Because of its simplicity, the asymptotically flat solution presented here is ideally suited for test bed calculations in the Bondi-Sachs formulation of numerical relativity. It may be considered as a sibling of the Bergmann-Sachs or Teukolsky-Rinne solutions, on spacelike hypersurfaces, for a metric adapted to null hypersurfaces.
Linearized self-consistent quasiparticle GW method: Application to semiconductors and simple metals
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kutepov, A. L.; Oudovenko, V. S.; Kotliar, G.
2017-06-23
We present a code implementing the linearized self-consistent quasiparticle GW method (QSGW) in the LAPW basis. Our approach is based on the linearization of the self-energy around zero frequency, which distinguishes it from existing implementations of the QSGW method. The linearization allows us to use Matsubara frequencies instead of working on the real axis. This results in efficiency gains by switching to the imaginary-time representation in the same way as in the space-time method. The all-electron LAPW basis set eliminates the need for pseudopotentials. We discuss the advantages of our approach, such as its N³ scaling with the system size N, as well as its shortcomings. We apply our approach to study the electronic properties of selected semiconductors, insulators, and simple metals and show that our code produces results very close to previously published QSGW data. Our implementation is a good platform for further many-body diagrammatic resummations such as the vertex-corrected GW approach and the GW+DMFT method.
Relationship between the Arctic oscillation and surface air temperature in multi-decadal time-scale
NASA Astrophysics Data System (ADS)
Tanaka, Hiroshi L.; Tamura, Mina
2016-09-01
In this study, a simple energy balance model (EBM) was integrated in time, considering a hypothetical long-term variability in ice-albedo feedback mimicking the observed multi-decadal temperature variability. A natural variability was superimposed on a linear warming trend due to the increasing radiative forcing of CO2. The result demonstrates that the superposition of the natural variability and the background linear trend can offset each other to produce the warming hiatus for some period. It is also stressed that the rapid warming during 1970-2000 can be explained by the superposition of the natural variability and the background linear trend, at least within the simple model. The key process of the fluctuating planetary albedo on a multi-decadal time scale is investigated using the JRA-55 reanalysis data. It is found that the planetary albedo increased for 1958-1970, decreased for 1970-2000, and increased for 2000-2012, as expected from the simple EBM experiments. The multi-decadal variability in the planetary albedo is compared with the time series of the AO mode and Barents Sea mode of surface air temperature. It is shown that the recent AO negative pattern showing warm Arctic and cold mid-latitudes is in good agreement with planetary albedo change indicating negative anomaly in high latitudes and positive anomaly in mid-latitudes. Moreover, the Barents Sea mode with the warm Barents Sea and cold mid-latitudes shows long-term variability similar to planetary albedo change. Although further studies are needed, the natural variabilities of both the AO mode and Barents Sea mode indicate a possible link to the planetary albedo, as suggested by the simple EBM, in causing the warming hiatus in recent years.
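A generic zero-dimensional energy balance model is enough to reproduce the kind of superposition experiment described above. The sketch below is not the authors' EBM: the parameter values, the sinusoidal albedo variability, and the linear CO2 forcing ramp are all illustrative assumptions.

```python
import numpy as np

def toy_ebm(years=160, dt=0.01, C=8.36e8, S0=1361.0, eps=0.61, sigma=5.67e-8,
            alpha0=0.30, d_alpha=0.005, period=60.0, trend=0.04):
    """Zero-dimensional EBM: multi-decadal albedo cycle + linear CO2 forcing ramp.

    C is an effective heat capacity (J m^-2 K^-1); trend is the forcing increase
    in W m^-2 per year. Returns the time axis and the temperature anomaly.
    """
    n = int(years / dt)
    t = np.arange(n) * dt
    T = np.empty(n)
    T[0] = 288.0
    for i in range(n - 1):
        albedo = alpha0 + d_alpha * np.sin(2 * np.pi * t[i] / period)
        forcing = trend * t[i]                                  # linear CO2 ramp
        net = S0 * (1 - albedo) / 4 - eps * sigma * T[i]**4 + forcing
        T[i + 1] = T[i] + dt * 365.25 * 86400.0 * net / C       # dt is in years
    return t, T - T[0]
```

Depending on the phase of the albedo cycle, the anomaly curve shows accelerated warming in some decades and a hiatus in others, which is the qualitative point made above.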
How Darcy's equation is linked to the linear reservoir at catchment scale
NASA Astrophysics Data System (ADS)
Savenije, Hubert H. G.
2017-04-01
In groundwater hydrology two simple linear equations exist that describe the relation between groundwater flow and the gradient that drives it: Darcy's equation and the linear reservoir. Both equations are empirical at heart: Darcy's equation at the laboratory scale and the linear reservoir at the watershed scale. Although at first sight they show similarity, without having detailed knowledge of the structure of the underlying aquifers it is not trivial to upscale Darcy's equation to the watershed scale. In this paper, a relatively simple connection is provided between the two, based on the assumption that the groundwater system is organized by an efficient drainage network, a mostly invisible pattern that has evolved over geological time scales. This drainage network provides equally distributed resistance to flow along the streamlines that connect the active groundwater body to the stream, much like a leaf is organized to provide all stomata access to moisture at equal resistance.
A Vernacular for Linear Latent Growth Models
ERIC Educational Resources Information Center
Hancock, Gregory R.; Choi, Jaehwa
2006-01-01
In its most basic form, latent growth modeling (latent curve analysis) allows an assessment of individuals' change in a measured variable X over time. For simple linear models, as with other growth models, parameter estimates associated with the a construct (amount of X at a chosen temporal reference point) and b construct (growth in X per unit…
NASA Astrophysics Data System (ADS)
Gao, Xiangyun; An, Haizhong; Fang, Wei; Huang, Xuan; Li, Huajiao; Zhong, Weiqiong; Ding, Yinghui
2014-07-01
The linear regression parameters between two time series can differ with the length of the observation period. If we study the whole period through a sliding window of a short period, the change of the linear regression parameters is a process of dynamic transmission over time. We present a simple and efficient computational scheme: a linear regression patterns transmission algorithm, which transforms linear regression patterns into directed and weighted networks. The linear regression patterns (nodes) are defined by the combination of intervals of the linear regression parameters and the results of the significance testing under different sizes of the sliding window. The transmissions between adjacent patterns are defined as edges, and the weights of the edges are the frequencies of the transmissions. The major patterns, the distance, and the medium in the process of the transmission can be captured. The statistical results of weighted out-degree and betweenness centrality are mapped on timelines, which shows the features of the distribution of the results. Many measurements in different areas that involve two related time series variables could take advantage of this algorithm to characterize the dynamic relationships between the time series from a new perspective.
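A stripped-down version of the scheme is easy to prototype. The sketch below (an illustration with a deliberately coarse pattern definition, not the paper's parameter intervals and multiple window sizes) fits a regression in each sliding window, labels the window by the sign and significance of the slope, and counts transitions between adjacent windows as weighted directed edges.

```python
import numpy as np
from scipy import stats

def pattern_transmission_edges(x, y, window=50):
    """Sliding-window regression patterns and their transition frequencies."""
    edges = {}
    prev = None
    for start in range(len(x) - window + 1):
        res = stats.linregress(x[start:start + window], y[start:start + window])
        pattern = ("up" if res.slope > 0 else "down", res.pvalue < 0.05)
        if prev is not None:
            edges[(prev, pattern)] = edges.get((prev, pattern), 0) + 1
        prev = pattern
    return edges   # keys: (from_pattern, to_pattern); values: transition counts
```

The resulting dictionary can be loaded into any graph library to compute the weighted out-degree and betweenness centrality statistics mentioned above.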
NASA Astrophysics Data System (ADS)
Mechirgui, Monia
The purpose of this project is to implement an optimal control regulator, specifically the linear quadratic regulator (LQR), to control the position of an unmanned aerial vehicle known as a quadrotor. This type of UAV has a symmetrical and simple structure; thus, its control is relatively easy compared to conventional helicopters. Optimal control can be shown to be an ideal controller for reconciling tracking performance and energy consumption. In practice, the linearity requirements are not met, but elaborations of the linear quadratic regulator have been used in many nonlinear applications with good results. The linear quadratic controller used in this thesis is presented in two forms: simple, and adapted to the state of charge of the battery. Based on the traditional structure of the linear quadratic regulator, we introduced a new criterion that relies on the state of charge of the battery in order to optimize energy consumption. This controller is intended to track and maintain the desired trajectory during several maneuvers while minimizing energy consumption. Both the simple and the adapted linear quadratic controllers are implemented in Simulink in discrete time. The model simulates the dynamics and control of a quadrotor. Performance and stability of the system are analyzed with several tests, from simple hover to complex closed-loop trajectories.
A simple approach to optimal control of invasive species.
Hastings, Alan; Hall, Richard J; Taylor, Caz M
2006-12-01
The problem of invasive species and their control is one of the most pressing applied issues in ecology today. We developed simple approaches based on linear programming for determining the optimal removal strategies of different stage or age classes for control of invasive species that are still in a density-independent phase of growth. We illustrate the application of this method to the specific example of invasive Spartina alterniflora in Willapa Bay, WA. For all such systems, linear programming shows in general that the optimal strategy in any time step is to prioritize removal of a single age or stage class. The optimal strategy adjusts which class is the focus of control through time and can be much more cost effective than prioritizing removal of the same stage class each year.
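For a single time step, the kind of linear program involved can be written down in a few lines. The matrix, costs, and target below are made-up illustrative numbers, and the formulation is a simplified one-step version, not the paper's model of Spartina alterniflora; it only shows why the optimum tends to concentrate removal effort on one class.

```python
import numpy as np
from scipy.optimize import linprog

L = np.array([[0.0, 2.0, 5.0],     # fecundities (hypothetical stage-structured matrix)
              [0.6, 0.0, 0.0],     # survival/transition rates
              [0.0, 0.8, 0.9]])
n = np.array([100.0, 50.0, 20.0])   # current stage abundances
cost = np.array([1.0, 3.0, 8.0])    # cost of removing one individual from each class
target = 150.0                      # allowed total population next year

# decision variables r >= 0: individuals removed from each class this year
# constraint: next year's total, w . (n - r), must not exceed the target
w = L.sum(axis=0)                   # contribution of each class to next year's total
res = linprog(c=cost, A_ub=[-w], b_ub=[target - w @ n],
              bounds=list(zip(np.zeros(3), n)))
print(res.x)  # removals concentrate in the class with the best impact per unit cost
```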
Making chaotic behavior in a damped linear harmonic oscillator
NASA Astrophysics Data System (ADS)
Konishi, Keiji
2001-06-01
The present Letter proposes a simple control method which induces chaotic behavior in a damped linear harmonic oscillator. This method is a modification of the scheme proposed by Wang and Chen (IEEE CAS-I 47 (2000) 410), which presents an anti-control method for creating chaotic behavior in discrete-time linear systems. We provide a systematic procedure to design the parameters and sampling period of a feedback controller. Furthermore, we show that our method works well in numerical simulations.
Advanced statistics: linear regression, part I: simple linear regression.
Marill, Keith A
2004-01-01
Simple linear regression is a mathematical technique used to model the relationship between a single independent predictor variable and a single dependent outcome variable. In this, the first of a two-part series exploring concepts in linear regression analysis, the four fundamental assumptions and the mechanics of simple linear regression are reviewed. The most common technique used to derive the regression line, the method of least squares, is described. The reader will be acquainted with other important concepts in simple linear regression, including: variable transformations, dummy variables, relationship to inference testing, and leverage. Simplified clinical examples with small datasets and graphic models are used to illustrate the points. This will provide a foundation for the second article in this series: a discussion of multiple linear regression, in which there are multiple predictor variables.
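A tiny numeric example of the method of least squares mentioned above (toy data, not from the article): the fitted slope is the covariance of x and y divided by the variance of x, and the intercept follows from the sample means.

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

b1 = np.cov(x, y, bias=True)[0, 1] / np.var(x)   # slope: cov(x, y) / var(x)
b0 = y.mean() - b1 * x.mean()                    # intercept from the means
print(b0, b1)                                    # roughly 0 and 2 for these data
```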
An accelerating precursor to predict "time-to-failure" in creep and volcanic eruptions
NASA Astrophysics Data System (ADS)
Hao, Shengwang; Yang, Hang; Elsworth, Derek
2017-09-01
Real-time prediction by monitoring of the evolution of response variables is a central goal in predicting rock failure. A linear relation Ω̇ Ω̈⁻¹ = C(t_f − t) has been developed to describe the time to failure, where Ω represents a response quantity, C is a constant and t_f represents the failure time. Observations from laboratory creep failure experiments and precursors to volcanic eruptions are used to test the validity of the approach. Both cumulative and simple moving window techniques are developed to perform predictions and to illustrate the effects of data selection on the results. Laboratory creep failure experiments on granites show that the linear relation works well during the final approach to failure. For blind prediction, the simple moving window technique is preferred because it always uses the most recent data and excludes effects of early data deviating significantly from the predicted trend. When the predicted results show only small fluctuations, failure is imminent.
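Given a monitored response series, the relation above can be turned into a blind moving-window prediction. The sketch below is only an illustration (the finite differences, window length, and least-squares fit are arbitrary choices, not the authors' exact procedure): the ratio of the first to the second time derivative is fitted against time, and the predicted failure time is the root of that line.

```python
import numpy as np

def predict_failure_time(t, omega, window=20):
    """Estimate t_f from the last `window` samples using the linear relation above."""
    t = np.asarray(t[-window:], float)
    omega = np.asarray(omega[-window:], float)
    d1 = np.gradient(omega, t)                 # dOmega/dt
    d2 = np.gradient(d1, t)                    # d2Omega/dt2
    ratio = d1 / d2                            # equals C*(t_f - t) if the relation holds
    slope, intercept = np.polyfit(t, ratio, 1) # ratio = -C*t + C*t_f
    return -intercept / slope                  # predicted failure time t_f
```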
NASA Astrophysics Data System (ADS)
Arratia, Cristobal
2014-11-01
A simple construction will be shown, which reveals a general property satisfied by the time evolution of a state vector composed of a superposition of orthogonal eigenmodes of a linear dynamical system. This property results from the conservation of the inner product between such state vectors evolving forward and backward in time, and it can be simply evaluated from the state vector and its first and second time derivatives. This provides an efficient way to characterize, instantaneously along any specific phase-space trajectory of the linear system, the relevance of the non-normality of the linearized Navier-Stokes operator to the energy (or any other norm) gain or decay of small perturbations. Examples of this characterization applied to stationary or time-dependent base flows will be shown. CONICYT, Concurso de Apoyo al Retorno de Investigadores del Extranjero, folio 821320055.
Feasibility of Decentralized Linear-Quadratic-Gaussian Control of Autonomous Distributed Spacecraft
NASA Technical Reports Server (NTRS)
Carpenter, J. Russell
1999-01-01
A distributed satellite formation, modeled as an arbitrary number of fully connected nodes in a network, could be controlled using a decentralized controller framework that distributes operations in parallel over the network. For such problems, a solution that minimizes data transmission requirements, in the context of linear-quadratic-Gaussian (LQG) control theory, was given by Speyer. This approach is advantageous because it is non-hierarchical, detected failures gracefully degrade system performance, fewer local computations are required than for a centralized controller, and it is optimal with respect to the standard LQG cost function. Disadvantages of the approach are the need for a fully connected communications network, the total operations performed over all the nodes are greater than for a centralized controller, and the approach is formulated for linear time-invariant systems. To investigate the feasibility of the decentralized approach to satellite formation flying, a simple centralized LQG design for a spacecraft orbit control problem is adapted to the decentralized framework. The simple design uses a fixed reference trajectory (an equatorial, Keplerian, circular orbit), and by appropriate choice of coordinates and measurements is formulated as a linear time-invariant system.
Reliability Analysis of the Gradual Degradation of Semiconductor Devices.
1983-07-20
under the heading of linear models or linear statistical models [3, 4]. We have not used this material in this report. Assuming catastrophic failure when... assuming a catastrophic model. In this treatment we first modify our system loss formula and then proceed to the actual analysis. II. ANALYSIS OF... [table of unit failure times T1, T2, ..., Tn] ...and are easily analyzed by simple linear regression. Since we have assumed a log normal/Arrhenius activation
Does linear separability really matter? Complex visual search is explained by simple search
Vighneshvel, T.; Arun, S. P.
2013-01-01
Visual search in real life involves complex displays with a target among multiple types of distracters, but in the laboratory, it is often tested using simple displays with identical distracters. Can complex search be understood in terms of simple searches? This link may not be straightforward if complex search has emergent properties. One such property is linear separability, whereby search is hard when a target cannot be separated from its distracters using a single linear boundary. However, evidence in favor of linear separability is based on testing stimulus configurations in an external parametric space that need not be related to their true perceptual representation. We therefore set out to assess whether linear separability influences complex search at all. Our null hypothesis was that complex search performance depends only on classical factors such as target-distracter similarity and distracter homogeneity, which we measured using simple searches. Across three experiments involving a variety of artificial and natural objects, differences between linearly separable and nonseparable searches were explained using target-distracter similarity and distracter heterogeneity. Further, simple searches accurately predicted complex search regardless of linear separability (r = 0.91). Our results show that complex search is explained by simple search, refuting the widely held belief that linear separability influences visual search. PMID:24029822
Gog, Simon; Bader, Martin
2008-10-01
The problem of sorting signed permutations by reversals is a well-studied problem in computational biology. The first polynomial time algorithm was presented by Hannenhalli and Pevzner in 1995. The algorithm was improved several times, and nowadays the most efficient algorithm has a subquadratic running time. Simple permutations played an important role in the development of these algorithms. Although the latest result of Tannier et al. does not require simple permutations, the preliminary version of their algorithm as well as the first polynomial time algorithm of Hannenhalli and Pevzner use the structure of simple permutations. More precisely, the latter algorithms require a precomputation that transforms a permutation into an equivalent simple permutation. To the best of our knowledge, all published algorithms for this transformation have at least a quadratic running time. For further investigations on genome rearrangement problems, the existence of a fast algorithm for the transformation could be crucial. Another important task is the back transformation, i.e. if we have a sorting on the simple permutation, transform it into a sorting on the original permutation. Again, the naive approach results in an algorithm with quadratic running time. In this paper, we present a linear time algorithm for transforming a permutation into an equivalent simple permutation, and an O(n log n) algorithm for the back transformation of the sorting sequence.
A linearization time-domain CMOS smart temperature sensor using a curvature compensation oscillator.
Chen, Chun-Chi; Chen, Hao-Wen
2013-08-28
This paper presents an area-efficient time-domain CMOS smart temperature sensor using a curvature compensation oscillator for linearity enhancement with a -40 to 120 °C temperature range operability. The inverter-based smart temperature sensors can substantially reduce the cost and circuit complexity of integrated temperature sensors. However, a large curvature exists on the temperature-to-time transfer curve of the inverter-based delay line and results in poor linearity of the sensor output. For cost reduction and error improvement, a temperature-to-pulse generator composed of a ring oscillator and a time amplifier was used to generate a thermal sensing pulse with a sufficient width proportional to the absolute temperature (PTAT). Then, a simple but effective on-chip curvature compensation oscillator is proposed to simultaneously count and compensate the PTAT pulse with curvature for linearization. With such a simple structure, the proposed sensor possesses an extremely small area of 0.07 mm² in a TSMC 0.35-μm CMOS 2P4M digital process. By using an oscillator-based scheme design, the proposed sensor achieves a fine resolution of 0.045 °C without significantly increasing the circuit area. With the curvature compensation, the inaccuracy of -1.2 to 0.2 °C is achieved in an operation range of -40 to 120 °C after two-point calibration for 14 packaged chips. The power consumption is measured as 23 mW at a sample rate of 10 samples/s.
Is the local linearity of space-time inherited from the linearity of probabilities?
NASA Astrophysics Data System (ADS)
Müller, Markus P.; Carrozza, Sylvain; Höhn, Philipp A.
2017-02-01
The appearance of linear spaces, describing physical quantities by vectors and tensors, is ubiquitous in all of physics, from classical mechanics to the modern notion of local Lorentz invariance. However, as natural as this seems to the physicist, most computer scientists would argue that something like a ‘local linear tangent space’ is not very typical and in fact a quite surprising property of any conceivable world or algorithm. In this paper, we take the perspective of the computer scientist seriously, and ask whether there could be any inherently information-theoretic reason to expect this notion of linearity to appear in physics. We give a series of simple arguments, spanning quantum information theory, group representation theory, and renormalization in quantum gravity, that supports a surprising thesis: namely, that the local linearity of space-time might ultimately be a consequence of the linearity of probabilities. While our arguments involve a fair amount of speculation, they have the virtue of being independent of any detailed assumptions on quantum gravity, and they are in harmony with several independent recent ideas on emergent space-time in high-energy physics.
Carvalho, Carlos; Gomes, Danielo G.; Agoulmine, Nazim; de Souza, José Neuman
2011-01-01
This paper proposes a method based on multivariate spatial and temporal correlation to improve prediction accuracy in data reduction for Wireless Sensor Networks (WSN). Prediction of data not sent to the sink node is a technique used to save energy in WSNs by reducing the amount of data traffic. However, it may not be very accurate. Simulations were made involving simple linear regression and multiple linear regression functions to assess the performance of the proposed method. The results show a higher correlation between gathered inputs when compared to time, which is an independent variable widely used for prediction and forecasting. Prediction accuracy is lower when simple linear regression is used, whereas multiple linear regression is the most accurate one. In addition to that, our proposal outperforms some current solutions by about 50% in humidity prediction and 21% in light prediction. To the best of our knowledge, we believe that we are probably the first to address prediction based on multivariate correlation for WSN data reduction. PMID:22346626
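To make the simple-versus-multiple comparison concrete, the toy sketch below predicts a humidity series from time alone and from correlated temperature and light readings. The synthetic data and the error metric are placeholders, not the study's sensor data.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(100.0)
temp = 20 + 0.05 * t + rng.normal(0, 0.3, 100)
light = 300 + 2.0 * t + rng.normal(0, 5.0, 100)
humid = 40 + 0.8 * temp + 0.01 * light + rng.normal(0, 0.5, 100)

def fitted(predictors, y):
    """Ordinary least-squares fit; returns in-sample predictions."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return X @ beta

err_simple = np.abs(humid - fitted([t], humid)).mean()            # time only
err_multi = np.abs(humid - fitted([temp, light], humid)).mean()   # multivariate
print(err_simple, err_multi)  # the multivariate model should give the smaller error
```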
Some Questions Concerning the Standards of External Examinations.
ERIC Educational Resources Information Center
Kahn, Michael J.
1990-01-01
Variance as a function of time is described for the Cambridge Local Examinations Syndicate's examination standards, with emphasis on the performance of candidates from Botswana and Zimbabwe. Results demonstrate the value of simple linear modeling in extracting performance trends for a range of subjects over time across six countries. (TJH)
Ortiz-Rascón, E; Bruce, N C; Rodríguez-Rosales, A A; Garduño-Mejía, J
2016-03-01
We describe the behavior of linearity in diffuse imaging by evaluating the differences between time-resolved images produced by photons arriving at the detector at different times. Two approaches are considered: Monte Carlo simulations and experimental results. The images of two completely opaque bars embedded in a transparent or in a turbid medium with a slab geometry are analyzed; the optical properties of the turbid medium sample are close to those of breast tissue. A simple linearity test was designed involving a direct comparison between the intensity profile produced by two bars scanned at the same time and the intensity profile obtained by adding two profiles of each bar scanned one at a time. It is shown that the linearity improves substantially when short time of flight photons are used in the imaging process, but even then the nonlinear behavior prevails. As the edge response function (ERF) has been used widely for testing the spatial resolution in imaging systems, the main implication of a time-dependent linearity is the weakness of the linearity assumption when evaluating the spatial resolution through the ERF in diffuse imaging systems, and the need to evaluate the spatial resolution by other methods.
Real-time biodetection using a smartphone-based dual-color surface plasmon resonance sensor
NASA Astrophysics Data System (ADS)
Liu, Qiang; Yuan, Huizhen; Liu, Yun; Wang, Jiabin; Jing, Zhenguo; Peng, Wei
2018-04-01
We proposed a compact and cost-effective red-green dual-color fiber optic surface plasmon resonance (SPR) sensor based on the smartphone. Inherent color selectivity of phone cameras was utilized for real-time monitoring of red and green color channels simultaneously, which can reduce the chance of false detection and improve the sensitivity. Because there are no external prisms, complex optical lenses, or diffraction grating, simple optical configuration is realized. It has a linear response in a refractive index range of 1.326 to 1.351 (R² = 0.991) with a resolution of 2.3 × 10⁻⁴ RIU. We apply it for immunoglobulin G (IgG) concentration measurement. Experimental results demonstrate that a linear SPR response was achieved for IgG concentrations varying from 0.02 to 0.30 mg/ml with good repeatability. It may find promising applications in the fields of public health and environment monitoring owing to its simple optics design and applicability in real-time, label-free biodetection.
Communication: Symmetrical quasi-classical analysis of linear optical spectroscopy
NASA Astrophysics Data System (ADS)
Provazza, Justin; Coker, David F.
2018-05-01
The symmetrical quasi-classical approach for propagation of a many degree of freedom density matrix is explored in the context of computing linear spectra. Calculations on a simple two state model for which exact results are available suggest that the approach gives a qualitative description of peak positions, relative amplitudes, and line broadening. Short time details in the computed dipole autocorrelation function result in exaggerated tails in the spectrum.
Thermo-optical dynamics in an optically pumped Photonic Crystal nano-cavity.
Brunstein, M; Braive, R; Hostein, R; Beveratos, A; Robert-Philip, I; Sagnes, I; Karle, T J; Yacomotti, A M; Levenson, J A; Moreau, V; Tessier, G; De Wilde, Y
2009-09-14
Linear and non-linear thermo-optical dynamical regimes were investigated in a photonic crystal cavity. First, we have measured the thermal relaxation time in an InP-based nano-cavity with quantum dots in the presence of optical pumping. The experimental method presented here allows one to obtain the dynamics of temperature in a nanocavity based on reflectivity measurements of a cw probe beam coupled through an adiabatically tapered fiber. Characteristic times of 1.0 ± 0.2 μs and 0.9 ± 0.2 μs for the heating and the cooling processes were obtained. Finally, thermal dynamics were also investigated in a thermo-optical bistable regime. Switch-on/off times of 2 μs and 4 μs, respectively, were measured, which could be explained in terms of a simple non-linear dynamical representation.
We use a simple nitrogen budget model to analyze concentrations of total nitrogen (TN) in estuaries for which both nitrogen inputs and water residence time are correlated with freshwater inflow rates. While the nitrogen concentration of an estuary varies linearly with TN loading ...
Relating Time-Dependent Acceleration and Height Using an Elevator
ERIC Educational Resources Information Center
Kinser, Jason M.
2015-01-01
A simple experiment in relating a time-dependent linear acceleration function to height is explored through the use of a smartphone and an elevator. Given acceleration as a function of time, a(t), the velocity function and position function are determined through integration as v(t) = ∫ a(t) dt and x(t) = ∫ v(t) dt. Mobile devices such as…
ERIC Educational Resources Information Center
Knechtle, Beat; Wirth, Andrea; Baumann, Barbara; Knechtle, Patrizia; Rosemann, Thomas
2010-01-01
We studied male and female nonprofessional Ironman triathletes to determine whether percent body fat, training, and/or previous race experience were associated with race performance. We used simple linear regression analysis, with total race time as the dependent variable, to investigate the relationship among athletes' percent body fat, average…
Measurements of the thickness of in-place concrete with microwave reflection.
DOT National Transportation Integrated Search
1988-01-01
Previous microwave reflection measurements made on simple, unreinforced concrete blocks have shown that the transit time of a microwave through concrete is linearly related to its thickness. In this study measurements were conducted on concrete slabs...
A simple strategy for varying the restart parameter in GMRES(m)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baker, A H; Jessup, E R; Kolev, T V
2007-10-02
When solving a system of linear equations with the restarted GMRES method, a fixed restart parameter is typically chosen. We present numerical experiments that demonstrate the beneficial effects of changing the value of the restart parameter in each restart cycle on the total time to solution. We propose a simple strategy for varying the restart parameter and provide some heuristic explanations for its effectiveness based on analysis of the symmetric case.
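One minimal way to experiment with the strategy is to drive a library GMRES one restart cycle at a time while stepping through a schedule of restart values. The sketch below uses SciPy and assumes that gmres with maxiter=1 and restart=m performs a single m-step cycle; the particular schedule is arbitrary and is not the heuristic proposed in the report.

```python
import numpy as np
from scipy.sparse.linalg import gmres

def gmres_varying_restart(A, b, restarts=(10, 20, 30), tol=1e-8, max_cycles=200):
    """Restarted GMRES whose restart parameter changes every cycle (sketch)."""
    x = np.zeros_like(b)
    for cycle in range(max_cycles):
        m = restarts[cycle % len(restarts)]            # pick this cycle's restart value
        x, info = gmres(A, b, x0=x, restart=m, maxiter=1)
        if np.linalg.norm(b - A @ x) <= tol * np.linalg.norm(b):
            return x, cycle + 1                        # converged
    return x, max_cycles
```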
Conceptual models governing leaching behavior and their long-term predictive capability
Claassen, Hans C.
1981-01-01
Six models that may be used to describe the interaction of radioactive waste solids with aqueous solutions are as follows: (1) simple linear mass transfer; (2) simple parabolic mass transfer; (3) parabolic mass transfer with the formation of a diffusion-limiting surface layer at an arbitrary time; (4) initial parabolic mass transfer followed by linear mass transfer at an arbitrary time; (5) parabolic (or linear) mass transfer and concomitant surface sorption; and (6) parabolic (or linear) mass transfer and concomitant chemical precipitation. Some of these models lead to either illogical or unrealistic predictions when published data are extrapolated to long times. These predictions result because most data result from short-term experimentation. Probably for longer times, processes will occur that have not been observed in the shorter experiments. This hypothesis has been verified by mass-transfer data from laboratory experiments using natural volcanic glass to predict the composition of groundwater. That such rate-limiting mechanisms do occur is reassuring, although now it is not possible to deduce a single mass-transfer limiting mechanism that could control the solution concentration of all components of all waste forms being investigated. Probably the most reasonable mechanisms are surface sorption and chemical precipitation of the species of interest. Another is limiting of mass transfer by chemical precipitation on the waste form surface of a substance not containing the species of interest, that is, presence of a diffusion-limiting layer. The presence of sorption and chemical precipitation as factors limiting mass transfer has been verified in natural groundwater systems, whereas the diffusion-limiting mechanism has not been verified yet.
Ritchie, J Brendan; Carlson, Thomas A
2016-01-01
A fundamental challenge for cognitive neuroscience is characterizing how the primitives of psychological theory are neurally implemented. Attempts to meet this challenge are a manifestation of what Fechner called "inner" psychophysics: the theory of the precise mapping between mental quantities and the brain. In his own time, inner psychophysics remained an unrealized ambition for Fechner. We suggest that, today, multivariate pattern analysis (MVPA), or neural "decoding," methods provide a promising starting point for developing an inner psychophysics. A cornerstone of these methods is the simple linear classifier applied to neural activity in high-dimensional activation spaces. We describe an approach to inner psychophysics based on the shared architecture of linear classifiers and observers under decision boundary models such as signal detection theory. Under this approach, distance from a decision boundary through activation space, as estimated by linear classifiers, can be used to predict reaction time in accordance with signal detection theory and distance-to-bound models of reaction time. Our "neural distance-to-bound" approach is potentially quite general and simple to implement. Furthermore, our recent work on visual object recognition suggests it is empirically viable. We believe the approach constitutes an important step along the path to an inner psychophysics that links mind, brain, and behavior.
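The core of the distance-to-bound idea can be expressed in a few lines of analysis code. The data below are random placeholders standing in for trial-by-trial neural patterns and reaction times; the classifier choice and correlation test are illustrative, not the authors' exact pipeline.

```python
import numpy as np
from sklearn.svm import LinearSVC
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))                      # trials x features (e.g., voxels)
y = (X[:, 0] + 0.5 * rng.normal(size=200)) > 0      # two stimulus categories
rt = 600 - 80 * np.abs(X[:, 0]) + 30 * rng.normal(size=200)   # fake reaction times (ms)

clf = LinearSVC(C=1.0, max_iter=10000).fit(X, y)    # linear decision boundary
dist = clf.decision_function(X)                     # signed distance to the boundary
rho, p = spearmanr(np.abs(dist), rt)                # prediction: farther = faster (negative rho)
print(rho, p)
```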
Shortcuts to adiabaticity from linear response theory
Acconcia, Thiago V.; Bonança, Marcus V. S.; Deffner, Sebastian
2015-10-23
A shortcut to adiabaticity is a finite-time process that produces the same final state as would result from infinitely slow driving. We show that such shortcuts can be found for weak perturbations from linear response theory. Moreover, with the help of phenomenological response functions, a simple expression for the excess work is found, quantifying the nonequilibrium excitations. For two specific examples, i.e., the quantum parametric oscillator and the spin 1/2 in a time-dependent magnetic field, we show that finite-time zeros of the excess work indicate the existence of shortcuts. We finally propose a degenerate family of protocols, which facilitates shortcuts to adiabaticity for specific and very short driving times.
Kelvin-Voigt model of wave propagation in fragmented geomaterials with impact damping
NASA Astrophysics Data System (ADS)
Khudyakov, Maxim; Pasternak, Elena; Dyskin, Arcady
2017-04-01
When a wave propagates through real materials, energy dissipation occurs. The effect of energy loss in homogeneous materials can be accounted for by simple viscous models. However, a reliable model representing the effect in fragmented geomaterials has not been established yet, mainly because of the mechanism by which vibrations are transmitted between the elements (fragments) of these materials. It is hypothesised that, in the process of oscillation, the fragments strike against each other and the impacts lead to the energy loss. We assume that the energy loss is well represented by the restitution coefficient. The principal element of this concept is the interaction of two adjacent blocks. We model it by a simple linear oscillator (a mass on an elastic spring) with an additional condition: each time the system travels through the neutral point, where the displacement is equal to zero, the velocity is multiplied by the restitution coefficient, which characterises an impact of the fragments. This additional condition renders the system non-linear. We show that the behaviour of such a model, averaged over times much larger than the system period, can approximately be represented by a conventional linear oscillator with linear damping characterised by a damping coefficient expressible through the restitution coefficient. Based on this, wave propagation at times considerably greater than the resonance period of oscillations of the neighbouring blocks can be modelled using the Kelvin-Voigt model. The wave velocities and the dispersion relations are obtained.
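A minimal time-stepping sketch of the mechanism described above, with arbitrary parameter values: a linear mass-spring oscillator whose velocity is multiplied by an assumed restitution coefficient every time the displacement changes sign.

    import numpy as np

    omega, e = 2.0 * np.pi, 0.8          # natural frequency (rad/s) and restitution coefficient (assumed)
    dt, t_end = 1.0e-4, 10.0
    x, v = 1.0, 0.0                      # initial displacement and velocity

    xs = []
    t = 0.0
    while t < t_end:
        # semi-implicit Euler step for x'' = -omega^2 * x
        v -= omega**2 * x * dt
        x_new = x + v * dt
        if x * x_new < 0.0:              # crossing the neutral point: impact of the fragments
            v *= e                       # velocity reduced by the restitution coefficient
        x = x_new
        xs.append(x)
        t += dt

    print("amplitude over the final second:", np.max(np.abs(xs[-int(1.0 / dt):])))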
Electro-Optic Beam Steering Using Non-Linear Organic Materials
1993-08-01
York (SUNY), Buffalo, for potential application to the Hughes electro-optic beam deflector device. Evaluations include electro-optic coefficient...response time, transmission, and resistivity. Electro-optic coefficient measurements were made at 633 nm using a simple reflection technique. The
Cannibalism and Chaos in the Classroom
ERIC Educational Resources Information Center
Abernethy, Gavin M.; McCartney, Mark
2017-01-01
Two simple discrete-time models of mutation-induced cannibalism are introduced and investigated, one linear and one nonlinear. Both form the basis for possible classroom activities and independent investigative study. A range of classroom exercises are provided, along with suggestions for further investigations.
NASA Astrophysics Data System (ADS)
Singh, Trailokyanath; Mishra, Pandit Jagatananda; Pattanayak, Hadibandhu
2017-12-01
In this paper, an economic order quantity (EOQ) inventory model for a deteriorating item is developed with the following characteristics: (i) The demand rate is deterministic and two-staged, i.e., it is constant in first part of the cycle and linear function of time in the second part. (ii) Deterioration rate is time-proportional. (iii) Shortages are not allowed to occur. The optimal cycle time and the optimal order quantity have been derived by minimizing the total average cost. A simple solution procedure is provided to illustrate the proposed model. The article concludes with a numerical example and sensitivity analysis of various parameters as illustrations of the theoretical results.
LLSURE: local linear SURE-based edge-preserving image filtering.
Qiu, Tianshuang; Wang, Aiqi; Yu, Nannan; Song, Aimin
2013-01-01
In this paper, we propose a novel approach for performing high-quality edge-preserving image filtering. Based on a local linear model and using the principle of Stein's unbiased risk estimate as an estimator for the mean squared error from the noisy image only, we derive a simple explicit image filter which can filter out noise while preserving edges and fine-scale details. Moreover, this filter has a fast and exact linear-time algorithm whose computational complexity is independent of the filtering kernel size; thus, it can be applied to real time image processing tasks. The experimental results demonstrate the effectiveness of the new filter for various computer vision applications, including noise reduction, detail smoothing and enhancement, high dynamic range compression, and flash/no-flash denoising.
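The flavor of such a local linear filter can be sketched as below; this follows the guided-filter style formulation of a per-window model q = a*p + b and omits the SURE-based choice of the regularization parameter, which is the actual contribution of the paper. The window size and eps are placeholders.

    import numpy as np
    from scipy.ndimage import uniform_filter

    def local_linear_filter(img, radius=3, eps=1e-2):
        """Edge-preserving smoothing with a per-window linear model q = a*p + b."""
        size = 2 * radius + 1
        mean_p = uniform_filter(img, size)
        mean_pp = uniform_filter(img * img, size)
        var_p = mean_pp - mean_p**2

        a = var_p / (var_p + eps)            # slope: near 1 at edges (high variance), near 0 in flat areas
        b = (1.0 - a) * mean_p               # intercept
        mean_a = uniform_filter(a, size)     # average the overlapping window estimates
        mean_b = uniform_filter(b, size)
        return mean_a * img + mean_b

    noisy = np.random.default_rng(0).normal(0.5, 0.1, (64, 64))
    smoothed = local_linear_filter(noisy)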
ERIC Educational Resources Information Center
van der Linden, Wim J.
2011-01-01
A critical component of test speededness is the distribution of the test taker's total time on the test. A simple set of constraints on the item parameters in the lognormal model for response times is derived that can be used to control the distribution when assembling a new test form. As the constraints are linear in the item parameters, they can…
Algorithms for Brownian first-passage-time estimation
NASA Astrophysics Data System (ADS)
Adib, Artur B.
2009-09-01
A class of algorithms in discrete space and continuous time for Brownian first-passage-time estimation is considered. A simple algorithm is derived that yields exact mean first-passage times (MFPTs) for linear potentials in one dimension, regardless of the lattice spacing. When applied to nonlinear potentials and/or higher spatial dimensions, numerical evidence suggests that this algorithm yields MFPT estimates that either outperform or rival Langevin-based (discrete time and continuous space) estimates.
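A rough Monte Carlo illustration of discrete-space, continuous-time first-passage estimation for a linear potential; the hop-rate construction and all parameter values are our own choices for illustration and are not claimed to reproduce the exact algorithm of the paper.

    import numpy as np

    rng = np.random.default_rng(1)
    D, h, F = 1.0, 0.1, 2.0          # diffusion constant, lattice spacing, constant force (U = -F*x, kT = 1)
    x0, x_abs = 0.0, 1.0             # start position and absorbing boundary

    def fpt_sample():
        x, t = x0, 0.0
        # detailed-balance hop rates for the linear potential U(x) = -F*x
        r_plus = (D / h**2) * np.exp(+F * h / 2.0)
        r_minus = (D / h**2) * np.exp(-F * h / 2.0)
        r_tot = r_plus + r_minus
        while x < x_abs:
            t += rng.exponential(1.0 / r_tot)          # continuous-time waiting step
            x += h if rng.random() < r_plus / r_tot else -h
        return t

    mfpt = np.mean([fpt_sample() for _ in range(2000)])
    print("estimated mean first-passage time:", mfpt)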
Timber marking costs in spruce-fir: experience on the Penobscot Experimental Forest
Paul E. Sendak
2002-01-01
In the application of partial harvests, time needs to be allocated to marking trees to be cut. On the Penobscot Experimental Forest located in Maine, eight major experimental treatments have been applied to northern conifer stands for more than 40 yr. Data recorded at the time of marking were used to estimate the time required to mark trees for harvest. A simple linear...
Uplink Packet-Data Scheduling in DS-CDMA Systems
NASA Astrophysics Data System (ADS)
Choi, Young Woo; Kim, Seong-Lyun
In this letter, we consider the uplink packet scheduling for non-real-time data users in a DS-CDMA system. As an effort to jointly optimize throughput and fairness, we formulate a time-span minimization problem incorporating the time-multiplexing of different simultaneous transmission schemes. Based on simple rules, we propose efficient scheduling algorithms and compare them with the optimal solution obtained by linear programming.
More memory under evolutionary learning may lead to chaos
NASA Astrophysics Data System (ADS)
Diks, Cees; Hommes, Cars; Zeppini, Paolo
2013-02-01
We show that an increase of memory of past strategy performance in a simple agent-based innovation model, with agents switching between costly innovation and cheap imitation, can be quantitatively stabilising while at the same time qualitatively destabilising. As memory in the fitness measure increases, the amplitude of price fluctuations decreases, but at the same time a bifurcation route to chaos may arise. The core mechanism leading to the chaotic behaviour in this model with strategy switching is that the map obtained for the system with memory is a convex combination of an increasing linear function and a decreasing non-linear function.
Design and analysis of linear cascade DNA hybridization chain reactions using DNA hairpins
NASA Astrophysics Data System (ADS)
Bui, Hieu; Garg, Sudhanshu; Miao, Vincent; Song, Tianqi; Mokhtar, Reem; Reif, John
2017-01-01
DNA self-assembly has been employed non-conventionally to construct nanoscale structures and dynamic nanoscale machines. The technique of hybridization chain reactions by triggered self-assembly has been shown to form various interesting nanoscale structures ranging from simple linear DNA oligomers to dendritic DNA structures. Inspired by earlier triggered self-assembly works, we present a system for controlled self-assembly of linear cascade DNA hybridization chain reactions using nine distinct DNA hairpins. NUPACK is employed to assist in designing DNA sequences and Matlab has been used to simulate DNA hairpin interactions. Gel electrophoresis and ensemble fluorescence reaction kinetics data indicate strong evidence of linear cascade DNA hybridization chain reactions. The half-time completion of the proposed linear cascade reactions indicates a linear dependency on the number of hairpins.
Kwan, Johnny S H; Kung, Annie W C; Sham, Pak C
2011-09-01
Selective genotyping can increase power in quantitative trait association. One example of selective genotyping is two-tail extreme selection, but simple linear regression analysis gives a biased genetic effect estimate. Here, we present a simple correction for the bias.
Practical Session: Simple Linear Regression
NASA Astrophysics Data System (ADS)
Clausel, M.; Grégoire, G.
2014-12-01
Two exercises are proposed to illustrate simple linear regression. The first one is based on the famous Galton data set on heredity. We use the lm R command and get coefficient estimates, the residual standard error, R2, residuals… In the second example, devoted to data related to the vapor tension of mercury, we fit a simple linear regression, predict values, and anticipate multiple linear regression. This practical session is an excerpt from practical exercises proposed by A. Dalalyan at ENPC (see Exercises 1 and 2 of http://certis.enpc.fr/~dalalyan/Download/TP_ENPC_4.pdf).
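For readers without R at hand, the first exercise can be reproduced in a few lines (with synthetic data standing in for Galton's): fit the line, report the coefficients, residual standard error and R2, and predict at a new point.

    import numpy as np

    rng = np.random.default_rng(42)
    x = rng.uniform(60, 75, 100)                   # hypothetical predictor (e.g., parent height)
    y = 25.0 + 0.65 * x + rng.normal(0, 2.0, 100)  # hypothetical response

    X = np.column_stack([np.ones_like(x), x])
    (beta0, beta1), *_ = np.linalg.lstsq(X, y, rcond=None)

    resid = y - (beta0 + beta1 * x)
    dof = len(x) - 2
    sigma = np.sqrt(np.sum(resid**2) / dof)        # residual standard error
    r2 = 1.0 - np.sum(resid**2) / np.sum((y - y.mean())**2)

    print(f"intercept={beta0:.3f} slope={beta1:.3f} sigma={sigma:.3f} R2={r2:.3f}")
    print("prediction at x=70:", beta0 + beta1 * 70.0)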
High linearity current communicating passive mixer employing a simple resistor bias
NASA Astrophysics Data System (ADS)
Rongjiang, Liu; Guiliang, Guo; Yuepeng, Yan
2013-03-01
A high linearity current communicating passive mixer including the mixing cell and transimpedance amplifier (TIA) is introduced. It employs the resistor in the TIA to reduce the source voltage and the gate voltage of the mixing cell. The optimum linearity and the maximum symmetric switching operation are obtained at the same time. The mixer is implemented in a 0.25 μm CMOS process. The test shows that it achieves an input third-order intercept point of 13.32 dBm, conversion gain of 5.52 dB, and a single sideband noise figure of 20 dB.
Conduction cooling systems for linear accelerator cavities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kephart, Robert
A conduction cooling system for linear accelerator cavities. The system conducts heat from the cavities to a refrigeration unit using at least one cavity cooler interconnected with a cooling connector. The cavity cooler and cooling connector are both made from solid material having a very high thermal conductivity of approximately 1×10^4 W m^-1 K^-1 at temperatures of approximately 4 degrees K. This allows for very simple and effective conduction of waste heat from the linear accelerator cavities to the cavity cooler, along the cooling connector, and thence to the refrigeration unit.
Acceleration and Velocity Sensing from Measured Strain
NASA Technical Reports Server (NTRS)
Pak, Chan-Gi; Truax, Roger
2015-01-01
A simple approach for computing the acceleration and velocity of a structure from the strain is proposed in this study. First, the deflection and slope of the structure are computed from the strain using a two-step theory. Frequencies of the structure are computed from the time histories of strain using a parameter estimation technique together with an autoregressive moving average model. From the deflection, slope, and frequencies of the structure, the acceleration and velocity can be obtained using the proposed approach. Simple harmonic motion is assumed for the acceleration computations, and the central difference equation with a linear autoregressive model is used for the velocity computations. A cantilevered rectangular wing model is used to validate the simple approach. The quality of the computed deflection, acceleration, and velocity values is independent of the number of fibers. The central difference equation with a linear autoregressive model proposed in this study follows the target response with reasonable accuracy. Therefore, the handicap of the backward difference equation, phase shift, is successfully overcome.
Linear and nonlinear mechanical properties of a series of epoxy resins
NASA Technical Reports Server (NTRS)
Curliss, D. B.; Caruthers, J. M.
1987-01-01
The linear viscoelastic properties have been measured for a series of bisphenol-A-based epoxy resins cured with the diamine DDS. The linear viscoelastic master curves were constructed via time-temperature superposition of frequency dependent G-prime and G-double-prime isotherms. The G-double-prime master curves exhibited two sub-Tg transitions. Superposition of isotherms in the glass-to-rubber transition (i.e., alpha) and the beta transition at -60 C was achieved by simple horizontal shifts in the log frequency axis; however, in the region between alpha and beta, superposition could not be effected by simple horizontal shifts along the log frequency axis. The different temperature dependency of the alpha and beta relaxation mechanisms causes a complex response of G-double-prime in the so called alpha-prime region. A novel numerical procedure has been developed to extract the complete relaxation spectra and its temperature dependence from the G-prime and G-double-prime isothermal data in the alpha-prime region.
Mental chronometry with simple linear regression.
Chen, J Y
1997-10-01
Typically, mental chronometry is performed by introducing an independent variable postulated to affect selectively some stage of a presumed multistage process. However, the effect could be a global one that spreads proportionally over all stages of the process. Currently, there is no method to test this possibility, although simple linear regression might serve the purpose. In the present study, the regression approach was tested with tasks (memory scanning and mental rotation) that, according to the dominant theories, involved a selective effect and with a task (word superiority effect) that involved a global effect. The results indicate (1) the manipulation of the size of a memory set or of angular disparity affects the intercept of the regression function that relates the times for memory scanning with different set sizes or for mental rotation with different angular disparities and (2) the manipulation of context affects the slope of the regression function that relates the times for detecting a target character under word and nonword conditions. These results support the regression approach as a useful method for doing mental chronometry.
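The distinction can be illustrated with a toy numerical example (entirely synthetic): regressing the response times of a manipulated condition on those of a baseline condition yields an intercept shift when the manipulation adds a constant stage duration, and a slope change when it scales all stages proportionally.

    import numpy as np

    rng = np.random.default_rng(0)
    rt_base = 400.0 + 60.0 * np.arange(1, 7) + rng.normal(0, 5, 6)   # baseline RTs (ms), e.g. set sizes 1..6

    rt_selective = rt_base + 80.0        # selective effect: one stage lengthened by a constant
    rt_global = rt_base * 1.25           # global effect: all stages slowed proportionally

    def fit(y, x):
        slope, intercept = np.polyfit(x, y, 1)
        return intercept, slope

    print("selective:", fit(rt_selective, rt_base))   # intercept ~ +80, slope ~ 1
    print("global:   ", fit(rt_global, rt_base))      # intercept ~ 0,  slope ~ 1.25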
Wolff, Sebastian; Bucher, Christian
2013-01-01
This article presents asynchronous collision integrators and a simple asynchronous method treating nodal restraints. Asynchronous discretizations allow individual time step sizes for each spatial region, improving the efficiency of explicit time stepping for finite element meshes with heterogeneous element sizes. The article first introduces asynchronous variational integration expressed by drift and kick operators. Linear nodal restraint conditions are solved by a simple projection of the forces that is shown to be equivalent to RATTLE. Unilateral contact is solved by an asynchronous variant of decomposition contact response, in which velocities are modified to avoid penetrations. Although decomposition contact response solves a large system of linear equations (which is critical for the numerical efficiency of explicit time-stepping schemes) and needs special treatment regarding overconstraint and linear dependency of the contact constraints (for example, from double-sided node-to-surface contact or self-contact), the asynchronous strategy handles these situations efficiently and robustly. Only a single constraint involving a very small number of degrees of freedom is considered at once, leading to a very efficient solution. The treatment of friction is exemplified for the Coulomb model. The contact of nodes that are subject to restraints needs special care; together with the aforementioned projection for restraints, a novel efficient solution scheme can be presented. The collision integrator does not influence the critical time step. Hence, the time step can be chosen independently from the underlying time-stepping scheme and may be fixed or time-adaptive. New demands on global collision detection are discussed, exemplified by position codes and node-to-segment integration. Numerical examples illustrate the convergence and efficiency of the new contact algorithm. PMID:23970806
Nonlinear to Linear Elastic Code Coupling in 2-D Axisymmetric Media.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Preston, Leiph
Explosions within the earth nonlinearly deform the local media, but at typical seismological observation distances, the seismic waves can be considered linear. Although nonlinear algorithms can simulate explosions in the very near field well, these codes are computationally expensive and inaccurate at propagating these signals to great distances. A linearized wave propagation code, coupled to a nonlinear code, provides an efficient mechanism to both accurately simulate the explosion itself and to propagate these signals to distant receivers. To this end we have coupled Sandia's nonlinear simulation algorithm CTH to a linearized elastic wave propagation code for 2-D axisymmetric media (axiElasti) by passing information from the nonlinear to the linear code via time-varying boundary conditions. In this report, we first develop the 2-D axisymmetric elastic wave equations in cylindrical coordinates. Next we show how we design the time-varying boundary conditions passing information from CTH to axiElasti, and finally we demonstrate the coupling code via a simple study of the elastic radius.
NASA Technical Reports Server (NTRS)
Scargle, Jeffrey D.
1990-01-01
While chaos arises only in nonlinear systems, standard linear time series models are nevertheless useful for analyzing data from chaotic processes. This paper introduces such a model, the chaotic moving average. This time-domain model is based on the theorem that any chaotic process can be represented as the convolution of a linear filter with an uncorrelated process called the chaotic innovation. A technique, minimum phase-volume deconvolution, is introduced to estimate the filter and innovation. The algorithm measures the quality of a model using the volume covered by the phase-portrait of the innovation process. Experiments on synthetic data demonstrate that the algorithm accurately recovers the parameters of simple chaotic processes. Though tailored for chaos, the algorithm can detect both chaos and randomness, distinguish them from each other, and separate them if both are present. It can also recover nonminimum-delay pulse shapes in non-Gaussian processes, both random and chaotic.
NASA Technical Reports Server (NTRS)
Milman, M. H.
1985-01-01
A factorization approach is presented for deriving approximations to the optimal feedback gain for the linear regulator-quadratic cost problem associated with time-varying functional differential equations with control delays. The approach is based on a discretization of the state penalty which leads to a simple structure for the feedback control law. General properties of the Volterra factors of Hilbert-Schmidt operators are then used to obtain convergence results for the feedback kernels.
An Application of the H-Function to Curve-Fitting and Density Estimation.
1983-12-01
equations into a model that is linear in its coefficients. Nonlinear least squares estimation is a relatively new area developed to accommodate models which...to converge on a solution (10:9-10). For the simple linear model and when general assumptions are made, the Gauss-Markov theorem states that the...distribution. For example, if the analyst wants to model the time between arrivals to a queue for a computer simulation, he infers the true probability
Browning of the landscape of interior Alaska based on 1986-2009 Landsat sensor NDVI
Rebecca A. Baird; David Verbyla; Teresa N. Hollingsworth
2012-01-01
We used a time series of 1986-2009 Landsat sensor data to compute the Normalized Difference Vegetation Index (NDVI) for 30 m pixels within the Bonanza Creek Experimental Forest of interior Alaska. Based on simple linear regression, we found significant (p
Listing triangles in expected linear time on a class of power law graphs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nordman, Daniel J.; Wilson, Alyson G.; Phillips, Cynthia Ann
Enumerating triangles (3-cycles) in graphs is a kernel operation for social network analysis. For example, many community detection methods depend upon finding common neighbors of two related entities. We consider Cohen's simple and elegant solution for listing triangles: give each node a 'bucket.' Place each edge into the bucket of its endpoint of lowest degree, breaking ties consistently. Each node then checks each pair of edges in its bucket, testing for the adjacency that would complete that triangle. Cohen presents an informal argument that his algorithm should run well on real graphs. We formalize this argument by providing an analysis for the expected running time on a class of random graphs, including power law graphs. We consider a rigorously defined method for generating a random simple graph, the erased configuration model (ECM). In the ECM each node draws a degree independently from a marginal degree distribution, endpoints pair randomly, and we erase self loops and multiedges. If the marginal degree distribution has a finite second moment, it follows immediately that Cohen's algorithm runs in expected linear time. Furthermore, it can still run in expected linear time even when the degree distribution has such a heavy tail that the second moment is not finite. We prove that Cohen's algorithm runs in expected linear time when the marginal degree distribution has finite 4/3 moment and no vertex has degree larger than √n. In fact we give the precise asymptotic value of the expected number of edge pairs per bucket. A finite 4/3 moment is required; if it is unbounded, then so is the number of pairs. The marginal degree distribution of a power law graph has bounded 4/3 moment when its exponent α is more than 7/3. Thus for this class of power law graphs, with degree at most √n, Cohen's algorithm runs in expected linear time. This is precisely the value of α for which the clustering coefficient tends to zero asymptotically, and it is in the range that is relevant for the degree distribution of the World-Wide Web.
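A compact sketch of the bucketing idea described above (plain Python on an undirected simple graph; ties broken by node index):

    from collections import defaultdict

    def list_triangles(edges):
        """Cohen-style triangle listing: each edge goes to the bucket of its lower-degree endpoint."""
        adj = defaultdict(set)
        for u, v in edges:
            adj[u].add(v)
            adj[v].add(u)

        def key(n):                       # lower degree first, node id breaks ties consistently
            return (len(adj[n]), n)

        bucket = defaultdict(list)
        for u, v in edges:
            owner = u if key(u) <= key(v) else v
            other = v if owner == u else u
            bucket[owner].append(other)

        triangles = []
        for node, nbrs in bucket.items():
            for i in range(len(nbrs)):
                for j in range(i + 1, len(nbrs)):
                    if nbrs[j] in adj[nbrs[i]]:      # adjacency test completes the triangle
                        triangles.append((node, nbrs[i], nbrs[j]))
        return triangles

    print(list_triangles([(1, 2), (2, 3), (1, 3), (3, 4)]))   # -> [(1, 2, 3)]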
Nonlinear Dynamic Models in Advanced Life Support
NASA Technical Reports Server (NTRS)
Jones, Harry
2002-01-01
To facilitate analysis, ALS systems are often assumed to be linear and time invariant, but they usually have important nonlinear and dynamic aspects. Nonlinear dynamic behavior can be caused by time varying inputs, changes in system parameters, nonlinear system functions, closed loop feedback delays, and limits on buffer storage or processing rates. Dynamic models are usually cataloged according to the number of state variables. The simplest dynamic models are linear, using only integration, multiplication, addition, and subtraction of the state variables. A general linear model with only two state variables can produce all the possible dynamic behavior of linear systems with many state variables, including stability, oscillation, or exponential growth and decay. Linear systems can be described using mathematical analysis. Nonlinear dynamics can be fully explored only by computer simulations of models. Unexpected behavior is produced by simple models having only two or three state variables with simple mathematical relations between them. Closed loop feedback delays are a major source of system instability. Exceeding limits on buffer storage or processing rates forces systems to change operating mode. Different equilibrium points may be reached from different initial conditions. Instead of one stable equilibrium point, the system may have several equilibrium points, oscillate at different frequencies, or even behave chaotically, depending on the system inputs and initial conditions. The frequency spectrum of an output oscillation may contain harmonics and the sums and differences of input frequencies, but it may also contain a stable limit cycle oscillation not related to input frequencies. We must investigate the nonlinear dynamic aspects of advanced life support systems to understand and counter undesirable behavior.
Simple taper: Taper equations for the field forester
David R. Larsen
2017-01-01
"Simple taper" is set of linear equations that are based on stem taper rates; the intent is to provide taper equation functionality to field foresters. The equation parameters are two taper rates based on differences in diameter outside bark at two points on a tree. The simple taper equations are statistically equivalent to more complex equations. The linear...
ERIC Educational Resources Information Center
Nelson, Dean
2009-01-01
Following the Guidelines for Assessment and Instruction in Statistics Education (GAISE) recommendation to use real data, an example is presented in which simple linear regression is used to evaluate the effect of the Montreal Protocol on atmospheric concentration of chlorofluorocarbons. This simple set of data, obtained from a public archive, can…
Analysis of the linearity of half periods of the Lorentz pendulum
NASA Astrophysics Data System (ADS)
Wickramasinghe, T.; Ochoa, R.
2005-05-01
We analyze the motion of the Lorentz pendulum, a simple pendulum whose length is changed at a constant rate k. We show both analytically and numerically that the half period Tn, the time between half oscillations as measured from midpoint to midpoint, increases linearly with the oscillation number n such that Tn+1 − Tn ≈ kπ²/(2g), where g is the acceleration due to gravity. A video camera is used to record the motion of the oscillating bob of the pendulum and verify the linearity of Tn with oscillation number. The theory and the experiment are suitable for an advanced undergraduate laboratory.
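As a quick numerical check of the quoted relation (the lengthening rate k below is an assumed value, not one from the experiment):

    import numpy as np

    k = 0.005          # assumed lengthening rate of the pendulum, m/s
    g = 9.81           # m/s^2

    delta_T = k * np.pi**2 / (2.0 * g)
    print(f"predicted half-period increment: {delta_T * 1000:.2f} ms per half oscillation")
    # ~2.52 ms: after 100 half oscillations the half period has grown by about 0.25 s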
Finite-time mixed outer synchronization of complex networks with coupling time-varying delay.
He, Ping; Ma, Shu-Hua; Fan, Tao
2012-12-01
This article is concerned with the problem of finite-time mixed outer synchronization (FMOS) of complex networks with coupling time-varying delay. FMOS is a recently developed generalized synchronization concept, i.e., in which different state variables of the corresponding nodes can evolve into finite-time complete synchronization, finite-time anti-synchronization, and even amplitude finite-time death simultaneously for an appropriate choice of the controller gain matrix. Some novel stability criteria for the synchronization between drive and response complex networks with coupling time-varying delay are derived using the Lyapunov stability theory and linear matrix inequalities. And a simple linear state feedback synchronization controller is designed as a result. Numerical simulations for two coupled networks of modified Chua's circuits are then provided to demonstrate the effectiveness and feasibility of the proposed complex networks control and synchronization schemes and then compared with the proposed results and the previous schemes for accuracy.
Machining Chatter Analysis for High Speed Milling Operations
NASA Astrophysics Data System (ADS)
Sekar, M.; Kantharaj, I.; Amit Siddhappa, Savale
2017-10-01
Chatter in high speed milling is characterized by time delay differential equations (DDEs). Since closed-form solutions exist only for simple cases, the governing non-linear DDEs of chatter problems are solved by various numerical methods. Custom codes to solve DDEs are tedious to build and implement, and are not error-free and robust. On the other hand, software packages provide solutions to DDEs, but they are not straightforward to implement. In this paper an easy way to solve the DDE of chatter in milling is proposed and implemented with MATLAB. A time domain solution permits the study and modelling of non-linear effects of chatter vibration with ease. Time domain results are presented for various stable and unstable conditions of cut and compared with stability lobe diagrams.
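A stripped-down illustration of time-domain integration of a delay differential equation of the regenerative-chatter type, written here in Python rather than MATLAB: a single degree of freedom, a fixed step, and a delay that is an integer multiple of the step. All parameters are placeholders rather than values from the paper.

    import numpy as np

    wn, zeta = 600.0, 0.03                 # natural frequency (rad/s), damping ratio (assumed)
    g, tau = 0.2 * wn**2, 0.01             # regenerative gain and delay (assumed)
    dt = 1.0e-5
    n_delay = int(round(tau / dt))
    steps = 20000

    x = np.zeros(steps + 1)                # displacement history, x[i] = x(i*dt)
    v = 0.0
    x[0] = 1.0e-4                          # small initial perturbation

    for i in range(steps):
        x_delayed = x[i - n_delay] if i >= n_delay else x[0]   # history before t=0 held constant
        # x'' + 2*zeta*wn*x' + wn^2*x = g*(x(t - tau) - x(t))
        a = -2.0 * zeta * wn * v - wn**2 * x[i] + g * (x_delayed - x[i])
        v += a * dt
        x[i + 1] = x[i] + v * dt

    print("max |x| over the last delay period:", np.abs(x[-n_delay:]).max())  # growth indicates chatter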
A simple two-stage model predicts response time distributions.
Carpenter, R H S; Reddi, B A J; Anderson, A J
2009-08-15
The neural mechanisms underlying reaction times have previously been modelled in two distinct ways. When stimuli are hard to detect, response time tends to follow a random-walk model that integrates noisy sensory signals. But studies investigating the influence of higher-level factors such as prior probability and response urgency typically use highly detectable targets, and response times then usually correspond to a linear rise-to-threshold mechanism. Here we show that a model incorporating both types of element in series - a detector integrating noisy afferent signals, followed by a linear rise-to-threshold performing decision - successfully predicts not only mean response times but, much more stringently, the observed distribution of these times and the rate of decision errors over a wide range of stimulus detectability. By reconciling what previously may have seemed to be conflicting theories, we are now closer to having a complete description of reaction time and the decision processes that underlie it.
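A simple simulation of such a two-stage account (thresholds, noise level and rise rates below are arbitrary, not the fitted values of the paper): a noisy-evidence accumulator for detection followed by a linear rise to a decision bound with a normally distributed rate.

    import numpy as np

    rng = np.random.default_rng(7)

    def one_trial(signal=10.0, noise=1.0, detect_threshold=1.0,
                  decide_threshold=1.0, rate_mean=5.0, rate_sd=1.0, dt=0.001):
        # Stage 1: random-walk integration of noisy sensory samples up to a detection threshold.
        evidence, t = 0.0, 0.0
        while evidence < detect_threshold:
            evidence += signal * dt + noise * np.sqrt(dt) * rng.normal()
            t += dt
        # Stage 2: linear rise to a decision bound with a normally distributed rate (LATER-like).
        rate = max(rng.normal(rate_mean, rate_sd), 1e-6)
        return t + decide_threshold / rate

    rts = np.array([one_trial() for _ in range(5000)])
    print(f"mean RT = {rts.mean():.3f} s, 5th/95th percentiles = "
          f"{np.percentile(rts, 5):.3f}/{np.percentile(rts, 95):.3f} s")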
Simple estimation of linear 1+1 D tsunami run-up
NASA Astrophysics Data System (ADS)
Fuentes, M.; Campos, J. A.; Riquelme, S.
2016-12-01
An analytical expression is derived for the linear run-up of any given initial wave generated over a sloping bathymetry. Due to the simplicity of the linear formulation, complex transformations are unnecessary, because the shoreline motion is obtained directly in terms of the initial wave. This analytical result not only supports the maximum run-up invariance between linear and non-linear theories, but also yields the time evolution of the shoreline motion and velocity. The results exhibit good agreement with the non-linear theory. The present formulation also allows computing the shoreline motion numerically from a customised initial waveform, including non-smooth functions. This is useful for numerical tests, laboratory experiments or realistic cases in which the initial disturbance might be retrieved from seismic data rather than using a theoretical model. It is also shown that the real case studied is consistent with the field observations.
NASA Astrophysics Data System (ADS)
Zhang, Chuan; Wang, Xingyuan; Luo, Chao; Li, Junqiu; Wang, Chunpeng
2018-03-01
In this paper, we focus on the robust outer synchronization problem between two nonlinear complex networks with parametric disturbances and mixed time-varying delays. Firstly, a general complex network model is proposed. Besides the nonlinear couplings, the network model in this paper can possess parametric disturbances, internal time-varying delay, discrete time-varying delay and distributed time-varying delay. Then, according to the robust control strategy, linear matrix inequalities and Lyapunov stability theory, several outer synchronization protocols are strictly derived. Simple linear matrix controllers are designed to drive the response network to synchronize with the drive network. Additionally, our results can be applied to complex networks without parametric disturbances. Finally, by utilizing the delayed Lorenz chaotic system as the dynamics of all nodes, simulation examples are given to demonstrate the effectiveness of our theoretical results.
Correlation and simple linear regression.
Eberly, Lynn E
2007-01-01
This chapter highlights important steps in using correlation and simple linear regression to address scientific questions about the association of two continuous variables with each other. These steps include estimation and inference, assessing model fit, the connection between regression and ANOVA, and study design. Examples in microbiology are used throughout. This chapter provides a framework that is helpful in understanding more complex statistical techniques, such as multiple linear regression, linear mixed effects models, logistic regression, and proportional hazards regression.
Time dependent density functional calculation of plasmon response in clusters
NASA Astrophysics Data System (ADS)
Wang, Feng; Zhang, Feng-Shou; Eric, Suraud
2003-02-01
We have introduced a theoretical scheme for the efficient description of the optical response of a cluster based on the time-dependent density functional theory. The practical implementation is done by means of the fully fledged time-dependent local density approximation scheme, which is solved directly in the time domain without any linearization. As an example we consider the simple Na2 cluster and compute its surface plasmon photoabsorption cross section, which is in good agreement with the experiments.
Shim, You-Shin; Yoon, Won-Jin; Kim, Dong-Man; Watanabe, Masaki; Park, Hyun-Jin; Jang, Hae Won; Lee, Jangho; Ha, Jaeho
2015-01-01
The simple determination method for anthocyanidin aglycones in fruits using ultra-high-performance liquid chromatography (UHPLC) coupled with the heating-block acidic hydrolysis method was validated in terms of precision, accuracy and linearity. The UHPLC separation was performed on a reversed-phase C18 column (particle size 2 μm, i.d. 2 mm, length 100 mm) with a photodiode-array detector. The limits of detection and quantification of the UHPLC analyses were 0.09 and 0.29 mg/kg for delphinidin, 0.08 and 0.24 mg/kg for cyanidin, 0.09 and 0.26 mg/kg for petunidin, 0.14 and 0.42 mg/kg for pelargonidin, 0.16 and 0.48 mg/kg for peonidin and 0.30 and 0.91 mg/kg for malvidin, respectively. The intra- and inter-day precisions of individual anthocyanidin aglycones were <10.3%. All calibration curves exhibited good linearity (r = 0.999) within the tested ranges. The total run time of UHPLC was 8 min. The simple preparation method with UHPLC detection presented herein significantly improved the speed and simplicity of the preparation step for delphinidin, cyanidin, petunidin, pelargonidin, peonidin and malvidin in fruits. In particular, UHPLC detection exhibited good resolution despite a run time about four times shorter than that of conventional HPLC detection.
Wang, Yubo; Veluvolu, Kalyana C
2017-06-14
It is often difficult to analyze biological signals because of their nonlinear and non-stationary characteristics. This necessitates the usage of time-frequency decomposition methods for analyzing the subtle changes in these signals that are often connected to an underlying phenomenon. This paper presents a new approach to analyze the time-varying characteristics of such signals by employing a simple truncated Fourier series model, namely the band-limited multiple Fourier linear combiner (BMFLC). In contrast to earlier designs, we first identified the sparsity imposed on the signal model in order to reformulate it as a sparse linear regression model. The coefficients of the proposed model are then estimated by a convex optimization algorithm. The performance of the proposed method was analyzed with benchmark test signals. An energy ratio metric is employed to quantify the spectral performance, and the results show that the proposed method, Sparse-BMFLC, has a high mean energy ratio (0.9976) and outperforms existing methods such as the short-time Fourier transform (STFT), the continuous Wavelet transform (CWT) and the BMFLC Kalman Smoother. Furthermore, the proposed method provides an overall reconstruction error of 6.22%.
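A bare-bones version of the band-limited Fourier combiner idea, fitted here by ordinary least squares; the paper's contribution of casting the fit as a sparse (L1-penalized) regression solved by convex optimization is only indicated in a comment, and the band, spacing and test signal are arbitrary.

    import numpy as np

    fs, T = 200.0, 4.0
    t = np.arange(0.0, T, 1.0 / fs)
    signal = (1.2 * np.sin(2 * np.pi * 7.3 * t) + 0.5 * np.sin(2 * np.pi * 11.1 * t)
              + 0.1 * np.random.default_rng(3).normal(size=t.size))

    freqs = np.arange(5.0, 15.0 + 1e-9, 0.1)          # band-limited set of candidate frequencies
    basis = np.hstack([np.sin(2 * np.pi * np.outer(t, freqs)),
                       np.cos(2 * np.pi * np.outer(t, freqs))])

    # Least-squares fit of the combiner weights; a Sparse-BMFLC-style fit would instead
    # add an L1 penalty on the coefficients and solve the resulting convex program.
    coeffs, *_ = np.linalg.lstsq(basis, signal, rcond=None)
    reconstruction = basis @ coeffs

    err = np.linalg.norm(signal - reconstruction) / np.linalg.norm(signal)
    print(f"relative reconstruction error: {err:.3e}")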
Simple robust control laws for robot manipulators. Part 2: Adaptive case
NASA Technical Reports Server (NTRS)
Bayard, D. S.; Wen, J. T.
1987-01-01
A new class of asymptotically stable adaptive control laws is introduced for application to the robotic manipulator. Unlike most applications of adaptive control theory to robotic manipulators, this analysis addresses the nonlinear dynamics directly without approximation, linearization, or ad hoc assumptions, and utilizes a parameterization based on physical (time-invariant) quantities. This approach is made possible by using energy-like Lyapunov functions which retain the nonlinear character and structure of the dynamics, rather than simple quadratic forms which are ubiquitous to the adaptive control literature, and which have bound the theory tightly to linear systems with unknown parameters. It is a unique feature of these results that the adaptive forms arise by straightforward certainty equivalence adaptation of their nonadaptive counterparts found in the companion to this paper (i.e., by replacing unknown quantities by their estimates) and that this simple approach leads to asymptotically stable closed-loop adaptive systems. Furthermore, it is emphasized that this approach does not require convergence of the parameter estimates (i.e., via persistent excitation), invertibility of the mass matrix estimate, or measurement of the joint accelerations.
Metric versus observable operator representation, higher spin models
NASA Astrophysics Data System (ADS)
Fring, Andreas; Frith, Thomas
2018-02-01
We elaborate further on the metric representation that is obtained by transferring the time-dependence from a Hermitian Hamiltonian to the metric operator in a related non-Hermitian system. We provide further insight into the procedure of how to employ the time-dependent Dyson relation and the quasi-Hermiticity relation to solve time-dependent Hermitian Hamiltonian systems. By solving both equations separately we argue here that it is in general easier to solve the former. We solve the mutually related time-dependent Schrödinger equation for a Hermitian and non-Hermitian spin 1/2, 1 and 3/2 model with time-independent and time-dependent metric, respectively. In all models the overdetermined coupled system of equations for the Dyson map can be decoupled by algebraic manipulations and reduced to simple linear differential equations and an equation that can be converted into the non-linear Ermakov-Pinney equation.
On the correct use of stepped-sine excitations for the measurement of time-varying bioimpedance.
Louarroudi, E; Sanchez, B
2017-02-01
When a linear time-varying (LTV) bioimpedance is measured using stepped-sine excitations, a compromise must be made: the temporal distortions affecting the data depend on the experimental time, which in turn sets the data accuracy and limits the temporal bandwidth of the system that needs to be measured. Here, the experimental time required to measure linear time-invariant bioimpedance with a specified accuracy is analyzed for different stepped-sine excitation setups. We provide simple equations that allow the reader to know whether LTV bioimpedance can be measured through repeated time-invariant stepped-sine experiments. Bioimpedance technology is on the rise thanks to a plethora of healthcare monitoring applications. The results presented can help to avoid distortions in the data while measuring accurately non-stationary physiological phenomena. The impact of the work presented is broad, including the potential of enhancing bioimpedance studies and healthcare devices using bioimpedance technology.
EMBEDDED LENSING TIME DELAYS, THE FERMAT POTENTIAL, AND THE INTEGRATED SACHS–WOLFE EFFECT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Bin; Kantowski, Ronald; Dai, Xinyu, E-mail: bchen3@fsu.edu
2015-05-01
We derive the Fermat potential for a spherically symmetric lens embedded in a Friedman–Lemaître–Robertson–Walker cosmology and use it to investigate the late-time integrated Sachs–Wolfe (ISW) effect, i.e., secondary temperature fluctuations in the cosmic microwave background (CMB) caused by individual large-scale clusters and voids. We present a simple analytical expression for the temperature fluctuation in the CMB across such a lens as a derivative of the lens' Fermat potential. This formalism is applicable to both linear and nonlinear density evolution scenarios, to arbitrarily large density contrasts, and to all open and closed background cosmologies. It is much simpler to use and makes the same predictions as conventional approaches. In this approach the total temperature fluctuation can be split into a time-delay part and an evolutionary part. Both parts must be included for cosmic structures that evolve and both can be equally important. We present very simple ISW models for cosmic voids and galaxy clusters to illustrate the ease of use of our formalism. We use the Fermat potentials of simple cosmic void models to compare predicted ISW effects with those recently extracted from WMAP and Planck data by stacking large cosmic voids using the aperture photometry method. If voids in the local universe with large density contrasts are no longer evolving we find that the time delay contribution alone predicts values consistent with the measurements. However, we find that for voids still evolving linearly, the evolutionary contribution cancels a significant part of the time delay contribution and results in predicted signals that are much smaller than recently observed.
Assessing the performance of eight real-time updating models and procedures for the Brosna River
NASA Astrophysics Data System (ADS)
Goswami, M.; O'Connor, K. M.; Bhattarai, K. P.; Shamseldin, A. Y.
2005-10-01
The flow forecasting performance of eight updating models, incorporated in the Galway River Flow Modelling and Forecasting System (GFMFS), was assessed using daily data (rainfall, evaporation and discharge) of the Irish Brosna catchment (1207 km2), considering their one to six days lead-time discharge forecasts. The Perfect Forecast of Input over the Forecast Lead-time scenario was adopted, where required, in place of actual rainfall forecasts. The eight updating models were: (i) the standard linear Auto-Regressive (AR) model, applied to the forecast errors (residuals) of a simulation (non-updating) rainfall-runoff model; (ii) the Neural Network Updating (NNU) model, also using such residuals as input; (iii) the Linear Transfer Function (LTF) model, applied to the simulated and the recently observed discharges; (iv) the Non-linear Auto-Regressive eXogenous-Input Model (NARXM), also a neural network-type structure, but having wide options of using recently observed values of one or more of the three data series, together with non-updated simulated outflows, as inputs; (v) the Parametric Simple Linear Model (PSLM), of LTF-type, using recent rainfall and observed discharge data; (vi) the Parametric Linear perturbation Model (PLPM), also of LTF-type, using recent rainfall and observed discharge data, (vii) n-AR, an AR model applied to the observed discharge series only, as a naïve updating model; and (viii) n-NARXM, a naive form of the NARXM, using only the observed discharge data, excluding exogenous inputs. The five GFMFS simulation (non-updating) models used were the non-parametric and parametric forms of the Simple Linear Model and of the Linear Perturbation Model, the Linearly-Varying Gain Factor Model, the Artificial Neural Network Model, and the conceptual Soil Moisture Accounting and Routing (SMAR) model. As the SMAR model performance was found to be the best among these models, in terms of the Nash-Sutcliffe R2 value, both in calibration and in verification, the simulated outflows of this model only were selected for the subsequent exercise of producing updated discharge forecasts. All the eight forms of updating models for producing lead-time discharge forecasts were found to be capable of producing relatively good lead-1 (1-day ahead) forecasts, with R2 values almost 90% or above. However, for higher lead time forecasts, only three updating models, viz., NARXM, LTF, and NNU, were found to be suitable, with lead-6 values of R2 about 90% or higher. Graphical comparisons were made of the lead-time forecasts for the two largest floods, one in the calibration period and the other in the verification period.
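The first of the listed updating schemes is easy to sketch in isolation (synthetic numbers, a one-parameter AR model; this shows only the generic error-forecast idea, not the GFMFS implementation): forecast the simulation error with an AR(1) model fitted to recent residuals and add it to the simulated discharge.

    import numpy as np

    observed = np.array([52.0, 55.0, 61.0, 66.0, 70.0, 73.0])    # hypothetical discharges (m^3/s)
    simulated = np.array([50.0, 52.0, 57.0, 63.0, 68.0, 72.0])   # non-updating model output

    resid = observed - simulated
    # Fit AR(1): e_t = phi * e_{t-1}, by least squares on the residual series.
    phi = np.sum(resid[1:] * resid[:-1]) / np.sum(resid[:-1] ** 2)

    sim_next = 75.0                                  # simulated discharge for the next day (assumed)
    updated_next = sim_next + phi * resid[-1]        # lead-1 updated forecast
    print("phi =", round(phi, 3), " lead-1 updated forecast:", round(updated_next, 2))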
Comparison of heaving buoy and oscillating flap wave energy converters
NASA Astrophysics Data System (ADS)
Abu Bakar, Mohd Aftar; Green, David A.; Metcalfe, Andrew V.; Najafian, G.
2013-04-01
Waves offer an attractive source of renewable energy, with relatively low environmental impact, for communities reasonably close to the sea. Two types of simple wave energy converters (WEC), the heaving buoy WEC and the oscillating flap WEC, are studied. Both WECs are considered as simple energy converters because they can be modelled, to a first approximation, as single degree of freedom linear dynamic systems. In this study, we estimate the response of both WECs to typical wave inputs; wave height for the buoy and corresponding wave surge for the flap, using spectral methods. A nonlinear model of the oscillating flap WEC that includes the drag force, modelled by the Morison equation is also considered. The response to a surge input is estimated by discrete time simulation (DTS), using central difference approximations to derivatives. This is compared with the response of the linear model obtained by DTS and also validated using the spectral method. Bendat's nonlinear system identification (BNLSI) technique was used to analyze the nonlinear dynamic system since the spectral analysis was only suitable for linear dynamic system. The effects of including the nonlinear term are quantified.
Technical notes and correspondence: Stochastic robustness of linear time-invariant control systems
NASA Technical Reports Server (NTRS)
Stengel, Robert F.; Ray, Laura R.
1991-01-01
A simple numerical procedure for estimating the stochastic robustness of a linear time-invariant system is described. Monte Carlo evaluation of the system's eigenvalues allows the probability of instability and the related stochastic root locus to be estimated. This analysis approach treats not only Gaussian parameter uncertainties but also non-Gaussian cases, including uncertain-but-bounded variation. Confidence intervals for the scalar probability of instability address computational issues inherent in Monte Carlo simulation. Trivial extensions of the procedure admit consideration of alternate discriminants; thus, the probabilities that stipulated degrees of instability will be exceeded or that closed-loop roots will leave desirable regions can also be estimated. Results are particularly amenable to graphical presentation.
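A minimal version of the procedure for a toy second-order system with one uncertain parameter (the system, distribution and sample size are arbitrary): sample the parameter, evaluate the closed-loop eigenvalues, and report the estimated probability of instability with a simple binomial confidence interval.

    import numpy as np

    rng = np.random.default_rng(11)
    n_samples = 5000

    def closed_loop_matrix(k):
        # Hypothetical closed-loop system: x1' = x2, x2' = -k*x1 - 0.4*x2
        return np.array([[0.0, 1.0], [-k, -0.4]])

    unstable = 0
    for _ in range(n_samples):
        k = rng.normal(1.0, 0.6)                      # uncertain stiffness parameter (can go negative)
        eig = np.linalg.eigvals(closed_loop_matrix(k))
        if np.max(eig.real) > 0.0:
            unstable += 1

    p = unstable / n_samples
    se = np.sqrt(p * (1.0 - p) / n_samples)
    print(f"P(instability) ~ {p:.4f}  (95% CI ~ {p - 1.96 * se:.4f} .. {p + 1.96 * se:.4f})")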
Transit-time and age distributions for nonlinear time-dependent compartmental systems.
Metzler, Holger; Müller, Markus; Sierra, Carlos A
2018-02-06
Many processes in nature are modeled using compartmental systems (reservoir/pool/box systems). Usually, they are expressed as a set of first-order differential equations describing the transfer of matter across a network of compartments. The concepts of age of matter in compartments and the time required for particles to transit the system are important diagnostics of these models with applications to a wide range of scientific questions. Until now, explicit formulas for transit-time and age distributions of nonlinear time-dependent compartmental systems were not available. We compute densities for these types of systems under the assumption of well-mixed compartments. Assuming that a solution of the nonlinear system is available at least numerically, we show how to construct a linear time-dependent system with the same solution trajectory. We demonstrate how to exploit this solution to compute transit-time and age distributions in dependence on given start values and initial age distributions. Furthermore, we derive equations for the time evolution of quantiles and moments of the age distributions. Our results generalize available density formulas for the linear time-independent case and mean-age formulas for the linear time-dependent case. As an example, we apply our formulas to a nonlinear and a linear version of a simple global carbon cycle model driven by a time-dependent input signal which represents fossil fuel additions. We derive time-dependent age distributions for all compartments and calculate the time it takes to remove fossil carbon in a business-as-usual scenario.
Experimental quantum private queries with linear optics
NASA Astrophysics Data System (ADS)
de Martini, Francesco; Giovannetti, Vittorio; Lloyd, Seth; Maccone, Lorenzo; Nagali, Eleonora; Sansoni, Linda; Sciarrino, Fabio
2009-07-01
The quantum private query is a quantum cryptographic protocol to recover information from a database, preserving both user and data privacy: the user can test whether someone has retained information on which query was asked and the database provider can test the amount of information released. Here we discuss a variant of the quantum private query algorithm that admits a simple linear optical implementation: it employs the photon’s momentum (or time slot) as address qubits and its polarization as bus qubit. A proof-of-principle experimental realization is implemented.
Simple Test Functions in Meshless Local Petrov-Galerkin Methods
NASA Technical Reports Server (NTRS)
Raju, Ivatury S.
2016-01-01
Two meshless local Petrov-Galerkin (MLPG) methods based on two different trial functions but that use a simple linear test function were developed for beam and column problems. These methods used generalized moving least squares (GMLS) and radial basis (RB) interpolation functions as trial functions. These two methods were tested on various patch test problems. Both methods passed the patch tests successfully. Then the methods were applied to various beam vibration problems and problems involving Euler and Beck's columns. Both methods yielded accurate solutions for all problems studied. The simple linear test function offers considerable savings in computing efforts as the domain integrals involved in the weak form are avoided. The two methods based on this simple linear test function method produced accurate results for frequencies and buckling loads. Of the two methods studied, the method with radial basis trial functions is very attractive as the method is simple, accurate, and robust.
NASA Astrophysics Data System (ADS)
Sander, Tobias; Kresse, Georg
2017-02-01
Linear optical properties can be calculated by solving the time-dependent density functional theory equations. Linearization of the equation of motion around the ground state orbitals results in the so-called Casida equation, which is formally very similar to the Bethe-Salpeter equation. Alternatively one can determine the spectral functions by applying an infinitely short electric field in time and then following the evolution of the electron orbitals and the evolution of the dipole moments. The long wavelength response function is then given by the Fourier transformation of the evolution of the dipole moments in time. In this work, we compare the results and performance of these two approaches for the projector augmented wave method. To allow for large time steps and still rely on a simple difference scheme to solve the differential equation, we correct for the errors in the frequency domain, using a simple analytic equation. In general, we find that both approaches yield virtually indistinguishable results. For standard density functionals, the time evolution approach is, with respect to the computational performance, clearly superior compared to the solution of the Casida equation. However, for functionals including nonlocal exchange, the direct solution of the Casida equation is usually much more efficient, even though it scales less beneficial with the system size. We relate this to the large computational prefactors in evaluating the nonlocal exchange, which renders the time evolution algorithm fairly inefficient.
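The post-processing step of the time-evolution route can be indicated in a few lines, with a synthetic dipole signal standing in for real TDDFT output (kick strength, damping and excitation frequencies are placeholders): Fourier-transform the induced dipole moment recorded after a delta kick to obtain the spectrum.

    import numpy as np

    dt, n_steps, kick = 0.05, 4000, 1.0e-3       # time step, number of steps, kick strength (assumed, a.u.)
    t = np.arange(n_steps) * dt

    # Stand-in for the dipole moment recorded during real-time propagation after the kick.
    dipole = kick * (0.8 * np.sin(0.35 * t) + 0.3 * np.sin(0.55 * t))

    damping = np.exp(-0.005 * t)                 # artificial lifetime broadening of the spectrum
    d_w = np.fft.rfft(dipole * damping) * dt
    omega = 2.0 * np.pi * np.fft.rfftfreq(n_steps, d=dt)

    # Dipole strength is proportional to omega times the imaginary part of the response.
    S = -omega * d_w.imag / kick
    print("strongest excitation near omega =", omega[np.argmax(S)])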
Using complexity metrics with R-R intervals and BPM heart rate measures.
Wallot, Sebastian; Fusaroli, Riccardo; Tylén, Kristian; Jegindø, Else-Marie
2013-01-01
Lately, growing attention in the health sciences has been paid to the dynamics of heart rate as an indicator of impending failures and for prognoses. Likewise, in the social and cognitive sciences, heart rate is increasingly employed as a measure of arousal, emotional engagement and as a marker of interpersonal coordination. However, there is no consensus about which measurements and analytical tools are most appropriate in mapping the temporal dynamics of heart rate, and quite different metrics are reported in the literature. As complexity metrics of heart rate variability depend critically on variability of the data, different choices regarding the kind of measures can have a substantial impact on the results. In this article we compare linear and non-linear statistics on two prominent types of heart beat data, beat-to-beat intervals (R-R interval) and beats-per-min (BPM). As a proof-of-concept, we employ a simple rest-exercise-rest task and show that non-linear statistics, namely fractal (DFA) and recurrence (RQA) analyses, reveal information about heart beat activity above and beyond the simple level of heart rate. Non-linear statistics unveil sustained post-exercise effects on heart rate dynamics, but their power to do so critically depends on the type of data that is employed: while R-R intervals are very susceptible to non-linear analyses, the success of non-linear methods for BPM data critically depends on their construction. Generally, "oversampled" BPM time-series can be recommended as they retain most of the information about non-linear aspects of heart beat dynamics.
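The construction issue mentioned at the end can be made concrete: the snippet below (with synthetic R-R intervals) converts a beat-to-beat series into an evenly "oversampled" BPM series by interpolating the instantaneous rate onto a uniform time grid. The sampling rate and interval values are illustrative.

    import numpy as np

    rng = np.random.default_rng(5)
    rr = 0.8 + 0.05 * rng.standard_normal(300)     # R-R intervals in seconds (~75 BPM, synthetic)
    beat_times = np.cumsum(rr)                     # time of each beat
    inst_bpm = 60.0 / rr                           # instantaneous heart rate at each beat

    fs = 10.0                                      # "oversampled" output rate in Hz
    t_uniform = np.arange(beat_times[0], beat_times[-1], 1.0 / fs)
    bpm_oversampled = np.interp(t_uniform, beat_times, inst_bpm)

    print(f"{len(rr)} beats -> {len(bpm_oversampled)} evenly spaced BPM samples "
          f"(mean {bpm_oversampled.mean():.1f} BPM)")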
The Behavioral Economics of Choice and Interval Timing
Jozefowiez, J.; Staddon, J. E. R.; Cerutti, D. T.
2009-01-01
We propose a simple behavioral economic model (BEM) describing how reinforcement and interval timing interact. The model assumes a Weber-law-compliant logarithmic representation of time. Associated with each represented time value are the payoffs that have been obtained for each possible response. At a given real time, the response with the highest payoff is emitted. The model accounts for a wide range of data from procedures such as simple bisection, metacognition in animals, economic effects in free-operant psychophysical procedures and paradoxical choice in double-bisection procedures. Although it assumes logarithmic time representation, it can also account for data from the time-left procedure usually cited in support of linear time representation. It encounters some difficulties in complex free-operant choice procedures, such as concurrent mixed fixed-interval schedules as well as some of the data on double bisection, that may involve additional processes. Overall, BEM provides a theoretical framework for understanding how reinforcement and interval timing work together to determine choice between temporally differentiated reinforcers. PMID:19618985
Subjective Estimation of Task Time and Task Difficulty of Simple Movement Tasks.
Chan, Alan H S; Hoffmann, Errol R
2017-01-01
It has been demonstrated in previous work that the same neural structures are used for both imagined and real movements. To provide a strong test of the similarity of imagined and actual movement times, 4 simple movement tasks were used to determine the relationship between estimated task time and actual movement time. The tasks were single-component visually controlled movements, 2-component visually controlled, low index of difficulty (ID) moves and pin-to-hole transfer movements. For each task there was good correspondence between the mean estimated times and actual movement times. In all cases, the same factors determined the actual and estimated movement times: the amplitudes of movement and the IDs of the component movements; however, the contribution of each of these variables differed for the imagined and real tasks. Generally, the standard deviations of the estimated times were linearly related to the estimated time values. Overall, the data provide strong evidence for the same neural structures being used for both imagined and actual movements.
ParaExp Using Leapfrog as Integrator for High-Frequency Electromagnetic Simulations
NASA Astrophysics Data System (ADS)
Merkel, M.; Niyonzima, I.; Schöps, S.
2017-12-01
Recently, ParaExp was proposed for the time integration of linear hyperbolic problems. It splits the time interval of interest into subintervals and computes the solution on each subinterval in parallel. The overall solution is decomposed into a particular solution defined on each subinterval with zero initial conditions and a homogeneous solution propagated by the matrix exponential applied to the initial conditions. The efficiency of the method depends on fast approximations of this matrix exponential based on recent results from numerical linear algebra. This paper deals with the application of ParaExp in combination with Leapfrog to electromagnetic wave problems in time domain. Numerical tests are carried out for a simple toy problem and a realistic spiral inductor model discretized by the Finite Integration Technique.
Morse Code, Scrabble, and the Alphabet
ERIC Educational Resources Information Center
Richardson, Mary; Gabrosek, John; Reischman, Diann; Curtiss, Phyliss
2004-01-01
In this paper we describe an interactive activity that illustrates simple linear regression. Students collect data and analyze it using simple linear regression techniques taught in an introductory applied statistics course. The activity is extended to illustrate checks for regression assumptions and regression diagnostics taught in an…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thappily, Praveen, E-mail: pravvmon@gmail.com; Shiju, K., E-mail: shiiuvenus@gmail.com
Green synthesis of silver nanoparticles was achieved by simple visible light irradiation using Aloe barbadensis leaf extract as the reducing agent. UV-Vis spectroscopic analysis was used to confirm the successful formation of the nanoparticles. The effect of light irradiation time on the light absorption of the nanoparticles was investigated. It is observed that up to 25 minutes of light irradiation the absorption increases linearly with time, after which it saturates. Finally, the time-absorption graph was fitted theoretically and a relation between the two was modeled with the help of simulation software.
Lorenzetti, Silvio; Lamparter, Thomas; Lüthy, Fabian
2017-12-06
The velocity of a barbell can provide important insights on the performance of athletes during strength training. The aim of this work was to assess the validity and reliability of four simple measurement devices that were compared to 3D motion capture measurements during squatting. Nine participants were assessed when performing 2 × 5 traditional squats with a weight of 70% of the 1 repetition maximum and ballistic squats with a weight of 25 kg. Simultaneously, data was recorded from three linear position transducers (T-FORCE, Tendo Power and GymAware), an accelerometer based system (Myotest) and a 3D motion capture system (Vicon) as the Gold Standard. Correlations between the simple measurement devices and 3D motion capture of the mean and the maximal velocity of the barbell, as well as the time to maximal velocity, were calculated. The correlations were significant and very high during traditional squats (r = 0.932-0.990, p < 0.01) and significant and moderate to high during ballistic squats (r = 0.552-0.860, p < 0.01). The Myotest could only be used during the ballistic squats and was less accurate. All the linear position transducers were able to assess squat performance, particularly during traditional squats and especially in terms of mean velocity and time to maximal velocity.
A standing wave linear ultrasonic motor operating in in-plane expanding and bending modes.
Chen, Zhijiang; Li, Xiaotian; Ci, Penghong; Liu, Guoxi; Dong, Shuxiang
2015-03-01
A novel standing wave linear ultrasonic motor operating in in-plane expanding and bending modes was proposed in this study. The stator (or actuator) of the linear motor was made of a simple single Lead Zirconate Titanate (PZT) ceramic square plate (15 × 15 × 2 mm³) with a circular hole (D = 6.7 mm) in the center. The geometric parameters of the stator were computed with the finite element analysis to produce in-plane bi-mode standing wave vibration. The calculated results predicted that a driving tip attached at the midpoint of one edge of the stator can produce two orthogonal, approximately straight-line trajectories, which can be used to move a slider in linear motion via frictional forces in the forward or reverse direction. The investigations showed that the proposed linear motor can produce a six times higher power density than that of a previously reported square plate motor.
A linear acoustic model for intake wave dynamics in IC engines
NASA Astrophysics Data System (ADS)
Harrison, M. F.; Stanev, P. T.
2004-01-01
In this paper, a linear acoustic model is described that has proven useful in obtaining a better understanding of the nature of acoustic wave dynamics in the intake system of an internal combustion (IC) engine. The model described has been developed alongside a set of measurements made on a Ricardo E6 single cylinder research engine. The simplified linear acoustic model reported here produces a calculation of the pressure time-history in the port of an IC engine that agrees fairly well with measured data obtained on the engine fitted with a simple intake system. The model has proved useful in identifying the role of pipe resonance in the intake process and has led to the development of a simple hypothesis to explain the structure of the intake pressure time history: the early stages of the intake process are governed by the instantaneous values of the piston velocity and the open area under the valve. Thereafter, resonant wave action dominates the process. The depth of the early depression caused by the moving piston governs the intensity of the wave action that follows. A pressure ratio across the valve that is favourable to inflow is maintained and maximized when the open period of the valve is such to allow at least, but no more than, one complete oscillation of the pressure at its resonant frequency to occur while the valve is open.
Darsazan, Bahar; Shafaati, Alireza; Mortazavi, Seyed Alireza; Zarghi, Afshin
2017-01-01
A simple and reliable stability-indicating RP-HPLC method was developed and validated for analysis of adefovir dipivoxil (ADV). The chromatographic separation was performed on a C18 column using a mixture of acetonitrile-citrate buffer (10 mM at pH 5.2) 36:64 (%v/v) as mobile phase, at a flow rate of 1.5 mL/min. Detection was carried out at 260 nm and a sharp peak was obtained for ADV at a retention time of 5.8 ± 0.01 min. No interferences were observed from its stress degradation products. The method was validated according to the international guidelines. Linear regression analysis of data for the calibration plot showed a linear relationship between peak area and concentration over the range of 0.5-16 μg/mL; the regression coefficient was 0.9999 and the linear regression equation was y = 24844x - 2941.3. The detection (LOD) and quantification (LOQ) limits were 0.12 and 0.35 μg/mL, respectively. The results proved the method was fast (analysis time less than 7 min), precise, reproducible, and accurate for analysis of ADV over a wide range of concentration. The proposed specific method was used for routine quantification of ADV in pharmaceutical bulk and a tablet dosage form.
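As a small worked example of how such a calibration line is used in practice, the snippet below inverts the reported regression equation to convert a peak area into a concentration and applies the common LOD = 3.3·σ/slope rule; the peak area and blank standard deviation are hypothetical values for illustration, not data from the study.

```python
# Calibration line reported in the abstract: area = 24844 * conc - 2941.3
slope, intercept = 24844.0, -2941.3

def conc_from_area(peak_area):
    """Invert the calibration line to obtain concentration (ug/mL)."""
    return (peak_area - intercept) / slope

area_unknown = 150000.0          # hypothetical peak area of an unknown sample
print("ADV concentration: %.2f ug/mL" % conc_from_area(area_unknown))

sigma_blank = 900.0              # assumed standard deviation of blank responses
print("LOD estimate: %.2f ug/mL" % (3.3 * sigma_blank / slope))
```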
NASA Astrophysics Data System (ADS)
Sun, Jingliang; Liu, Chunsheng
2018-01-01
In this paper, the problem of intercepting a manoeuvring target within a fixed final time is posed in a non-linear constrained zero-sum differential game framework. The Nash equilibrium solution is found by solving the finite-horizon constrained differential game problem via adaptive dynamic programming technique. Besides, a suitable non-quadratic functional is utilised to encode the control constraints into a differential game problem. The single critic network with constant weights and time-varying activation functions is constructed to approximate the solution of associated time-varying Hamilton-Jacobi-Isaacs equation online. To properly satisfy the terminal constraint, an additional error term is incorporated in a novel weight-updating law such that the terminal constraint error is also minimised over time. By utilising Lyapunov's direct method, the closed-loop differential game system and the estimation weight error of the critic network are proved to be uniformly ultimately bounded. Finally, the effectiveness of the proposed method is demonstrated by using a simple non-linear system and a non-linear missile-target interception system, assuming first-order dynamics for the interceptor and target.
NASA Astrophysics Data System (ADS)
Parand, K.; Latifi, S.; Moayeri, M. M.; Delkhosh, M.
2018-05-01
In this study, we have constructed a new numerical approach for solving the time-dependent linear and nonlinear Fokker-Planck equations. In fact, we have discretized the time variable with the Crank-Nicolson method and, for the space variable, applied a numerical method based on the Generalized Lagrange Jacobi Gauss-Lobatto (GLJGL) collocation method. This leads to solving the equation in a series of time steps; at each time step, the problem is reduced to a system of algebraic equations, which greatly simplifies the problem. One can observe that the proposed method is simple and accurate. Indeed, one of its merits is that it is derivative-free: by proposing a formula for the derivative matrices, the difficulty arising in their calculation is overcome, and the method does not need to compute the Generalized Lagrange basis and matrices, since they have the Kronecker property. Linear and nonlinear Fokker-Planck equations are given as examples and the results amply demonstrate that the presented method is very valid, effective, reliable and does not require any restrictive assumptions for nonlinear terms.
Mukherjee, Shalini; Yadav, Rajeev; Yung, Iris; Zajdel, Daniel P.; Oken, Barry S.
2011-01-01
Objectives To determine 1) whether heart rate variability (HRV) was a sensitive and reliable measure in mental effort tasks carried out by healthy seniors and 2) whether non-linear approaches to HRV analysis, in addition to traditional time and frequency domain approaches, were useful to study such effects. Methods Forty healthy seniors performed two visual working memory tasks requiring different levels of mental effort, while ECG was recorded. They underwent the same tasks and recordings two weeks later. Traditional and 13 non-linear indices of HRV including Poincaré, entropy and detrended fluctuation analysis (DFA) were determined. Results Time domain (especially mean R-R interval/RRI), frequency domain and, among the non-linear parameters, Poincaré and DFA were the most reliable indices. Mean RRI, time domain and Poincaré were also the most sensitive to different mental effort task loads and had the largest effect size. Conclusions Overall, linear measures were the most sensitive and reliable indices of mental effort. In non-linear measures, Poincaré was the most reliable and sensitive, suggesting possible usefulness as an independent marker in cognitive function tasks in healthy seniors. Significance A large number of HRV parameters were both reliable and sensitive indices of mental effort, although the simple linear methods were the most sensitive. PMID:21459665
Mukherjee, Shalini; Yadav, Rajeev; Yung, Iris; Zajdel, Daniel P; Oken, Barry S
2011-10-01
To determine (1) whether heart rate variability (HRV) was a sensitive and reliable measure in mental effort tasks carried out by healthy seniors and (2) whether non-linear approaches to HRV analysis, in addition to traditional time and frequency domain approaches, were useful to study such effects. Forty healthy seniors performed two visual working memory tasks requiring different levels of mental effort, while ECG was recorded. They underwent the same tasks and recordings 2 weeks later. Traditional and 13 non-linear indices of HRV including Poincaré, entropy and detrended fluctuation analysis (DFA) were determined. Time domain (especially mean R-R interval, RRI), frequency domain and, among the non-linear parameters, Poincaré and DFA were the most reliable indices. Mean RRI, time domain and Poincaré were also the most sensitive to different mental effort task loads and had the largest effect size. Overall, linear measures were the most sensitive and reliable indices of mental effort. In non-linear measures, Poincaré was the most reliable and sensitive, suggesting possible usefulness as an independent marker in cognitive function tasks in healthy seniors. A large number of HRV parameters were both reliable and sensitive indices of mental effort, although the simple linear methods were the most sensitive. Copyright © 2011 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
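A minimal sketch of the Poincaré descriptors (SD1/SD2) that the study found reliable, computed from successive R-R intervals; the synthetic intervals are illustrative only.

```python
import numpy as np

def poincare_sd(rr):
    """SD1/SD2 Poincare descriptors from successive R-R intervals (ms)."""
    rr = np.asarray(rr, dtype=float)
    diff = np.diff(rr)
    sd1 = np.sqrt(0.5 * np.var(diff, ddof=1))                             # short-term variability
    sd2 = np.sqrt(2.0 * np.var(rr, ddof=1) - 0.5 * np.var(diff, ddof=1))  # long-term variability
    return sd1, sd2

rng = np.random.default_rng(1)
rr = 800 + 40 * rng.standard_normal(500)   # illustrative R-R intervals in ms
sd1, sd2 = poincare_sd(rr)
print(f"SD1 = {sd1:.1f} ms, SD2 = {sd2:.1f} ms")
```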
ΛCDM Cosmology for Astronomers
NASA Astrophysics Data System (ADS)
Condon, J. J.; Matthews, A. M.
2018-07-01
The homogeneous, isotropic, and flat ΛCDM universe favored by observations of the cosmic microwave background can be described using only Euclidean geometry, locally correct Newtonian mechanics, and the basic postulates of special and general relativity. We present simple derivations of the most useful equations connecting astronomical observables (redshift, flux density, angular diameter, brightness, local space density, ...) with the corresponding intrinsic properties of distant sources (lookback time, distance, spectral luminosity, linear size, specific intensity, source counts, ...). We also present an analytic equation for lookback time that is accurate within 0.1% for all redshifts z. The exact equation for comoving distance is an elliptic integral that must be evaluated numerically, but we found a simple approximation with errors <0.2% for all redshifts up to z ≈ 50.
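The two integrals discussed in the abstract are straightforward to evaluate numerically; the sketch below does so for a flat ΛCDM universe with illustrative parameter values (H0 = 70 km/s/Mpc, Ωm = 0.3), which are assumptions rather than the values adopted by the authors.

```python
import numpy as np
from scipy.integrate import quad

H0, Om = 70.0, 0.3                    # illustrative cosmological parameters
OL = 1.0 - Om                         # flat universe
c = 299792.458                        # speed of light, km/s
hubble_time_gyr = 977.8 / H0          # 1/H0 expressed in Gyr

E = lambda z: np.sqrt(Om * (1.0 + z) ** 3 + OL)

def lookback_time(z):
    """Lookback time in Gyr."""
    val, _ = quad(lambda zp: 1.0 / ((1.0 + zp) * E(zp)), 0.0, z)
    return hubble_time_gyr * val

def comoving_distance(z):
    """Comoving distance in Mpc (the elliptic integral evaluated numerically)."""
    val, _ = quad(lambda zp: 1.0 / E(zp), 0.0, z)
    return (c / H0) * val

for z in (0.5, 1.0, 3.0):
    print(f"z = {z}: t_lb = {lookback_time(z):.2f} Gyr, D_C = {comoving_distance(z):.0f} Mpc")
```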
A simple filter circuit for denoising biomechanical impact signals.
Subramaniam, Suba R; Georgakis, Apostolos
2009-01-01
We present a simple scheme for denoising non-stationary biomechanical signals with the aim of accurately estimating their second derivative (acceleration). The method is based on filtering in fractional Fourier domains using well-known low-pass filters in a way that amounts to a time-varying cut-off threshold. The resulting algorithm is linear and its design is facilitated by the relationship between the fractional Fourier transform and joint time-frequency representations. The implemented filter circuit employs only three low-order filters while its efficiency is further supported by the low computational complexity of the fractional Fourier transform. The results demonstrate that the proposed method can denoise the signals effectively and is more robust against noise as compared to conventional low-pass filters.
Impulse Measurement Using an Arduíno
ERIC Educational Resources Information Center
Espindola, P. R.; Cena, C. R.; Alves, D. C. B.; Bozano, D. F.; Goncalves, A. M. B.
2018-01-01
In this paper, we propose a simple experimental apparatus that can measure the force variation over time to study the impulse-momentum theorem. In this proposal, a body attached to a rubber string falls freely from rest until it stretches and changes the linear momentum. During that process the force due to the tension on the rubber string is…
Non-Linear Approach in Kinesiology Should Be Preferred to the Linear--A Case of Basketball.
Trninić, Marko; Jeličić, Mario; Papić, Vladan
2015-07-01
In kinesiology, medicine, biology and psychology, in which the research focus is on dynamical self-organized systems, complex connections exist between variables. The non-linear nature of complex systems has been discussed and explained by the example of non-linear anthropometric predictors of performance in basketball. Previous studies interpreted relations between anthropometric features and measures of effectiveness in basketball by (a) using linear correlation models, and by (b) including all basketball athletes in the same sample of participants regardless of their playing position. In this paper the significance and character of linear and non-linear relations between simple anthropometric predictors (AP) and performance criteria consisting of situation-related measures of effectiveness (SE) in basketball were determined and evaluated. The sample of participants consisted of top-level junior basketball players divided into three groups according to their playing time (8 minutes and more per game) and playing position: guards (N = 42), forwards (N = 26) and centers (N = 40). Linear (general model) and non-linear (general model) regression models were calculated simultaneously and separately for each group. The conclusion is viable: non-linear regressions are frequently superior to linear correlations when interpreting actual association logic among research variables.
Hasegawa, Chihiro; Duffull, Stephen B
2018-02-01
Pharmacokinetic-pharmacodynamic systems are often expressed with nonlinear ordinary differential equations (ODEs). While there are numerous methods to solve such ODEs these methods generally rely on time-stepping solutions (e.g. Runge-Kutta) which need to be matched to the characteristics of the problem at hand. The primary aim of this study was to explore the performance of an inductive approximation which iteratively converts nonlinear ODEs to linear time-varying systems which can then be solved algebraically or numerically. The inductive approximation is applied to three examples, a simple nonlinear pharmacokinetic model with Michaelis-Menten elimination (E1), an integrated glucose-insulin model and an HIV viral load model with recursive feedback systems (E2 and E3, respectively). The secondary aim of this study was to explore the potential advantages of analytically solving linearized ODEs with two examples, again E3 with stiff differential equations and a turnover model of luteinizing hormone with a surge function (E4). The inductive linearization coupled with a matrix exponential solution provided accurate predictions for all examples with comparable solution time to the matched time-stepping solutions for nonlinear ODEs. The time-stepping solutions however did not perform well for E4, particularly when the surge was approximated by a square wave. In circumstances when either a linear ODE is particularly desirable or the uncertainty in matching the integrator to the ODE system is of potential risk, then the inductive approximation method coupled with an analytical integration method would be an appropriate alternative.
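To make the inductive approximation concrete, here is a rough sketch for a Michaelis-Menten elimination model (the paper's example E1): the nonlinear rate coefficient is frozen at the previous iterate, so each pass solves a linear time-varying ODE in closed form. The parameter values, time grid and convergence tolerance are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

Vmax, Km, C0 = 10.0, 5.0, 50.0          # illustrative PK parameters
t = np.linspace(0.0, 10.0, 201)

# Iteration 0: ignore saturation entirely (pure first-order initial guess)
C = C0 * np.exp(-(Vmax / Km) * t)

for it in range(20):
    # Freeze the nonlinear coefficient at the previous iterate -> linear time-varying ODE
    k_t = Vmax / (Km + C)
    # dC/dt = -k(t) C has solution C0 * exp(-integral of k); integrate by trapezoid
    K_int = np.concatenate(([0.0], np.cumsum(0.5 * (k_t[1:] + k_t[:-1]) * np.diff(t))))
    C_new = C0 * np.exp(-K_int)
    if np.max(np.abs(C_new - C)) < 1e-8:
        break
    C = C_new

print(f"converged after {it + 1} iterations; C(t=10) = {C[-1]:.3f}")
```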
Volatility of linear and nonlinear time series
NASA Astrophysics Data System (ADS)
Kalisky, Tomer; Ashkenazy, Yosef; Havlin, Shlomo
2005-07-01
Previous studies indicated that nonlinear properties of Gaussian distributed time series with long-range correlations, u_i, can be detected and quantified by studying the correlations in the magnitude series |u_i|, the “volatility.” However, the origin for this empirical observation still remains unclear and the exact relation between the correlations in u_i and the correlations in |u_i| is still unknown. Here we develop analytical relations between the scaling exponent of linear series u_i and its magnitude series |u_i|. Moreover, we find that nonlinear time series exhibit stronger (or the same) correlations in the magnitude time series compared with linear time series with the same two-point correlations. Based on these results we propose a simple model that generates multifractal time series by explicitly inserting long range correlations in the magnitude series; the nonlinear multifractal time series is generated by multiplying a long-range correlated time series (that represents the magnitude series) with uncorrelated time series [that represents the sign series sgn(u_i)]. We apply our techniques on daily deep ocean temperature records from the equatorial Pacific, the region of the El Niño phenomenon, and find: (i) long-range correlations from several days to several years with 1/f power spectrum, (ii) significant nonlinear behavior as expressed by long-range correlations of the volatility series, and (iii) broad multifractal spectrum.
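The generative recipe described above can be sketched in a few lines: a long-range correlated series supplies the magnitudes and an independent random sign series removes the linear correlations. The spectral (1/f^β) synthesis used here to build the correlated magnitudes is an assumed convenience, not necessarily the construction used in the paper.

```python
import numpy as np

def long_range_correlated(n, beta, rng):
    """Gaussian series with power spectrum ~ 1/f^beta (spectral synthesis)."""
    freqs = np.fft.rfftfreq(n)
    amp = np.zeros_like(freqs)
    amp[1:] = freqs[1:] ** (-beta / 2.0)
    spectrum = amp * np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, len(freqs)))
    x = np.fft.irfft(spectrum, n)
    return (x - x.mean()) / x.std()

rng = np.random.default_rng(7)
n = 2 ** 14
magnitude = np.abs(long_range_correlated(n, beta=0.6, rng=rng))   # correlated magnitudes
signs = rng.choice([-1.0, 1.0], size=n)                           # uncorrelated sign series
series = magnitude * signs                                        # "nonlinear" multifractal series

# The signed series has weak two-point correlations, but its magnitude stays correlated
print("lag-1 autocorrelation of series:   %.3f" % np.corrcoef(series[:-1], series[1:])[0, 1])
print("lag-1 autocorrelation of |series|: %.3f" % np.corrcoef(magnitude[:-1], magnitude[1:])[0, 1])
```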
NASA Astrophysics Data System (ADS)
Minton, Allen
2014-08-01
A linear increase in the concentration of "inert" macromolecules with time is incorporated into simple excluded volume models for protein condensation or fibrillation. Such models predict a long latent period during which no significant amount of protein aggregates, followed by a steep increase in the total amount of aggregate. The elapsed time at which these models predict half-conversion of model protein to aggregate varies by less than a factor of two when the intrinsic rate constant for condensation or fibril growth of the protein is varied over many orders of magnitude. It is suggested that this concept can explain why the symptoms of neurodegenerative diseases associated with the aggregation of very different proteins and peptides appear at approximately the same advanced age in humans.
LIMO EEG: A Toolbox for Hierarchical LInear MOdeling of ElectroEncephaloGraphic Data
Pernet, Cyril R.; Chauveau, Nicolas; Gaspar, Carl; Rousselet, Guillaume A.
2011-01-01
Magnetic- and electric-evoked brain responses have traditionally been analyzed by comparing the peaks or mean amplitudes of signals from selected channels and averaged across trials. More recently, tools have been developed to investigate single trial response variability (e.g., EEGLAB) and to test differences between averaged evoked responses over the entire scalp and time dimensions (e.g., SPM, Fieldtrip). LIMO EEG is a Matlab toolbox (EEGLAB compatible) to analyse evoked responses over all space and time dimensions, while accounting for single trial variability using a simple hierarchical linear modelling of the data. In addition, LIMO EEG provides robust parametric tests, therefore providing a new and complementary tool in the analysis of neural evoked responses. PMID:21403915
NASA Astrophysics Data System (ADS)
Jiang, YuXiao; Guo, PengLiang; Gao, ChengYan; Wang, HaiBo; Alzahrani, Faris; Hobiny, Aatef; Deng, FuGuo
2017-12-01
We present an original self-error-rejecting photonic qubit transmission scheme for both the polarization and spatial states of photon systems transmitted over collective noise channels. In our scheme, we use simple linear-optical elements, including half-wave plates, 50:50 beam splitters, and polarization beam splitters, to convert spatial-polarization modes into different time bins. By using postselection in different time bins, the success probability of obtaining the uncorrupted states approaches 1/4 for single-photon transmission, which is not influenced by the coefficients of noisy channels. Our self-error-rejecting transmission scheme can be generalized to hyperentangled n-photon systems and is useful in practical high-capacity quantum communications with photon systems in two degrees of freedom.
A simple smoothness indicator for the WENO scheme with adaptive order
NASA Astrophysics Data System (ADS)
Huang, Cong; Chen, Li Li
2018-01-01
The fifth order WENO scheme with adaptive order is competent for solving hyperbolic conservation laws; its reconstruction is a convex combination of a fifth order linear reconstruction and three third order linear reconstructions. Note that, on a uniform mesh, the computational cost of the smoothness indicator for the fifth order linear reconstruction is comparable with the sum of those for the three third order linear reconstructions, and is thus too heavy; on a non-uniform mesh, the explicit form of the smoothness indicator for the fifth order linear reconstruction is difficult to obtain, and its computational cost is much heavier than on a uniform mesh. In order to overcome these problems, a simple smoothness indicator for the fifth order linear reconstruction is proposed in this paper.
Quantitative Assessment of Arrhythmia Using Non-linear Approach: A Non-invasive Prognostic Tool
NASA Astrophysics Data System (ADS)
Chakraborty, Monisha; Ghosh, Dipak
2017-12-01
An accurate prognostic tool to identify the severity of Arrhythmia is yet to be investigated, owing to the complexity of the ECG signal. In this paper, we have shown that quantitative assessment of Arrhythmia is possible using a non-linear technique based on "Hurst Rescaled Range Analysis". Although the concept of applying "non-linearity" for studying various cardiac dysfunctions is not entirely new, the novel objective of this paper is to identify the severity of the disease, the monitoring of different medicines and their doses, and also to assess the efficiency of different medicines. The approach presented in this work is simple, which in turn will help doctors in efficient disease management. In this work, Arrhythmia ECG time series are collected from the MIT-BIH database. Normal ECG time series are acquired using the POLYPARA system. Both time series are analyzed in the light of the non-linear approach following the method "Rescaled Range Analysis". The quantitative parameter, "Fractal Dimension" (D), is obtained from both types of time series. The major finding is that Arrhythmia ECG possesses lower values of D as compared to normal ECG. Further, this information can be used to assess the severity of Arrhythmia quantitatively, which is a new direction of prognosis; moreover, adequate software may be developed for use in medical practice.
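For orientation, a compact sketch of rescaled range (R/S) analysis and the fractal dimension D = 2 − H is given below, run on a synthetic series rather than the MIT-BIH or POLYPARA records; the window sizes are arbitrary choices.

```python
import numpy as np

def hurst_rs(x, window_sizes):
    """Estimate the Hurst exponent H by rescaled range (R/S) analysis."""
    x = np.asarray(x, dtype=float)
    rs_means = []
    for w in window_sizes:
        rs_vals = []
        for start in range(0, len(x) - w + 1, w):
            seg = x[start:start + w]
            dev = np.cumsum(seg - seg.mean())
            R = dev.max() - dev.min()          # range of cumulative deviations
            S = seg.std(ddof=1)                # standard deviation of the window
            if S > 0:
                rs_vals.append(R / S)
        rs_means.append(np.mean(rs_vals))
    H, _ = np.polyfit(np.log(window_sizes), np.log(rs_means), 1)
    return H

rng = np.random.default_rng(3)
series = rng.standard_normal(4096)             # stand-in for an ECG-derived series
H = hurst_rs(series, [16, 32, 64, 128, 256, 512])
print(f"Hurst exponent H = {H:.2f}, fractal dimension D = {2 - H:.2f}")
```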
Blending Velocities In Task Space In Computing Robot Motions
NASA Technical Reports Server (NTRS)
Volpe, Richard A.
1995-01-01
Blending of linear and angular velocities between sequential specified points in task space constitutes theoretical basis of improved method of computing trajectories followed by robotic manipulators. In method, generalized velocity-vector-blending technique provides relatively simple, common conceptual framework for blending linear, angular, and other parametric velocities. Velocity vectors originate from straight-line segments connecting specified task-space points, called "via frames" and represent specified robot poses. Linear-velocity-blending functions chosen from among first-order, third-order-polynomial, and cycloidal options. Angular velocities blended by use of first-order approximation of previous orientation-matrix-blending formulation. Angular-velocity approximation yields small residual error, quantified and corrected. Method offers both relative simplicity and speed needed for generation of robot-manipulator trajectories in real time.
Digitally gain controlled linear high voltage amplifier for laboratory applications.
Koçum, C
2011-08-01
The design of a digitally gain controlled high-voltage non-inverting bipolar linear amplifier is presented. This cost efficient and relatively simple circuit has a stable operation range from dc to 90 kHz under a load of 10 kΩ and 39 pF. The amplifier can swing up to 360 Vpp under these conditions and has a 2.5 μs rise time. The gain can be changed with the aid of JFETs. The amplifier has been realized using a combination of operational amplifiers and high-voltage discrete bipolar junction transistors. The circuit details and performance characteristics are discussed.
NASA Astrophysics Data System (ADS)
Yan, David; Bazant, Martin Z.; Biesheuvel, P. M.; Pugh, Mary C.; Dawson, Francis P.
2017-03-01
Linear sweep and cyclic voltammetry techniques are important tools for electrochemists and have a variety of applications in engineering. Voltammetry has classically been treated with the Randles-Sevcik equation, which assumes an electroneutral supported electrolyte. In this paper, we provide a comprehensive mathematical theory of voltammetry in electrochemical cells with unsupported electrolytes and for other situations where diffuse charge effects play a role, and present analytical and simulated solutions of the time-dependent Poisson-Nernst-Planck equations with generalized Frumkin-Butler-Volmer boundary conditions for a 1:1 electrolyte and a simple reaction. Using these solutions, we construct theoretical and simulated current-voltage curves for liquid and solid thin films, membranes with fixed background charge, and cells with blocking electrodes. The full range of dimensionless parameters is considered, including the dimensionless Debye screening length (scaled to the electrode separation), Damkohler number (ratio of characteristic diffusion and reaction times), and dimensionless sweep rate (scaled to the thermal voltage per diffusion time). The analysis focuses on the coupling of Faradaic reactions and diffuse charge dynamics, although capacitive charging of the electrical double layers is also studied, for early time transients at reactive electrodes and for nonreactive blocking electrodes. Our work highlights cases where diffuse charge effects are important in the context of voltammetry, and illustrates which regimes can be approximated using simple analytical expressions and which require more careful consideration.
Semilinear programming: applications and implementation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mohan, S.
Semilinear programming is a method of solving optimization problems with linear constraints where the non-negativity restrictions on the variables are dropped and the objective function coefficients can take on different values depending on whether the variable is positive or negative. The simplex method for linear programming is modified in this thesis to solve general semilinear and piecewise linear programs efficiently without having to transform them into equivalent standard linear programs. Several models in widely different areas of optimization such as production smoothing, facility locations, goal programming and L/sub 1/ estimation are presented first to demonstrate the compact formulation that arises when such problems are formulated as semilinear programs. A code SLP is constructed using the semilinear programming techniques. Problems in aggregate planning and L/sub 1/ estimation are solved using SLP and equivalent linear programs using a linear programming simplex code. Comparisons of CPU times and numbers of iterations indicate SLP to be far superior. The semilinear programming techniques are extended to piecewise linear programming in the implementation of the code PLP. Piecewise linear models in aggregate planning are solved using PLP and equivalent standard linear programs using a simple upper bounded linear programming code SUBLP.
Human Time-Frequency Acuity Beats the Fourier Uncertainty Principle
NASA Astrophysics Data System (ADS)
Oppenheim, Jacob N.; Magnasco, Marcelo O.
2013-01-01
The time-frequency uncertainty principle states that the product of the temporal and frequency extents of a signal cannot be smaller than 1/(4π). We study human ability to simultaneously judge the frequency and the timing of a sound. Our subjects often exceeded the uncertainty limit, sometimes by more than tenfold, mostly through remarkable timing acuity. Our results establish a lower bound for the nonlinearity and complexity of the algorithms employed by our brains in parsing transient sounds, rule out simple “linear filter” models of early auditory processing, and highlight timing acuity as a central feature in auditory object processing.
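As a quick numerical illustration of the 1/(4π) bound, the script below evaluates the time and frequency extents of a Gaussian pulse, which attains the minimum uncertainty product; the pulse width and sampling rate are arbitrary choices.

```python
import numpy as np

fs = 1000.0                                   # sampling rate (Hz)
t = np.arange(-5.0, 5.0, 1.0 / fs)
sigma = 0.05                                  # chosen so that |x(t)|^2 has std sigma (s)
x = np.exp(-t ** 2 / (4.0 * sigma ** 2))      # Gaussian pulse

p_t = np.abs(x) ** 2
p_t /= p_t.sum()
sig_t = np.sqrt(np.sum(t ** 2 * p_t))         # temporal extent

X = np.fft.fftshift(np.fft.fft(x))
f = np.fft.fftshift(np.fft.fftfreq(len(t), 1.0 / fs))
p_f = np.abs(X) ** 2
p_f /= p_f.sum()
sig_f = np.sqrt(np.sum(f ** 2 * p_f))         # spectral extent

print(f"delta_t * delta_f = {sig_t * sig_f:.4f}  (bound 1/(4*pi) = {1.0 / (4.0 * np.pi):.4f})")
```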
Correlation and simple linear regression.
Zou, Kelly H; Tuncali, Kemal; Silverman, Stuart G
2003-06-01
In this tutorial article, the concepts of correlation and regression are reviewed and demonstrated. The authors review and compare two correlation coefficients, the Pearson correlation coefficient and the Spearman rho, for measuring linear and nonlinear relationships between two continuous variables. In the case of measuring the linear relationship between a predictor and an outcome variable, simple linear regression analysis is conducted. These statistical concepts are illustrated by using a data set from published literature to assess a computed tomography-guided interventional technique. These statistical methods are important for exploring the relationships between variables and can be applied to many radiologic studies.
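A small illustration of the quantities the tutorial reviews, computed on made-up data standing in for the CT-guided measurements.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
x = rng.uniform(0.0, 10.0, 30)                       # predictor (illustrative)
y = 2.0 + 0.8 * x + rng.normal(0.0, 1.0, 30)         # outcome with noise

pearson_r, p_pearson = stats.pearsonr(x, y)          # linear association
spearman_rho, p_spearman = stats.spearmanr(x, y)     # monotonic (rank) association
fit = stats.linregress(x, y)                         # simple linear regression y = a + b*x

print(f"Pearson r = {pearson_r:.3f} (p = {p_pearson:.3g})")
print(f"Spearman rho = {spearman_rho:.3f} (p = {p_spearman:.3g})")
print(f"OLS fit: y = {fit.intercept:.2f} + {fit.slope:.2f} x, R^2 = {fit.rvalue ** 2:.3f}")
```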
A new adaptive multiple modelling approach for non-linear and non-stationary systems
NASA Astrophysics Data System (ADS)
Chen, Hao; Gong, Yu; Hong, Xia
2016-07-01
This paper proposes a novel adaptive multiple modelling algorithm for non-linear and non-stationary systems. This simple modelling paradigm comprises K candidate sub-models which are all linear. With data available in an online fashion, the performance of all candidate sub-models are monitored based on the most recent data window, and M best sub-models are selected from the K candidates. The weight coefficients of the selected sub-model are adapted via the recursive least square (RLS) algorithm, while the coefficients of the remaining sub-models are unchanged. These M model predictions are then optimally combined to produce the multi-model output. We propose to minimise the mean square error based on a recent data window, and apply the sum to one constraint to the combination parameters, leading to a closed-form solution, so that maximal computational efficiency can be achieved. In addition, at each time step, the model prediction is chosen from either the resultant multiple model or the best sub-model, whichever is the best. Simulation results are given in comparison with some typical alternatives, including the linear RLS algorithm and a number of online non-linear approaches, in terms of modelling performance and time consumption.
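Two ingredients of the scheme can be sketched briefly: a recursive least squares (RLS) update for one linear sub-model, and a closed-form sum-to-one combination of sub-model predictions over a recent data window. The forgetting factor, window length and synthetic data below are illustrative assumptions rather than the authors' settings.

```python
import numpy as np

def rls_update(w, P, x, y, lam=0.99):
    """One recursive least squares step for a linear sub-model y ~ w @ x."""
    Px = P @ x
    k = Px / (lam + x @ Px)            # gain vector
    w = w + k * (y - w @ x)            # correct weights with the prediction error
    P = (P - np.outer(k, Px)) / lam    # update the inverse-correlation matrix
    return w, P

def combine_sum_to_one(preds, y):
    """Closed-form combination weights minimising MSE subject to sum(weights) = 1."""
    M = preds.shape[1]
    A = np.zeros((M + 1, M + 1))
    b = np.zeros(M + 1)
    A[:M, :M] = 2.0 * preds.T @ preds
    A[:M, M] = A[M, :M] = 1.0
    b[:M] = 2.0 * preds.T @ y
    b[M] = 1.0
    return np.linalg.solve(A, b)[:M]   # KKT system of the equality-constrained problem

rng = np.random.default_rng(0)
w, P = np.zeros(2), 100.0 * np.eye(2)
for _ in range(50):                    # adapt one sub-model on streaming data
    x_t = rng.normal(size=2)
    w, P = rls_update(w, P, x_t, 1.5 * x_t[0] - 0.5 * x_t[1])
print("RLS weights:", np.round(w, 2))

y_win = rng.normal(size=20)            # recent window of observations
preds = np.column_stack([y_win + 0.1 * rng.normal(size=20) for _ in range(3)])
weights = combine_sum_to_one(preds, y_win)
print("combination weights:", np.round(weights, 3), " sum =", round(weights.sum(), 3))
```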
Teaching the Concept of Breakdown Point in Simple Linear Regression.
ERIC Educational Resources Information Center
Chan, Wai-Sum
2001-01-01
Most introductory textbooks on simple linear regression analysis mention the fact that extreme data points have a great influence on ordinary least-squares regression estimation; however, not many textbooks provide a rigorous mathematical explanation of this phenomenon. Suggests a way to fill this gap by teaching students the concept of breakdown…
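The point of the activity can be shown with a tiny fabricated example: ordinary least squares has breakdown point zero, so a single wild observation can move the fitted slope arbitrarily far.

```python
import numpy as np

x = np.arange(10, dtype=float)
y = 3.0 + 2.0 * x                          # perfectly linear data, true slope 2

slope_clean = np.polyfit(x, y, 1)[0]

y_bad = y.copy()
y_bad[-1] = 500.0                          # one extreme outlier
slope_outlier = np.polyfit(x, y_bad, 1)[0]

print(f"slope without outlier: {slope_clean:.2f}")
print(f"slope with one outlier: {slope_outlier:.2f}")
```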
NASA Astrophysics Data System (ADS)
Goswami, M.; O'Connor, K. M.; Shamseldin, A. Y.
The "Galway Real-Time River Flow Forecasting System" (GFFS) is a software pack- age developed at the Department of Engineering Hydrology, of the National University of Ireland, Galway, Ireland. It is based on a selection of lumped black-box and con- ceptual rainfall-runoff models, all developed in Galway, consisting primarily of both the non-parametric (NP) and parametric (P) forms of two black-box-type rainfall- runoff models, namely, the Simple Linear Model (SLM-NP and SLM-P) and the seasonally-based Linear Perturbation Model (LPM-NP and LPM-P), together with the non-parametric wetness-index-based Linearly Varying Gain Factor Model (LVGFM), the black-box Artificial Neural Network (ANN) Model, and the conceptual Soil Mois- ture Accounting and Routing (SMAR) Model. Comprised of the above suite of mod- els, the system enables the user to calibrate each model individually, initially without updating, and it is capable also of producing combined (i.e. consensus) forecasts us- ing the Simple Average Method (SAM), the Weighted Average Method (WAM), or the Artificial Neural Network Method (NNM). The updating of each model output is achieved using one of four different techniques, namely, simple Auto-Regressive (AR) updating, Linear Transfer Function (LTF) updating, Artificial Neural Network updating (NNU), and updating by the Non-linear Auto-Regressive Exogenous-input method (NARXM). The models exhibit a considerable range of variation in degree of complexity of structure, with corresponding degrees of complication in objective func- tion evaluation. Operating in continuous river-flow simulation and updating modes, these models and techniques have been applied to two Irish catchments, namely, the Fergus and the Brosna. A number of performance evaluation criteria have been used to comparatively assess the model discharge forecast efficiency.
Li, Jie; Li, Rui; You, Leiming; Xu, Anlong; Fu, Yonggui; Huang, Shengfeng
2015-01-01
Switching between different alternative polyadenylation (APA) sites plays an important role in the fine tuning of gene expression. New technologies for the execution of 3’-end enriched RNA-seq allow genome-wide detection of the genes that exhibit significant APA site switching between different samples. Here, we show that the independence test gives better results than the linear trend test in detecting APA site-switching events. Further examination suggests that the discrepancy between these two statistical methods arises from complex APA site-switching events that cannot be represented by a simple change of average 3’-UTR length. In theory, the linear trend test is only effective in detecting these simple changes. We classify the switching events into four switching patterns: two simple patterns (3’-UTR shortening and lengthening) and two complex patterns. By comparing the results of the two statistical methods, we show that complex patterns account for 1/4 of all observed switching events that happen between normal and cancerous human breast cell lines. Because simple and complex switching patterns may convey different biological meanings, they merit separate study. We therefore propose to combine both the independence test and the linear trend test in practice. First, the independence test should be used to detect APA site switching; second, the linear trend test should be invoked to identify simple switching events; and third, those complex switching events that pass independence testing but fail linear trend testing can be identified. PMID:25875641
Valid statistical approaches for analyzing sholl data: Mixed effects versus simple linear models.
Wilson, Machelle D; Sethi, Sunjay; Lein, Pamela J; Keil, Kimberly P
2017-03-01
The Sholl technique is widely used to quantify dendritic morphology. Data from such studies, which typically sample multiple neurons per animal, are often analyzed using simple linear models. However, simple linear models fail to account for intra-class correlation that occurs with clustered data, which can lead to faulty inferences. Mixed effects models account for intra-class correlation that occurs with clustered data; thus, these models more accurately estimate the standard deviation of the parameter estimate, which produces more accurate p-values. While mixed models are not new, their use in neuroscience has lagged behind their use in other disciplines. A review of the published literature illustrates common mistakes in analyses of Sholl data. Analysis of Sholl data collected from Golgi-stained pyramidal neurons in the hippocampus of male and female mice using both simple linear and mixed effects models demonstrates that the p-values and standard deviations obtained using the simple linear models are biased downwards and lead to erroneous rejection of the null hypothesis in some analyses. The mixed effects approach more accurately models the true variability in the data set, which leads to correct inference. Mixed effects models avoid faulty inference in Sholl analysis of data sampled from multiple neurons per animal by accounting for intra-class correlation. Given the widespread practice in neuroscience of obtaining multiple measurements per subject, there is a critical need to apply mixed effects models more widely. Copyright © 2017 Elsevier B.V. All rights reserved.
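A minimal sketch of the two approaches being compared, using statsmodels on a fabricated Sholl-like data set; the column names, numbers of animals and neurons, and effect sizes are invented for illustration.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Fabricated Sholl-like data: several neurons per animal, intersections vs radius
rng = np.random.default_rng(0)
rows = []
for animal in range(8):
    animal_effect = rng.normal(0.0, 3.0)            # shared (clustered) variation per animal
    for neuron in range(5):
        for radius in range(10, 110, 10):
            y = 30.0 - 0.2 * radius + animal_effect + rng.normal(0.0, 2.0)
            rows.append({"animal": animal, "radius": radius, "intersections": y})
df = pd.DataFrame(rows)

# Simple linear model: ignores that neurons are clustered within animals
ols_fit = smf.ols("intersections ~ radius", data=df).fit()

# Mixed effects model: a random intercept per animal accounts for intra-class correlation
mixed_fit = smf.mixedlm("intersections ~ radius", data=df, groups=df["animal"]).fit()

print("OLS slope SE:   %.4f" % ols_fit.bse["radius"])
print("Mixed slope SE: %.4f" % mixed_fit.bse["radius"])
```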
Thieler, E. Robert; Himmelstoss, Emily A.; Zichichi, Jessica L.; Ergul, Ayhan
2009-01-01
The Digital Shoreline Analysis System (DSAS) version 4.0 is a software extension to ESRI ArcGIS v.9.2 and above that enables a user to calculate shoreline rate-of-change statistics from multiple historic shoreline positions. A user-friendly interface of simple buttons and menus guides the user through the major steps of shoreline change analysis. Components of the extension and user guide include (1) instruction on the proper way to define a reference baseline for measurements, (2) automated and manual generation of measurement transects and metadata based on user-specified parameters, and (3) output of calculated rates of shoreline change and other statistical information. DSAS computes shoreline rates of change using four different methods: (1) endpoint rate, (2) simple linear regression, (3) weighted linear regression, and (4) least median of squares. The standard error, correlation coefficient, and confidence interval are also computed for the simple and weighted linear-regression methods. The results of all rate calculations are output to a table that can be linked to the transect file by a common attribute field. DSAS is intended to facilitate the shoreline change-calculation process and to provide rate-of-change information and the statistical data necessary to establish the reliability of the calculated results. The software is also suitable for any generic application that calculates positional change over time, such as assessing rates of change of glacier limits in sequential aerial photos, river edge boundaries, land-cover changes, and so on.
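The rate definitions are simple enough to illustrate outside of ArcGIS; the sketch below computes the end point rate and the simple and weighted linear-regression rates for one hypothetical transect (the years, positions and uncertainties are invented).

```python
import numpy as np

years     = np.array([1930.0, 1955.0, 1978.0, 1997.0, 2005.0])    # survey dates
positions = np.array([ 120.0,  112.0,  103.0,   95.0,   92.0])    # distance from baseline (m)
pos_sigma = np.array([   8.0,    6.0,    4.0,    2.0,    1.0])    # positional uncertainty (m)

# (1) End point rate: change between the oldest and most recent shorelines
epr = (positions[-1] - positions[0]) / (years[-1] - years[0])

# (2) Simple linear regression rate: ordinary least-squares slope
lrr = np.polyfit(years, positions, 1)[0]

# (3) Weighted linear regression rate: polyfit expects weights of 1/sigma
wlr = np.polyfit(years, positions, 1, w=1.0 / pos_sigma)[0]

print(f"EPR = {epr:.3f} m/yr, LRR = {lrr:.3f} m/yr, WLR = {wlr:.3f} m/yr")
```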
Karev, Georgy P; Wolf, Yuri I; Koonin, Eugene V
2003-10-12
The distributions of many genome-associated quantities, including the membership of paralogous gene families can be approximated with power laws. We are interested in developing mathematical models of genome evolution that adequately account for the shape of these distributions and describe the evolutionary dynamics of their formation. We show that simple stochastic models of genome evolution lead to power-law asymptotics of protein domain family size distribution. These models, called Birth, Death and Innovation Models (BDIM), represent a special class of balanced birth-and-death processes, in which domain duplication and deletion rates are asymptotically equal up to the second order. The simplest, linear BDIM shows an excellent fit to the observed distributions of domain family size in diverse prokaryotic and eukaryotic genomes. However, the stochastic version of the linear BDIM explored here predicts that the actual size of large paralogous families is reached on an unrealistically long timescale. We show that introduction of non-linearity, which might be interpreted as interaction of a particular order between individual family members, allows the model to achieve genome evolution rates that are much better compatible with the current estimates of the rates of individual duplication/loss events.
NASA Astrophysics Data System (ADS)
Auger-Méthé, Marie; Field, Chris; Albertsen, Christoffer M.; Derocher, Andrew E.; Lewis, Mark A.; Jonsen, Ian D.; Mills Flemming, Joanna
2016-05-01
State-space models (SSMs) are increasingly used in ecology to model time-series such as animal movement paths and population dynamics. This type of hierarchical model is often structured to account for two levels of variability: biological stochasticity and measurement error. SSMs are flexible. They can model linear and nonlinear processes using a variety of statistical distributions. Recent ecological SSMs are often complex, with a large number of parameters to estimate. Through a simulation study, we show that even simple linear Gaussian SSMs can suffer from parameter- and state-estimation problems. We demonstrate that these problems occur primarily when measurement error is larger than biological stochasticity, the condition that often drives ecologists to use SSMs. Using an animal movement example, we show how these estimation problems can affect ecological inference. Biased parameter estimates of a SSM describing the movement of polar bears (Ursus maritimus) result in overestimating their energy expenditure. We suggest potential solutions, but show that it often remains difficult to estimate parameters. While SSMs are powerful tools, they can give misleading results and we urge ecologists to assess whether the parameters can be estimated accurately before drawing ecological conclusions from their results.
Parametric resonance in the early Universe—a fitting analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Figueroa, Daniel G.; Torrentí, Francisco, E-mail: daniel.figueroa@cern.ch, E-mail: f.torrenti@csic.es
Particle production via parametric resonance in the early Universe is a non-perturbative, non-linear and out-of-equilibrium phenomenon. Although it is a well studied topic, whenever a new scenario exhibits parametric resonance, a full re-analysis is normally required. To avoid this tedious task, many works present often only a simplified linear treatment of the problem. In order to surpass this circumstance in the future, we provide a fitting analysis of parametric resonance through all its relevant stages: initial linear growth, non-linear evolution, and relaxation towards equilibrium. Using lattice simulations in an expanding grid in 3+1 dimensions, we parametrize the dynamics' outcome, scanning over the relevant ingredients: role of the oscillatory field, particle coupling strength, initial conditions, and background expansion rate. We emphasize the inaccuracy of the linear calculation of the decay time of the oscillatory field, and propose a more appropriate definition of this scale based on the subsequent non-linear dynamics. We provide simple fits to the relevant time scales and particle energy fractions at each stage. Our fits can be applied to post-inflationary preheating scenarios, where the oscillatory field is the inflaton, or to spectator-field scenarios, where the oscillatory field can be e.g. a curvaton, or the Standard Model Higgs.
The large discretization step method for time-dependent partial differential equations
NASA Technical Reports Server (NTRS)
Haras, Zigo; Taasan, Shlomo
1995-01-01
A new method for the acceleration of linear and nonlinear time dependent calculations is presented. It is based on the Large Discretization Step (LDS) approximation, defined in this work, which employs an extended system of low accuracy schemes to approximate a high accuracy discrete approximation to a time dependent differential operator. Error bounds on such approximations are derived. These approximations are efficiently implemented in the LDS methods for linear and nonlinear hyperbolic equations, presented here. In these algorithms the high and low accuracy schemes are interpreted as the same discretization of a time dependent operator on fine and coarse grids, respectively. Thus, a system of correction terms and corresponding equations are derived and solved on the coarse grid to yield the fine grid accuracy. These terms are initialized by visiting the fine grid once in many coarse grid time steps. The resulting methods are very general, simple to implement and may be used to accelerate many existing time marching schemes.
Lundblad, Runar; Abdelnoor, Michel; Svennevig, Jan Ludvig
2004-09-01
Simple linear resection and endoventricular patch plasty are alternative techniques to repair postinfarction left ventricular aneurysm. The aim of the study was to compare these 2 methods with regard to early mortality and long-term survival. We retrospectively reviewed 159 patients undergoing operations between 1989 and 2003. The epidemiologic design was of an exposed (simple linear repair, n = 74) versus nonexposed (endoventricular patch plasty, n = 85) cohort with 2 endpoints: early mortality and long-term survival. The crude effect of aneurysm repair technique versus endpoint was estimated by odds ratio, rate ratio, or relative risk and their 95% confidence intervals. Stratification analysis by using the Mantel-Haenszel method was done to quantify confounders and pinpoint effect modifiers. Adjustment for multiconfounders was performed by using logistic regression and Cox regression analysis. Survival curves were analyzed with the Breslow test and the log-rank test. Early mortality was 8.2% for all patients, 13.5% after linear repair and 3.5% after endoventricular patch plasty. When adjusted for multiconfounders, the risk of early mortality was significantly higher after simple linear repair than after endoventricular patch plasty (odds ratio, 4.4; 95% confidence interval, 1.1-17.8). Mean follow-up was 5.8 +/- 3.8 years (range, 0-14.0 years). Overall 5-year cumulative survival was 78%, 70.1% after linear repair and 91.4% after endoventricular patch plasty. The risk of total mortality was significantly higher after linear repair than after endoventricular patch plasty when controlled for multiconfounders (relative risk, 4.5; 95% confidence interval, 2.0-9.7). Linear repair dominated early in the series and patch plasty dominated later, giving a possible learning-curve bias in favor of patch plasty that could not be adjusted for in the regression analysis. Postinfarction left ventricular aneurysm can be repaired with satisfactory early and late results. Surgical risk was lower and long-term survival was higher after endoventricular patch plasty than simple linear repair. Differences in outcome should be interpreted with care because of the retrospective study design and the chronology of the 2 repair methods.
NASA Astrophysics Data System (ADS)
Hargrove, W. W.; Norman, S. P.; Kumar, J.; Hoffman, F. M.
2017-12-01
National-scale polar analysis of MODIS NDVI allows quantification of degree of seasonality expressed by local vegetation, and also selects the most optimum start/end of a local "phenological year" that is empirically customized for the vegetation that is growing at each location. Interannual differences in timing of phenology make direct comparisons of vegetation health and performance between years difficult, whether at the same or different locations. By "sliding" the two phenologies in time using a Procrustean linear time shift, any particular phenological event or "completion milestone" can be synchronized, allowing direct comparison of differences in timing of other remaining milestones. Going beyond a simple linear translation, time can be "rubber-sheeted," compressed or dilated. Considering one phenology curve to be a reference, the second phenology can be "rubber-sheeted" to fit that baseline as well as possible by stretching or shrinking time to match multiple control points, which can be any recognizable phenological events. Similar to "rubber sheeting" to georectify a map inside a GIS, rubber sheeting a phenology curve also yields a warping signature that shows at every time and every location how many days the adjusted phenology is ahead or behind the phenological development of the reference vegetation. Using such temporal methods to "adjust" phenologies may help to quantify vegetation impacts from frost, drought, wildfire, insects and diseases by permitting the most commensurate quantitative comparisons with unaffected vegetation.
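The simplest of the alignments described, a Procrustean linear time shift, can be estimated by maximising the cross-correlation between two phenology curves; the synthetic NDVI-like curves below are illustrative, and the multi-control-point rubber-sheeting step is not shown.

```python
import numpy as np

days = np.arange(365)

def ndvi_curve(peak_day, width=60.0, base=0.2, amp=0.6):
    """Synthetic single-season NDVI trajectory (illustrative shape only)."""
    return base + amp * np.exp(-0.5 * ((days - peak_day) / width) ** 2)

reference = ndvi_curve(peak_day=180)
observed  = ndvi_curve(peak_day=196)      # same vegetation, phenology 16 days later

# Find the linear shift (in days) that best aligns the observed curve with the reference
lags = np.arange(-60, 61)
scores = [np.corrcoef(reference, np.roll(observed, -lag))[0, 1] for lag in lags]
best_lag = int(lags[np.argmax(scores)])
print(f"estimated shift: {best_lag} days behind the reference phenology")
```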
On the vertical resolution for near-nadir looking spaceborne rain radar
NASA Astrophysics Data System (ADS)
Kozu, Toshiaki
A definition of radar resolution for an arbitrary direction is proposed and used to calculate the vertical resolution for a near-nadir looking spaceborne rain radar. Based on the calculation result, a scanning strategy is proposed which efficiently distributes the measurement time to each angle bin and thus increases the number of independent samples compared with a simple linear scanning.
Stationarity conditions for physicochemical processes in the interior ballistics of a gun
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lipanov, A.M.
1995-09-01
An original method is proposed for ensuring time-invariant (stationary) interior ballistic parameters in the postprojectile space of a gun barrel. Stationarity of the parameters is achieved by investing the solid-propellant charge with highly original structures that produce the required pressure condition and linear growth of the projectile velocity. Simple relations are obtained for calculating the principal characteristics.
Ranking Forestry Investments With Parametric Linear Programming
Paul A. Murphy
1976-01-01
Parametric linear programming is introduced as a technique for ranking forestry investments under multiple constraints; it combines the advantages of simple ranking and linear programming as capital budgeting tools.
Macroscopic Spatial Complexity of the Game of Life Cellular Automaton: A Simple Data Analysis
NASA Astrophysics Data System (ADS)
Hernández-Montoya, A. R.; Coronel-Brizio, H. F.; Rodríguez-Achach, M. E.
In this chapter we present a simple data analysis of an ensemble of 20 time series, generated by averaging the spatial positions of the living cells for each state of the Game of Life Cellular Automaton (GoL). We show that the complexity properties of GoL are also present at the macroscopic level described by these time series, and that the following emergent properties, typical of data extracted from complex systems such as financial or economic ones, come out: variations of the generated time series follow an asymptotic power law distribution; large fluctuations tend to be followed by large fluctuations, and small fluctuations by small ones; and linear correlations decay fast, whereas the correlations associated with the absolute variations exhibit long range memory. Finally, a Detrended Fluctuation Analysis (DFA) of the generated time series indicates that the GoL spatial macro states described by the time series are neither completely ordered nor random, in a measurable and very interesting way.
Paper-Based Electrochemical Detection of Chlorate
Shriver-Lake, Lisa C.; Zabetakis, Dan; Dressick, Walter J.; Stenger, David A.; Trammell, Scott A.
2018-01-01
We describe the use of a paper-based probe impregnated with a vanadium-containing polyoxometalate anion, [PMo11VO40]5−, on screen-printed carbon electrodes for the electrochemical determination of chlorate. Cyclic voltammetry (CV) and chronocoulometry were used to characterize the ClO3− response in a pH = 2.5 solution of 100 mM sodium acetate. A linear CV current response was observed between 0.156 and 1.25 mg/mL with a detection limit of 0.083 mg/mL (S/N > 3). This performance was reproducible using [PMo11VO40]5−-impregnated filter paper stored under ambient conditions for as long as 8 months prior to use. At high concentrations of chlorate, an additional catalytic cathodic peak was seen in the reverse scan of the CVs, which was digitally simulated using a simple model. For chronocoulometry, the charge measured after 5 min gave a linear response from 0.625 to 2.5 mg/mL with a detection limit of 0.31 mg/mL (S/N > 3). In addition, the slope of charge vs. time also gave a linear response. In this case the linear range was from 0.312 to 2.5 mg/mL with a detection limit of 0.15 mg/mL (S/N > 3). Simple assays were conducted using three types of soil, and recovery measurements were reported. PMID:29364153
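For readers who want to reproduce this kind of calibration arithmetic, the minimal sketch below fits a linear calibration line and derives a "3·s/slope"-style detection-limit estimate. The concentration-current pairs are invented for illustration and are not data from the study.

```python
import numpy as np

# Hypothetical chlorate standards (mg/mL) vs. CV peak current (arbitrary units).
conc = np.array([0.156, 0.312, 0.625, 1.25])
current = np.array([1.9, 3.7, 7.6, 15.1])

slope, intercept = np.polyfit(conc, current, 1)
residual_sd = np.std(current - (slope * conc + intercept), ddof=2)

lod = 3 * residual_sd / slope   # one common "S/N > 3"-style detection limit estimate
print(f"slope = {slope:.2f}, LOD ~ {lod:.3f} mg/mL")
```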
Visvanathan, Rizliya; Jayathilake, Chathuni; Liyanage, Ruvini
2016-11-15
For the first time, a reliable, simple, rapid and high-throughput analytical method for the detection and quantification of α-amylase inhibitory activity using the glucose assay kit was developed. The new method facilitates rapid screening of a large number of samples, reduces labor, time and reagents and is also suitable for kinetic studies. This method is based on the reaction of maltose with glucose oxidase (GOD) and the development of a red quinone. The test is done in microtitre plates with a total volume of 260μL and an assay time of 40min including the pre-incubation steps. The new method is tested for linearity, sensitivity, precision, reproducibility and applicability. The new method is also compared with the most commonly used 3,5-dinitrosalicylic acid (DNSA) method for determining α-amylase activity. Copyright © 2016 Elsevier Ltd. All rights reserved.
A discrete Markov metapopulation model for persistence and extinction of species.
Thompson, Colin J; Shtilerman, Elad; Stone, Lewi
2016-09-07
A simple discrete generation Markov metapopulation model is formulated for studying the persistence and extinction dynamics of a species in a given region which is divided into a large number of sites or patches. Assuming a linear site occupancy probability from one generation to the next we obtain exact expressions for the time evolution of the expected number of occupied sites and the mean-time to extinction (MTE). Under quite general conditions we show that the MTE, to leading order, is proportional to the logarithm of the initial number of occupied sites and in precise agreement with similar expressions for continuous time-dependent stochastic models. Our key contribution is a novel application of generating function techniques and simple asymptotic methods to obtain a second order asymptotic expression for the MTE which is extremely accurate over the entire range of model parameter values. Copyright © 2016 Elsevier Ltd. All rights reserved.
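A hedged numerical sanity check of the logarithmic scaling of the mean time to extinction is sketched below. It replaces the paper's generating-function analysis with a toy subcritical occupancy chain in which each occupied patch independently remains occupied with probability p < 1 per generation; the parameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_time_to_extinction(n0, p=0.8, trials=2000):
    """Monte Carlo MTE for a toy linear (subcritical) patch-occupancy chain."""
    gens = np.empty(trials)
    for t in range(trials):
        n, g = n0, 0
        while n > 0:
            n = rng.binomial(n, p)   # each occupied site persists with probability p
            g += 1
        gens[t] = g
    return gens.mean()

for n0 in (10, 100, 1000):
    # MTE should grow roughly linearly with log(n0) for this subcritical chain.
    print(n0, round(mean_time_to_extinction(n0), 2))
```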
Linearized self-consistent quasiparticle GW method: Application to semiconductors and simple metals
NASA Astrophysics Data System (ADS)
Kutepov, A. L.; Oudovenko, V. S.; Kotliar, G.
2017-10-01
We present a code implementing the linearized quasiparticle self-consistent GW method (LQSGW) in the LAPW basis. Our approach is based on the linearization of the self-energy around zero frequency, which distinguishes it from existing implementations of the QSGW method. The linearization allows us to use Matsubara frequencies instead of working on the real axis. This results in efficiency gains by switching to the imaginary time representation in the same way as in the space time method. The all electron LAPW basis set eliminates the need for pseudopotentials. We discuss the advantages of our approach, such as its N3 scaling with the system size N, as well as its shortcomings. We apply our approach to study the electronic properties of selected semiconductors, insulators, and simple metals and show that our code produces results very close to the previously published QSGW data. Our implementation is a good platform for further many body diagrammatic resummations such as the vertex-corrected GW approach and the GW+DMFT method. Program Files doi:http://dx.doi.org/10.17632/cpchkfty4w.1 Licensing provisions: GNU General Public License Programming language: Fortran 90 External routines/libraries: BLAS, LAPACK, MPI (optional) Nature of problem: Direct implementation of the GW method scales as N4 with the system size, which quickly becomes prohibitively time consuming even on modern computers. Solution method: We implemented the GW approach using a method that switches between real space and momentum space representations. Some operations are faster in real space, whereas others are more computationally efficient in the reciprocal space. This makes our approach scale as N3. Restrictions: The limiting factor is usually the memory available in a computer. Using 10 GB/core of memory allows us to study systems with up to 15 atoms per unit cell.
Ozeki, Kazuhisa; Kato, Motohiro; Sakurai, Yuuji; Ishigai, Masaki; Kudo, Toshiyuki; Ito, Kiyomi
2015-11-30
In a transcellular transport study, the apparent permeability coefficient (Papp) of a compound is evaluated using the range in which the amount of compound accumulated on the receiver side is assumed to be proportional to time. However, the time profile of the concentration of the compound in the receiver (C3) often shows a lag time before reaching the linear range and later changes from linear to steady state. In this study, the linear range needed to calculate Papp in the C3-time profile was evaluated by a 3-compartment model. C3 was described by an equation with two steady states (C3 = A3(1 - e^(-αt)) + B3(1 - e^(-βt)), α > β), and by a simple approximate line (C3 = A3 - A3×αt) in the time range of 3/α
Time series with tailored nonlinearities
NASA Astrophysics Data System (ADS)
Räth, C.; Laut, I.
2015-10-01
It is demonstrated how to generate time series with tailored nonlinearities by inducing well-defined constraints on the Fourier phases. Correlations between the phase information of adjacent phases and (static and dynamic) measures of nonlinearities are established and their origin is explained. By applying a set of simple constraints on the phases of an originally linear and uncorrelated Gaussian time series, the observed scaling behavior of the intensity distribution of empirical time series can be reproduced. The power law character of the intensity distributions being typical for, e.g., turbulence and financial data can thus be explained in terms of phase correlations.
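The general recipe (keep the amplitude spectrum of a Gaussian series, impose a constraint that couples neighbouring Fourier phases, and transform back) can be sketched in a few lines. The particular smoothing constraint used here is a made-up stand-in, not the specific constraints studied in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4096
x = rng.standard_normal(n)        # originally linear, uncorrelated Gaussian series

X = np.fft.rfft(x)
amplitude = np.abs(X)             # the power spectrum is left untouched
phase = np.angle(X)

# Toy constraint: nudge each phase toward its lower-frequency neighbour,
# which introduces correlations between adjacent phases (hence nonlinearity).
alpha = 0.9
for k in range(1, phase.size):
    phase[k] = alpha * phase[k - 1] + (1 - alpha) * phase[k]

surrogate = np.fft.irfft(amplitude * np.exp(1j * phase), n=n)
```

Comparing the intensity distribution of `surrogate` with that of `x` is one way to see the effect of the imposed phase correlations.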
Control of femtosecond laser driven retro-Diels-Alder-like reaction of dicyclopentadiene
Das, Dipak Kumar; Goswami, Tapas; Goswami, Debabrata
2013-01-01
Using femtosecond time-resolved degenerate pump-probe mass spectrometry coupled with a simple linearly chirped frequency-modulated pulse, we elucidate that the dynamics of the retro-Diels-Alder-like reaction of dicyclopentadiene (DCPD) to cyclopentadiene (CPD) in a supersonic molecular beam occurs on an ultrafast time scale. A negatively chirped pulse enhances the ion yield of CPD, as compared to a positively chirped pulse. This indicates that by changing the frequency (chirp) of the laser pulse we can control the ion yield of a chemical reaction. PMID:23814449
2012-10-01
black and approximations in cyan and magenta. The second ODE is the pendulum equation, given by: This ODE was also implemented using Crank... The drawback of approaches like the one proposed can be observed with a very simple example. Suppose vector is found by applying 4 linear... Figure 2. A phase space plot of the pendulum example; the fine solution (black) contains 32768 time steps.
A Dynamic Model for C3 Information Incorporating the Effects of Counter C3
1980-12-01
birth and death rates exactly cancel one another and H = 0. Although this simple first order linear system is not very sophisticated, we see... per hour and refer to the average behavior of the entire system ensemble, much as species birth and death rates are typically measured in births (or... unit time) iii) VTX, VIY; Uncertainty Death Rates resulting from data inputs (bits/bit per unit time) iv) Counter C
Distributed ultrafast fibre laser
Liu, Xueming; Cui, Yudong; Han, Dongdong; Yao, Xiankun; Sun, Zhipei
2015-01-01
A traditional ultrafast fibre laser has a constant cavity length that is independent of the pulse wavelength. The investigation of distributed ultrafast (DUF) lasers is conceptually and technically challenging and of great interest because the laser cavity length and fundamental cavity frequency are changeable based on the wavelength. Here, we propose and demonstrate a DUF fibre laser based on a linearly chirped fibre Bragg grating, where the total cavity length is linearly changeable as a function of the pulse wavelength. The spectral sidebands in DUF lasers are enhanced greatly, including the continuous-wave (CW) and pulse components. We observe that all sidebands of the pulse experience the same round-trip time although they have different round-trip distances and refractive indices. The pulse-shaping of the DUF laser is dominated by the dissipative processes in addition to the phase modulations, which makes our ultrafast laser simple and stable. This laser provides a simple, stable, low-cost, ultrafast-pulsed source with controllable and changeable cavity frequency. PMID:25765454
Featural and temporal attention selectively enhance task-appropriate representations in human V1
Warren, Scott; Yacoub, Essa; Ghose, Geoffrey
2015-01-01
Our perceptions are often shaped by focusing our attention toward specific features or periods of time irrespective of location. We explore the physiological bases of these non-spatial forms of attention by imaging brain activity while subjects perform a challenging change detection task. The task employs a continuously varying visual stimulus that, for any moment in time, selectively activates functionally distinct subpopulations of primary visual cortex (V1) neurons. When subjects are cued to the timing and nature of the change, the mapping of orientation preference across V1 systematically shifts toward the cued stimulus just prior to its appearance. A simple linear model can explain this shift: attentional changes are selectively targeted toward neural subpopulations representing the attended feature at the times the feature was anticipated. Our results suggest that featural attention is mediated by a linear change in the responses of task-appropriate neurons across cortex during appropriate periods of time. PMID:25501983
Stoykov, Nikolay S; Kuiken, Todd A; Lowery, Madeleine M; Taflove, Allen
2003-09-01
We present what we believe to be the first algorithms that use a simple scalar-potential formulation to model linear Debye and Lorentz dielectric dispersions at low frequencies in the context of finite-element time-domain (FETD) numerical solutions of electric potential. The new algorithms, which permit treatment of multiple-pole dielectric relaxations, are based on the auxiliary differential equation method and are unconditionally stable. We validate the algorithms by comparison with the results of a previously reported method based on the Fourier transform. The new algorithms should be useful in calculating the transient response of biological materials subject to impulsive excitation. Potential applications include FETD modeling of electromyography, functional electrical stimulation, defibrillation, and effects of lightning and impulsive electric shock.
NASA Technical Reports Server (NTRS)
Lin, Shu; Fossorier, Marc
1998-01-01
For long linear block codes, maximum likelihood decoding based on full code trellises would be very hard to implement if not impossible. In this case, we may wish to trade error performance for the reduction in decoding complexity. Sub-optimum soft-decision decoding of a linear block code based on a low-weight sub-trellis can be devised to provide an effective trade-off between error performance and decoding complexity. This chapter presents such a suboptimal decoding algorithm for linear block codes. This decoding algorithm is iterative in nature and based on an optimality test. It has the following important features: (1) a simple method to generate a sequence of candidate code-words, one at a time, for test; (2) a sufficient condition for testing a candidate code-word for optimality; and (3) a low-weight sub-trellis search for finding the most likely (ML) code-word.
Value encoding in single neurons in the human amygdala during decision making.
Jenison, Rick L; Rangel, Antonio; Oya, Hiroyuki; Kawasaki, Hiroto; Howard, Matthew A
2011-01-05
A growing consensus suggests that the brain makes simple choices by assigning values to the stimuli under consideration and then comparing these values to make a decision. However, the network involved in computing the values has not yet been fully characterized. Here, we investigated whether the human amygdala plays a role in the computation of stimulus values at the time of decision making. We recorded single neuron activity from the amygdala of awake patients while they made simple purchase decisions over food items. We found 16 amygdala neurons, located primarily in the basolateral nucleus, that responded linearly to the values assigned to individual items.
Adaptive receiver structures for asynchronous CDMA systems
NASA Astrophysics Data System (ADS)
Rapajic, Predrag B.; Vucetic, Branka S.
1994-05-01
Adaptive linear and decision feedback receiver structures for coherent demodulation in asynchronous code division multiple access (CDMA) systems are considered. It is assumed that the adaptive receiver has no knowledge of the signature waveforms and timing of other users. The receiver is trained by a known training sequence prior to data transmission and continuously adjusted by an adaptive algorithm during data transmission. The proposed linear receiver is as simple as a standard single-user detector receiver consisting of a matched filter with constant coefficients, but achieves essential advantages with respect to timing recovery, multiple access interference elimination, near/far effect, narrowband and frequency-selective fading interference suppression, and user privacy. An adaptive centralized decision feedback receiver has the same advantages of the linear receiver but, in addition, achieves a further improvement in multiple access interference cancellation at the expense of higher complexity. The proposed receiver structures are tested by simulation over a channel with multipath propagation, multiple access interference, narrowband interference, and additive white Gaussian noise.
Directional hearing by linear summation of binaural inputs at the medial superior olive
van der Heijden, Marcel; Lorteije, Jeannette A. M.; Plauška, Andrius; Roberts, Michael T.; Golding, Nace L.; Borst, J. Gerard G.
2013-01-01
Neurons in the medial superior olive (MSO) enable sound localization by their remarkable sensitivity to submillisecond interaural time differences (ITDs). Each MSO neuron has its own “best ITD” to which it responds optimally. A difference in physical path length of the excitatory inputs from both ears cannot fully account for the ITD tuning of MSO neurons. As a result, it is still debated how these inputs interact and whether the segregation of inputs to opposite dendrites, well-timed synaptic inhibition, or asymmetries in synaptic potentials or cellular morphology further optimize coincidence detection or ITD tuning. Using in vivo whole-cell and juxtacellular recordings, we show here that ITD tuning of MSO neurons is determined by the timing of their excitatory inputs. The inputs from both ears sum linearly, whereas spike probability depends nonlinearly on the size of synaptic inputs. This simple coincidence detection scheme thus makes accurate sound localization possible. PMID:23764292
Microbial detection method based on sensing molecular hydrogen
NASA Technical Reports Server (NTRS)
Wilkins, J. R.; Stoner, G. E.; Boykin, E. H.
1974-01-01
A simple method for detecting bacteria, based on the time of hydrogen evolution, was developed and tested against various members of the Enterobacteriaceae group. The test system consisted of (1) two electrodes, platinum and a reference electrode, (2) a buffer amplifier, and (3) a strip-chart recorder. Hydrogen evolution was measured by an increase in voltage in the negative (cathodic) direction. A linear relationship was established between inoculum size and the time hydrogen was detected (lag period). Lag times ranged from 1 h for 1 million cells/ml to 7 h for 1 cell/ml. For each 10-fold decrease in inoculum, length of the lag period increased 60 to 70 min. Based on the linear relationship between inoculum and lag period, these results indicate the potential application of the hydrogen-sensing method for rapidly detecting coliforms and other gas-producing microorganisms in a variety of clinical, food, and other samples.
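The reported inoculum-lag relationship amounts to a simple linear fit of lag time against log10 inoculum. The sketch below reproduces the trend with made-up points (roughly 1 h at 10^6 cells/mL rising to 7 h at 1 cell/mL) purely for illustration; these are not the study's data.

```python
import numpy as np

log10_inoculum = np.array([0, 1, 2, 3, 4, 5, 6])           # cells/mL, as powers of ten
lag_hours = np.array([7.0, 6.0, 4.9, 3.9, 3.0, 2.1, 1.0])  # illustrative lag times

slope, intercept = np.polyfit(log10_inoculum, lag_hours, 1)
# Expect roughly -1 h per 10-fold increase in inoculum, i.e. ~60 min per decade.
print(f"lag change per decade: {slope * 60:.0f} min")
```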
Multi-Window Controllers for Autonomous Space Systems
NASA Technical Reports Server (NTRS)
Lurie, B. J.; Hadaegh, F. Y.
1997-01-01
Multi-window controllers select between elementary linear controllers using nonlinear windows based on the amplitude and frequency content of the feedback error. The controllers are relatively simple to implement and perform much better than linear controllers. The commanders for such controllers only order the destination point and are freed from generating the command time-profiles. The robotic missions rely heavily on the tasks of acquisition and tracking. For autonomous and optimal control of the spacecraft, the control bandwidth must be larger while the feedback can (and, therefore, must) be reduced. Combining linear compensators via a multi-window nonlinear summer guarantees the minimum-phase character of the combined transfer function. It is shown that the solution may require using several parallel branches and windows. Several examples of multi-window nonlinear controller applications are presented.
Perceived smoking availability differentially affects mood and reaction time.
Ross, Kathryn C; Juliano, Laura M
2015-06-01
This between subjects study explored the relationship between smoking availability and smoking motivation and is the first study to include three smoking availability time points. This allowed for an examination of an extended period of smoking unavailability, and a test of the linearity of the relationships between smoking availability and smoking motivation measures. Ninety 3-hour abstinent smokers (mean ~15 cigarettes per day) were randomly assigned to one of three availability manipulations while being exposed to smoking stimuli (i.e., pack of cigarettes): smoke in 20 min, smoke in 3 h, or smoke in 24 h. Participants completed pre- and post-manipulation measures of urge, positive affect and negative affect, and simple reaction time. The belief that smoking would next be available in 24 h resulted in a significant decrease in positive affect and increase in negative affect relative to the 3 h and 20 min conditions. A Lack of Fit test suggested a linear relationship between smoking availability and affect. A quadratic model appeared to be a better fit for the relationship between smoking availability and simple reaction time with participants in the 24 h and 20 min conditions showing a greater slowing of reaction time relative to the 3 h condition. There were no effects of the manipulations on self-reported urge, but baseline ceiling effects were noted. Future investigations that manipulate three or more periods of time before smoking is available will help to better elucidate the nature of the relationship between smoking availability and smoking motivation. Copyright © 2015 Elsevier Ltd. All rights reserved.
W5″ Test: A simple method for measuring mean power output in the bench press exercise.
Tous-Fajardo, Julio; Moras, Gerard; Rodríguez-Jiménez, Sergio; Gonzalo-Skok, Oliver; Busquets, Albert; Mujika, Iñigo
2016-11-01
The aims of the present study were to assess the validity and reliability of a novel simple test [Five Seconds Power Test (W5″ Test)] for estimating the mean power output during the bench press exercise at different loads, and its sensitivity to detect training-induced changes. Thirty trained young men completed as many repetitions as possible in a time of ≈5 s at 25%, 45%, 65% and 85% of one-repetition maximum (1RM) in two test sessions separated by four days. The number of repetitions, linear displacement of the bar and time needed to complete the test were recorded by two independent testers, and a linear encoder was used as the criterion measure. For each load, the mean power output was calculated in the W5″ Test as mechanical work per time unit and compared with that obtained from the linear encoder. Subsequently, 20 additional subjects (10 training group vs. 10 control group) were assessed before and after completing a seven-week training programme designed to improve maximal power. Results showed that both assessment methods correlated highly in estimating mean power output at different loads (r range: 0.86-0.94; p < .01) and detecting training-induced changes (R(2): 0.78). Good to excellent intra-tester (intraclass correlation coefficient (ICC) range: 0.81-0.97) and excellent inter-tester (ICC range: 0.96-0.99; coefficient of variation range: 2.4-4.1%) reliability was found for all loads. The W5″ Test was shown to be a valid, reliable and sensitive method for measuring mean power output during the bench press exercise in subjects who have previous resistance training experience.
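The W5″ calculation described above (mechanical work per unit time) reduces to a one-line formula. The sketch below assumes that only the concentric bar displacement contributes to the work term, and all numbers are hypothetical rather than taken from the study.

```python
def mean_power_w5(load_kg, bar_displacement_m, repetitions, elapsed_s, g=9.81):
    """Mean power (W) as mechanical work per unit time, following the W5'' rationale:
    work = load * g * bar displacement * number of repetitions."""
    work_j = load_kg * g * bar_displacement_m * repetitions
    return work_j / elapsed_s

# Example: 60 kg moved 0.40 m per repetition, 6 repetitions in about 5 s.
print(round(mean_power_w5(60, 0.40, 6, 5.1), 1), "W")
```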
Never judge a black hole by its area
NASA Astrophysics Data System (ADS)
Ong, Yen Chin
2015-04-01
Christodoulou and Rovelli have shown that black holes have large interiors that grow asymptotically linearly in advanced time, and speculated that this may be relevant to the information loss paradox. We show that there is no simple relation between the interior volume of an arbitrary black hole and its horizon area. That is, the volume enclosed is not necessarily a monotonically increasing function of the surface area.
A sliding mode control proposal for open-loop unstable processes.
Rojas, Rubén; Camacho, Oscar; González, Luis
2004-04-01
This paper presents a sliding mode controller based on a first-order-plus-dead-time model of the process for controlling open-loop unstable systems. The proposed controller has a simple and fixed structure with a set of tuning equations as a function of the desired performance. Both linear and nonlinear models were used to study the controller performance by computer simulations.
NASA Technical Reports Server (NTRS)
Torres-Pomales, Wilfredo
2015-01-01
This report documents a case study on the application of Reliability Engineering techniques to achieve an optimal balance between performance and robustness by tuning the functional parameters of a complex non-linear control system. For complex systems with intricate and non-linear patterns of interaction between system components, analytical derivation of a mathematical model of system performance and robustness in terms of functional parameters may not be feasible or cost-effective. The demonstrated approach is simple, structured, effective, repeatable, and cost and time efficient. This general approach is suitable for a wide range of systems.
NASA Astrophysics Data System (ADS)
Milani, G.; Bertolesi, E.
2017-07-01
A simple quasi-analytical holonomic homogenization approach for the non-linear analysis of in-plane loaded masonry walls is presented. The elementary cell (REV) is discretized with 24 triangular elastic constant-stress elements (bricks) and non-linear interfaces (mortar). A holonomic behavior with softening is assumed for mortar. It is shown how the mechanical problem in the unit cell is characterized by very few displacement variables and how homogenized stress-strain behavior can be evaluated semi-analytically.
Linear and Non-Linear Visual Feature Learning in Rat and Humans
Bossens, Christophe; Op de Beeck, Hans P.
2016-01-01
The visual system processes visual input in a hierarchical manner in order to extract relevant features that can be used in tasks such as invariant object recognition. Although typically investigated in primates, recent work has shown that rats can be trained in a variety of visual object and shape recognition tasks. These studies did not pinpoint the complexity of the features used by these animals. Many tasks might be solved by using a combination of relatively simple features which tend to be correlated. Alternatively, rats might extract complex features or feature combinations which are nonlinear with respect to those simple features. In the present study, we address this question by starting from a small stimulus set for which one stimulus-response mapping involves a simple linear feature to solve the task while another mapping needs a well-defined nonlinear combination of simpler features related to shape symmetry. We verified computationally that the nonlinear task cannot be trivially solved by a simple V1-model. We show how rats are able to solve the linear feature task but are unable to acquire the nonlinear feature. In contrast, humans are able to use the nonlinear feature and are even faster in uncovering this solution as compared to the linear feature. The implications for the computational capabilities of the rat visual system are discussed. PMID:28066201
NASA Astrophysics Data System (ADS)
Smith, N.; Huang, A.; Weisz, E.; Annegarn, H. J.
2011-12-01
The Fast Linear Inversion Trace gas System (FLITS) is designed to retrieve tropospheric total column trace gas densities [molec.cm-2] from space-borne hyperspectral infrared soundings. The objective to develop a new retrieval scheme was motivated by the need for near real-time air quality monitoring at high spatial resolution. We present a case study of FLITS carbon monoxide (CO) retrievals from daytime (descending orbit) Infrared Atmospheric Sounding Interferometer (IASI) measurements that have a 0.5 cm-1 spectral resolution and 12 km footprint at nadir. The standard Level 2 IASI CO retrieval product (COL2) is available in near real-time but is spatially averaged over 2 x 2 pixels, or 50 x 50 km, and thus more suitable for global analysis. The study region is Southern Africa (south of the equator) for the period 28-31 August 2008. An atmospheric background estimate is obtained from a chemical transport model, emissivity from regional measurements and surface temperature (ST) from space-borne retrievals. The CO background error is set to 10%. FLITS retrieves CO by assuming a simple linear relationship between the IASI measurements and background estimate of the atmosphere and surface parameters. This differs from the COL2 algorithm that treats CO retrieval as a moderately non-linear problem. When compared to COL2, the FLITS retrievals display similar trends in distribution and transport of CO over time with the advantage of an improved spatial resolution (single-pixel). The value of the averaging kernel (A) is consistently above 0.5 and indicates that FLITS retrievals have a stable dependence on the measurement. This stability is achieved through careful channel selection in the strongest CO absorption lines (2050-2225 cm-1) and joint retrieval with skin temperature (IASI sensitivity to CO is highly correlated with ST), thus no spatial averaging is necessary. We conclude that the simplicity and stability of FLITS make it useful first as a research tool, i.e. the algorithm is easy to understand and computationally simple enough to run on most desktop computers, and second, as an operational tool that can calculate near real-time CO retrievals at instrument resolution for regional monitoring.
From Spiking Neuron Models to Linear-Nonlinear Models
Ostojic, Srdjan; Brunel, Nicolas
2011-01-01
Neurons transform time-varying inputs into action potentials emitted stochastically at a time dependent rate. The mapping from current input to output firing rate is often represented with the help of phenomenological models such as the linear-nonlinear (LN) cascade, in which the output firing rate is estimated by applying to the input successively a linear temporal filter and a static non-linear transformation. These simplified models leave out the biophysical details of action potential generation. It is not a priori clear to which extent the input-output mapping of biophysically more realistic, spiking neuron models can be reduced to a simple linear-nonlinear cascade. Here we investigate this question for the leaky integrate-and-fire (LIF), exponential integrate-and-fire (EIF) and conductance-based Wang-Buzsáki models in presence of background synaptic activity. We exploit available analytic results for these models to determine the corresponding linear filter and static non-linearity in a parameter-free form. We show that the obtained functions are identical to the linear filter and static non-linearity determined using standard reverse correlation analysis. We then quantitatively compare the output of the corresponding linear-nonlinear cascade with numerical simulations of spiking neurons, systematically varying the parameters of input signal and background noise. We find that the LN cascade provides accurate estimates of the firing rates of spiking neurons in most of parameter space. For the EIF and Wang-Buzsáki models, we show that the LN cascade can be reduced to a firing rate model, the timescale of which we determine analytically. Finally we introduce an adaptive timescale rate model in which the timescale of the linear filter depends on the instantaneous firing rate. This model leads to highly accurate estimates of instantaneous firing rates. PMID:21283777
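A reduced, simulation-based version of the reverse-correlation procedure mentioned in this abstract is sketched below: the linear stage is estimated as a spike-triggered average and the static nonlinearity as the binned firing rate versus the filtered stimulus. The surrogate "neuron" here is a rectified linear filter with Poisson-like spiking, not the LIF/EIF/Wang-Buzsáki models treated analytically in the paper, and all parameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)
dt, n, taps = 0.001, 200_000, 50
stim = rng.standard_normal(n)

# Surrogate "neuron": exponentially filtered stimulus drives Poisson-like spiking
# through a rectifying nonlinearity (a stand-in for a spiking-model simulation).
kernel = np.exp(-np.arange(taps) * dt / 0.02)
drive = np.convolve(stim, kernel, mode="full")[:n]
spikes = rng.random(n) < 20.0 * np.maximum(drive, 0.0) * dt

# Linear stage: spike-triggered average (standard reverse correlation).
idx = np.nonzero(spikes)[0]
idx = idx[idx >= taps]
sta = np.mean([stim[i - taps:i] for i in idx], axis=0)[::-1]

# Static nonlinearity: binned firing rate versus the linearly filtered stimulus.
filtered = np.convolve(stim, sta, mode="full")[:n]
edges = np.quantile(filtered, np.linspace(0, 1, 21))
bin_id = np.clip(np.digitize(filtered, edges) - 1, 0, 19)
nonlinearity = np.array([spikes[bin_id == b].mean() / dt for b in range(20)])
```

Applying the estimated filter and nonlinearity to a fresh stimulus gives the LN-cascade prediction of the instantaneous firing rate that can be compared against the simulated spikes.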
Structure of weakly 2-dependent siphons
NASA Astrophysics Data System (ADS)
Chao, Daniel Yuh; Chen, Jiun-Ting
2013-09-01
Deadlocks arising from insufficiently marked siphons in flexible manufacturing systems can be controlled by adding monitors to each siphon - too many for large systems. Li and Zhou add monitors to elementary siphons only, controlling the rest (called dependent siphons) by adjusting control depth variables of elementary siphons. Only a linear number of monitors are required. The control of weakly dependent siphons (WDSs) is rather conservative since only positive terms were considered. The structure for strongly dependent siphons (SDSs) has been studied earlier. Based on this structure, the optimal sequence of adding monitors has been discovered earlier. Better controllability has been discovered to achieve faster and more permissive control. The results have been extended earlier to S3PGR2 (systems of simple sequential processes with general resource requirements). This paper explores the structures for WDSs, which, as found in this paper, involve elementary resource circuits interconnecting at more than (for SDSs, exactly) one resource place. This saves the time to compute compound siphons, their complementary sets and T-characteristic vectors. Also it allows us (1) to improve the controllability of WDSs and control siphons and (2) to avoid the time to find independent vectors for elementary siphons. We propose a sufficient and necessary test for adjusting control depth variables in S3PR (systems of simple sequential processes with resources) to avoid the sufficient-only time-consuming linear integer programming test (LIP) (Nondeterministic Polynomial (NP) time complete problem) required previously for some cases.
Linear Models for Systematics and Nuisances
NASA Astrophysics Data System (ADS)
Luger, Rodrigo; Foreman-Mackey, Daniel; Hogg, David W.
2017-12-01
The target of many astronomical studies is the recovery of tiny astrophysical signals living in a sea of uninteresting (but usually dominant) noise. In many contexts (i.e., stellar time-series, or high-contrast imaging, or stellar spectroscopy), there are structured components in this noise caused by systematic effects in the astronomical source, the atmosphere, the telescope, or the detector. More often than not, evaluation of the true physical model for these nuisances is computationally intractable and dependent on too many (unknown) parameters to allow rigorous probabilistic inference. Sometimes, housekeeping data---and often the science data themselves---can be used as predictors of the systematic noise. Linear combinations of simple functions of these predictors are often used as computationally tractable models that can capture the nuisances. These models can be used to fit and subtract systematics prior to investigation of the signals of interest, or they can be used in a simultaneous fit of the systematics and the signals. In this Note, we show that if a Gaussian prior is placed on the weights of the linear components, the weights can be marginalized out with an operation in pure linear algebra, which can (often) be made fast. We illustrate this model by demonstrating the applicability of a linear model for the non-linear systematics in K2 time-series data, where the dominant noise source for many stars is spacecraft motion and variability.
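A minimal sketch of the marginalization described in this Note is given below, assuming a linear systematics model y = A w + noise with an isotropic Gaussian prior on the weights w. The polynomial design matrix stands in for the housekeeping or science-data predictors, and all parameter values are arbitrary.

```python
import numpy as np

def marginal_loglike(y, A, sigma, prior_var):
    """Marginal log-likelihood of y under y = A w + noise, with
    w ~ N(0, prior_var * I) marginalized out in pure linear algebra."""
    n = len(y)
    C = sigma**2 * np.eye(n) + prior_var * A @ A.T      # marginal covariance
    chol = np.linalg.cholesky(C)
    alpha = np.linalg.solve(chol.T, np.linalg.solve(chol, y))  # C^-1 y
    logdet = 2 * np.sum(np.log(np.diag(chol)))
    return -0.5 * (y @ alpha + logdet + n * np.log(2 * np.pi))

# Toy K2-like example: a time series whose systematic trend is spanned by a few
# polynomial basis vectors (hypothetical; real applications use housekeeping data).
rng = np.random.default_rng(3)
t = np.linspace(0, 1, 500)
A = np.vander(t, 4)                                      # design-matrix columns
y = A @ np.array([0.3, -0.2, 0.1, 1.0]) + 0.05 * rng.standard_normal(500)
print(marginal_loglike(y, A, sigma=0.05, prior_var=1.0))
```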
An overview of longitudinal data analysis methods for neurological research.
Locascio, Joseph J; Atri, Alireza
2011-01-01
The purpose of this article is to provide a concise, broad and readily accessible overview of longitudinal data analysis methods, aimed to be a practical guide for clinical investigators in neurology. In general, we advise that older, traditional methods, including (1) simple regression of the dependent variable on a time measure, (2) analyzing a single summary subject level number that indexes changes for each subject and (3) a general linear model approach with a fixed-subject effect, should be reserved for quick, simple or preliminary analyses. We advocate the general use of mixed-random and fixed-effect regression models for analyses of most longitudinal clinical studies. Under restrictive situations or to provide validation, we recommend: (1) repeated-measure analysis of covariance (ANCOVA), (2) ANCOVA for two time points, (3) generalized estimating equations and (4) latent growth curve/structural equation models.
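For readers who want a concrete starting point, a minimal mixed-effects fit of the kind the overview recommends might look like the sketch below (Python/statsmodels). The file name and column names are hypothetical.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per visit, with columns 'score',
# 'years' (time since baseline), 'treated', and 'subject'.
df = pd.read_csv("longitudinal_visits.csv")

# Random intercept and slope per subject; fixed effects for time and group.
model = smf.mixedlm("score ~ years * treated", df,
                    groups=df["subject"], re_formula="~years")
result = model.fit()
print(result.summary())
```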
Multi-Mode Analysis of Dual Ridged Waveguide Systems for Material Characterization
2015-09-17
characterization is the process of determining the dielectric, magnetic, and magnetoelectric properties of a material. For simple (i.e., linear... field expressions in terms of elementary functions (sines, cosines, exponentials and Bessel functions) and corresponding propagation constants of the... with material parameters ε0 and µ0. • The MUT is simple (linear, isotropic, homogeneous), and the sample has a uniform thickness. • The waveguide
NASA Technical Reports Server (NTRS)
Chelton, Dudley B.; Schlax, Michael G.
1991-01-01
The sampling error of an arbitrary linear estimate of a time-averaged quantity constructed from a time series of irregularly spaced observations at a fixed location is quantified through a formalism. The method is applied to satellite observations of chlorophyll from the coastal zone color scanner. The two specific linear estimates under consideration are the composite average formed from the simple average of all observations within the averaging period and the optimal estimate formed by minimizing the mean squared error of the temporal average based on all the observations in the time series. The resulting suboptimal estimates are shown to be more accurate than composite averages. Suboptimal estimates are also found to be nearly as accurate as optimal estimates using the correct signal and measurement error variances and correlation functions for realistic ranges of these parameters, which makes it a viable practical alternative to the composite average method generally employed at present.
Speed-difficulty trade-off in speech: Chinese versus English
Sun, Yao; Latash, Elizaveta M.; Mikaelian, Irina L.
2011-01-01
This study continues the investigation of the previously described speed-difficulty trade-off in picture description tasks. In particular, we tested a hypothesis that the Mandarin Chinese and American English are similar in showing logarithmic dependences between speech time and index of difficulty (ID), while they differ significantly in the amount of time needed to describe simple pictures, this difference increases for more complex pictures, and it is associated with a proportional difference in the number of syllables used. Subjects (eight Chinese speakers and eight English speakers) were tested in pairs. One subject (the Speaker) described simple pictures, while the other subject (the Performer) tried to reproduce the pictures based on the verbal description as quickly as possible with a set of objects. The Chinese speakers initiated speech production significantly faster than the English speakers. Speech time scaled linearly with ln(ID) in all subjects, but the regression coefficient was significantly higher in the English speakers as compared with the Chinese speakers. The number of errors was somewhat lower in the Chinese participants (not significantly). The Chinese pairs also showed a shorter delay between the initiation of speech and initiation of action by the Performer, shorter movement time by the Performer, and shorter overall performance time. The number of syllables scaled with ID, and the Chinese speakers used significantly smaller numbers of syllables. Speech rate was comparable between the two groups, about 3 syllables/s; it dropped for more complex pictures (higher ID). When asked to reproduce the same pictures without speaking, movement time scaled linearly with ln(ID); the Chinese performers were slower than the English performers. We conclude that natural languages show a speed-difficulty trade-off similar to Fitts’ law; the trade-offs in movement and speech production are likely to originate at a cognitive level. The time advantage of the Chinese participants originates not from similarity of the simple pictures and Chinese written characters and not from more sloppy performance. It is linked to using fewer syllables to transmit the same information. We suggest that natural languages may differ by informational density defined as the amount of information transmitted by a given number of syllables. PMID:21479658
Simulating Eastern- and Central-Pacific Type ENSO Using a Simple Coupled Model
NASA Astrophysics Data System (ADS)
Fang, Xianghui; Zheng, Fei
2018-06-01
Severe biases exist in state-of-the-art general circulation models (GCMs) in capturing realistic central-Pacific (CP) El Niño structures. At the same time, many observational analyses have emphasized that thermocline (TH) feedback and zonal advective (ZA) feedback play dominant roles in the development of eastern-Pacific (EP) and CP El Niño-Southern Oscillation (ENSO), respectively. In this work, a simple linear air-sea coupled model, which can accurately depict the strength distribution of the TH and ZA feedbacks in the equatorial Pacific, is used to investigate these two types of El Niño. The results indicate that the model can reproduce the main characteristics of CP ENSO if the TH feedback is switched off and the ZA feedback is retained as the only positive feedback, confirming the dominant role played by ZA feedback in the development of CP ENSO. Further experiments indicate that, through a simple nonlinear control approach, many ENSO characteristics, including the existence of both CP and EP El Niño and the asymmetries between El Niño and La Niña, can be successfully captured using the simple linear air-sea coupled model. These analyses indicate that an accurate depiction of the climatological sea surface temperature distribution and the related ZA feedback, which are the subject of severe biases in GCMs, is very important in simulating a realistic CP El Niño.
A Thermodynamic Theory Of Solid Viscoelasticity. Part 1: Linear Viscoelasticity.
NASA Technical Reports Server (NTRS)
Freed, Alan D.; Leonov, Arkady I.
2002-01-01
The present series of three consecutive papers develops a general theory for linear and finite solid viscoelasticity. Because the most important objects for nonlinear studies are rubber-like materials, the general approach is specified in a form convenient for solving problems important for many industries that involve rubber-like materials. General linear and nonlinear theories for non-isothermal deformations of viscoelastic solids are developed based on the quasi-linear approach of non-equilibrium thermodynamics. In this, the first paper of the series, we analyze non-isothermal linear viscoelasticity, which is applicable in a range of small strains not only to all synthetic polymers and bio-polymers but also to some non-polymeric materials. Although the linear case seems to be well developed, there still are some reasons to implement a thermodynamic derivation of constitutive equations for solid-like, non-isothermal, linear viscoelasticity. The most important is the thermodynamic modeling of thermo-rheological complexity, i.e. different temperature dependences of relaxation parameters in various parts of the relaxation spectrum. A special structure of interaction matrices is established for different physical mechanisms contributing to the normal relaxation modes. This structure seems to be in accord with observations, and creates a simple mathematical framework for both continuum and molecular theories of the thermo-rheological complex relaxation phenomena. Finally, a unified approach is briefly discussed that, in principle, allows combining both the long time (discrete) and short time (continuous) descriptions of relaxation behaviors for polymers in the rubbery and glassy regions.
Linear regression metamodeling as a tool to summarize and present simulation model results.
Jalal, Hawre; Dowd, Bryan; Sainfort, François; Kuntz, Karen M
2013-10-01
Modelers lack a tool to systematically and clearly present complex model results, including those from sensitivity analyses. The objective was to propose linear regression metamodeling as a tool to increase transparency of decision analytic models and better communicate their results. We used a simplified cancer cure model to demonstrate our approach. The model computed the lifetime cost and benefit of 3 treatment options for cancer patients. We simulated 10,000 cohorts in a probabilistic sensitivity analysis (PSA) and regressed the model outcomes on the standardized input parameter values in a set of regression analyses. We used the regression coefficients to describe measures of sensitivity analyses, including threshold and parameter sensitivity analyses. We also compared the results of the PSA to deterministic full-factorial and one-factor-at-a-time designs. The regression intercept represented the estimated base-case outcome, and the other coefficients described the relative parameter uncertainty in the model. We defined simple relationships that compute the average and incremental net benefit of each intervention. Metamodeling produced outputs similar to traditional deterministic 1-way or 2-way sensitivity analyses but was more reliable since it used all parameter values. Linear regression metamodeling is a simple, yet powerful, tool that can assist modelers in communicating model characteristics and sensitivity analyses.
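The core of the approach (regress PSA outcomes on standardized inputs and read the coefficients as sensitivity measures, with the intercept as the base-case estimate) fits in a short script. The simulated "model" and its parameters below are invented for illustration and are not the cancer cure model from the article.

```python
import numpy as np

rng = np.random.default_rng(4)
n_sim, names = 10_000, ["p_cure", "cost_tx", "utility"]

# Stand-in for PSA draws and the model outcome (incremental net benefit).
params = rng.normal(size=(n_sim, 3)) * [0.05, 500.0, 0.02] + [0.4, 2000.0, 0.75]
inb = (30000 * params[:, 0] - params[:, 1] + 5000 * params[:, 2]
       + rng.normal(scale=200, size=n_sim))

# Standardize inputs, then ordinary least squares: the intercept estimates the
# base-case outcome and each coefficient measures parameter importance.
z = (params - params.mean(0)) / params.std(0)
X = np.column_stack([np.ones(n_sim), z])
coef, *_ = np.linalg.lstsq(X, inb, rcond=None)
for name, c in zip(["base case"] + names, coef):
    print(f"{name:>9}: {c:,.0f}")
```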
Bounding solutions of geometrically nonlinear viscoelastic problems
NASA Technical Reports Server (NTRS)
Stubstad, J. M.; Simitses, G. J.
1985-01-01
Integral transform techniques, such as the Laplace transform, provide simple and direct methods for solving viscoelastic problems formulated within a context of linear material response and using linear measures for deformation. Application of the transform operator reduces the governing linear integro-differential equations to a set of algebraic relations between the transforms of the unknown functions, the viscoelastic operators, and the initial and boundary conditions. Inversion either directly or through the use of the appropriate convolution theorem, provides the time domain response once the unknown functions have been expressed in terms of sums, products or ratios of known transforms. When exact inversion is not possible approximate techniques may provide accurate results. The overall problem becomes substantially more complex when nonlinear effects must be included. Situations where a linear material constitutive law can still be productively employed but where the magnitude of the resulting time dependent deformations warrants the use of a nonlinear kinematic analysis are considered. The governing equations will be nonlinear integro-differential equations for this class of problems. Thus traditional as well as approximate techniques, such as cited above, cannot be employed since the transform of a nonlinear function is not explicitly expressible.
Bounding solutions of geometrically nonlinear viscoelastic problems
NASA Technical Reports Server (NTRS)
Stubstad, J. M.; Simitses, G. J.
1986-01-01
Integral transform techniques, such as the Laplace transform, provide simple and direct methods for solving viscoelastic problems formulated within a context of linear material response and using linear measures for deformation. Application of the transform operator reduces the governing linear integro-differential equations to a set of algebraic relations between the transforms of the unknown functions, the viscoelastic operators, and the initial and boundary conditions. Inversion either directly or through the use of the appropriate convolution theorem, provides the time domain response once the unknown functions have been expressed in terms of sums, products or ratios of known transforms. When exact inversion is not possible approximate techniques may provide accurate results. The overall problem becomes substantially more complex when nonlinear effects must be included. Situations where a linear material constitutive law can still be productively employed but where the magnitude of the resulting time dependent deformations warrants the use of a nonlinear kinematic analysis are considered. The governing equations will be nonlinear integro-differential equations for this class of problems. Thus traditional as well as approximate techniques, such as cited above, cannot be employed since the transform of a nonlinear function is not explicitly expressible.
NASA Technical Reports Server (NTRS)
Milman, Mark H.
1987-01-01
The fundamental control synthesis issue of establishing a priori convergence rates of approximation schemes for feedback controllers for a class of distributed parameter systems is addressed within the context of hereditary systems. Specifically, a factorization approach is presented for deriving approximations to the optimal feedback gains for the linear regulator-quadratic cost problem associated with time-varying functional differential equations with control delays. The approach is based on a discretization of the state penalty which leads to a simple structure for the feedback control law. General properties of the Volterra factors of Hilbert-Schmidt operators are then used to obtain convergence results for the controls, trajectories and feedback kernels. Two algorithms are derived from the basic approximation scheme, including a fast algorithm, in the time-invariant case. A numerical example is also considered.
NASA Technical Reports Server (NTRS)
Milman, Mark H.
1988-01-01
The fundamental control synthesis issue of establishing a priori convergence rates of approximation schemes for feedback controllers for a class of distributed parameter systems is addressed within the context of hereditary systems. Specifically, a factorization approach is presented for deriving approximations to the optimal feedback gains for the linear regulator-quadratic cost problem associated with time-varying functional differential equations with control delays. The approach is based on a discretization of the state penalty which leads to a simple structure for the feedback control law. General properties of the Volterra factors of Hilbert-Schmidt operators are then used to obtain convergence results for the controls, trajectories and feedback kernels. Two algorithms are derived from the basic approximation scheme, including a fast algorithm, in the time-invariant case. A numerical example is also considered.
Linear analysis of time dependent properties of Child-Langmuir flow
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rokhlenko, A.
We continue our analysis of the time dependent behavior of the electron flow in the Child-Langmuir system, removing an approximation used earlier. We find a modified set of oscillatory decaying modes with frequencies of the same order as the inverse of the electron transient time. This range (typically MHz) allows simple experimental detection and maybe exploitation. We then study the time evolution of the current in response to a slow change of the anode voltage where the same modes of oscillations appear too. The cathode current in this case is systematically advanced or retarded depending on the direction of the voltage change.
Linear analysis of time dependent properties of Child-Langmuir flow
NASA Astrophysics Data System (ADS)
Rokhlenko, A.
2013-01-01
We continue our analysis of the time dependent behavior of the electron flow in the Child-Langmuir system, removing an approximation used earlier. We find a modified set of oscillatory decaying modes with frequencies of the same order as the inverse of the electron transient time. This range (typically MHz) allows simple experimental detection and maybe exploitation. We then study the time evolution of the current in response to a slow change of the anode voltage where the same modes of oscillations appear too. The cathode current in this case is systematically advanced or retarded depending on the direction of the voltage change.
A Dynamic Approach to Monitoring Particle Fallout in a Cleanroom Environment
NASA Technical Reports Server (NTRS)
Perry, Radford L., III
2010-01-01
This slide presentation discusses a mathematical model to monitor particle fallout in a cleanroom. "Cleanliness levels" do not increase simply with cleanroom class or time because the levels are not linear. Activity level also impacts the cleanroom class. The numerical method presented leads to a simple Class-hour formulation that allows for dynamic monitoring of particle fallout using a standard air particle counter.
Did the ever dead outnumber the living and when? A birth-and-death approach
NASA Astrophysics Data System (ADS)
Avan, Jean; Grosjean, Nicolas; Huillet, Thierry
2015-02-01
This paper is an attempt to formalize analytically the question raised in 'World Population Explained: Do Dead People Outnumber Living, Or Vice Versa?' Huffington Post, Howard (2012). We start by developing simple deterministic Malthusian growth models of the problem (with birth and death rates either constant or time-dependent) before turning to both linear birth-and-death Markov chain models and age-structured models.
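Before the Markov chain and age-structured treatments, the constant-rate Malthusian case already gives a closed-form answer; a minimal sketch, with per-capita rates chosen purely for illustration (not demographic estimates), is given below.

```python
import numpy as np

def crossover_time(b, d, n0=1.0):
    """Time at which cumulative deaths first exceed the living population in a
    constant-rate Malthusian model N(t) = n0*exp((b-d)*t); None if never."""
    r = b - d                     # net per-capita growth rate
    if d <= r:                    # growth too fast: the living always outnumber the dead
        return None
    if r == 0:                    # stationary population: deaths accumulate as d*n0*t
        return 1.0 / d
    # Deaths D(t) = d*n0*(exp(r*t)-1)/r exceed N(t) once exp(r*t) > d/(d-r).
    return np.log(d / (d - r)) / r

print(crossover_time(b=0.04, d=0.03))   # illustrative rates per capita per year
```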
How to Detect the Location and Time of a Covert Chemical Attack: A Bayesian Approach
2009-12-01
Inverse Problems, Design and Optimization Symposium 2004, Rio de Janeiro, Brazil. Chan, R., and Yee, E. (1997). A simple model for the probability... sensor interpretation applications and has been successfully applied, for example, to estimate the source strength of pollutant releases in multi... coagulation, and second-order pollutant diffusion in sorption-desorption, are not linear. Furthermore, wide uncertainty bounds exist for several of
A necessary condition for dispersal driven growth of populations with discrete patch dynamics.
Guiver, Chris; Packman, David; Townley, Stuart
2017-07-07
We revisit the question: when can dispersal-induced coupling between discrete sink populations cause overall population growth? Such a phenomenon is called dispersal driven growth and provides a simple explanation of how dispersal can allow populations to persist across discrete, spatially heterogeneous, environments even when individual patches are adverse or unfavourable. For two classes of mathematical models, one linear and one non-linear, we provide necessary conditions for dispersal driven growth in terms of the non-existence of a common linear Lyapunov function, which we describe. Our approach draws heavily upon the underlying positive dynamical systems structure. Our results apply to both discrete- and continuous-time models. The theory is illustrated with examples and both biological and mathematical conclusions are drawn. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
Babaei, Behzad; Abramowitch, Steven D.; Elson, Elliot L.; Thomopoulos, Stavros; Genin, Guy M.
2015-01-01
The viscoelastic behaviour of a biological material is central to its functioning and is an indicator of its health. The Fung quasi-linear viscoelastic (QLV) model, a standard tool for characterizing biological materials, provides excellent fits to most stress–relaxation data by imposing a simple form upon a material's temporal relaxation spectrum. However, model identification is challenging because the Fung QLV model's ‘box’-shaped relaxation spectrum, predominant in biomechanics applications, can provide an excellent fit even when it is not a reasonable representation of a material's relaxation spectrum. Here, we present a robust and simple discrete approach for identifying a material's temporal relaxation spectrum from stress–relaxation data in an unbiased way. Our ‘discrete QLV’ (DQLV) approach identifies ranges of time constants over which the Fung QLV model's typical box spectrum provides an accurate representation of a particular material's temporal relaxation spectrum, and is effective at providing a fit to this model. The DQLV spectrum also reveals when other forms or discrete time constants are more suitable than a box spectrum. After validating the approach against idealized and noisy data, we applied the methods to analyse medial collateral ligament stress–relaxation data and identify the strengths and weaknesses of an optimal Fung QLV fit. PMID:26609064
Merli, Daniele; Zamboni, Daniele; Protti, Stefano; Pesavento, Maria; Profumo, Antonella
2014-12-01
Lysergic acid diethylamide (LSD) is hardly detectable and quantifiable in biological samples because of its low active dose. Although several analytical tests are available, routine analysis of this drug is rarely performed. In this article, we report a simple and accurate method for the determination of LSD, based on adsorptive stripping voltammetry in DMF/tetrabutylammonium perchlorate, with a linear range of 1-90 ng L(-1) for a deposition time of 50 s. An LOD of 1.4 ng L(-1) and an LOQ of 4.3 ng L(-1) were found. The method can also be applied to biological samples after a simple extraction with 1-chlorobutane. Copyright © 2014 Elsevier B.V. All rights reserved.
NASA Technical Reports Server (NTRS)
Krempl, Erhard; Hong, Bor Zen
1989-01-01
A macromechanics analysis is presented for the in-plane, anisotropic time-dependent behavior of metal matrix laminates. The small deformation, orthotropic viscoplasticity theory based on overstress represents lamina behavior in a modified simple laminate theory. Material functions and constants can be identified in principle from experiments with laminae. Orthotropic invariants can be repositories for tension-compression asymmetry and for linear elasticity in one direction while the other directions behave in a viscoplastic manner. Computer programs are generated and tested for either unidirectional or symmetric laminates under in-plane loading. Correlations with the experimental results on metal matrix composites are presented.
FaCSI: A block parallel preconditioner for fluid-structure interaction in hemodynamics
NASA Astrophysics Data System (ADS)
Deparis, Simone; Forti, Davide; Grandperrin, Gwenol; Quarteroni, Alfio
2016-12-01
Modeling Fluid-Structure Interaction (FSI) in the vascular system is mandatory to reliably compute mechanical indicators in vessels undergoing large deformations. In order to cope with the computational complexity of the coupled 3D FSI problem after discretizations in space and time, a parallel solution is often mandatory. In this paper we propose a new block parallel preconditioner for the coupled linearized FSI system obtained after space and time discretization. We name it FaCSI to indicate that it exploits the Factorized form of the linearized FSI matrix, the use of static Condensation to formally eliminate the interface degrees of freedom of the fluid equations, and the use of a SIMPLE preconditioner for saddle-point problems. FaCSI is built upon a block Gauss-Seidel factorization of the FSI Jacobian matrix and it uses ad-hoc preconditioners for each physical component of the coupled problem, namely the fluid, the structure and the geometry. In the fluid subproblem, after operating static condensation of the interface fluid variables, we use a SIMPLE preconditioner on the reduced fluid matrix. Moreover, to efficiently deal with a large number of processes, FaCSI exploits efficient single field preconditioners, e.g., based on domain decomposition or the multigrid method. We measure the parallel performances of FaCSI on a benchmark cylindrical geometry and on a problem of physiological interest, namely the blood flow through a patient-specific femoropopliteal bypass. We analyze the dependence of the number of linear solver iterations on the cores count (scalability of the preconditioner) and on the mesh size (optimality).
Accurate Monotonicity - Preserving Schemes With Runge-Kutta Time Stepping
NASA Technical Reports Server (NTRS)
Suresh, A.; Huynh, H. T.
1997-01-01
A new class of high-order monotonicity-preserving schemes for the numerical solution of conservation laws is presented. The interface value in these schemes is obtained by limiting a higher-order polynomial reconstruction. The limiting is designed to preserve accuracy near extrema and to work well with Runge-Kutta time stepping. Computational efficiency is enhanced by a simple test that determines whether the limiting procedure is needed. For linear advection in one dimension, these schemes are shown to be monotonicity preserving and uniformly high-order accurate; numerical experiments for advection as well as the Euler equations also confirm their high accuracy, good shock resolution, and computational efficiency.
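As a concrete and deliberately simplified illustration of limited interface reconstruction combined with Runge-Kutta time stepping, the sketch below advects a square pulse with a minmod-limited MUSCL scheme and SSP-RK3. It is not the authors' higher-order monotonicity-preserving scheme; the grid, CFL number and initial pulse are illustrative assumptions.

import numpy as np

def minmod(a, b):
    # Limited slope: zero across extrema, smallest-magnitude one-sided difference otherwise.
    return np.where(a * b > 0.0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def rhs(u, a, dx):
    # Limited linear reconstruction and upwind interface fluxes for a > 0, periodic domain.
    du_left = u - np.roll(u, 1)
    du_right = np.roll(u, -1) - u
    slope = minmod(du_left, du_right)
    u_face = u + 0.5 * slope                # value at the right face of each cell
    flux = a * u_face
    return -(flux - np.roll(flux, 1)) / dx

def ssp_rk3_step(u, dt, a, dx):
    # Strong-stability-preserving three-stage Runge-Kutta step.
    u1 = u + dt * rhs(u, a, dx)
    u2 = 0.75 * u + 0.25 * (u1 + dt * rhs(u1, a, dx))
    return u / 3.0 + 2.0 / 3.0 * (u2 + dt * rhs(u2, a, dx))

# Advect a square pulse once around a periodic domain.
n, a = 200, 1.0
x = np.linspace(0.0, 1.0, n, endpoint=False)
dx = x[1] - x[0]
u = np.where((x > 0.3) & (x < 0.6), 1.0, 0.0)
dt, t = 0.4 * dx / a, 0.0
while t < 1.0:
    step = min(dt, 1.0 - t)
    u = ssp_rk3_step(u, step, a, dx)
    t += step
print("min/max after one period:", u.min(), u.max())   # stays within [0, 1], no new extrema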
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zeng, L., E-mail: zeng@fusion.gat.com; Doyle, E. J.; Rhodes, T. L.
2016-11-15
A new model-based technique for fast estimation of the pedestal electron density gradient has been developed. The technique uses ordinary mode polarization profile reflectometer time delay data and does not require direct profile inversion. Because of its simple data processing, the technique can be readily implemented via a Field-Programmable Gate Array, so as to provide a real-time density gradient estimate, suitable for use in plasma control systems such as envisioned for ITER, and possibly for DIII-D and Experimental Advanced Superconducting Tokamak. The method is based on a simple edge plasma model with a linear pedestal density gradient and low scrape-off-layer density. By measuring reflectometer time delays for three adjacent frequencies, the pedestal density gradient can be estimated analytically via the new approach. Using existing DIII-D profile reflectometer data, the estimated density gradients obtained from the new technique are found to be in good agreement with the actual density gradients for a number of dynamic DIII-D plasma conditions.
Prediction of the Main Engine Power of a New Container Ship at the Preliminary Design Stage
NASA Astrophysics Data System (ADS)
Cepowski, Tomasz
2017-06-01
The paper presents mathematical relationships that allow us to forecast the estimated main engine power of new container ships, based on data concerning vessels built in 2005-2015. The presented approximations allow us to estimate the engine power from the length between perpendiculars and the number of containers the ship will carry. The approximations were developed using simple linear regression and multivariate linear regression analysis. The presented relations have practical application for estimating the container ship engine power needed in preliminary parametric design of the ship. The analysis shows that multiple linear regression predicts the main engine power of a container ship more accurately than simple linear regression.
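For illustration only, the sketch below fits both a simple regression on length between perpendiculars and a multiple regression that also uses container capacity, via ordinary least squares. The ship data are made up and are not the paper's dataset or coefficients.

import numpy as np

# Hypothetical container-ship data (not from the paper):
lpp = np.array([150.0, 180.0, 210.0, 250.0, 290.0, 330.0, 350.0])     # length between perpendiculars [m]
teu = np.array([1100, 1900, 2800, 4500, 6500, 9000, 11000], float)    # container capacity [TEU]
power = np.array([9.0, 13.0, 18.0, 28.0, 41.0, 55.0, 63.0]) * 1e3     # main engine power [kW]

def fit(X, y):
    # Ordinary least squares with an intercept column; returns coefficients and RMSE.
    X1 = np.column_stack([np.ones(len(y)), X])
    coef, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ coef
    return coef, float(np.sqrt(np.mean(resid ** 2)))

coef_s, rmse_s = fit(lpp[:, None], power)                  # simple regression: power ~ Lpp
coef_m, rmse_m = fit(np.column_stack([lpp, teu]), power)   # multiple regression: power ~ Lpp + TEU
print("simple:   coef =", coef_s, " RMSE =", rmse_s)
print("multiple: coef =", coef_m, " RMSE =", rmse_m)       # expected to be the lower of the two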
Rotstein, Horacio G
2014-01-01
We investigate the dynamic mechanisms of generation of subthreshold and phase resonance in two-dimensional linear and linearized biophysical (conductance-based) models, and we extend our analysis to account for the effect of simple, but not necessarily weak, types of nonlinearities. Subthreshold resonance refers to the ability of neurons to exhibit a peak in their voltage amplitude response to oscillatory input currents at a preferred non-zero (resonant) frequency. Phase-resonance refers to the ability of neurons to exhibit a zero-phase (or zero-phase-shift) response to oscillatory input currents at a non-zero (phase-resonant) frequency. We adapt the classical phase-plane analysis approach to account for the dynamic effects of oscillatory inputs and develop a tool, the envelope-plane diagrams, that captures the role that conductances and time scales play in amplifying the voltage response at the resonant frequency band as compared to smaller and larger frequencies. We use envelope-plane diagrams in our analysis. We explain why the resonance phenomena do not necessarily arise from the presence of imaginary eigenvalues at rest, but rather they emerge from the interplay of the intrinsic and input time scales. We further explain why an increase in the time-scale separation causes an amplification of the voltage response in addition to shifting the resonant and phase-resonant frequencies. This is of fundamental importance for neural models since neurons typically exhibit a strong separation of time scales. We extend this approach to explain the effects of nonlinearities on both resonance and phase-resonance. We demonstrate that nonlinearities in the voltage equation cause amplifications of the voltage response and shifts in the resonant and phase-resonant frequencies that are not predicted by the corresponding linearized model. The differences between the nonlinear response and the linear prediction increase with increasing levels of the time scale separation between the voltage and the gating variable, and they almost disappear when both equations evolve at comparable rates. In contrast, voltage responses are almost insensitive to nonlinearities located in the gating variable equation. The method we develop provides a framework for the investigation of the preferred frequency responses in three-dimensional and nonlinear neuronal models as well as simple models of coupled neurons.
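A minimal numerical sketch of the underlying idea, using generic parameter values rather than the paper's models: the impedance of a two-dimensional linear neuron model is evaluated over frequency, and the resonant (peak-amplitude) and phase-resonant (zero-phase) frequencies are read off.

import numpy as np

# Linearized two-dimensional model: C dV/dt = -gL*V - g*w + I(t), tau dw/dt = V - w.
# Parameter values are arbitrary illustrative choices (times in ms).
C, gL, g, tau = 1.0, 0.1, 0.5, 100.0
f = np.linspace(0.1, 50.0, 5000)             # input frequency [Hz]
w = 2.0 * np.pi * f / 1000.0                 # angular frequency [rad/ms]

# Impedance of the voltage equation with the slow gating variable eliminated.
Z = 1.0 / (1j * w * C + gL + g / (1.0 + 1j * w * tau))
amp, phase = np.abs(Z), np.angle(Z)

f_res = f[np.argmax(amp)]                    # subthreshold (amplitude) resonance
f_phase = f[np.argmin(np.abs(phase))]        # phase resonance (phase closest to zero)
print(f"resonant frequency ~ {f_res:.1f} Hz, phase-resonant frequency ~ {f_phase:.1f} Hz")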
Are V1 Simple Cells Optimized for Visual Occlusions? A Comparative Study
Bornschein, Jörg; Henniges, Marc; Lücke, Jörg
2013-01-01
Simple cells in primary visual cortex were famously found to respond to low-level image components such as edges. Sparse coding and independent component analysis (ICA) emerged as the standard computational models for simple cell coding because they linked their receptive fields to the statistics of visual stimuli. However, a salient feature of image statistics, occlusions of image components, is not considered by these models. Here we ask if occlusions have an effect on the predicted shapes of simple cell receptive fields. We use a comparative approach to answer this question and investigate two models for simple cells: a standard linear model and an occlusive model. For both models we simultaneously estimate optimal receptive fields, sparsity and stimulus noise. The two models are identical except for their component superposition assumption. We find the image encoding and receptive fields predicted by the models to differ significantly. While both models predict many Gabor-like fields, the occlusive model predicts a much sparser encoding and high percentages of ‘globular’ receptive fields. This relatively new center-surround type of simple cell response has been observed since reverse correlation came into use in experimental studies. While high percentages of ‘globular’ fields can be obtained using specific choices of sparsity and overcompleteness in linear sparse coding, no or only low proportions are reported in the vast majority of studies on linear models (including all ICA models). Likewise, for the linear model investigated here and optimal sparsity, only low proportions of ‘globular’ fields are observed. In comparison, the occlusive model robustly infers high proportions and can match the experimentally observed high proportions of ‘globular’ fields well. Our computational study, therefore, suggests that ‘globular’ fields may be evidence for an optimal encoding of visual occlusions in primary visual cortex. PMID:23754938
How preservation time changes the linear viscoelastic properties of porcine liver.
Wex, C; Stoll, A; Fröhlich, M; Arndt, S; Lippert, H
2013-01-01
The preservation time of a liver graft is one of the crucial factors for the success of a liver transplantation. Grafts are kept in a preservation solution to delay cell destruction and cellular edema and to maximize organ function after transplantation. However, longer preservation times are not always avoidable. In this paper we focus on the mechanical changes of porcine liver with increasing preservation time, in order to establish an indicator for the quality of a liver graft dependent on preservation time. A time interval of 26 h was covered and the rheological properties of liver tissue were studied using a stress-controlled rheometer. For samples with a preservation time of 1 h, 0.8% strain was found to be the limit of linear viscoelasticity. With increasing preservation time, a decrease in the complex shear modulus, as an indicator of stiffness, was observed for the frequency range from 0.1 to 10 Hz. A simple fractional derivative representation of the Kelvin-Voigt model was applied to gain further information about the changes of the mechanical properties of liver with increasing preservation time. Within the small shear rate interval of 0.0001-0.01 s⁻¹ the liver showed Newtonian-like flow behavior.
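A minimal sketch of the kind of model referred to, with illustrative parameters that are not the paper's fitted values: the complex shear modulus of a fractional Kelvin-Voigt element (a spring in parallel with a springpot) evaluated over the 0.1-10 Hz range.

import numpy as np

def fractional_kelvin_voigt(f_hz, Ge, c, alpha):
    # G*(omega) = Ge + c*(i*omega)^alpha: spring Ge in parallel with a springpot.
    w = 2.0 * np.pi * f_hz
    G_star = Ge + c * (1j * w) ** alpha
    return G_star.real, G_star.imag            # storage modulus G', loss modulus G''

f = np.logspace(-1, 1, 7)                       # 0.1 to 10 Hz
Gp, Gpp = fractional_kelvin_voigt(f, Ge=300.0, c=150.0, alpha=0.25)   # assumed values [Pa, Pa*s^alpha]
for fi, a, b in zip(f, Gp, Gpp):
    print(f"{fi:6.2f} Hz   G' = {a:8.1f} Pa   G'' = {b:7.1f} Pa   |G*| = {np.hypot(a, b):8.1f} Pa")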
Lessons from Jurassic Park: patients as complex adaptive systems.
Katerndahl, David A
2009-08-01
With realization that non-linearity is generally the rule rather than the exception in nature, viewing patients and families as complex adaptive systems may lead to a better understanding of health and illness. Doctors who successfully practise the 'art' of medicine may recognize non-linear principles at work without having the jargon needed to label them. Complex adaptive systems are systems composed of multiple components that display complexity and adaptation to input. These systems consist of self-organized components, which display complex dynamics, ranging from simple periodicity to chaotic and random patterns showing trends over time. Understanding the non-linear dynamics of phenomena both internal and external to our patients can (1) improve our definition of 'health'; (2) improve our understanding of patients, disease and the systems in which they converge; (3) be applied to future monitoring systems; and (4) be used to possibly engineer change. Such a non-linear view of the world is quite congruent with the generalist perspective.
Gómez-Extremera, Manuel; Carpena, Pedro; Ivanov, Plamen Ch; Bernaola-Galván, Pedro A
2016-04-01
We systematically study the scaling properties of the magnitude and sign of the fluctuations in correlated time series, which is a simple and useful approach to distinguish between systems with different dynamical properties but the same linear correlations. First, we decompose artificial long-range power-law linearly correlated time series into magnitude and sign series derived from the consecutive increments in the original series, and we study their correlation properties. We find analytical expressions for the correlation exponent of the sign series as a function of the exponent of the original series. Such expressions are necessary for modeling surrogate time series with desired scaling properties. Next, we study linear and nonlinear correlation properties of series composed as products of independent magnitude and sign series. These surrogate series can be considered as a zero-order approximation to the analysis of the coupling of magnitude and sign in real data, a problem still open in many fields. We find analytical results for the scaling behavior of the composed series as a function of the correlation exponents of the magnitude and sign series used in the composition, and we determine the ranges of magnitude and sign correlation exponents leading to either single scaling or to crossover behaviors. Finally, we obtain how the linear and nonlinear properties of the composed series depend on the correlation exponents of their magnitude and sign series. Based on this information we propose a method to generate surrogate series with controlled correlation exponent and multifractal spectrum.
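The decomposition step itself is straightforward; the sketch below splits the increments of a toy series into magnitude and sign series and forms a naive uncoupled surrogate by shuffling. This is only a crude stand-in for the paper's construction of magnitude and sign series with prescribed correlation exponents.

import numpy as np

rng = np.random.default_rng(0)
x = np.cumsum(rng.standard_normal(10000))        # toy correlated series (random walk)

dx = np.diff(x)
magnitude = np.abs(dx)                           # magnitude series
sign = np.sign(dx)                               # sign series

def lag1_autocorr(s):
    # Quick linear-correlation check at lag one (not a scaling analysis).
    s = s - s.mean()
    return float(np.dot(s[:-1], s[1:]) / np.dot(s, s))

# Naive surrogate increments: product of independently shuffled magnitude and
# sign series, so their mutual coupling and original ordering are destroyed.
surrogate_dx = rng.permutation(magnitude) * rng.permutation(sign)

print("lag-1 autocorrelation of magnitude series:", lag1_autocorr(magnitude))
print("lag-1 autocorrelation of sign series:     ", lag1_autocorr(sign))
print("lag-1 autocorrelation of surrogate dx:    ", lag1_autocorr(surrogate_dx))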
Simple colorimetric detection of doxycycline and oxytetracycline using unmodified gold nanoparticles
NASA Astrophysics Data System (ADS)
Li, Jie; Fan, Shumin; Li, Zhigang; Xie, Yuanzhe; Wang, Rui; Ge, Baoyu; Wu, Jing; Wang, Ruiyong
2014-08-01
The interaction between tetracycline antibiotics and gold nanoparticles was studied. With citrate-coated gold nanoparticles as colorimetric probe, a simple and rapid detection method for doxycycline and oxytetracycline has been developed. This method relies on the distance-dependent optical properties of gold nanoparticles. In weakly acidic buffer medium, doxycycline and oxytetracycline could rapidly induce the aggregation of gold nanoparticles, resulting in red-to-blue (or purple) colour change. The experimental parameters were optimized with regard to pH, the concentration of the gold nanoparticles and the reaction time. Under optimal experimental conditions, the linear range of the colorimetric sensor for doxycycline/oxytetracycline was 0.06-0.66 and 0.59-8.85 μg mL-1, respectively. The corresponding limit of detection for doxycycline and oxytetracycline was 0.0086 and 0.0838 μg mL-1, respectively. This assay was sensitive, selective, simple and readily used to detect tetracycline antibiotics in food products.
Cheng, Ryan R; Hawk, Alexander T; Makarov, Dmitrii E
2013-02-21
Recent experiments showed that the reconfiguration dynamics of unfolded proteins are often adequately described by simple polymer models. In particular, the Rouse model with internal friction (RIF) captures internal friction effects as observed in single-molecule fluorescence correlation spectroscopy (FCS) studies of a number of proteins. Here we use RIF, and its non-free draining analog, Zimm model with internal friction, to explore the effect of internal friction on the rate with which intramolecular contacts can be formed within the unfolded chain. Unlike the reconfiguration times inferred from FCS experiments, which depend linearly on the solvent viscosity, the first passage times to form intramolecular contacts are shown to display a more complex viscosity dependence. We further describe scaling relationships obeyed by contact formation times in the limits of high and low internal friction. Our findings provide experimentally testable predictions that can serve as a framework for the analysis of future studies of contact formation in proteins.
Aggregative Learning Method and Its Application for Communication Quality Evaluation
NASA Astrophysics Data System (ADS)
Akhmetov, Dauren F.; Kotaki, Minoru
2007-12-01
In this paper, the so-called Aggregative Learning Method (ALM) is proposed to improve and simplify the learning and classification abilities of different data processing systems. It provides a universal basis for the design and analysis of a wide class of mathematical models. A procedure was elaborated for time series model reconstruction and analysis in both linear and nonlinear cases. Data approximation accuracy (during the learning phase) and data classification quality (during the recall phase) are estimated from the introduced statistical parameters. The validity and efficiency of the proposed approach have been demonstrated through its application to monitoring of wireless communication quality, namely, for a Fixed Wireless Access (FWA) system. Low memory and computation resources were shown to be needed for the procedure realization, especially for the data classification (recall) stage. Characterized by high computational efficiency and a simple decision making procedure, the derived approaches can be useful for simple and reliable real-time surveillance and control system design.
An Overview of Longitudinal Data Analysis Methods for Neurological Research
Locascio, Joseph J.; Atri, Alireza
2011-01-01
The purpose of this article is to provide a concise, broad and readily accessible overview of longitudinal data analysis methods, aimed to be a practical guide for clinical investigators in neurology. In general, we advise that older, traditional methods, including (1) simple regression of the dependent variable on a time measure, (2) analyzing a single summary subject level number that indexes changes for each subject and (3) a general linear model approach with a fixed-subject effect, should be reserved for quick, simple or preliminary analyses. We advocate the general use of mixed-random and fixed-effect regression models for analyses of most longitudinal clinical studies. Under restrictive situations or to provide validation, we recommend: (1) repeated-measure analysis of covariance (ANCOVA), (2) ANCOVA for two time points, (3) generalized estimating equations and (4) latent growth curve/structural equation models. PMID:22203825
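A hedged sketch of the advocated mixed-effects approach, using simulated longitudinal data (subject-specific intercepts and slopes) rather than any data from the article; it assumes the pandas and statsmodels libraries are available.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_subj, n_visits = 40, 5
rows = []
for s in range(n_subj):
    b0 = rng.normal(50.0, 5.0)           # subject-specific intercept
    b1 = rng.normal(-1.0, 0.5)           # subject-specific slope (change per year)
    for t in range(n_visits):
        rows.append({"subject": s, "years": t, "score": b0 + b1 * t + rng.normal(0.0, 1.0)})
data = pd.DataFrame(rows)

# Mixed-effects regression: fixed effect of time, random intercept and slope per subject.
model = smf.mixedlm("score ~ years", data, groups=data["subject"], re_formula="~years")
result = model.fit()
print(result.summary())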
Experimental study of the oscillation of spheres in an acoustic levitator.
Andrade, Marco A B; Pérez, Nicolás; Adamowski, Julio C
2014-10-01
The spontaneous oscillation of solid spheres in a single-axis acoustic levitator is experimentally investigated by using a high speed camera to record the position of the levitated sphere as a function of time. The oscillations in the axial and radial directions are systematically studied by changing the sphere density and the acoustic pressure amplitude. In order to interpret the experimental results, a simple model based on a spring-mass system is applied in the analysis of the sphere oscillatory behavior. This model requires the knowledge of the acoustic pressure distribution, which was obtained numerically by using a linear finite element method (FEM). Additionally, the linear acoustic pressure distribution obtained by FEM was compared with that measured with a laser Doppler vibrometer. The comparison between numerical and experimental pressure distributions shows good agreement for low values of pressure amplitude. When the pressure amplitude is increased, the acoustic pressure distribution becomes nonlinear, producing harmonics of the fundamental frequency. The experimental results of the spheres oscillations for low pressure amplitudes are consistent with the results predicted by the simple model based on a spring-mass system.
Flight evaluation of a simple total energy-rate system with potential wind-shear application
NASA Technical Reports Server (NTRS)
Ostroff, A. J.; Hueschen, R. M.; Hellbaum, R. F.; Creedon, J. F.
1981-01-01
Wind shears can create havoc during aircraft terminal area operations and have been cited as the primary cause of several major aircraft accidents. A simple sensor, potentially having application to the wind-shear problem, was developed to rapidly measure aircraft total energy relative to the air mass. Combining this sensor with either a variometer or a rate-of-climb indicator provides a total energy-rate system which was successfully applied in soaring flight. The measured rate of change of aircraft energy can potentially be used on display/control systems of powered aircraft to reduce glide-slope deviations caused by wind shear. The experimental flight configuration and evaluations of the energy-rate system are described. Two mathematical models are developed: the first describes operation of the energy probe in a linear design region and the second model is for the nonlinear region. The calculated total rate is compared with measured signals for many different flight tests. Time history plots show the two curves to be almost the same for the linear operating region and very close for the nonlinear region.
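The quantity involved is the specific total energy (energy height) and its rate of change; the short sketch below computes it from made-up altitude and airspeed time histories, not from the flight-test data.

import numpy as np

g = 9.81                                    # m/s^2
t = np.linspace(0.0, 60.0, 601)             # time [s]
h = 900.0 - 0.5 * t                         # altitude on a glide slope [m], illustrative
V = 70.0 - 0.1 * t                          # airspeed decaying in a shear [m/s], illustrative

E_spec = h + V ** 2 / (2.0 * g)             # specific total energy (energy height) [m]
E_rate = np.gradient(E_spec, t)             # rate of change of energy height [m/s]
print("mean specific energy rate: %.2f m/s" % E_rate.mean())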
Extending the range of real time density matrix renormalization group simulations
NASA Astrophysics Data System (ADS)
Kennes, D. M.; Karrasch, C.
2016-03-01
We discuss a few simple modifications to time-dependent density matrix renormalization group (DMRG) algorithms which allow one to access larger time scales. We specifically aim at beginners and present practical aspects of how to implement these modifications within any standard matrix product state (MPS) based formulation of the method. Most importantly, we show how to 'combine' the Schrödinger and Heisenberg time evolutions of arbitrary pure states |ψ⟩ and operators A in the evaluation of ⟨A⟩_ψ(t) = ⟨ψ|A(t)|ψ⟩. This includes quantum quenches. The generalization to (non-)thermal mixed state dynamics ⟨A⟩_ρ(t) = Tr[ρA(t)] induced by an initial density matrix ρ is straightforward. In the context of linear response (ground state or finite temperature T > 0) correlation functions, one can extend the simulation time by a factor of two by 'exploiting time translation invariance', which is efficiently implementable within MPS DMRG. We present a simple analytic argument for why a recently-introduced disentangler succeeds in reducing the effort of time-dependent simulations at T > 0. Finally, we advocate the Python programming language as an elegant option for beginners to set up a DMRG code.
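The 'combining' trick can be checked exactly on a tiny system; the sketch below uses dense numpy/scipy matrices (no MPS or DMRG) and a random Hamiltonian to verify that evolving the state and the operator each to t/2 reproduces the expectation value at time t.

import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(2)
dim = 6
H = rng.standard_normal((dim, dim)) + 1j * rng.standard_normal((dim, dim))
H = 0.5 * (H + H.conj().T)                       # random Hermitian 'Hamiltonian'
A = rng.standard_normal((dim, dim)) + 1j * rng.standard_normal((dim, dim))
A = 0.5 * (A + A.conj().T)                       # Hermitian observable
psi = rng.standard_normal(dim) + 1j * rng.standard_normal(dim)
psi /= np.linalg.norm(psi)

def U(t):
    return expm(-1j * H * t)                     # time evolution operator

t = 3.0
# Heisenberg evolution of the operator only: <psi| A(t) |psi>.
direct = psi.conj() @ (U(t).conj().T @ A @ U(t)) @ psi
# Combined: state evolved (Schroedinger) to t/2, operator evolved (Heisenberg) to t/2.
psi_half = U(t / 2) @ psi
A_half = U(t / 2).conj().T @ A @ U(t / 2)
combined = psi_half.conj() @ A_half @ psi_half
print(np.allclose(direct, combined))             # True: both reach physical time t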
Scovazzi, Guglielmo; Carnes, Brian; Zeng, Xianyi; ...
2015-11-12
Here, we propose a new approach for the stabilization of linear tetrahedral finite elements in the case of nearly incompressible transient solid dynamics computations. Our method is based on a mixed formulation, in which the momentum equation is complemented by a rate equation for the evolution of the pressure field, approximated with piece-wise linear, continuous finite element functions. The pressure equation is stabilized to prevent spurious pressure oscillations in computations. Incidentally, it is also shown that many stabilized methods previously developed for the static case do not generalize easily to transient dynamics. Extensive tests in the context of linear and nonlinear elasticity are used to corroborate the claim that the proposed method is robust, stable, and accurate.
Asymptotic Linear Spectral Statistics for Spiked Hermitian Random Matrices
NASA Astrophysics Data System (ADS)
Passemier, Damien; McKay, Matthew R.; Chen, Yang
2015-07-01
Using the Coulomb Fluid method, this paper derives central limit theorems (CLTs) for linear spectral statistics of three "spiked" Hermitian random matrix ensembles. These include Johnstone's spiked model (i.e., central Wishart with spiked correlation), non-central Wishart with rank-one non-centrality, and a related class of non-central matrices. For a generic linear statistic, we derive simple and explicit CLT expressions as the matrix dimensions grow large. For all three ensembles under consideration, we find that the primary effect of the spike is to introduce a correction term to the asymptotic mean of the linear spectral statistic, which we characterize with simple formulas. The utility of our proposed framework is demonstrated through application to three different linear statistics problems: the classical likelihood ratio test for a population covariance, the capacity analysis of multi-antenna wireless communication systems with a line-of-sight transmission path, and a classical multiple sample significance testing problem.
Joint Bearing and Range Estimation of Multiple Objects from Time-Frequency Analysis.
Liu, Jeng-Cheng; Cheng, Yuang-Tung; Hung, Hsien-Sen
2018-01-19
Direction-of-arrival (DOA) and range estimation is an important issue in sonar signal processing. In this paper, a novel approach using the Hilbert-Huang transform (HHT) is proposed for joint bearing and range estimation of multiple targets based on a uniform linear array (ULA) of hydrophones. The ULA is built with micro-electro-mechanical systems (MEMS) technology, and thus has the attractive features of small size, high sensitivity and low cost, and is suitable for Autonomous Underwater Vehicle (AUV) operations. The proposed target localization method has the following advantages: only a single snapshot of data is needed and real-time processing is feasible. The proposed algorithm transforms a very complicated nonlinear estimation problem into a simple, nearly linear one via time-frequency distribution (TFD) theory and is verified with HHT. Theoretical discussion of the resolution issue is also provided to facilitate the design of a MEMS sensor with high sensitivity. Simulation results are shown to verify the effectiveness of the proposed method.
Ocean rogue waves and their phase space dynamics in the limit of a linear interference model.
Birkholz, Simon; Brée, Carsten; Veselić, Ivan; Demircan, Ayhan; Steinmeyer, Günter
2016-10-12
We reanalyse the probability for formation of extreme waves using the simple model of linear interference of a finite number of elementary waves with fixed amplitude and random phase fluctuations. Under these model assumptions no rogue waves appear when less than 10 elementary waves interfere with each other. Above this threshold rogue wave formation becomes increasingly likely, with appearance frequencies that may even exceed long-term observations by an order of magnitude. For estimation of the effective number of interfering waves, we suggest the Grassberger-Procaccia dimensional analysis of individual time series. For the ocean system, it is further shown that the resulting phase space dimension may vary, such that the threshold for rogue wave formation is not always reached. Time series analysis as well as the appearance of particular focusing wind conditions may enable an effective forecast of such rogue-wave prone situations. In particular, extracting the dimension from ocean time series allows much more specific estimation of the rogue wave probability.
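A Monte Carlo sketch of the linear interference model described above, with illustrative frequencies and a crest-to-trough rogue criterion (wave height exceeding twice the significant wave height); the specific numbers are assumptions, not the paper's.

import numpy as np

rng = np.random.default_rng(3)

def rogue_fraction(n_waves, n_trials=1000, n_samples=2048):
    # Sum n_waves unit-amplitude cosines with random phases and narrow-band
    # frequencies, and count records whose crest-to-trough height exceeds
    # twice the significant wave height (taken here as 4 standard deviations).
    t = np.linspace(0.0, 200.0, n_samples)
    count = 0
    for _ in range(n_trials):
        f = 0.1 + 0.02 * rng.standard_normal(n_waves)    # frequencies [Hz], illustrative
        phi = rng.uniform(0.0, 2.0 * np.pi, n_waves)
        eta = np.cos(2.0 * np.pi * np.outer(f, t) + phi[:, None]).sum(axis=0)
        h_sig = 4.0 * eta.std()
        count += (eta.max() - eta.min()) > 2.0 * h_sig
    return count / n_trials

for n in (5, 10, 20, 50):
    print(f"N = {n:3d}  fraction of records containing a rogue event: {rogue_fraction(n):.3f}")

With fewer than about 10 unit waves the crest-to-trough height cannot reach twice the significant wave height at all, consistent with the threshold reported in the abstract.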
Linear dynamical modes as new variables for data-driven ENSO forecast
NASA Astrophysics Data System (ADS)
Gavrilov, Andrey; Seleznev, Aleksei; Mukhin, Dmitry; Loskutov, Evgeny; Feigin, Alexander; Kurths, Juergen
2018-05-01
A new data-driven model for analysis and prediction of spatially distributed time series is proposed. The model is based on a linear dynamical mode (LDM) decomposition of the observed data which is derived from a recently developed nonlinear dimensionality reduction approach. The key point of this approach is its ability to take into account simple dynamical properties of the observed system by means of revealing the system's dominant time scales. The LDMs are used as new variables for empirical construction of a nonlinear stochastic evolution operator. The method is applied to the sea surface temperature anomaly field in the tropical belt where the El Nino Southern Oscillation (ENSO) is the main mode of variability. The advantage of LDMs versus traditionally used empirical orthogonal function decomposition is demonstrated for this data. Specifically, it is shown that the new model has a competitive ENSO forecast skill in comparison with the other existing ENSO models.
Efficient parallel architecture for highly coupled real-time linear system applications
NASA Technical Reports Server (NTRS)
Carroll, Chester C.; Homaifar, Abdollah; Barua, Soumavo
1988-01-01
A systematic procedure is developed for exploiting the parallel constructs of computation in a highly coupled, linear system application. An overall top-down design approach is adopted. Differential equations governing the application under consideration are partitioned into subtasks on the basis of a data flow analysis. The interconnected task units constitute a task graph which has to be computed in every update interval. Multiprocessing concepts utilizing parallel integration algorithms are then applied for efficient task graph execution. A simple scheduling routine is developed to handle task allocation while in the multiprocessor mode. Results of simulation and scheduling are compared on the basis of standard performance indices. Processor timing diagrams are developed on the basis of program output accruing to an optimal set of processors. Basic architectural attributes for implementing the system are discussed together with suggestions for processing element design. Emphasis is placed on flexible architectures capable of accommodating widely varying application specifics.
NASA Astrophysics Data System (ADS)
Droghei, Riccardo; Salusti, Ettore
2013-04-01
Control of drilling parameters, such as fluid pressure, mud weight and salt concentration, is essential to avoid instabilities when drilling through shale sections. To investigate shale deformation, which is fundamental for deep oil drilling and hydraulic fracturing for gas extraction ("fracking"), a non-linear model of mechanical and chemo-poroelastic interactions among the fluid, the solute and the solid matrix is discussed here. The two equations of this model describe the isothermal evolution of fluid pressure and solute density in a fluid-saturated porous rock. Their solutions are quick non-linear Burgers-type solitary waves, potentially destructive for deep operations. In this analysis the effect of diffusion, which can play a particular role in fracking, is investigated. Then, following Civan (1998), both diffusive and shock waves are applied to the filtration of fine particles caused by such quick transients, to their effect on the adjacent rocks, and to the resulting time-delayed evolution. Notice that time delays in simple porous media dynamics have recently been analyzed using a fractional derivative approach. To make a tentative comparison of these two deeply different methods, we insert fractional time derivatives in our model, i.e. a kind of time-average of the fluid-rock interactions. The delaying effect of fine-particle filtration is then compared with the time delays of the fractional model. All this can be seen as an empirical check of these fractional models.
Relaxation approximation in the theory of shear turbulence
NASA Technical Reports Server (NTRS)
Rubinstein, Robert
1995-01-01
Leslie's perturbative treatment of the direct interaction approximation for shear turbulence (Modern Developments in the Theory of Turbulence, 1972) is applied to derive a time dependent model for the Reynolds stresses. The stresses are decomposed into tensor components which satisfy coupled linear relaxation equations; the present theory therefore differs from phenomenological Reynolds stress closures in which the time derivatives of the stresses are expressed in terms of the stresses themselves. The theory accounts naturally for the time dependence of the Reynolds normal stress ratios in simple shear flow. The distortion of wavenumber space by the mean shear plays a crucial role in this theory.
Statistical mechanics of broadcast channels using low-density parity-check codes.
Nakamura, Kazutaka; Kabashima, Yoshiyuki; Morelos-Zaragoza, Robert; Saad, David
2003-03-01
We investigate the use of Gallager's low-density parity-check (LDPC) codes in a degraded broadcast channel, one of the fundamental models in network information theory. Combining linear codes is a standard technique in practical network communication schemes and is known to provide better performance than simple time sharing methods when algebraic codes are used. The statistical physics based analysis shows that the practical performance of the suggested method, achieved by employing the belief propagation algorithm, is superior to that of LDPC based time sharing codes while the best performance, when received transmissions are optimally decoded, is bounded by the time sharing limit.
Neuronal and network computation in the brain
NASA Astrophysics Data System (ADS)
Babloyantz, A.
1999-03-01
The concepts and methods of non-linear dynamics have been a powerful tool for studying some aspects of brain dynamics. In this paper we show how, from time series analysis of electroencephalograms in sick and healthy subjects, the chaotic nature of brain activity could be unveiled. This finding gave rise to the concept of spatiotemporal cortical chaotic networks, which in turn was the foundation for a simple brain-like device that is able to become attentive and perform pattern recognition and motion detection. A new method of time series analysis is also proposed which demonstrates for the first time the existence of a neuronal code in the interspike intervals of cochlear cells.
Robust consensus control with guaranteed rate of convergence using second-order Hurwitz polynomials
NASA Astrophysics Data System (ADS)
Fruhnert, Michael; Corless, Martin
2017-10-01
This paper considers homogeneous networks of general, linear time-invariant, second-order systems. We consider linear feedback controllers and require that the directed graph associated with the network contains a spanning tree and systems are stabilisable. We show that consensus with a guaranteed rate of convergence can always be achieved using linear state feedback. To achieve this, we provide a new and simple derivation of the conditions for a second-order polynomial with complex coefficients to be Hurwitz. We apply this result to obtain necessary and sufficient conditions to achieve consensus with networks whose graph Laplacian matrix may have complex eigenvalues. Based on the conditions found, methods to compute feedback gains are proposed. We show that gains can be chosen such that consensus is achieved robustly over a variety of communication structures and system dynamics. We also consider the use of static output feedback.
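A hedged sketch for the special case of double-integrator agents (the paper treats general second-order linear time-invariant systems): consensus is checked by testing, for every nonzero and possibly complex Laplacian eigenvalue, whether the corresponding second-order closed-loop polynomial is Hurwitz. The graph and gains are illustrative assumptions.

import numpy as np

# Directed graph on four agents containing a spanning tree (illustrative).
A = np.array([[0, 1, 0, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1],
              [1, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A               # graph Laplacian

kp, kd = 1.0, 2.0                             # position and velocity feedback gains
eig = np.linalg.eigvals(L)
nonzero = eig[np.abs(eig) > 1e-9]             # complex in general for directed graphs

# For double integrators with u_i = -kp*sum_j L_ij x_j - kd*sum_j L_ij v_j, the
# per-eigenvalue closed-loop polynomial s^2 + kd*lambda*s + kp*lambda must be
# Hurwitz (all roots in the open left half-plane) for consensus.
roots = [np.roots([1.0, kd * lam, kp * lam]) for lam in nonzero]
worst = max(r.real.max() for r in roots)
print("consensus achieved:", worst < 0.0, " guaranteed decay rate:", round(-worst, 3))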
Anderson, Carl A; McRae, Allan F; Visscher, Peter M
2006-07-01
Standard quantitative trait loci (QTL) mapping techniques commonly assume that the trait is both fully observed and normally distributed. When considering survival or age-at-onset traits these assumptions are often incorrect. Methods have been developed to map QTL for survival traits; however, they are both computationally intensive and not available in standard genome analysis software packages. We propose a grouped linear regression method for the analysis of continuous survival data. Using simulation we compare this method to both the Cox and Weibull proportional hazards models and a standard linear regression method that ignores censoring. The grouped linear regression method is of equivalent power to both the Cox and Weibull proportional hazards methods and is significantly better than the standard linear regression method when censored observations are present. The method is also robust to the proportion of censored individuals and the underlying distribution of the trait. On the basis of linear regression methodology, the grouped linear regression model is computationally simple and fast and can be implemented readily in freely available statistical software.
NASA Astrophysics Data System (ADS)
Cornelius, Reinold R.; Voight, Barry
1995-03-01
The Materials Failure Forecasting Method for volcanic eruptions (FFM) analyses the rate of precursory phenomena. Time of eruption onset is derived from the time of "failure" implied by accelerating rate of deformation. The approach attempts to fit data, Ω, to the differential relationship Ω̈ = A(Ω̇)^α, where the dot superscript represents the time derivative, and the data Ω may be any of several parameters describing the accelerating deformation or energy release of the volcanic system. Rate coefficients, A and α, may be derived from appropriate data sets to provide an estimate of time to "failure". As the method is still an experimental technique, it should be used with appropriate judgment during times of volcanic crisis. Limitations of the approach are identified and discussed. Several kinds of eruption precursory phenomena, all simulating accelerating creep during the mechanical deformation of the system, can be used with FFM. Among these are tilt data, slope-distance measurements, crater fault movements and seismicity. The use of seismic coda, seismic amplitude-derived energy release and time-integrated amplitudes or coda lengths is examined. Usage of cumulative coda length directly has some practical advantages over more rigorously derived parameters, and RSAM and SSAM technologies appear to be well suited to real-time applications. One graphical and four numerical techniques of applying FFM are discussed. The graphical technique is based on an inverse representation of rate versus time. For α = 2, the inverse rate plot is linear; it is concave upward for α < 2 and concave downward for α > 2. The eruption time is found by simple extrapolation of the data set toward the time axis. Three numerical techniques are based on linear least-squares fits to linearized data sets. The "linearized least-squares technique" is most robust and is expected to be the most practical numerical technique. This technique is based on an iterative linearization of the given rate-time series. The hindsight technique is disadvantaged by a bias favouring a too-early eruption time in foresight applications. The "log rate versus log acceleration technique", utilizing a logarithmic representation of the fundamental differential equation, is disadvantaged by large data scatter after interpolation of accelerations. One further numerical technique, a nonlinear least-squares fit to rate data, requires special and more complex software. PC-oriented computer codes were developed for data manipulation, application of the three linearizing numerical methods, and curve fitting. Separate software is required for graphing purposes. All three linearizing techniques facilitate an eruption window based on a data envelope according to the linear least-squares fit, at a specific level of confidence, and an estimated rate at time of failure.
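A minimal sketch of the inverse-rate idea for α = 2, using synthetic data with illustrative coefficients: the inverse precursor rate is fitted with a straight line and extrapolated to zero to forecast the failure (eruption onset) time.

import numpy as np

# For alpha = 2 the rate obeys 1/rate(t) = 1/rate0 - A*t, so failure occurs when
# the inverse rate reaches zero, at t_f = 1/(A*rate0). Values are illustrative.
A, rate0 = 0.05, 2.0
t_obs = np.linspace(0.0, 8.0, 40)
rate = 1.0 / (1.0 / rate0 - A * t_obs)
rate *= 1.0 + 0.03 * np.random.default_rng(4).standard_normal(rate.size)   # measurement noise

inv_rate = 1.0 / rate
slope, intercept = np.polyfit(t_obs, inv_rate, 1)        # linear fit to the inverse-rate plot
t_failure = -intercept / slope                            # extrapolate to the time axis
print(f"true failure time: {1.0 / (A * rate0):.2f}, forecast: {t_failure:.2f}")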
Linear response theory for annealing of radiation damage in semiconductor devices
NASA Technical Reports Server (NTRS)
Litovchenko, Vitaly
1988-01-01
A theoretical study of the radiation/annealing response of MOS ICs is described. Although many experiments have been performed in this field, no comprehensive theory dealing with radiation/annealing response has been proposed. Many attempts have been made to apply linear response theory, but no theoretical foundation has been presented. The linear response theory outlined here is capable of describing a broad area of radiation/annealing response phenomena in MOS ICs, in particular, both simultaneous irradiation and annealing, as well as short- and long-term annealing, including the case when annealing is nearing completion. For the first time, a simple procedure is devised to determine the response function from experimental radiation/annealing data. In addition, this procedure enables us to study the effect of variable temperature and dose rate, effects which are of interest in spaceflight. In the past, the shift in threshold potential due to radiation/annealing has usually been assumed to depend on one variable: the time lapse between an impulse dose and the time of observation. While such a suggestion of uniformity in time is certainly true for a broad range of radiation annealing phenomena, it may not hold for some ranges of the variables of interest (temperature, dose rate, etc.). A response function is projected which is dependent on two variables: the time of observation and the time of the impulse dose. This dependence on two variables allows us to extend the theory to the treatment of a variable dose rate. Finally, the linear theory is generalized to the case in which the response is nonlinear with impulse dose, but is proportional to some impulse function of dose. A method to determine both the impulse and response functions is presented.
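For the time-uniform special case mentioned in the abstract, the threshold-voltage shift is a convolution of the dose rate with a response function; the sketch below uses assumed, illustrative functions rather than the measured response, and the paper's more general response R(t, t') depends on both time arguments.

import numpy as np

t = np.linspace(0.0, 200.0, 2001)                   # time [s]
dt = t[1] - t[0]

dose_rate = np.where(t < 50.0, 1.0, 0.0)            # irradiation for 50 s, then annealing only
R = 0.01 * np.exp(-t / 30.0)                        # assumed impulse response [V per unit dose]

# Linear-response (time-uniform) prediction: DeltaV(t) = integral of Ddot(t') R(t - t') dt'.
delta_V = np.convolve(dose_rate, R)[: t.size] * dt
print("shift at end of irradiation:     %.3f V" % delta_V[t.searchsorted(50.0)])
print("shift after 150 s of annealing:  %.3f V" % delta_V[-1])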
NASA Astrophysics Data System (ADS)
Gottwald, Georg A.; Wormell, J. P.; Wouters, Jeroen
2016-09-01
Using a sensitive statistical test we determine whether or not one can detect the breakdown of linear response given observations of deterministic dynamical systems. A goodness-of-fit statistic is developed for a linear statistical model of the observations, based on results for central limit theorems for deterministic dynamical systems, and used to detect linear response breakdown. We apply the method to discrete maps which do not obey linear response and show that the successful detection of breakdown depends on the length of the time series, the magnitude of the perturbation and on the choice of the observable. We find that in order to reliably reject the assumption of linear response for typical observables sufficiently large data sets are needed. Even for simple systems such as the logistic map, one needs of the order of 10⁶ observations to reliably detect the breakdown with a confidence level of 95%; if fewer observations are available one may be falsely led to conclude that linear response theory is valid. The amount of data required is larger the smaller the applied perturbation. For judiciously chosen observables the necessary amount of data can be drastically reduced, but requires detailed a priori knowledge about the invariant measure which is typically not available for complex dynamical systems. Furthermore we explore the use of the fluctuation-dissipation theorem (FDT) in cases with limited data length or coarse-graining of observations. The FDT, if applied naively to a system without linear response, is shown to be very sensitive to the details of the sampling method, resulting in erroneous predictions of the response.
Wang, Libing; Chen, Wei; Xu, Dinghua; Shim, Bong Sup; Zhu, Yingyue; Sun, Fengxia; Liu, Liqiang; Peng, Chifang; Jin, Zhengyu; Xu, Chuanlai; Kotov, Nicholas A.
2009-01-01
Safety of water has long been, and still is, one of the most pressing needs for many countries and communities. Despite the fact that there are potentially many methods to evaluate water safety, finding a simple, rapid, versatile, and inexpensive method for detection of toxins in everyday items is still a great challenge. In this study, we extend the concept of composites obtained by impregnation of porous fibrous materials, such as fabrics and papers, with single-walled carbon nanotubes (SWNTs) toward very simple but high-performance biosensors. They utilize the strong dependence of the electrical conductivity through the nanotube percolation network on the width of the nanotube-nanotube tunneling gap and can potentially satisfy all the requirements outlined above for routine toxin monitoring. An antibody to microcystin-LR (MC-LR), one of the common culprits in mass poisonings, was dispersed together with SWNTs. This dispersion was used to dip-coat the paper, rendering it conductive. The change in conductivity of the paper was used to sense MC-LR in water rapidly and accurately. The method has a linear detection range up to 10 nmol/L and non-linear detection up to 40 nmol/L. The limit of detection was found to be 0.6 nmol/L (0.6 ng/mL), which satisfies the strictest World Health Organization standard for MC-LR content in drinking water (1 ng/mL), and is comparable to the detection limit of the traditional ELISA method of MC-LR detection, while drastically reducing the time of analysis by more than an order of magnitude, which is one of the major hurdles in practical applications. Similar sensor-preparation technology can also be used for a variety of other rapid environmental sensors. PMID:19928776
Scaling up digital circuit computation with DNA strand displacement cascades.
Qian, Lulu; Winfree, Erik
2011-06-03
To construct sophisticated biochemical circuits from scratch, one needs to understand how simple the building blocks can be and how robustly such circuits can scale up. Using a simple DNA reaction mechanism based on a reversible strand displacement process, we experimentally demonstrated several digital logic circuits, culminating in a four-bit square-root circuit that comprises 130 DNA strands. These multilayer circuits include thresholding and catalysis within every logical operation to perform digital signal restoration, which enables fast and reliable function in large circuits with roughly constant switching time and linear signal propagation delays. The design naturally incorporates other crucial elements for large-scale circuitry, such as general debugging tools, parallel circuit preparation, and an abstraction hierarchy supported by an automated circuit compiler.
McAlpine, D; Jiang, D; Shackleton, T M; Palmer, A R
1998-08-01
Responses of low-frequency neurons in the inferior colliculus (IC) of anesthetized guinea pigs were studied with binaural beats to assess their mean best interaural phase (BP) to a range of stimulating frequencies. Phase plots (stimulating frequency vs BP) were produced, from which measures of characteristic delay (CD) and characteristic phase (CP) for each neuron were obtained. The CD provides an estimate of the difference in travel time from each ear to coincidence-detector neurons in the brainstem. The CP indicates the mechanism underpinning the coincidence detector responses. A linear phase plot indicates a single, constant delay between the coincidence-detector inputs from the two ears. In more than half (54 of 90) of the neurons, the phase plot was not linear. We hypothesized that neurons with nonlinear phase plots received convergent input from brainstem coincidence detectors with different CDs. Presentation of a second tone with a fixed, unfavorable delay suppressed the response of one input, linearizing the phase plot and revealing other inputs to be relatively simple coincidence detectors. For some neurons with highly complex phase plots, the suppressor tone altered BP values, but did not resolve the nature of the inputs. For neurons with linear phase plots, the suppressor tone either completely abolished their responses or reduced their discharge rate with no change in BP. By selectively suppressing inputs with a second tone, we are able to reveal the nature of underlying binaural inputs to IC neurons, confirming the hypothesis that the complex phase plots of many IC neurons are a result of convergence from simple brainstem coincidence detectors.
Tseng, Huan-Chang; Wu, Jiann-Shing; Chang, Rong-Yeu
2010-04-28
Small-amplitude oscillatory shear flows with the classic characteristic of a phase shift were studied using non-equilibrium molecular dynamics simulations of n-hexadecane fluids. In a suitable range of strain amplitude, the fluid possesses significant linear viscoelastic behavior. Non-linear viscoelastic behavior of strain thinning, meaning that the dynamic modulus decreases monotonically with increasing strain amplitude, was found at extreme strain amplitudes. Under isobaric conditions, different temperatures strongly affected the range of linear viscoelasticity and the slope of strain thinning. The fluid's phase states, comprising solid-, liquid-, and gel-like states, can be distinguished through a criterion based on the viscoelastic spectrum. As a result, a particular condition under which the viscoelastic behavior of n-hexadecane molecules approaches that of the Rouse chain was obtained. More importantly, evidence of thermorheologically simple behavior was presented, in which the relaxation modulus obeys the time-temperature superposition principle. Using shift factors from the time-temperature superposition principle, the estimated Arrhenius flow activation energy was therefore in good agreement with related experimental values. Furthermore, a single relaxation modulus master curve exhibited both the transition and terminal zones well. Especially regarding non-equilibrium thermodynamic states, variations in the density with respect to frequency were revealed.
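A small numerical sketch with made-up numbers: the storage and loss moduli follow from the stress amplitude and phase shift of the oscillatory response, and an Arrhenius shift factor of the kind used in time-temperature superposition is evaluated for an assumed activation energy.

import numpy as np

gamma0 = 0.05                        # strain amplitude
sigma0 = 1.2e5                       # stress amplitude [Pa], illustrative
delta = np.deg2rad(35.0)             # measured phase shift between stress and strain

G_storage = (sigma0 / gamma0) * np.cos(delta)      # elastic (in-phase) modulus G'
G_loss = (sigma0 / gamma0) * np.sin(delta)         # viscous (out-of-phase) modulus G''
print("G' = %.3e Pa, G'' = %.3e Pa" % (G_storage, G_loss))

# Arrhenius shift factor a_T relative to a reference temperature T_ref.
R = 8.314                             # J/(mol K)
Ea = 25.0e3                           # assumed activation energy [J/mol]
T, T_ref = 350.0, 300.0               # K
a_T = np.exp(Ea / R * (1.0 / T - 1.0 / T_ref))
print("shift factor a_T(350 K) =", a_T)            # frequencies rescale as omega * a_T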
Propagating synchrony in feed-forward networks
Jahnke, Sven; Memmesheimer, Raoul-Martin; Timme, Marc
2013-01-01
Coordinated patterns of precisely timed action potentials (spikes) emerge in a variety of neural circuits but their dynamical origin is still not well understood. One hypothesis states that synchronous activity propagating through feed-forward chains of groups of neurons (synfire chains) may dynamically generate such spike patterns. Additionally, synfire chains offer the possibility to enable reliable signal transmission. So far, mostly densely connected chains, often with all-to-all connectivity between groups, have been theoretically and computationally studied. Yet, such prominent feed-forward structures have not been observed experimentally. Here we analytically and numerically investigate under which conditions diluted feed-forward chains may exhibit synchrony propagation. In addition to conventional linear input summation, we study the impact of non-linear, non-additive summation accounting for the effect of fast dendritic spikes. The non-linearities promote synchronous inputs to generate precisely timed spikes. We identify how non-additive coupling relaxes the conditions on connectivity such that it enables synchrony propagation at connectivities substantially lower than required for linearly coupled chains. Although the analytical treatment is based on a simple leaky integrate-and-fire neuron model, we show how to generalize our methods to biologically more detailed neuron models and verify our results by numerical simulations with, e.g., Hodgkin Huxley type neurons. PMID:24298251
The electrical response of turtle cones to flashes and steps of light.
Baylor, D A; Hodgkin, A L; Lamb, T D
1974-11-01
1. The linear response of turtle cones to weak flashes or steps of light was usually well fitted by equations based on a chain of six or seven reactions with time constants varying over about a 6-fold range. 2. The temperature coefficient (Q10) of the reciprocal of the time to peak of the response to a flash was 1.8 (15-25 degrees C), corresponding to an activation energy of 10 kcal/mole. 3. Electrical measurements with one internal electrode and a balancing circuit gave the following results on red-sensitive cones of high resistance: resistance across cell surface in dark 50-170 MΩ; time constant in dark 4-6.5 msec. The effect of a bright light was to increase the resistance and time constant by 10-30%. 4. If the cell time constant, resting potential and maximum hyperpolarization are known, the fraction of ionic channels blocked by light at any instant can be calculated from the hyperpolarization and its rate of change. At times less than 50 msec the shape of this relation is consistent with the idea that the concentration of a blocking molecule which varies linearly with light intensity is in equilibrium with the fraction of ionic channels blocked. 5. The rising phase of the response to flashes and steps of light covering a 10⁵-fold range of intensities is well fitted by a theory in which the essential assumptions are that (i) light starts a linear chain of reactions leading to the production of a substance which blocks ionic channels in the outer segment, (ii) an equilibrium between the blocking molecules and unblocked channels is established rapidly, and (iii) the electrical properties of the cell can be represented by a simple circuit with a time constant in the dark of about 6 msec. 6. Deviations from the simple theory which occur after 50 msec are attributed partly to a time-dependent desensitization mechanism and partly to a change in saturation potential resulting from a voltage-dependent change in conductance. 7. The existence of several components in the relaxation of the potential to its resting level can be explained by supposing that the 'substance' which blocks light sensitive ionic channels is inactivated in a series of steps.
ERIC Educational Resources Information Center
Unsal, Yasin
2011-01-01
One of the subjects that is confusing and difficult for students to fully comprehend is the concept of angular velocity and linear velocity. It is the relationship between linear and angular velocity that students find difficult; most students understand linear motion in isolation. In this article, we detail the design, construction and…
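A short worked example of the relationship at issue, v = ωr: points on the same rotating disc share one angular velocity but have different linear (tangential) speeds.

import math

rpm = 45.0                                # turntable speed, illustrative
omega = 2.0 * math.pi * rpm / 60.0        # angular velocity [rad/s], same for all points
for r in (0.05, 0.10, 0.15):              # radii [m]
    v = omega * r                         # linear speed grows with radius
    print(f"r = {r:.2f} m  ->  omega = {omega:.2f} rad/s, v = {v:.2f} m/s")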
Relativistic initial conditions for N-body simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fidler, Christian; Tram, Thomas; Crittenden, Robert
2017-06-01
Initial conditions for (Newtonian) cosmological N-body simulations are usually set by re-scaling the present-day power spectrum obtained from linear (relativistic) Boltzmann codes to the desired initial redshift of the simulation. This back-scaling method can account for the effect of inhomogeneous residual thermal radiation at early times, which is absent in the Newtonian simulations. We analyse this procedure from a fully relativistic perspective, employing the recently-proposed Newtonian motion gauge framework. We find that N-body simulations for ΛCDM cosmology starting from back-scaled initial conditions can be self-consistently embedded in a relativistic space-time with first-order metric potentials calculated using a linear Boltzmann code. This space-time coincides with a simple 'N-body gauge' for z < 50 for all observable modes. Care must be taken, however, when simulating non-standard cosmologies. As an example, we analyse the back-scaling method in a cosmology with decaying dark matter, and show that metric perturbations become large at early times in the back-scaling approach, indicating a breakdown of the perturbative description. We suggest a suitable 'forwards approach' for such cases.
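A hedged sketch of the conventional back-scaling step (not the paper's relativistic framework): the present-day linear power spectrum is rescaled to the starting redshift by the squared ratio of linear growth factors, with D(z) computed from the usual matter-plus-Lambda integral that neglects radiation, which is precisely the approximation whose relativistic consistency the paper examines. Cosmological parameters and power spectrum values are illustrative.

import numpy as np
from scipy.integrate import quad

Om, OL = 0.31, 0.69                               # flat matter + Lambda, illustrative

def E(a):
    # Dimensionless Hubble rate H(a)/H0, radiation neglected.
    return np.sqrt(Om / a**3 + OL)

def growth(a):
    # Unnormalized linear growth factor D(a) for matter + Lambda.
    integral, _ = quad(lambda ap: 1.0 / (ap * E(ap))**3, 1e-8, a)
    return 2.5 * Om * E(a) * integral

def back_scale(P_k_z0, z_ini):
    # P(k, z_ini) = [D(z_ini)/D(0)]^2 * P(k, 0)
    a_ini = 1.0 / (1.0 + z_ini)
    ratio = growth(a_ini) / growth(1.0)
    return P_k_z0 * ratio**2

P0 = np.array([2.4e4, 9.0e3, 1.1e3])               # toy P(k) values at z = 0 [(Mpc/h)^3]
print(back_scale(P0, z_ini=49.0))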
Incorporating inductances in tissue-scale models of cardiac electrophysiology
NASA Astrophysics Data System (ADS)
Rossi, Simone; Griffith, Boyce E.
2017-09-01
In standard models of cardiac electrophysiology, including the bidomain and monodomain models, local perturbations can propagate at infinite speed. We address this unrealistic property by developing a hyperbolic bidomain model that is based on a generalization of Ohm's law with a Cattaneo-type model for the fluxes. Further, we obtain a hyperbolic monodomain model in the case that the intracellular and extracellular conductivity tensors have the same anisotropy ratio. In one spatial dimension, the hyperbolic monodomain model is equivalent to a cable model that includes axial inductances, and the relaxation times of the Cattaneo fluxes are strictly related to these inductances. A purely linear analysis shows that the inductances are negligible, but models of cardiac electrophysiology are highly nonlinear, and linear predictions may not capture the fully nonlinear dynamics. In fact, contrary to the linear analysis, we show that for simple nonlinear ionic models, an increase in conduction velocity is obtained for small and moderate values of the relaxation time. A similar behavior is also demonstrated with biophysically detailed ionic models. Using the Fenton-Karma model along with a low-order finite element spatial discretization, we numerically analyze differences between the standard monodomain model and the hyperbolic monodomain model. In a simple benchmark test, we show that the propagation of the action potential is strongly influenced by the alignment of the fibers with respect to the mesh in both the parabolic and hyperbolic models when using relatively coarse spatial discretizations. Accurate predictions of the conduction velocity require computational mesh spacings on the order of a single cardiac cell. We also compare the two formulations in the case of spiral break up and atrial fibrillation in an anatomically detailed model of the left atrium, and we examine the effect of intracellular and extracellular inductances on the virtual electrode phenomenon.
NASA Astrophysics Data System (ADS)
Wawerzinek, B.; Ritter, J. R. R.; Roy, C.
2013-08-01
We analyse travel times of shear waves, which were recorded at the MAGNUS network, to determine the 3D shear wave velocity (vS) structure underneath Southern Scandinavia. The travel time residuals are corrected for the known crustal structure of Southern Norway and weighted to account for data quality and pick uncertainties. The resulting residual pattern of subvertically incident waves is very uniform and simple. It shows delayed arrivals underneath Southern Norway compared to fast arrivals underneath the Oslo Graben and the Baltic Shield. The 3D upper mantle vS structure underneath the station network is determined by performing non-linear travel time tomography. As expected from the residual pattern, the resulting tomographic model shows a simple and continuous vS perturbation pattern: a negative vS anomaly is visible underneath Southern Norway relative to the Baltic Shield in the east, with a contrast of up to 4% vS and a sharp W-E dipping transition zone. Reconstruction tests reveal, besides some vertical smearing, a good lateral reconstruction of the dipping vS transition zone, and suggest that a deep-seated anomaly at 330-410 km depth is real and not an inversion artefact. The upper part of the reduced vS anomaly underneath Southern Norway (down to 250 km depth) might be due to an increase in lithospheric thickness from the Caledonian Southern Scandes in the west towards the Proterozoic Baltic Shield in Sweden in the east. The deeper-seated negative vS anomaly (330-410 km depth) could be caused by a temperature anomaly possibly combined with effects due to fluids or hydrous minerals. The determined simple 3D vS structure underneath Southern Scandinavia indicates that mantle processes might influence and contribute to a Neogene uplift of Southern Norway.
Misyura, Maksym; Sukhai, Mahadeo A; Kulasignam, Vathany; Zhang, Tong; Kamel-Reid, Suzanne; Stockley, Tracy L
2018-01-01
Aims: A standard approach in test evaluation is to compare results of the assay in validation to results from previously validated methods. For quantitative molecular diagnostic assays, comparison of test values is often performed using simple linear regression and the coefficient of determination (R²), using R² as the primary metric of assay agreement. However, the use of R² alone does not adequately quantify constant or proportional errors required for optimal test evaluation. More extensive statistical approaches, such as Bland-Altman and expanded interpretation of linear regression methods, can be used to more thoroughly compare data from quantitative molecular assays. Methods: We present the application of Bland-Altman and linear regression statistical methods to evaluate quantitative outputs from next-generation sequencing assays (NGS). NGS-derived data sets from assay validation experiments were used to demonstrate the utility of the statistical methods. Results: Both Bland-Altman and linear regression were able to detect the presence and magnitude of constant and proportional error in quantitative values of NGS data. Deming linear regression was used in the context of assay comparison studies, while simple linear regression was used to analyse serial dilution data. The Bland-Altman statistical approach was also adapted to quantify assay accuracy, including constant and proportional errors, and precision where theoretical and empirical values were known. Conclusions: The complementary application of the statistical methods described in this manuscript enables more extensive evaluation of performance characteristics of quantitative molecular assays, prior to implementation in the clinical molecular laboratory. PMID:28747393
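A minimal sketch of the two complementary analyses described above, run on hypothetical paired assay measurements rather than the authors' NGS data sets:

```python
import numpy as np

# Compare two quantitative assays with ordinary least-squares regression and a
# Bland-Altman analysis.  x and y are hypothetical paired measurements (e.g.
# variant allele fractions from a reference method and the assay under validation).
rng = np.random.default_rng(0)
x = rng.uniform(5, 50, 40)
y = 1.05 * x + 1.0 + rng.normal(0, 1.5, x.size)   # proportional + constant error

# Simple linear regression: slope != 1 flags proportional error,
# intercept != 0 flags constant error.
slope, intercept = np.polyfit(x, y, 1)
r2 = np.corrcoef(x, y)[0, 1] ** 2

# Bland-Altman: bias and 95% limits of agreement on the differences.
diff = y - x
bias = diff.mean()
loa = (bias - 1.96 * diff.std(ddof=1), bias + 1.96 * diff.std(ddof=1))

print(f"slope={slope:.2f} intercept={intercept:.2f} R2={r2:.3f}")
print(f"bias={bias:.2f} limits of agreement={loa[0]:.2f} to {loa[1]:.2f}")
```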
[Determination of total alkaloid content in Uncaria rhynchophylla by acid dye colorimetry].
Zeng, Chang-qing; Luo, Bei-liang
2007-08-01
To establish a method for determining the total alkaloids of Uncaria rhynchophylla. The content of total alkaloids was determined by acid dye colorimetry. The colour-development conditions were: 5.0 ml of pH 3.6 buffer and 5.0 ml of bromocresol green solution, with chloroform extraction performed three times, 2 minutes each time, followed by standing for at least 5 minutes before measurement. Rhynchophylline was linear in the range 6.018-108.324 microg; the recovery was 97.19% and the RSD was 1.34% (n = 6). The method is simple, highly sensitive and reproducible.
Virtual Design of a Controller for a Hydraulic Cam Phasing System
NASA Astrophysics Data System (ADS)
Schneider, Markus; Ulbrich, Heinz
2010-09-01
Hydraulic vane cam phasing systems are nowadays widely used for improving the performance of combustion engines. At stationary operation, these systems should maintain a constant phasing angle, which however is strongly disturbed by the alternating torque generated by the valve actuation. As the hydraulic system shows a non-linear characteristic over the full operating range and the inductance of the hydraulic pipes introduces a significant time delay, a full model-based controller would be very complex. Therefore a simple feed-forward controller is designed, bridging the time delay of the hydraulic system and improving the system behaviour significantly.
On some stochastic formulations and related statistical moments of pharmacokinetic models.
Matis, J H; Wehrly, T E; Metzler, C M
1983-02-01
This paper presents the deterministic and stochastic model for a linear compartment system with constant coefficients, and it develops expressions for the mean residence times (MRT) and the variances of the residence times (VRT) for the stochastic model. The expressions are relatively simple computationally, involving primarily matrix inversion, and they are elegant mathematically, in avoiding eigenvalue analysis and the complex domain. The MRT and VRT provide a set of new meaningful response measures for pharmacokinetic analysis and they give added insight into the system kinetics. The new analysis is illustrated with an example involving the cholesterol turnover in rats.
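The matrix-inversion route to these moments can be illustrated with a short sketch. It uses the standard phase-type/compartmental moment formulas, which are consistent with the abstract's description but not necessarily the authors' exact notation; the two-compartment rate matrix is hypothetical.

```python
import numpy as np

# Sketch of the matrix-inversion route to residence-time moments for a linear
# compartment system dx/dt = A x (A = transfer-rate matrix with elimination).
# For material introduced into compartment j (standard phase-type result):
#   MRT_j = 1' (-A)^{-1} e_j        VRT_j = 2 * 1' (-A)^{-2} e_j - MRT_j**2
A = np.array([[-1.2,  0.3],          # hypothetical two-compartment example
              [ 0.5, -0.8]])         # units: 1/h

T = np.linalg.inv(-A)                 # T[i, j] = mean time in i, starting in j
ones = np.ones(A.shape[0])

MRT = ones @ T                        # total mean residence time per start compartment
VRT = 2.0 * ones @ (T @ T) - MRT**2   # variance of residence time

print("MRT:", MRT, "h")
print("VRT:", VRT, "h^2")
```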
Modeling, system identification, and control of ASTREX
NASA Technical Reports Server (NTRS)
Abhyankar, Nandu S.; Ramakrishnan, J.; Byun, K. W.; Das, A.; Cossey, Derek F.; Berg, J.
1993-01-01
The modeling, system identification and controller design aspects of the ASTREX precision space structure are presented in this work. Modeling of ASTREX is performed using NASTRAN, TREETOPS and I-DEAS. The models generated range from simple linear time-invariant models to nonlinear models used for large-angle simulations. Identification in both the time and frequency domains is presented. The experimental set-up and the results from the identification experiments are included. Controller design for ASTREX is then presented, and simulation results using this optimal controller demonstrate its performance. Finally, future directions and plans for the facility are addressed.
A nonradioactive assay for poly(a)-specific ribonuclease activity by methylene blue colorimetry.
Cheng, Yuan; Liu, Wei-Feng; Yan, Yong-Bin; Zhou, Hai-Meng
2006-01-01
A simple nonradioactive assay, which was based on the specific shift of the absorbance maximum of methylene blue induced by its intercalation into poly(A) molecules, was developed for poly(A)-specific ribonuclease (PARN). A good linear relationship was found between the absorbance at 662 nm and the poly(A) concentration. The assay conditions, including the concentration of methylene blue, the incubation temperature and time, and the poly(A) concentration were evaluated and optimized.
Simplified dichromated gelatin hologram recording process
NASA Technical Reports Server (NTRS)
Georgekutty, Tharayil G.; Liu, Hua-Kuang
1987-01-01
A simplified method for making dichromated gelatin (DCG) holographic optical elements (HOE) has been discovered. The method is much less tedious and it requires a period of processing time comparable with that for processing a silver halide hologram. HOE characteristics including diffraction efficiency (DE), linearity, and spectral sensitivity have been quantitatively investigated. The quality of the holographic grating is very high. Ninety percent or higher diffraction efficiency has been achieved in simple plane gratings made by this process.
Observers for Systems with Nonlinearities Satisfying an Incremental Quadratic Inequality
NASA Technical Reports Server (NTRS)
Acikmese, Ahmet Behcet; Corless, Martin
2004-01-01
We consider the problem of state estimation for nonlinear time-varying systems whose nonlinearities satisfy an incremental quadratic inequality. These observer results unify earlier results in the literature and extend them to some additional classes of nonlinearities. Observers are presented which guarantee that the state estimation error exponentially converges to zero. Observer design involves solving linear matrix inequalities for the observer gain matrices. Results are illustrated by application to a simple model of an underwater vehicle.
A model of interval timing by neural integration.
Simen, Patrick; Balci, Fuat; de Souza, Laura; Cohen, Jonathan D; Holmes, Philip
2011-06-22
We show that simple assumptions about neural processing lead to a model of interval timing as a temporal integration process, in which a noisy firing-rate representation of time rises linearly on average toward a response threshold over the course of an interval. Our assumptions include: that neural spike trains are approximately independent Poisson processes, that correlations among them can be largely cancelled by balancing excitation and inhibition, that neural populations can act as integrators, and that the objective of timed behavior is maximal accuracy and minimal variance. The model accounts for a variety of physiological and behavioral findings in rodents, monkeys, and humans, including ramping firing rates between the onset of reward-predicting cues and the receipt of delayed rewards, and universally scale-invariant response time distributions in interval timing tasks. It furthermore makes specific, well-supported predictions about the skewness of these distributions, a feature of timing data that is usually ignored. The model also incorporates a rapid (potentially one-shot) duration-learning procedure. Human behavioral data support the learning rule's predictions regarding learning speed in sequences of timed responses. These results suggest that simple, integration-based models should play as prominent a role in interval timing theory as they do in theories of perceptual decision making, and that a common neural mechanism may underlie both types of behavior.
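A minimal sketch of the core integration-to-threshold idea (illustrative parameters, not the paper's fitted values): the drift is set by the learned interval and the noise variance grows with the drift, which is what yields scale-invariant (constant-CV) response-time distributions.

```python
import numpy as np

# A noisy firing rate ramps linearly (on average) to a fixed threshold; the
# ramp slope is learned so the threshold is reached, on average, at the target
# interval.  Poisson-like noise (variance proportional to the drift) gives a
# coefficient of variation that is the same for every target duration.
rng = np.random.default_rng(1)

def timed_responses(target, n_trials=500, dt=0.001, c=0.15, thresh=1.0):
    drift = thresh / target                    # slope learned for this interval
    times = np.empty(n_trials)
    for k in range(n_trials):
        x, t = 0.0, 0.0
        while x < thresh:
            x += drift * dt + c * np.sqrt(drift * dt) * rng.normal()
            t += dt
        times[k] = t
    return times

for target in (0.5, 1.0, 2.0):                 # seconds
    rt = timed_responses(target)
    print(f"target {target:.1f}s  mean {rt.mean():.3f}s  CV {rt.std()/rt.mean():.3f}")
```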
Dynamics and thermodynamics of linear quantum open systems.
Martinez, Esteban A; Paz, Juan Pablo
2013-03-29
We analyze the evolution of the quantum state of networks of quantum oscillators coupled with arbitrary external environments. We show that the reduced density matrix of the network always obeys a local master equation with a simple analytical solution. We use this to study the emergence of thermodynamical laws in the long time regime demonstrating two main results: First, we show that it is impossible to build a quantum absorption refrigerator using linear networks (thus, nonlinearity is an essential resource for such refrigerators recently studied by Levy and Kosloff [Phys. Rev. Lett. 108, 070604 (2012)] and Levy et al. [Phys. Rev. B 85, 061126 (2012)]). Then, we show that the third law imposes constraints on the low frequency behavior of the environmental spectral densities.
NASA Astrophysics Data System (ADS)
Kamikubo, Takashi; Ohnishi, Takayuki; Hara, Shigehiro; Anze, Hirohito; Hattori, Yoshiaki; Tamamushi, Shuichi; Bai, Shufeng; Wang, Jen-Shiang; Howell, Rafael; Chen, George; Li, Jiangwei; Tao, Jun; Wiley, Jim; Kurosawa, Terunobu; Saito, Yasuko; Takigawa, Tadahiro
2010-09-01
In electron beam writing on EUV masks, it has been reported that CD linearity does not show the simple signatures observed with conventional COG (Cr on Glass) masks, because of electrons scattered from the EUV mask itself, which comprises stacked heavy metals and thick multi-layers. To resolve this issue, Mask Process Correction (MPC) is ideally applicable. Every pattern is reshaped in MPC; therefore, the number of shots does not increase and the writing time is kept within a reasonable range. In this paper, MPC is extended to modeling for the correction of CD linearity errors on EUV masks, and its effectiveness is verified with simulations and experiments through an actual writing test.
NASA Astrophysics Data System (ADS)
BOERTJENS, G. J.; VAN HORSSEN, W. T.
2000-08-01
In this paper an initial-boundary value problem for the vertical displacement of a weakly non-linear elastic beam with an harmonic excitation in the horizontal direction at the ends of the beam is studied. The initial-boundary value problem can be regarded as a simple model describing oscillations of flexible structures like suspension bridges or iced overhead transmission lines. Using a two-time-scales perturbation method an approximation of the solution of the initial-boundary value problem is constructed. Interactions between different oscillation modes of the beam are studied. It is shown that for certain external excitations, depending on the phase of an oscillation mode, the amplitude of specific oscillation modes changes.
Surface Plasmon Coupling and Control Using Spherical Cap Structures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gong, Yu; Joly, Alan G.; Zhang, Xin
2017-06-05
Propagating surface plasmons (PSPs) launched from a protruded silver spherical cap structure are investigated using photoemission electron microscopy (PEEM) and finite difference time domain (FDTD) calculations. Our combined experimental and theoretical findings reveal that PSP coupling efficiency is comparable to conventional etched-in plasmonic coupling structures. Additionally, plasmon propagation direction can be varied by a linear rotation of the driving laser polarization. A simple geometric model is proposed in which the plasmon direction selectivity is proportional to the projection of the linear laser polarization on the surface normal. An application for the spherical cap coupler as a gate device is proposed. Overall, our results indicate that protruded cap structures hold great promise as elements in emerging surface plasmon applications.
Reduction of a linear complex model for respiratory system during Airflow Interruption.
Jablonski, Ireneusz; Mroczka, Janusz
2010-01-01
The paper presents a methodology for reducing a complex model to a simpler version - an identifiable inverse model. Its main tool is a numerical procedure of sensitivity analysis (structural and parametric) applied to the forward linear equivalent designed for the conditions of the interrupter experiment. The final result - the reduced analog for the interrupter technique - is especially noteworthy as it fills a major gap in occlusional measurements, which typically use simple, one- or two-element physical representations. The proposed reduced electrical circuit, being a structural combination of resistive, inertial and elastic properties, can be regarded as a candidate for reliable reconstruction and quantification (in the time and frequency domains) of the dynamical behavior of the respiratory system in response to a quasi-step excitation by valve closure.
A Simple Demonstration of Atomic and Molecular Orbitals Using Circular Magnets
ERIC Educational Resources Information Center
Chakraborty, Maharudra; Mukhopadhyay, Subrata; Das, Ranendu Sekhar
2014-01-01
A quite simple and inexpensive technique is described here to represent the approximate shapes of atomic orbitals and the molecular orbitals formed by them following the principles of the linear combination of atomic orbitals (LCAO) method. Molecular orbitals of a few simple molecules can also be pictorially represented. Instructors can employ the…
Simple linear and multivariate regression models.
Rodríguez del Águila, M M; Benítez-Parejo, N
2011-01-01
In biomedical research it is common to find problems in which we wish to relate a response variable to one or more variables capable of describing the behaviour of the former variable by means of mathematical models. Regression techniques are used to this effect, in which an equation is determined relating the two variables. While such equations can have different forms, linear equations are the most widely used and are easy to interpret. The present article describes simple and multiple linear regression models, how they are calculated, and how their applicability assumptions are checked. Illustrative examples are provided, based on the use of the freely accessible R program. Copyright © 2011 SEICAP. Published by Elsevier España. All rights reserved.
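A minimal sketch of the two model types discussed, written in Python for consistency with the other examples in this document rather than the R program mentioned in the abstract; the data are simulated purely for illustration.

```python
import numpy as np
import statsmodels.api as sm

# Simple vs. multiple linear regression on simulated data.
rng = np.random.default_rng(2)
age = rng.uniform(1, 14, 100)                       # hypothetical predictor
weight = rng.normal(25, 5, 100)                     # second hypothetical predictor
response = 3.0 * age + 0.5 * weight + rng.normal(0, 4, 100)

# Simple linear regression: one predictor plus an intercept.
simple = sm.OLS(response, sm.add_constant(age)).fit()

# Multiple linear regression: several predictors in one design matrix.
X = sm.add_constant(np.column_stack([age, weight]))
multiple = sm.OLS(response, X).fit()

print(simple.params, simple.rsquared)
print(multiple.params, multiple.rsquared)
```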
A simple method for measurement of maximal downstroke power on friction-loaded cycle ergometer.
Morin, Jean-Benoît; Belli, Alain
2004-01-01
The aim of this study was to propose and validate a post-hoc correction method to obtain maximal power values taking into account the inertia of the flywheel during sprints on friction-loaded cycle ergometers. This correction method was obtained from a basic postulate of linear deceleration-time evolution during the initial phase (until maximal power) of a sprint and included simple parameters such as flywheel inertia, maximal velocity, time to reach maximal velocity and friction force. The validity of this model was tested by comparing measured and calculated maximal power values for 19 sprint bouts performed by five subjects against 0.6-1 N/kg friction loads. Non-significant differences between measured and calculated maximal power (1151 ± 169 vs. 1148 ± 170 W) and a mean error index of 1.31 ± 1.20% (ranging from 0.09% to 4.20%) showed the validity of this method. Furthermore, the differences between measured maximal power and power neglecting inertia (20.4 ± 7.6%, ranging from 9.5% to 33.2%) emphasized the usefulness of correcting power in anaerobic power studies that do not include inertia, and also the interest of this simple post-hoc method.
Effects of the observed J2 variations on the Earth's precession and nutation
NASA Astrophysics Data System (ADS)
Ferrándiz, José M.; Baenas, Tomás; Belda, Santiago
2016-04-01
The Earth's oblateness parameter J2 is closely related to the dynamical ellipticity H, which factorizes the main components of the precession and the different nutation terms. In most theoretical approaches to the Earth's rotation, with IAU2000 nutation theory among them, H is assumed to be constant. The precession model IAU2006 supposes H to have a conventional linear variation, based on the J2 time series derived mainly from satellite laser ranging (SLR) data for decades, which gives rise to an additional quadratic term of the precession in longitude and some corrections of the nutation terms. The time evolution of J2 is, however, too complex to be well approximated by a simple linear model. The effect of more general models including periodic terms and closer to the observed time series, although still unable to reproduce a significant part of the signal, has been seldom investigated. In this work we address the problem of deriving the effect of the observed J2 variations without resorting to such simplified models. The Hamiltonian approach to the Earth rotation is extended to allow the McCullagh's term of the potential to depend on a time-varying oblateness. An analytical solution is derived by means of a suitable perturbation method in the case of the time series provided by the Center for Space Research (CSR) of the University of Texas, which results in non-negligible contributions to the precession-nutation angles. The presentation focuses on the main effects on the longitude of the equator; a noticeable non-linear trend is superimposed to the linear main precession term, along with some periodic and decadal variations.
Using crosscorrelation techniques to determine the impulse response of linear systems
NASA Technical Reports Server (NTRS)
Dallabetta, Michael J.; Li, Harry W.; Demuth, Howard B.
1993-01-01
A crosscorrelation method of measuring the impulse response of linear systems is presented. The technique, implementation, and limitations of this method are discussed. A simple system is designed and built using discrete components and the impulse response of a linear circuit is measured. Theoretical and software simulation results are presented.
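The crosscorrelation idea can be sketched numerically: drive a linear system with white noise and crosscorrelate input with output, which (for unit-variance white noise) recovers the impulse response directly. The first-order filter below is only a stand-in for the hardware system built in the paper.

```python
import numpy as np
from scipy import signal

# For white-noise input x with variance sigma^2, the input-output
# crosscorrelation satisfies R_xy(k) = sigma^2 * h(k), so an estimate of R_xy
# is (up to scale) an estimate of the impulse response h.
rng = np.random.default_rng(3)
N = 200_000
x = rng.normal(0.0, 1.0, N)                     # white-noise excitation
b, a = signal.butter(1, 0.05)                   # simple first-order test system
y = signal.lfilter(b, a, x)

lags = 100
h_est = np.array([np.dot(x[:N - k], y[k:]) for k in range(lags)]) / N   # R_xy(k)
h_true = signal.lfilter(b, a, np.r_[1.0, np.zeros(lags - 1)])           # true impulse response

print(np.max(np.abs(h_est - h_true)))           # small -> estimate matches theory
```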
On the Impact of a Quadratic Acceleration Term in the Analysis of Position Time Series
NASA Astrophysics Data System (ADS)
Bogusz, Janusz; Klos, Anna; Bos, Machiel Simon; Hunegnaw, Addisu; Teferle, Felix Norman
2016-04-01
The analysis of Global Navigation Satellite System (GNSS) position time series generally assumes that each of the coordinate component series is described by the sum of a linear rate (velocity) and various periodic terms. The residuals, the deviations between the fitted model and the observations, are then a measure of the epoch-to-epoch scatter and have been used for the analysis of the stochastic character (noise) of the time series. Often the parameters of interest in GNSS position time series are the velocities and their associated uncertainties, which have to be determined with the highest reliability. It is clear that not all GNSS position time series follow this simple linear behaviour. Therefore, we have added an acceleration term in the form of a quadratic polynomial function to the model in order to better describe the non-linear motion in the position time series. This non-linear motion could be a response to purely geophysical processes, for example, elastic rebound of the Earth's crust due to ice mass loss in Greenland, artefacts due to deficiencies in bias mitigation models, for example, of the GNSS satellite and receiver antenna phase centres, or any combination thereof. In this study we have simulated 20 time series, 23 years long, with different stochastic characteristics such as white, flicker or random walk noise. The noise amplitude was assumed to be 1 mm/yr^(-κ/4). Then, we added the deterministic part consisting of a linear trend of 20 mm/yr (representing the average horizontal velocity) and accelerations ranging from -0.6 to +0.6 mm/yr². For all these data we estimated the noise parameters with Maximum Likelihood Estimation (MLE) using the Hector software package without taking the non-linear term into account. In this way we set the benchmark to then investigate how the noise properties and velocity uncertainty may be affected by any un-modelled, non-linear term. The velocities and their uncertainties versus the accelerations for different types of noise are determined. Furthermore, we have selected 40 globally distributed stations that have a clear non-linear behaviour from two different International GNSS Service (IGS) analysis centers: JPL (Jet Propulsion Laboratory) and BLT (British Isles continuous GNSS Facility and University of Luxembourg Tide Gauge Benchmark Monitoring (TIGA) Analysis Center). We obtained maximum accelerations of -1.8±1.2 mm/yr² and -4.5±3.3 mm/yr² for the horizontal and vertical components, respectively. The noise analysis tests have shown that the addition of the non-linear term has significantly whitened the power spectra of the position time series, i.e. shifted the spectral index from flicker towards white noise.
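The deterministic part of this experiment is easy to sketch: simulate a position series with a velocity and an acceleration term, then fit it with and without the quadratic term to see how the un-modelled acceleration biases the estimated velocity. Only white noise is used here; the flicker/random-walk noise and the MLE (Hector) analysis described above are not reproduced.

```python
import numpy as np

# Daily position series over 23 years with velocity 20 mm/yr and acceleration
# 0.6 mm/yr^2, fitted with and without the quadratic (acceleration) term.
rng = np.random.default_rng(4)
t = np.arange(0, 23, 1 / 365.25)                   # years
vel, acc = 20.0, 0.6                               # mm/yr, mm/yr^2
pos = vel * t + 0.5 * acc * t**2 + rng.normal(0, 2.0, t.size)

v_lin = np.polyfit(t, pos, 1)[0]                   # linear-only model
v_quad = np.polyfit(t, pos, 2)[1]                  # model including the acceleration term

print(f"velocity, linear model : {v_lin:.2f} mm/yr (biased by the acceleration)")
print(f"velocity, quadratic fit: {v_quad:.2f} mm/yr")
```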
Chan, Roger W; Rodriguez, Maritza L
2008-08-01
Previous studies reporting the linear viscoelastic shear properties of the human vocal fold cover or mucosa have been based on torsional rheometry, with measurements limited to low audio frequencies, up to around 80 Hz. This paper describes the design and validation of a custom-built, controlled-strain, linear, simple-shear rheometer system capable of direct empirical measurements of viscoelastic shear properties at phonatory frequencies. A tissue specimen was subjected to simple shear between two parallel, rigid acrylic plates, with a linear motor creating a translational sinusoidal displacement of the specimen via the upper plate, and the lower plate transmitting the harmonic shear force resulting from the viscoelastic response of the specimen. The displacement of the specimen was measured by a linear variable differential transformer whereas the shear force was detected by a piezoelectric transducer. The frequency response characteristics of these system components were assessed by vibration experiments with accelerometers. Measurements of the viscoelastic shear moduli (G' and G") of a standard ANSI S2.21 polyurethane material and those of human vocal fold cover specimens were made, along with estimation of the system signal and noise levels. Preliminary results showed that the rheometer can provide valid and reliable rheometric data of vocal fold lamina propria specimens at frequencies of up to around 250 Hz, well into the phonatory range.
The relationship between psychological distress and baseline sports-related concussion testing.
Bailey, Christopher M; Samples, Hillary L; Broshek, Donna K; Freeman, Jason R; Barth, Jeffrey T
2010-07-01
This study examined the effect of psychological distress on neurocognitive performance measured during baseline concussion testing. Archival data were utilized to examine correlations between personality testing and computerized baseline concussion testing. Significantly correlated personality measures were entered into linear regression analyses, predicting baseline concussion testing performance. Suicidal ideation was examined categorically. Athletes underwent testing and screening at a university athletic training facility. Participants included 47 collegiate football players 17 to 19 years old, the majority of whom were in their first year of college. Participants were administered the Concussion Resolution Index (CRI), an internet-based neurocognitive test designed to monitor and manage both at-risk and concussed athletes. Participants took the Personality Assessment Inventory (PAI), a self-administered inventory designed to measure clinical syndromes, treatment considerations, and interpersonal style. Scales and subscales from the PAI were utilized to determine the influence psychological distress had on the CRI indices: simple reaction time, complex reaction time, and processing speed. Analyses revealed several significant correlations among aspects of somatic concern, depression, anxiety, substance abuse, and suicidal ideation and CRI performance, each with at least a moderate effect. When entered into a linear regression, the block of combined psychological symptoms accounted for a significant amount of baseline CRI performance, with moderate to large effects (r = 0.23-0.30). When examined categorically, participants with suicidal ideation showed significantly slower simple reaction time and complex reaction time, with a similar trend on processing speed. Given the possibility of obscured concussion deficits after injury, implications for premature return to play, and the need to target psychological distress outright, these findings heighten the clinical importance of screening for psychological distress during baseline and post-injury concussion evaluations.
He, ZeFang; Zhao, Long
2014-01-01
An attitude control strategy based on Ziegler-Nichols rules for tuning PD (proportional-derivative) parameters of quadrotor helicopters is presented to solve the problem that quadrotor tends to be instable. This problem is caused by the narrow definition domain of attitude angles of quadrotor helicopters. The proposed controller is nonlinear and consists of a linear part and a nonlinear part. The linear part is a PD controller with PD parameters tuned by Ziegler-Nichols rules and acts on the quadrotor decoupled linear system after feedback linearization; the nonlinear part is a feedback linearization item which converts a nonlinear system into a linear system. It can be seen from the simulation results that the attitude controller proposed in this paper is highly robust, and its control effect is better than the other two nonlinear controllers. The nonlinear parts of the other two nonlinear controllers are the same as the attitude controller proposed in this paper. The linear part involves a PID (proportional-integral-derivative) controller with the PID controller parameters tuned by Ziegler-Nichols rules and a PD controller with the PD controller parameters tuned by GA (genetic algorithms). Moreover, this attitude controller is simple and easy to implement.
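The Ziegler-Nichols part of the design can be sketched as follows; Ku and Tu are hypothetical values that would be obtained experimentally by raising the proportional gain on the feedback-linearized axis until it oscillates with constant amplitude.

```python
# Classical Ziegler-Nichols ultimate-sensitivity rule for a PD controller, as
# used to tune the linear part of the attitude controller.
Ku = 4.8        # ultimate gain (hypothetical)
Tu = 0.9        # ultimate period in seconds (hypothetical)

Kp = 0.8 * Ku           # Ziegler-Nichols PD rules: Kp = 0.8 Ku, Td = Tu / 8
Td = Tu / 8.0
Kd = Kp * Td

def pd_control(angle_error, angle_rate_error):
    """PD law acting on one decoupled (feedback-linearized) attitude axis."""
    return Kp * angle_error + Kd * angle_rate_error

print(f"Kp={Kp:.2f}  Td={Td:.3f}s  Kd={Kd:.3f}")
```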
Rüst, Christoph Alexander; Knechtle, Beat; Knechtle, Patrizia; Rosemann, Thomas; Lepers, Romuald
2012-01-01
Purpose: The aims of the present study were to investigate (i) the changes in participation and performance and (ii) the gender difference in Triple Iron ultra-triathlon (11.4 km swimming, 540 km cycling and 126.6 km running) across years from 1988 to 2011. Methods: For the cross-sectional data analysis, the association between overall race times and split times was investigated using simple linear regression analyses and analysis of variance. For the longitudinal data analysis, the changes in race times for the five men and women with the highest number of participations were analysed using simple linear regression analyses. Results: During the studied period, the number of finishers was 824 (71.4%) for men and 80 (78.4%) for women. Participation increased for men (r²=0.27, P<0.01) while it remained stable for women (8%). Total race times were 2,146 ± 127.3 min for men and 2,615 ± 327.2 min for women (P<0.001). Total race time decreased for men (r²=0.17; P=0.043), while it increased for women (r²=0.49; P=0.001) across years. The gender difference in overall race time for winners increased from 10% in 1992 to 42% in 2011 (r²=0.63; P<0.001). The longitudinal analysis of the five women and five men with the highest number of participations showed that performance decreased in one female (r²=0.45; P=0.01). The four other women as well as all five men showed no change in overall race times across years. Conclusions: Participation increased and performance improved for male Triple Iron ultra-triathletes while participation remained unchanged and performance decreased for females between 1988 and 2011. The reasons for the increase of the gap between female and male Triple Iron ultra-triathletes need further investigation. PMID:23012633
Multiphysics modeling of non-linear laser-matter interactions for optically active semiconductors
NASA Astrophysics Data System (ADS)
Kraczek, Brent; Kanp, Jaroslaw
Development of photonic devices for sensors and communications has been significantly enhanced by computational modeling. We present a new computational method for modeling laser propagation in optically-active semiconductors within the paraxial wave approximation (PWA). Light propagation is modeled using the Streamline-upwind/Petrov-Galerkin finite element method (FEM). Material response enters through the non-linear polarization, which serves as the right-hand side of the FEM calculation. Maxwell's equations for classical light propagation within the PWA can be written solely in terms of the electric field, producing a wave equation that is a form of the advection-diffusion-reaction equations (ADREs). This allows adaptation of the computational machinery developed for solving ADREs in fluid dynamics to light-propagation modeling. The non-linear polarization is incorporated using a flexible framework that allows multiple methods for treating carrier-carrier interactions (e.g. relaxation-time-based or Monte Carlo) to enter through the non-linear polarization, as appropriate to the material type. We demonstrate the approach using a simple carrier-carrier model approximating the response of GaN. Supported by ARL Materials Enterprise.
NASA Technical Reports Server (NTRS)
Alpar, M. A.; Cheng, K. S.; Pines, D.
1989-01-01
The dynamics of pinned superfluid in neutron stars is determined by the thermal 'creep' of vortices. Vortex creep can respond to changes in the rotation rate of the neutron star crust and provide the observed types of dynamical relaxation following pulsar glitches. It also gives rise to energy dissipation, which determines the thermal evolution of pulsars once the initial heat content has been radiated away. The different possible regimes of vortex creep are explored, and it is shown that the nature of the dynamical response of the pinned superfluid evolves with a pulsar's age. Younger pulsars display a linear regime, where the response is linear in the initial perturbation and is a simple exponential relaxation as a function of time. A nonlinear response, with a characteristic nonlinear dependence on the initial perturbation, is responsible for energy dissipation and becomes the predominant mode of response as the pulsar ages. The transition from the linear to the nonlinear regime depends sensitively on the temperature of the neutron star interior. A preliminary review of existing postglitch observations is given within this general evolutionary framework.
Quantitative verification of ab initio self-consistent laser theory.
Ge, Li; Tandy, Robert J; Stone, A D; Türeci, Hakan E
2008-10-13
We generalize and test the recent "ab initio" self-consistent (AISC) time-independent semiclassical laser theory. This self-consistent formalism generates all the stationary lasing properties in the multimode regime (frequencies, thresholds, internal and external fields, output power and emission pattern) from simple inputs: the dielectric function of the passive cavity, the atomic transition frequency, and the transverse relaxation time of the lasing transition.We find that the theory gives excellent quantitative agreement with full time-dependent simulations of the Maxwell-Bloch equations after it has been generalized to drop the slowly-varying envelope approximation. The theory is infinite order in the non-linear hole-burning interaction; the widely used third order approximation is shown to fail badly.
NASA Astrophysics Data System (ADS)
Falvo, Cyril
2018-02-01
The theory of linear and non-linear infrared response of vibrational Holstein polarons in one-dimensional lattices is presented in order to identify the spectral signatures of self-trapping phenomena. Using a canonical transformation, the optical response is computed from the small polaron point of view which is valid in the anti-adiabatic limit. Two types of phonon baths are considered: optical phonons and acoustical phonons, and simple expressions are derived for the infrared response. It is shown that for the case of optical phonons, the linear response can directly probe the polaron density of states. The model is used to interpret the experimental spectrum of crystalline acetanilide in the C=O range. For the case of acoustical phonons, it is shown that two bound states can be observed in the two-dimensional infrared spectrum at low temperature. At high temperature, analysis of the time-dependence of the two-dimensional infrared spectrum indicates that bath mediated correlations slow down spectral diffusion. The model is used to interpret the experimental linear-spectroscopy of model α-helix and β-sheet polypeptides. This work shows that the Davydov Hamiltonian cannot explain the observations in the NH stretching range.
Doi, Ryoichi
2012-09-01
Observation of leaf colour (spectral profiles) through remote sensing is an effective method of identifying the spatial distribution patterns of abnormalities in leaf colour, which enables appropriate plant management measures to be taken. However, because the brightness of remote sensing images varies with acquisition time, changes in brightness must be taken into account when observing leaf spectral profiles in multi-temporally acquired remote sensing images. This study identified a simple luminosity normalization technique that enables leaf colours to be compared in remote sensing images over time. The intensity values of green and yellow (green+red) exhibited strong linear relationships with luminosity (R² greater than 0.926) when various invariant rooftops in Bangkok or Tokyo were spectrally profiled using remote sensing images acquired at different time points. The values of the coefficient and constant of the formulae describing the intensity of green or yellow were comparable among the single Bangkok site and the two Tokyo sites, indicating the technique's general applicability. For single rooftops, the values of the coefficient of variation for green, yellow, and red/green were 16% or less (n=6-11), indicating an accuracy not less than those of well-established remote sensing measures such as the normalized difference vegetation index. After obtaining the above linear relationships, raw intensity values were normalized and a temporal comparison of the spectral profiles of the canopies of evergreen and deciduous tree species in Tokyo was made to highlight the changes in the canopies' spectral profiles. Future aspects of this technique are discussed herein.
Firefly Algorithm in detection of TEC seismo-ionospheric anomalies
NASA Astrophysics Data System (ADS)
Akhoondzadeh, Mehdi
2015-07-01
Anomaly detection in the time series of different earthquake precursors is an essential step towards creating an early warning system with acceptable uncertainty. Since these time series are often non-linear, complex and massive, the predictor method applied should be able to detect discordant patterns in large data sets in a short time. This study presents the Firefly Algorithm (FA) as a simple and robust predictor for detecting TEC (Total Electron Content) seismo-ionospheric anomalies around the time of some powerful earthquakes, including Chile (27 February 2010), Varzeghan (11 August 2012) and Saravan (16 April 2013). Outstanding anomalies were observed 7 and 5 days before the Chile and Varzeghan earthquakes, respectively, and 3 and 8 days prior to the Saravan earthquake.
LR: Compact connectivity representation for triangle meshes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gurung, T; Luffel, M; Lindstrom, P
2011-01-28
We propose LR (Laced Ring) - a simple data structure for representing the connectivity of manifold triangle meshes. LR provides the option to store on average either 1.08 references per triangle or 26.2 bits per triangle. Its construction, from an input mesh that supports constant-time adjacency queries, has linear space and time complexity, and involves ordering most vertices along a nearly-Hamiltonian cycle. LR is best suited for applications that process meshes with fixed connectivity, as any changes to the connectivity require the data structure to be rebuilt. We provide an implementation of the set of standard random-access, constant-time operators for traversing a mesh, and show that LR often saves both space and traversal time over competing representations.
Code Samples Used for Complexity and Control
NASA Astrophysics Data System (ADS)
Ivancevic, Vladimir G.; Reid, Darryn J.
2015-11-01
The following sections are included: * MathematicaⓇ Code * Generic Chaotic Simulator * Vector Differential Operators * NLS Explorer * 2C++ Code * C++ Lambda Functions for Real Calculus * Accelerometer Data Processor * Simple Predictor-Corrector Integrator * Solving the BVP with the Shooting Method * Linear Hyperbolic PDE Solver * Linear Elliptic PDE Solver * Method of Lines for a Set of the NLS Equations * C# Code * Iterative Equation Solver * Simulated Annealing: A Function Minimum * Simple Nonlinear Dynamics * Nonlinear Pendulum Simulator * Lagrangian Dynamics Simulator * Complex-Valued Crowd Attractor Dynamics * Freeform Fortran Code * Lorenz Attractor Simulator * Complex Lorenz Attractor * Simple SGE Soliton * Complex Signal Presentation * Gaussian Wave Packet * Hermitian Matrices * Euclidean L2-Norm * Vector/Matrix Operations * Plain C-Code: Levenberg-Marquardt Optimizer * Free Basic Code: 2D Crowd Dynamics with 3000 Agents
Kalman Filters for UXO Detection: Real-Time Feedback and Small Target Detection
2012-05-01
last two decades. Accomplishments reported from both hardware and software points of view have moved the research focus from simple laboratory tests...quality data which in turn require a good positioning of the sensors atop the UXOs. The data collection protocol is currently based on a two-stage process...Note that this result is merely an illustration of the convergence of the Kalman filter. In practice, the linear part can be directly inverted for if
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sivak, David; Crooks, Gavin
A fundamental problem in modern thermodynamics is how a molecular-scale machine performs useful work, while operating away from thermal equilibrium without excessive dissipation. To this end, we derive a friction tensor that induces a Riemannian manifold on the space of thermodynamic states. Within the linear-response regime, this metric structure controls the dissipation of finite-time transformations, and bestows optimal protocols with many useful properties. We discuss the connection to the existing thermodynamic length formalism, and demonstrate the utility of this metric by solving for optimal control parameter protocols in a simple nonequilibrium model.
Anytime query-tuned kernel machine classifiers via Cholesky factorization
NASA Technical Reports Server (NTRS)
DeCoste, D.
2002-01-01
We recently demonstrated 2 to 64-fold query-time speedups of Support Vector Machine and Kernel Fisher classifiers via a new computational geometry method for anytime output bounds (DeCoste,2002). This new paper refines our approach in two key ways. First, we introduce a simple linear algebra formulation based on Cholesky factorization, yielding simpler equations and lower computational overhead. Second, this new formulation suggests new methods for achieving additional speedups, including tuning on query samples. We demonstrate effectiveness on benchmark datasets.
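The Cholesky building block mentioned above can be sketched in a few lines; this shows only the factor-once, solve-cheaply linear-algebra step for a kernel machine, not the anytime output-bounding method itself, and the RBF-kernel classifier and data are illustrative.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

# Factor the (regularized) kernel matrix once at training time, then reuse the
# Cholesky factor for cheap solves; queries reduce to kernel evaluations plus a
# dot product with the precomputed coefficients.
rng = np.random.default_rng(5)
X = rng.normal(size=(200, 5))
y = np.sign(rng.normal(size=200))

gamma, lam = 0.5, 1e-2
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-gamma * sq)                          # RBF kernel matrix

c, low = cho_factor(K + lam * np.eye(len(y)))    # done once at training time
alpha = cho_solve((c, low), y)                   # kernel expansion coefficients

def predict(x_query):
    k = np.exp(-gamma * ((X - x_query) ** 2).sum(-1))
    return np.sign(k @ alpha)

print(predict(X[0]), y[0])
```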
Gras, Laure-Lise; Mitton, David; Crevier-Denoix, Nathalie; Laporte, Sébastien
2012-01-01
Most recent finite element models that represent muscles are generic or subject-specific models that use complex constitutive laws. Identification of the parameters of such complex constitutive laws can be an important limitation for subject-specific approaches. The aim of this study was to assess the possibility of modelling muscle behaviour in compression with a parametric model and a simple constitutive law. A quasi-static compression test was performed on the muscles of dogs. A parametric finite element model was designed using a linear, elastic constitutive law. A multi-variate analysis was performed to assess the effects of geometry on muscle response. An inverse method was used to define Young's modulus. The non-linear response of the muscles was obtained using a subject-specific geometry and a linear elastic law. Thus, a simple muscle model can be used to obtain a bio-faithful biomechanical response.
A Simple Numerical Procedure for the Simulation of "Lifelike" Linear-Sweep Voltammograms
NASA Astrophysics Data System (ADS)
Bozzini, Benedetto P.
2000-01-01
Practical linear-sweep voltammograms seldom resemble the theoretical ones shown in textbooks. This is because several phenomena (activation, mass transport, ohmic resistance) control the kinetics over different potential ranges scanned during the potential sweep. These effects are generally treated separately in the didactic literature, yet they have never been "assembled" in a way that allows the educational use of real experiments. This makes linear-sweep voltammetric experiments almost unusable in the teaching of physical chemistry. A simple approach to the classroom description of "lifelike" experimental results is proposed in this paper. Analytical expressions of linear sweep voltammograms are provided. The actual numerical evaluations can be carried out with a pocket calculator. Two typical examples are executed and comparison with experimental data is described. This approach to teaching electrode kinetics has proved an effective tool to provide students with an insight into the effects of electrochemical parameters and operating conditions.
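As a complementary classroom check (a numerical scheme, not the analytical expressions proposed in the paper), a reversible, diffusion-controlled linear sweep can be simulated with a textbook explicit finite-difference method; all parameter values below are illustrative.

```python
import numpy as np

# Explicit finite-difference simulation of a reversible (Nernstian), purely
# diffusion-controlled linear sweep.  The surface concentration is set by the
# Nernst equation at each potential step, the interior diffuses, and the
# current is taken proportional to the concentration gradient at the electrode.
f = 38.92            # F/RT at 25 C, 1/V
D = 1e-5             # cm^2/s
v = 0.1              # sweep rate, V/s
E0, E_start, E_end = 0.0, 0.3, -0.3

dt = 1e-3
dx = np.sqrt(D * dt / 0.4)                  # keeps lambda = D*dt/dx^2 = 0.4 (stable)
lam = D * dt / dx**2
C = np.ones(400)                            # oxidised species, bulk-normalised

E_axis, current = [], []
E = E_start
while E > E_end:
    theta = np.exp(f * (E - E0))            # Nernst: C_O(0)/C_R(0), equal diffusivities
    C[0] = theta / (1.0 + theta)            # surface concentration of O
    C[1:-1] = C[1:-1] + lam * (C[2:] - 2 * C[1:-1] + C[:-2])
    current.append(D * (C[1] - C[0]) / dx)  # flux at the electrode ~ current density
    E_axis.append(E)
    E -= v * dt

peak = int(np.argmax(current))
print(f"peak near E = {E_axis[peak]:.3f} V (about 28.5 mV negative of E0 for a reversible wave)")
```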
NASA Astrophysics Data System (ADS)
Cole, Jonathan; Zhang, Yao; Liu, Tianqi; Liu, Chang-jun; Mohan Sankaran, R.
2017-08-01
Scale-up of non-thermal atmospheric-pressure plasma reactors for the synthesis of nanoparticles by homogeneous nucleation is challenging because the active volume is typically reduced to facilitate gas breakdown, enhance discharge stability, and limit particle size and agglomeration, but thus limits throughput. Here, we introduce a dielectric barrier discharge reactor consisting of a coaxial electrode geometry for nanoparticle production that enables a simple scale-up strategy whereby increasing the outer and inner electrode diameters, the plasma volume is increased approximately linearly, while maintaining a sufficiently small electrode gap to maintain the electric field strength. We show with two test reactors that for a given residence time, the nanoparticle production rate increases linearly with volume over a range of precursor concentrations, while having minimal effect on the shape of the particle size distribution. However, our study also reveals that increasing the total gas flow rate in a smaller volume reactor leads to an enhancement of precursor conversion and a comparable production rate to a larger volume reactor. These results suggest that scale-up requires better understanding of the influence of reactor geometry on particle growth dynamics and may not always be a simple function of reactor volume.
Protoplanetary disc `isochrones' and the evolution of discs in the M˙-Md plane
NASA Astrophysics Data System (ADS)
Lodato, Giuseppe; Scardoni, Chiara E.; Manara, Carlo F.; Testi, Leonardo
2017-12-01
In this paper, we compare simple viscous diffusion models for the disc evolution with the results of recent surveys of the properties of young protoplanetary discs. We introduce the useful concept of 'disc isochrones' in the accretion rate-disc mass plane and explore a set of Monte Carlo realizations of disc initial conditions. We find that such simple viscous models can provide a remarkable agreement with the available data in the Lupus star forming region, with the key requirement that the average viscous evolutionary time-scale of the discs is comparable to the cluster age. Our models naturally produce a correlation between mass accretion rate and disc mass that is shallower than linear, contrary to previous results and in agreement with observations. We also predict that a linear correlation, with a tighter scatter, should be found for more evolved disc populations. Finally, we find that such viscous models can reproduce the observations in the Lupus region only under the assumption that the efficiency of angular momentum transport is a growing function of radius, thus putting interesting constraints on the nature of the microscopic processes that lead to disc accretion.
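The isochrone construction can be sketched with the standard self-similar viscous solution for ν ∝ R (γ = 1), for which M_d(t) = M_0 (1 + t/t_ν)^(-1/2) and the accretion rate is M_0/(2 t_ν) (1 + t/t_ν)^(-3/2); the distributions of initial masses and viscous times below are illustrative, not the paper's Monte Carlo setup.

```python
import numpy as np

# Sample many discs, evolve each with the self-similar viscous solution, and
# read off (M_d, Mdot) at a common cluster age: those points trace an isochrone
# in the accretion rate-disc mass plane.
rng = np.random.default_rng(6)
year = 3.156e7                                       # s
age = 2e6 * year                                     # ~2 Myr cluster age
M0 = 10 ** rng.uniform(-2.5, -0.5, 500) * 2e33       # initial disc masses, g
t_nu = 10 ** rng.uniform(5.0, 7.0, 500) * year       # viscous times, s

T = 1.0 + age / t_nu
Md = M0 * T**-0.5
Mdot = M0 / (2.0 * t_nu) * T**-1.5

# Discs with t_nu comparable to (or shorter than) the age approach Mdot ~ Md / (2 * age).
print(np.polyfit(np.log10(Md), np.log10(Mdot), 1)[0])   # slope in the log-log plane
```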
Valuation of financial models with non-linear state spaces
NASA Astrophysics Data System (ADS)
Webber, Nick
2001-02-01
A common assumption in valuation models for derivative securities is that the underlying state variables take values in a linear state space. We discuss numerical implementation issues in an interest rate model with a simple non-linear state space, formulating and comparing Monte Carlo, finite difference and lattice numerical solution methods. We conclude that, at least in low dimensional spaces, non-linear interest rate models may be viable.
[Developments in preparation and experimental method of solid phase microextraction fibers].
Yi, Xu; Fu, Yujie
2004-09-01
Solid phase microextraction (SPME) is a simple and effective adsorption and desorption technique that concentrates volatile or nonvolatile compounds from liquid samples or the headspace of samples. SPME is compatible with analyte separation and detection by gas chromatography, high performance liquid chromatography, and other instrumental methods. It offers many advantages, such as a wide linear range, low solvent and sample consumption, short analysis times, low detection limits, and simple apparatus. The theory of SPME, including both equilibrium and non-equilibrium theory, is introduced. Recent developments in fiber preparation methods and related experimental techniques are discussed. In addition to commercial fibers, newly developed fabrication techniques, such as sol-gel coating, electrodeposition, carbon-based adsorption, and high-temperature epoxy immobilization, are presented. Effects of extraction mode, selection of fiber coating, optimization of operating conditions, method sensitivity and precision, and system automation are considered in the analytical process of SPME. Finally, a brief perspective on SPME is given.
Hewitt, Angela L.; Popa, Laurentiu S.; Pasalar, Siavash; Hendrix, Claudia M.
2011-01-01
Encoding of movement kinematics in Purkinje cell simple spike discharge has important implications for hypotheses of cerebellar cortical function. Several outstanding questions remain regarding representation of these kinematic signals. It is uncertain whether kinematic encoding occurs in unpredictable, feedback-dependent tasks or kinematic signals are conserved across tasks. Additionally, there is a need to understand the signals encoded in the instantaneous discharge of single cells without averaging across trials or time. To address these questions, this study recorded Purkinje cell firing in monkeys trained to perform a manual random tracking task in addition to circular tracking and center-out reach. Random tracking provides for extensive coverage of kinematic workspaces. Direction and speed errors are significantly greater during random than circular tracking. Cross-correlation analyses comparing hand and target velocity profiles show that hand velocity lags target velocity during random tracking. Correlations between simple spike firing from 120 Purkinje cells and hand position, velocity, and speed were evaluated with linear regression models including a time constant, τ, as a measure of the firing lead/lag relative to the kinematic parameters. Across the population, velocity accounts for the majority of simple spike firing variability (63 ± 30% of adjusted R²), followed by position (28 ± 24% of adjusted R²) and speed (11 ± 19% of adjusted R²). Simple spike firing often leads hand kinematics. Comparison of regression models based on averaged vs. nonaveraged firing and kinematics reveals lower adjusted R² values for nonaveraged data; however, regression coefficients and τ values are highly similar. Finally, for most cells, model coefficients generated from random tracking accurately estimate simple spike firing in either circular tracking or center-out reach. These findings imply that the cerebellum controls movement kinematics, consistent with a forward internal model that predicts upcoming limb kinematics. PMID:21795616
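A simplified illustration of the lead/lag regression described above, on synthetic signals rather than the recorded cells: firing rate is regressed on hand velocity over a range of lags, and the lag giving the highest R² plays the role of the time constant τ.

```python
import numpy as np

# Synthetic example: a firing-rate signal that leads a velocity signal by 100 ms,
# recovered by scanning lags and keeping the one with the highest R-squared.
rng = np.random.default_rng(7)
dt = 0.005                                     # 5 ms bins
t = np.arange(0, 60, dt)
velocity = np.sin(2 * np.pi * 0.4 * t) + 0.3 * rng.normal(size=t.size)

true_lead = 0.1                                # firing leads velocity by 100 ms
shift = int(true_lead / dt)
rate = 40 + 15 * np.roll(velocity, -shift) + 3 * rng.normal(size=t.size)

best = None
for lag in range(-40, 41):                     # scan -200 ms ... +200 ms
    v = np.roll(velocity, lag)
    r2 = np.corrcoef(v[50:-50], rate[50:-50])[0, 1] ** 2
    if best is None or r2 > best[1]:
        best = (lag, r2)

print(f"best tau = {-best[0] * dt * 1000:.0f} ms lead, R2 = {best[1]:.2f}")
```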
Detection and recognition of simple spatial forms
NASA Technical Reports Server (NTRS)
Watson, A. B.
1983-01-01
A model of human visual sensitivity to spatial patterns is constructed. The model predicts the visibility and discriminability of arbitrary two-dimensional monochrome images. The image is analyzed by a large array of linear feature sensors, which differ in spatial frequency, phase, orientation, and position in the visual field. All sensors have one octave frequency bandwidths, and increase in size linearly with eccentricity. Sensor responses are processed by an ideal Bayesian classifier, subject to uncertainty. The performance of the model is compared to that of the human observer in detecting and discriminating some simple images.
A Linear Theory for Inflatable Plates of Arbitrary Shape
NASA Technical Reports Server (NTRS)
McComb, Harvey G., Jr.
1961-01-01
A linear small-deflection theory is developed for the elastic behavior of inflatable plates of which Airmat is an example. Included in the theory are the effects of a small linear taper in the depth of the plate. Solutions are presented for some simple problems in the lateral deflection and vibration of constant-depth rectangular inflatable plates.
ERIC Educational Resources Information Center
Preacher, Kristopher J.; Curran, Patrick J.; Bauer, Daniel J.
2006-01-01
Simple slopes, regions of significance, and confidence bands are commonly used to evaluate interactions in multiple linear regression (MLR) models, and the use of these techniques has recently been extended to multilevel or hierarchical linear modeling (HLM) and latent curve analysis (LCA). However, conducting these tests and plotting the…
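A minimal sketch of the simple-slopes computation for an X×Z interaction in multiple linear regression (synthetic data; not the authors' computational tools): the simple slope of X at moderator value z0 is b1 + b3·z0, with variance var(b1) + z0²·var(b3) + 2·z0·cov(b1, b3), and scanning z0 traces out the region of significance.

```python
import numpy as np
import statsmodels.api as sm

# Fit y = b0 + b1*x + b2*z + b3*x*z and probe the interaction at chosen
# moderator values (e.g. -1 SD, mean, +1 SD).
rng = np.random.default_rng(8)
n = 300
x, z = rng.normal(size=n), rng.normal(size=n)
y = 0.4 * x + 0.2 * z + 0.5 * x * z + rng.normal(size=n)

X = sm.add_constant(np.column_stack([x, z, x * z]))
fit = sm.OLS(y, X).fit()
b, cov = fit.params, fit.cov_params()

for z0 in (-1.0, 0.0, 1.0):
    slope = b[1] + b[3] * z0                                   # simple slope of x at z0
    se = np.sqrt(cov[1, 1] + z0**2 * cov[3, 3] + 2 * z0 * cov[1, 3])
    print(f"z0={z0:+.1f}  simple slope={slope:.2f}  t={slope/se:.1f}")
```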
Cosmological N -body simulations with generic hot dark matter
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brandbyge, Jacob; Hannestad, Steen, E-mail: jacobb@phys.au.dk, E-mail: sth@phys.au.dk
2017-10-01
We have calculated the non-linear effects of generic fermionic and bosonic hot dark matter components in cosmological N -body simulations. For sub-eV masses, the non-linear power spectrum suppression caused by thermal free-streaming resembles the one seen for massive neutrinos, whereas for masses larger than 1 eV, the non-linear relative suppression of power is smaller than in linear theory. We furthermore find that in the non-linear regime, one can map fermionic to bosonic models by performing a simple transformation.
Cosmological N-body simulations with generic hot dark matter
NASA Astrophysics Data System (ADS)
Brandbyge, Jacob; Hannestad, Steen
2017-10-01
We have calculated the non-linear effects of generic fermionic and bosonic hot dark matter components in cosmological N-body simulations. For sub-eV masses, the non-linear power spectrum suppression caused by thermal free-streaming resembles the one seen for massive neutrinos, whereas for masses larger than 1 eV, the non-linear relative suppression of power is smaller than in linear theory. We furthermore find that in the non-linear regime, one can map fermionic to bosonic models by performing a simple transformation.
Linear diffusion into a Faraday cage.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Warne, Larry Kevin; Lin, Yau Tang; Merewether, Kimball O.
2011-11-01
Linear lightning diffusion into a Faraday cage is studied. An early-time integral valid for large ratios of enclosure size to enclosure thickness and small relative permeability (μ/μ₀ ≤ 10) is used for this study. Existing solutions for nearby lightning impulse responses of electrically thick-wall enclosures are refined and extended to calculate the nearby lightning magnetic field (H) and time-derivative magnetic field (HDOT) inside enclosures of varying thickness caused by a decaying exponential excitation. For a direct strike scenario, the early-time integral for a worst-case line source outside the enclosure caused by an impulse is simplified and numerically integrated to give the interior H and HDOT at the location closest to the source as well as a function of distance from the source. H and HDOT enclosure response functions for decaying exponentials are considered for an enclosure wall of any thickness. Simple formulas are derived to provide a description of enclosure interior H and HDOT as well. Direct strike voltage and current bounds for a single-turn optimally-coupled loop for all three waveforms are also given.
NASA Technical Reports Server (NTRS)
Cooke, C. H.
1975-01-01
STICAP (Stiff Circuit Analysis Program) is a FORTRAN 4 computer program written for the CDC-6400-6600 computer series and SCOPE 3.0 operating system. It provides the circuit analyst a tool for automatically computing the transient responses and frequency responses of large linear time invariant networks, both stiff and nonstiff (algorithms and numerical integration techniques are described). The circuit description and user's program input language is engineer-oriented, making simple the task of using the program. Engineering theories underlying STICAP are examined. A user's manual is included which explains user interaction with the program and gives results of typical circuit design applications. Also, the program structure from a systems programmer's viewpoint is depicted and flow charts and other software documentation are given.
Spectrometer calibration for spectroscopic Fourier domain optical coherence tomography
Szkulmowski, Maciej; Tamborski, Szymon; Wojtkowski, Maciej
2016-01-01
We propose a simple and robust procedure for Fourier domain optical coherence tomography (FdOCT) that allows the detected FdOCT spectra to be linearized to the wavenumber domain and, at the same time, the wavelength of light to be determined for each point of the detected spectrum. We show that in this approach it is possible to use any measurable physical quantity that has a linear dependency on wavenumber and can be extracted from spectral fringes. The actual values of the measured quantity are of no importance for the algorithm and do not need to be known at any stage of the procedure. As an example, we calibrate a spectral OCT spectrometer using the Doppler frequency. The technique of spectral calibration can in principle be adapted to all kinds of Fourier domain OCT devices. PMID:28018723
Choosing order of operations to accelerate strip structure analysis in parameter range
NASA Astrophysics Data System (ADS)
Kuksenko, S. P.; Akhunov, R. R.; Gazizov, T. R.
2018-05-01
The paper considers the issue of using iteration methods in solving the sequence of linear algebraic systems obtained in quasistatic analysis of strip structures with the method of moments. Using the analysis of 4 strip structures, the authors have proved that additional acceleration (up to 2.21 times) of the iterative process can be obtained during the process of solving linear systems repeatedly by means of choosing a proper order of operations and a preconditioner. The obtained results can be used to accelerate the process of computer-aided design of various strip structures. The choice of the order of operations to accelerate the process is quite simple, universal and could be used not only for strip structure analysis but also for a wide range of computational problems.
State of charge estimation in Ni-MH rechargeable batteries
NASA Astrophysics Data System (ADS)
Milocco, R. H.; Castro, B. E.
In this work we estimate the state of charge (SOC) of Ni-MH rechargeable batteries using the Kalman filter based on a simplified electrochemical model. First, we derive the complete electrochemical model of the battery, which includes diffusional processes and kinetic reactions in both the Ni and MH electrodes. The full model is further reduced to a cascade of two parts, a linear time invariant dynamical sub-model followed by a static nonlinearity. Both parts are identified using the current and potential measured at the terminals of the battery with a simple 1-D minimization procedure. The inverse of the static nonlinearity together with a Kalman filter provides the SOC estimation as a linear estimation problem. Experimental results with commercial batteries are provided to illustrate the estimation procedure and to show the performance.
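A minimal sketch of the linear estimation step described above, assuming the static nonlinearity has already been identified so its inverse maps the measured terminal potential to a quantity approximately linear in SOC; the coulomb-counting process model and all noise parameters are illustrative assumptions, not the paper's identified model.

```python
def soc_kalman_step(soc, P, current, z, dt, capacity_As,
                    q_process=1e-6, r_meas=1e-3):
    """One scalar Kalman filter update of the state of charge (SOC).
    Prediction: coulomb counting, soc_k = soc_{k-1} - i*dt/C.
    Measurement z: inverse static nonlinearity applied to the terminal
    voltage, assumed to map linearly to SOC (so H = 1)."""
    # predict
    soc_pred = soc - current * dt / capacity_As
    P_pred = P + q_process
    # update
    K = P_pred / (P_pred + r_meas)        # Kalman gain with H = 1
    soc_new = soc_pred + K * (z - soc_pred)
    P_new = (1.0 - K) * P_pred
    return soc_new, P_new
```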
NASA Technical Reports Server (NTRS)
Desoer, C. A.; Kabuli, M. G.
1989-01-01
The authors consider a linear (not necessarily time-invariant) stable unity-feedback system, where the plant and the compensator have normalized right-coprime factorizations. They study two cases of nonlinear plant perturbations (additive and feedback), with four subcases resulting from: (1) allowing exogenous input to Delta P or not; (2) allowing the observation of the output of Delta P or not. The plant perturbation Delta P is not required to be stable. Using the factorization approach, the authors obtain necessary and sufficient conditions for all cases in terms of two pairs of nonlinear pseudostate maps. Simple physical considerations explain the form of these necessary and sufficient conditions. Finally, the authors obtain the characterization of all perturbations Delta P for which the perturbed system remains stable.
Eigenvalue assignment by minimal state-feedback gain in LTI multivariable systems
NASA Astrophysics Data System (ADS)
Ataei, Mohammad; Enshaee, Ali
2011-12-01
In this article, an improved method for eigenvalue assignment via state feedback in the linear time-invariant multivariable systems is proposed. This method is based on elementary similarity operations, and involves mainly utilisation of vector companion forms, and thus is very simple and easy to implement on a digital computer. In addition to the controllable systems, the proposed method can be applied for the stabilisable ones and also systems with linearly dependent inputs. Moreover, two types of state-feedback gain matrices can be achieved by this method: (1) the numerical one, which is unique, and (2) the parametric one, in which its parameters are determined in order to achieve a gain matrix with minimum Frobenius norm. The numerical examples are presented to demonstrate the advantages of the proposed method.
NASA Astrophysics Data System (ADS)
Rose, D. V.; Miller, C. L.; Welch, D. R.; Clark, R. E.; Madrid, E. A.; Mostrom, C. B.; Stygar, W. A.; Lechien, K. R.; Mazarakis, M. A.; Langston, W. L.; Porter, J. L.; Woodworth, J. R.
2010-09-01
A 3D fully electromagnetic (EM) model of the principal pulsed-power components of a high-current linear transformer driver (LTD) has been developed. LTD systems are a relatively new modular and compact pulsed-power technology based on high-energy density capacitors and low-inductance switches located within a linear-induction cavity. We model 1-MA, 100-kV, 100-ns rise-time LTD cavities [A. A. Kim et al., Phys. Rev. ST Accel. Beams 12, 050402 (2009)] which can be used to drive z-pinch and material dynamics experiments. The model simulates the generation and propagation of electromagnetic power from individual capacitors and triggered gas switches to a radially symmetric output line. Multiple cavities, combined to provide voltage addition, drive a water-filled coaxial transmission line. A 3D fully EM model of a single 1-MA 100-kV LTD cavity driving a simple resistive load is presented and compared to electrical measurements. A new model of the current loss through the ferromagnetic cores is developed for use both in circuit representations of an LTD cavity and in the 3D EM simulations. Good agreement between the measured core current, a simple circuit model, and the 3D simulation model is obtained. A 3D EM model of an idealized ten-cavity LTD accelerator is also developed. The model results demonstrate efficient voltage addition when driving a matched impedance load, in good agreement with an idealized circuit model.
NASA Astrophysics Data System (ADS)
Butler, S. L.
2010-09-01
A porosity localizing instability occurs in compacting porous media that are subjected to shear if the viscosity of the solid matrix decreases with porosity (Stevenson, 1989). This instability may have significant consequences for melt transport in regions of partial melt in the mantle and may significantly modify the effective viscosity of the asthenosphere (Kohlstedt and Holtzman, 2009). Most analyses of this instability have been carried out assuming an imposed simple shear flow (e.g., Spiegelman, 2003; Katz et al., 2006; Butler, 2009). Pure shear can be realized in laboratory experiments and studying the instability in a pure shear flow allows us to test the generality of some of the results derived for simple shear and the flow pattern for pure shear more easily separates the effects of deformation from rotation. Pure shear flows may approximate flows near the tops of mantle plumes near earth's surface and in magma chambers. In this study, we present linear theory and nonlinear numerical model results for a porosity and strain-rate weakening compacting porous layer subjected to pure shear and we investigate the effects of buoyancy-induced oscillations. The linear theory and numerical model will be shown to be in excellent agreement. We will show that melt bands grow at the same angles to the direction of maximum compression as in simple shear and that buoyancy-induced oscillations do not significantly inhibit the porosity localizing instability. In a pure shear flow, bands parallel to the direction of maximum compression increase exponentially in wavelength with time. However, buoyancy-induced oscillations are shown to inhibit this increase in wavelength. In a simple shear flow, bands increase in wavelength when they are in the orientation for growth of the porosity localizing instability. Because the amplitude spectrum is always dominated by bands in this orientation, band wavelengths increase with time throughout simple shear simulations until the wavelength becomes similar to one compaction length. Once the wavelength becomes similar to one compaction length, the growth of the amplitude of the band slows and shorter wavelength bands that are increasing in amplitude at a greater rate take over. This may provide a mechanism to explain the experimental observation that band spacing is controlled by the compaction length (Kohlstedt and Holtzman, 2009).
3D inelastic analysis methods for hot section components
NASA Technical Reports Server (NTRS)
Dame, L. T.; Chen, P. C.; Hartle, M. S.; Huang, H. T.
1985-01-01
The objective is to develop analytical tools capable of economically evaluating the cyclic time dependent plasticity which occurs in hot section engine components in areas of strain concentration resulting from the combination of both mechanical and thermal stresses. Three models were developed. A simple model performs time dependent inelastic analysis using the power law creep equation. The second model is the classical model of Professors Walter Haisler and David Allen of Texas A and M University. The third model is the unified model of Bodner, Partom, et al. All models were customized for linear variation of loads and temperatures with all material properties and constitutive models being temperature dependent.
Real-Time Exponential Curve Fits Using Discrete Calculus
NASA Technical Reports Server (NTRS)
Rowe, Geoffrey
2010-01-01
An improved solution for curve fitting data to an exponential equation (y = Ae^(Bt) + C) has been developed. This improvement is in four areas -- speed, stability, determinant processing time, and the removal of limits. The solution presented avoids iterative techniques and their stability errors by using three mathematical ideas: discrete calculus, a special relationship (between exponential curves and the Mean Value Theorem for Derivatives), and a simple linear curve fit algorithm. This method can also be applied to fitting data to the general power law equation y = Ax^B + C and the general geometric growth equation y = Ak^(Bt) + C.
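One way to see why a simple linear fit can do the work: for y = A·e^(Bt) + C the derivative obeys y' = B·(y − C), so regressing the numerically differentiated data on y yields B and C, and a second linear step recovers A. The sketch below illustrates only this idea; it is not the discrete-calculus formulation of the report, and the differencing scheme is an assumption.

```python
import numpy as np

def fit_exponential(t, y):
    """Fit y = A*exp(B*t) + C using the relation dy/dt = B*(y - C).
    Assumes y - C > 0 over the data (growing positive exponential)."""
    dydt = np.gradient(y, t)                  # numerical derivative
    # linear fit: dydt = B*y - B*C  ->  slope = B, intercept = -B*C
    slope, intercept = np.polyfit(y, dydt, 1)
    B = slope
    C = -intercept / B
    # second linear step: ln(y - C) = ln(A) + B*t
    A = np.exp(np.mean(np.log(y - C) - B * t))
    return A, B, C
```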
Oxygen quenching in a LAB based liquid scintillator and the nitrogen bubbling model
NASA Astrophysics Data System (ADS)
Xiao, Hua-Lin; Deng, Jing-Shan; Wang, Nai-Yan
2010-05-01
The oxygen quenching effect in a Linear Alkyl Benzene (LAB) based liquid scintillator (LAB as the solvent, 3 g/L 2,5-diphenyloxazole (PPO) as the fluor and 15 mg/L p-bis-(o-methylstyryl)-benzene (bis-MSB) as the λ-shifter) is studied by measuring the light yield as a function of the nitrogen bubbling time. It is shown that the light yield of the fully purged liquid scintillator is increased by 11% at room temperature and atmospheric pressure. A simple nitrogen bubbling model is proposed to describe the relationship between the relative light yield (oxygen quenching factor) and the bubbling time.
Zhang, Jihua; He, Yizhuo; Lam, Billy; Guo, Chunlei
2017-08-21
Femtosecond-laser surface structuring on metals is investigated in real time by both fundamental and second harmonic generation (SHG) signals. The onset of surface modification and its progress can be monitored by both the fundamental and SHG probes. However, the dynamics of femtosecond-laser-induced periodic surface structure (FLIPSS) formation can only be revealed by the SHG probe, not the fundamental, because of the higher sensitivity of SHG to structural geometry on metals. Our technique provides a simple and effective way to monitor the surface modification and FLIPSS formation thresholds and allows us to obtain the optimal FLIPSS for SHG enhancement.
Numerical solutions to the time-dependent Bloch equations revisited.
Murase, Kenya; Tanki, Nobuyoshi
2011-01-01
The purpose of this study was to demonstrate a simple and fast method for solving the time-dependent Bloch equations. First, the time-dependent Bloch equations were reduced to a homogeneous linear differential equation, and then a simple equation was derived to solve it using a matrix operation. The validity of this method was investigated by comparing with the analytical solutions in the case of constant radiofrequency irradiation. There was a good agreement between them, indicating the validity of this method. As a further example, this method was applied to the time-dependent Bloch equations in the two-pool exchange model for chemical exchange saturation transfer (CEST) or amide proton transfer (APT) magnetic resonance imaging (MRI), and the Z-spectra and asymmetry spectra were calculated from their solutions. They were also calculated using the fourth/fifth-order Runge-Kutta-Fehlberg (RKF) method for comparison. There was also a good agreement between them, and this method was much faster than the RKF method. In conclusion, this method will be useful for analyzing the complex CEST or APT contrast mechanism and/or investigating the optimal conditions for CEST or APT MRI. Copyright © 2011 Elsevier Inc. All rights reserved.
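The reduction to a homogeneous linear system can be sketched as follows: writing the rotating-frame Bloch equations as dM/dt = A·M + b, with b = [0, 0, M0/T1], and appending a constant component turns each constant-RF interval into a single matrix exponential. This is a generic illustration of the approach, not the authors' implementation; the matrix entries assume the standard rotating-frame form.

```python
import numpy as np
from scipy.linalg import expm

def bloch_step(M, dt, w1x, w1y, dw, T1, T2, M0=1.0):
    """Advance magnetization M = [Mx, My, Mz] by dt for constant RF.
    dM/dt = A M + b is made homogeneous by appending a constant 1."""
    A = np.array([[-1/T2,   dw,  -w1y],
                  [ -dw, -1/T2,   w1x],
                  [ w1y,  -w1x, -1/T1]])
    b = np.array([0.0, 0.0, M0 / T1])
    # augmented homogeneous system: d/dt [M; 1] = [[A, b], [0, 0]] [M; 1]
    Aug = np.zeros((4, 4))
    Aug[:3, :3] = A
    Aug[:3, 3] = b
    x = expm(Aug * dt) @ np.append(M, 1.0)
    return x[:3]
```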
Sriyudthsak, Kansuporn; Iwata, Michio; Hirai, Masami Yokota; Shiraishi, Fumihide
2014-06-01
The availability of large-scale datasets has led to more effort being made to understand characteristics of metabolic reaction networks. However, because the large-scale data are semi-quantitative, and may contain biological variations and/or analytical errors, it remains a challenge to construct a mathematical model with precise parameters using only these data. The present work proposes a simple method, referred to as PENDISC (Parameter Estimation in a Non-DImensionalized S-system with Constraints), to assist the complex process of parameter estimation in the construction of a mathematical model for a given metabolic reaction system. The PENDISC method was evaluated using two simple mathematical models: a linear metabolic pathway model with inhibition and a branched metabolic pathway model with inhibition and activation. The results indicate that a smaller number of data points and rate constant parameters enhances the agreement between calculated values and time-series data of metabolite concentrations, and leads to faster convergence when the same initial estimates are used for the fitting. This method is also shown to be applicable to noisy time-series data and to unmeasurable metabolite concentrations in a network, and to have a potential to handle metabolome data of a relatively large-scale metabolic reaction system. Furthermore, it was applied to aspartate-derived amino acid biosynthesis in Arabidopsis thaliana plant. The result provides confirmation that the mathematical model constructed satisfactorily agrees with the time-series datasets of seven metabolite concentrations.
NASA Astrophysics Data System (ADS)
Wilson, Chris; Hughes, Chris W.; Blundell, Jeffrey R.
2015-01-01
We use ensemble runs of a three-layer, quasi-geostrophic idealized Southern Ocean model to explore the roles of forced and intrinsic variability in response to a linear increase of wind stress imposed over a 30 year period. We find no increase of eastward circumpolar volume transport in response to the increased wind stress. A large part of the resulting time series can be explained by a response in which the eddy kinetic energy is linearly proportional to the wind stress with a possible time lag, but no statistically significant lag is found. However, this simple relationship is not the whole story: several intrinsic time scales also influence the response. We find an e-folding time scale for growth of small perturbations of 1-2 weeks. The energy budget for intrinsic variability at periods shorter than a year is dominated by exchange between kinetic and potential energy. At longer time scales, we find an intrinsic mode with period in the region of 15 years, which is dominated by changes in potential energy and frictional dissipation in a manner consistent with that seen by Hogg and Blundell (2006). A similar mode influences the response to changing wind stress. This influence, robust to perturbations, is different from the supposed linear relationship between wind stress and eddy kinetic energy, and persists for 5-10 years in this model, suggestive of a forced oscillatory mode with period of around 15 years. If present in the real ocean, such a mode would imply a degree of predictability of Southern Ocean dynamics on multiyear time scales.
A review on prognostic techniques for non-stationary and non-linear rotating systems
NASA Astrophysics Data System (ADS)
Kan, Man Shan; Tan, Andy C. C.; Mathew, Joseph
2015-10-01
The field of prognostics has attracted significant interest from the research community in recent times. Prognostics enables the prediction of failures in machines resulting in benefits to plant operators such as shorter downtimes, higher operation reliability, reduced operations and maintenance cost, and more effective maintenance and logistics planning. Prognostic systems have been successfully deployed for the monitoring of relatively simple rotating machines. However, machines and associated systems today are increasingly complex. As such, there is an urgent need to develop prognostic techniques for such complex systems operating in the real world. This review paper focuses on prognostic techniques that can be applied to rotating machinery operating under non-linear and non-stationary conditions. The general concept of these techniques, the pros and cons of applying these methods, as well as their applications in the research field are discussed. Finally, the opportunities and challenges in implementing prognostic systems and developing effective techniques for monitoring machines operating under non-stationary and non-linear conditions are also discussed.
A recursive Bayesian updating model of haptic stiffness perception.
Wu, Bing; Klatzky, Roberta L
2018-06-01
Stiffness of many materials follows Hooke's Law, but the mechanism underlying the haptic perception of stiffness is not as simple as it seems in the physical definition. The present experiments support a model by which stiffness perception is adaptively updated during dynamic interaction. Participants actively explored virtual springs and estimated their stiffness relative to a reference. The stimuli were simulations of linear springs or nonlinear springs created by modulating a linear counterpart with low-amplitude, half-cycle (Experiment 1) or full-cycle (Experiment 2) sinusoidal force. Experiment 1 showed that subjective stiffness increased (decreased) as a linear spring was positively (negatively) modulated by a half-sinewave force. In Experiment 2, an opposite pattern was observed for full-sinewave modulations. Modeling showed that the results were best described by an adaptive process that sequentially and recursively updated an estimate of stiffness using the force and displacement information sampled over trajectory and time. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
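A minimal sketch of a recursive estimator of the kind the model describes — a scalar Kalman-style update of the stiffness estimate from each (displacement, force) sample along the exploration trajectory. The random-walk process model and noise variances are assumptions for illustration, not the fitted model of these experiments.

```python
def update_stiffness(k_est, P, x, f, q=1e-4, r=1e-2):
    """Recursively update a stiffness estimate k from one sample f ~ k * x.
    k_est, P: current estimate and its variance; x: displacement; f: force."""
    P = P + q                      # random-walk process noise on k
    S = x * P * x + r              # innovation variance (measurement model H = x)
    K = P * x / S                  # Kalman gain
    k_new = k_est + K * (f - k_est * x)
    P_new = (1.0 - K * x) * P
    return k_new, P_new
```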
NASA Technical Reports Server (NTRS)
Mookerjee, P.; Molusis, J. A.; Bar-Shalom, Y.
1985-01-01
An investigation of the properties important for the design of stochastic adaptive controllers for the higher harmonic control of helicopter vibration is presented. Three different model types are considered for the transfer relationship between the helicopter higher harmonic control input and the vibration output: (1) nonlinear; (2) linear with slow time varying coefficients; and (3) linear with constant coefficients. The stochastic controller formulations and solutions are presented for a dual, cautious, and deterministic controller for both linear and nonlinear transfer models. Extensive simulations are performed with the various models and controllers. It is shown that the cautious adaptive controller can sometimes result in unacceptable vibration control. A new second order dual controller is developed which is shown to modify the cautious adaptive controller by adding numerator and denominator correction terms to the cautious control algorithm. The new dual controller is simulated on a simple single-control vibration example and is found to achieve excellent vibration reduction and significantly improves upon the cautious controller.
Experimental Evaluation of the Free Piston Engine - Linear Alternator (FPLA)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leick, Michael T.; Moses, Ronald W.
2015-03-01
This report describes the experimental evaluation of a prototype free piston engine - linear alternator (FPLA) system developed at Sandia National Laboratories. The opposed piston design was developed to investigate its potential for use in hybrid electric vehicles (HEVs). The system is mechanically simple with two-stroke uniflow scavenging for gas exchange and timed port fuel injection for fuel delivery, i.e. no complex valving. Electrical power is extracted from piston motion through linear alternators which also provide a means for passive piston synchronization through electromagnetic coupling. In an HEV application, this electrical power would be used to charge the batteries. The engine-alternator system was designed, assembled and operated over a 2-year period at Sandia National Laboratories in Livermore, CA. This report primarily contains a description of the as-built system, modifications to the system to enable better performance, and experimental results from start-up, motoring, and hydrogen combustion tests.
Terahertz-driven linear electron acceleration
Nanni, Emilio A.; Huang, Wenqian R.; Hong, Kyung-Han; Ravi, Koustuban; Fallahi, Arya; Moriena, Gustavo; Dwayne Miller, R. J.; Kärtner, Franz X.
2015-01-01
The cost, size and availability of electron accelerators are dominated by the achievable accelerating gradient. Conventional high-brightness radio-frequency accelerating structures operate with 30–50 MeV m−1 gradients. Electron accelerators driven with optical or infrared sources have demonstrated accelerating gradients orders of magnitude above that achievable with conventional radio-frequency structures. However, laser-driven wakefield accelerators require intense femtosecond sources and direct laser-driven accelerators suffer from low bunch charge, sub-micron tolerances and sub-femtosecond timing requirements due to the short wavelength of operation. Here we demonstrate linear acceleration of electrons with keV energy gain using optically generated terahertz pulses. Terahertz-driven accelerating structures enable high-gradient electron/proton accelerators with simple accelerating structures, high repetition rates and significant charge per bunch. These ultra-compact terahertz accelerators with extremely short electron bunches hold great potential to have a transformative impact for free electron lasers, linear colliders, ultrafast electron diffraction, X-ray science and medical therapy with X-rays and electron beams. PMID:26439410
Terahertz-driven linear electron acceleration
Nanni, Emilio A.; Huang, Wenqian R.; Hong, Kyung-Han; ...
2015-10-06
The cost, size and availability of electron accelerators are dominated by the achievable accelerating gradient. Conventional high-brightness radio-frequency accelerating structures operate with 30–50 MeV m−1 gradients. Electron accelerators driven with optical or infrared sources have demonstrated accelerating gradients orders of magnitude above that achievable with conventional radio-frequency structures. However, laser-driven wakefield accelerators require intense femtosecond sources and direct laser-driven accelerators suffer from low bunch charge, sub-micron tolerances and sub-femtosecond timing requirements due to the short wavelength of operation. Here we demonstrate linear acceleration of electrons with keV energy gain using optically generated terahertz pulses. Terahertz-driven accelerating structures enable high-gradient electron/proton accelerators with simple accelerating structures, high repetition rates and significant charge per bunch. As a result, these ultra-compact terahertz accelerators with extremely short electron bunches hold great potential to have a transformative impact for free electron lasers, linear colliders, ultrafast electron diffraction, X-ray science and medical therapy with X-rays and electron beams.
Development of orientation tuning in simple cells of primary visual cortex
Moore, Bartlett D.
2012-01-01
Orientation selectivity and its development are basic features of visual cortex. The original model of orientation selectivity proposes that elongated simple cell receptive fields are constructed from convergent input of an array of lateral geniculate nucleus neurons. However, orientation selectivity of simple cells in the visual cortex is generally greater than the linear contributions based on projections from spatial receptive field profiles. This implies that additional selectivity may arise from intracortical mechanisms. The hierarchical processing idea implies mainly linear connections, whereas cortical contributions are generally considered to be nonlinear. We have explored development of orientation selectivity in visual cortex with a focus on linear and nonlinear factors in a population of anesthetized 4-wk postnatal kittens and adult cats. Linear contributions are estimated from receptive field maps by which orientation tuning curves are generated and bandwidth is quantified. Nonlinear components are estimated as the magnitude of the power function relationship between responses measured from drifting sinusoidal gratings and those predicted from the spatial receptive field. Measured bandwidths for kittens are slightly larger than those in adults, whereas predicted bandwidths are substantially broader. These results suggest that relatively strong nonlinearities in early postnatal stages are substantially involved in the development of orientation tuning in visual cortex. PMID:22323631
μ-PADs for detection of chemical warfare agents.
Pardasani, Deepak; Tak, Vijay; Purohit, Ajay K; Dubey, D K
2012-12-07
Conventional methods of detection of chemical warfare agents (CWAs) based on chromogenic reactions are time and solvent intensive. The development of cost, time and solvent effective microfluidic paper based analytical devices (μ-PADs) for the detection of nerve and vesicant agents is described. The detection of analytes was based upon their reactions with rhodamine hydroxamate and para-nitrobenzyl pyridine, producing red and blue colours respectively. Reactions were optimized on the μ-PADs to produce the limits of detection (LODs) as low as 100 μM for sulfur mustard in aqueous samples. Results were quantified with the help of a simple desktop scanner and Photoshop software. Sarin achieved a linear response in the two concentration ranges of 20-100 mM and 100-500 mM, whereas the response of sulfur mustard was found to be linear in the concentration range of 10-75 mM. Results were precise enough to establish the μ-PADs as a valuable tool for security personnel fighting against chemical terrorism.
Kinetic Features in the Ion Flux Spectrum
NASA Astrophysics Data System (ADS)
Vafin, S.; Riazantseva, M.; Yoon, P. H.
2017-11-01
An interesting feature of solar wind fluctuations is the occasional presence of a well-pronounced peak near the spectral knee. These peaks are well investigated in the context of magnetic field fluctuations in the magnetosheath and they are typically related to kinetic plasma instabilities. Recently, similar peaks were observed in the spectrum of ion flux fluctuations of the solar wind and magnetosheath. In this paper, we propose a simple analytical model to describe such peaks in the ion flux spectrum based on the linear theory of plasma fluctuations. We compare our predictions with a sample observation in the solar wind. For the given observation, the peak requires ˜10 minutes to grow up to the observed level that agrees with the quasi-linear relaxation time. Moreover, our model well reproduces the form of the measured peak in the ion flux spectrum. The observed lifetime of the peak is about 50 minutes, which is relatively close to the nonlinear Landau damping time of 30-40 minutes. Overall, our model proposes a plausible scenario explaining the observation.
Zhang, Wenjing; Cao, Yu; Zhang, Xuanzhe; Liu, Zejin
2015-10-20
Stable information from the sky light polarization pattern can be used for navigation, with advantages such as better anti-interference performance and no "error cumulative effect." However, existing methods of sky light polarization measurement either have poor real-time performance or require a complex system. Inspired by the navigational capability of a Cataglyphis with its compound eyes, we introduce a new approach to acquire the all-sky image under different polarization directions with one camera and without a rotating polarizer, so as to detect the polarization pattern across the full sky in a single snapshot. Our system is based on a handheld light field camera with a wide-angle lens and a triplet linear polarizer placed over its aperture stop. Experimental results agree with the theoretical predictions. Both the real-time detection capability and the simple, low-cost architecture demonstrate the superiority of the approach proposed in this paper.
Causal Inference and Explaining Away in a Spiking Network
Moreno-Bote, Rubén; Drugowitsch, Jan
2015-01-01
While the brain uses spiking neurons for communication, theoretical research on brain computations has mostly focused on non-spiking networks. The nature of spike-based algorithms that achieve complex computations, such as object probabilistic inference, is largely unknown. Here we demonstrate that a family of high-dimensional quadratic optimization problems with non-negativity constraints can be solved exactly and efficiently by a network of spiking neurons. The network naturally imposes the non-negativity of causal contributions that is fundamental to causal inference, and uses simple operations, such as linear synapses with realistic time constants, and neural spike generation and reset non-linearities. The network infers the set of most likely causes from an observation using explaining away, which is dynamically implemented by spike-based, tuned inhibition. The algorithm performs remarkably well even when the network intrinsically generates variable spike trains, the timing of spikes is scrambled by external sources of noise, or the network is mistuned. This type of network might underlie tasks such as odor identification and classification. PMID:26621426
Causal Inference and Explaining Away in a Spiking Network.
Moreno-Bote, Rubén; Drugowitsch, Jan
2015-12-01
While the brain uses spiking neurons for communication, theoretical research on brain computations has mostly focused on non-spiking networks. The nature of spike-based algorithms that achieve complex computations, such as object probabilistic inference, is largely unknown. Here we demonstrate that a family of high-dimensional quadratic optimization problems with non-negativity constraints can be solved exactly and efficiently by a network of spiking neurons. The network naturally imposes the non-negativity of causal contributions that is fundamental to causal inference, and uses simple operations, such as linear synapses with realistic time constants, and neural spike generation and reset non-linearities. The network infers the set of most likely causes from an observation using explaining away, which is dynamically implemented by spike-based, tuned inhibition. The algorithm performs remarkably well even when the network intrinsically generates variable spike trains, the timing of spikes is scrambled by external sources of noise, or the network is mistuned. This type of network might underlie tasks such as odor identification and classification.
A model of interval timing by neural integration
Simen, Patrick; Balci, Fuat; deSouza, Laura; Cohen, Jonathan D.; Holmes, Philip
2011-01-01
We show that simple assumptions about neural processing lead to a model of interval timing as a temporal integration process, in which a noisy firing-rate representation of time rises linearly on average toward a response threshold over the course of an interval. Our assumptions include: that neural spike trains are approximately independent Poisson processes; that correlations among them can be largely cancelled by balancing excitation and inhibition; that neural populations can act as integrators; and that the objective of timed behavior is maximal accuracy and minimal variance. The model accounts for a variety of physiological and behavioral findings in rodents, monkeys and humans, including ramping firing rates between the onset of reward-predicting cues and the receipt of delayed rewards, and universally scale-invariant response time distributions in interval timing tasks. It furthermore makes specific, well-supported predictions about the skewness of these distributions, a feature of timing data that is usually ignored. The model also incorporates a rapid (potentially one-shot) duration-learning procedure. Human behavioral data support the learning rule’s predictions regarding learning speed in sequences of timed responses. These results suggest that simple, integration-based models should play as prominent a role in interval timing theory as they do in theories of perceptual decision making, and that a common neural mechanism may underlie both types of behavior. PMID:21697374
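The core mechanism — a noisy firing-rate representation of time ramping linearly, on average, toward a response threshold — can be simulated in a few lines. The Poisson-difference noise, rate, and threshold below are illustrative assumptions, not the paper's fitted parameters.

```python
import numpy as np

def timed_response(duration_target, dt=0.001, rate=200.0, rng=None):
    """Simulate one timed response: integrate balanced excitatory/inhibitory
    Poisson inputs with a net drift tuned so the threshold is reached, on
    average, at duration_target. Returns the produced interval in seconds."""
    rng = rng or np.random.default_rng()
    threshold = 1.0
    drift = threshold / duration_target            # mean slope of the ramp
    x, t = 0.0, 0.0
    while x < threshold:
        exc = rng.poisson(rate * dt)               # excitatory spike count
        inh = rng.poisson(rate * dt)               # inhibitory spike count
        x += drift * dt + (exc - inh) * drift / rate   # noise scales with drift
        t += dt
    return t
```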
Linear Legendrian curves in T(3)
NASA Astrophysics Data System (ADS)
Ghiggini, Paolo
2006-05-01
Using convex surfaces and Kanda's classification theorem, we classify Legendrian isotopy classes of Legendrian linear curves in all tight contact structures on T(3) . Some of the knot types considered in this paper provide new examples of non transversally simple knot types.
Zhang, Yifeng; Angelidaki, Irini
2012-01-01
A submersible microbial fuel cell (SBMFC) was developed as a biosensor for in situ and real time monitoring of dissolved oxygen (DO) in environmental waters. Domestic wastewater was utilized as the sole fuel for powering the sensor. The sensor performance was first examined with tap water at varying DO levels. With an external resistance of 1000Ω, the current density produced by the sensor (5.6 ± 0.5-462.2 ± 0.5 mA/m(2)) increased linearly with DO level up to 8.8 ± 0.3 mg/L (regression coefficient, R(2)=0.9912), while the maximum response time for each measurement was less than 4 min. The current density showed different responses to DO levels when different external resistances were applied, but a linear relationship was always observed. Investigation of the sensor performance at different substrate concentrations indicates that the organic matter contained in the domestic wastewater was sufficient to power the sensing activities. The sensor ability was further explored under different environmental conditions (e.g. pH, temperature, conductivity, and alternative electron acceptor), and the results indicated that a calibration would be required before field application. Lastly, the sensor was tested with different environmental waters and the results showed no significant difference (p>0.05) from those measured by a DO meter. The simple, compact SBMFC sensor showed promising potential for direct, inexpensive and rapid DO monitoring in various environmental waters. Copyright © 2012 Elsevier B.V. All rights reserved.
Siva Selva Kumar, M; Ramanathan, M
2016-02-01
A simple and sensitive ultra-performance liquid chromatography (UPLC) method has been developed and validated for simultaneous estimation of olanzapine (OLZ), risperidone (RIS) and 9-hydroxyrisperidone (9-OHRIS) in human plasma in vitro. The sample preparation was performed by a simple liquid-liquid extraction technique. The analytes were chromatographed on a Waters Acquity H class UPLC system using isocratic mobile phase conditions at a flow rate of 0.3 mL/min and an Acquity UPLC BEH shield RP18 column maintained at 40°C. Quantification was performed on a photodiode array detector set at 277 nm and clozapine was used as internal standard (IS). OLZ, RIS, 9-OHRIS and IS retention times were found to be 0.9, 1.4, 1.8 and 3.1 min, respectively, and the total run time was 4 min. The method was validated for selectivity, specificity, recovery, linearity, accuracy, precision and sample stability. The calibration curve was linear over the concentration range 1-100 ng/mL for OLZ, RIS and 9-OHRIS. Intra- and inter-day precisions for OLZ, RIS and 9-OHRIS were found to be good with the coefficient of variation <6.96%, and the accuracy ranging from 97.55 to 105.41% in human plasma. The validated UPLC method was successfully applied to the pharmacokinetic study of RIS and 9-OHRIS in human plasma. Copyright © 2015 John Wiley & Sons, Ltd.
Ramanujam, N; Sivaselvakumar, M; Ramalingam, S
2017-11-01
A simple, sensitive and reproducible ultra-performance liquid chromatography (UPLC) method has been developed and validated for simultaneous estimation of polychlorinated biphenyl (PCB) 77 and PCB 180 in mouse plasma. The sample preparation was performed by simple liquid-liquid extraction technique. The analytes were chromatographed on a Waters Acquity H class UPLC system using isocratic mobile phase conditions at a flow rate of 0.3 mL/min and Acquity UPLC BEH shield RP 18 column maintained at 35°C. Quantification was performed on a photodiode array detector set at 215 nm and PCB 101 was used as internal standard (IS). PCB 77, PCB 180, and IS retention times were 2.6, 4.7 and 2.8 min, respectively, and the total run time was 6 min. The method was validated for specificity, selectivity, recovery, linearity, accuracy, precision and sample stability. The calibration curve was linear over the concentration range 10-3000 ng/mL for PCB 77 and PCB 180. Intra- and inter-day precisions for PCBs 77 and 180 were found to be good with CV <4.64%, and the accuracy ranged from 98.90 to 102.33% in mouse plasma. The validated UPLC method was successfully applied to the pharmacokinetic study of PCBs 77 and 180 in mouse plasma. Copyright © 2017 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Lee, Kyung Min; Tondiglia, Vincent P.; Bunning, Timothy J.; White, Timothy J.
2017-02-01
Recently, we reported direct current (DC) field controllable electro-optic (EO) responses of negative dielectric anisotropy polymer stabilized cholesteric liquid crystals (PSCLCs). A potential mechanism is that ions in the liquid crystal mixture are trapped in/on the polymer network during the fast photopolymerization process, and the movement of these ions under an applied DC field distorts the polymer network toward the negative electrode, inducing pitch variation through the cell thickness, i.e., pitch compression on the negative electrode side and pitch expansion on the positive electrode side. When the DC voltage is applied directly to a target value, the charged polymer network is deformed and the reflection band is tuned. Interestingly, the polymer network deforms further (red shift of the reflection band) with time under a constantly applied DC voltage, illustrating DC-field-induced time-dependent deformation of the polymer network (creep-like behavior). These time-dependent reflection band changes in PSCLCs are investigated by varying several factors, such as the type and concentration of photoinitiator, the liquid crystal monomer content, and the curing conditions (UV intensity and curing time). In addition, simple linear viscoelastic spring-dashpot models, such as the 2-parameter Kelvin and 3-parameter linear models, are used to investigate the time-dependent viscoelastic behavior of the polymer networks in PSCLCs.
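For reference, the creep response of the 2-parameter Kelvin element mentioned above has the standard textbook form below; adding a series spring gives the 3-parameter linear solid. These are generic forms stated only to make the spring-dashpot terminology concrete, not the parameter values fitted in this study.

```latex
% Creep under a constant stress \sigma_0 applied at t = 0
% 2-parameter Kelvin element (spring E in parallel with dashpot \eta):
\varepsilon(t) = \frac{\sigma_0}{E}\left(1 - e^{-t/\tau}\right), \qquad \tau = \frac{\eta}{E}
% 3-parameter linear model (spring E_1 in series with a Kelvin element E_2 \parallel \eta):
\varepsilon(t) = \sigma_0\left[\frac{1}{E_1} + \frac{1}{E_2}\left(1 - e^{-t/\tau}\right)\right], \qquad \tau = \frac{\eta}{E_2}
```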
NASA Astrophysics Data System (ADS)
Zheng, Mingwen; Li, Lixiang; Peng, Haipeng; Xiao, Jinghua; Yang, Yixian; Zhang, Yanping; Zhao, Hui
2018-06-01
This paper mainly studies the finite-time stability and synchronization problems of memristor-based fractional-order fuzzy cellular neural network (MFFCNN). Firstly, we discuss the existence and uniqueness of the Filippov solution of the MFFCNN according to the Banach fixed point theorem and give a sufficient condition for the existence and uniqueness of the solution. Secondly, a sufficient condition to ensure the finite-time stability of the MFFCNN is obtained based on the definition of finite-time stability of the MFFCNN and Gronwall-Bellman inequality. Thirdly, by designing a simple linear feedback controller, the finite-time synchronization criterion for drive-response MFFCNN systems is derived according to the definition of finite-time synchronization. These sufficient conditions are easy to verify. Finally, two examples are given to show the effectiveness of the proposed results.
Xiaoqiu Zuo; Urs Buehlmann; R. Edward Thomas
2004-01-01
Solving the least-cost lumber grade mix problem allows dimension mills to minimize the cost of dimension part production. This problem, due to its economic importance, has attracted much attention from researchers and industry in the past. Most solutions used linear programming models and assumed that a simple linear relationship existed between lumber grade mix and...
A neural network approach to job-shop scheduling.
Zhou, D N; Cherkassky, V; Baldwin, T R; Olson, D E
1991-01-01
A novel analog computational network is presented for solving NP-complete constraint satisfaction problems, i.e. job-shop scheduling. In contrast to most neural approaches to combinatorial optimization based on quadratic energy cost function, the authors propose to use linear cost functions. As a result, the network complexity (number of neurons and the number of resistive interconnections) grows only linearly with problem size, and large-scale implementations become possible. The proposed approach is related to the linear programming network described by D.W. Tank and J.J. Hopfield (1985), which also uses a linear cost function for a simple optimization problem. It is shown how to map a difficult constraint-satisfaction problem onto a simple neural net in which the number of neural processors equals the number of subjobs (operations) and the number of interconnections grows linearly with the total number of operations. Simulations show that the authors' approach produces better solutions than existing neural approaches to job-shop scheduling, i.e. the traveling salesman problem-type Hopfield approach and integer linear programming approach of J.P.S. Foo and Y. Takefuji (1988), in terms of the quality of the solution and the network complexity.
NASA Astrophysics Data System (ADS)
Siahaan, P.; Suryani, A.; Kaniawati, I.; Suhendi, E.; Samsudin, A.
2017-02-01
The purpose of this research is to characterize the development of students’ science process skills (SPS) on the linear motion concept by utilizing simple computer simulations. To simplify the learning process, the concept is divided into three sub-concepts: (1) the definition of motion, (2) uniform linear motion and (3) uniformly accelerated motion. This research was administered via a pre-experimental method with a one-group pretest-posttest design. The respondents were 23 seventh-grade students at a junior high school in Bandung City. The improvement in students’ science process skills is examined using normalized gain analysis of pretest and posttest scores for all sub-concepts. The results show that students’ science process skills improved by 47% (moderate) for observation, 43% (moderate) for summarizing, 70% (high) for prediction, 44% (moderate) for communication and 49% (moderate) for classification. These results indicate that utilizing simple computer simulations in physics learning can improve overall science process skills to a moderate level.
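The normalized gain percentages quoted above are conventionally computed as Hake's averaged normalized gain from the pretest and posttest scores; the formula and the usual interpretation thresholds (g < 0.3 low, 0.3 ≤ g < 0.7 moderate, g ≥ 0.7 high) are stated here as the standard convention and are an assumption about the exact calculation used in this study.

```latex
\langle g \rangle = \frac{\%\,\text{posttest} - \%\,\text{pretest}}{100\% - \%\,\text{pretest}}
```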
Direct localization of poles of a meromorphic function from measurements on an incomplete boundary
NASA Astrophysics Data System (ADS)
Nara, Takaaki; Ando, Shigeru
2010-01-01
This paper proposes an algebraic method to reconstruct the positions of multiple poles in a meromorphic function field from measurements on an arbitrary simple arc in it. A novel issue is the exactness of the algorithm depending on whether the arc is open or closed, and whether it encloses or does not enclose the poles. We first obtain a differential equation that can equivalently determine the meromorphic function field. From it, we derive linear equations that relate the elementary symmetric polynomials of the pole positions to weighted integrals of the field along the simple arc and end-point terms of the arc when it is an open one. Eliminating the end-point terms based on an appropriate choice of weighting functions and a combination of the linear equations, we obtain a simple system of linear equations for solving the elementary symmetric polynomials. We also show that our algorithm can be applied to a 2D electric impedance tomography problem. The effects of the proximity of the poles, the number of measurements and noise on the localization accuracy are numerically examined.
He, ZeFang
2014-01-01
An attitude control strategy based on Ziegler-Nichols rules for tuning PD (proportional-derivative) parameters of quadrotor helicopters is presented to solve the problem that the quadrotor tends to become unstable. This problem is caused by the narrow definition domain of the attitude angles of quadrotor helicopters. The proposed controller is nonlinear and consists of a linear part and a nonlinear part. The linear part is a PD controller with parameters tuned by Ziegler-Nichols rules and acts on the decoupled linear quadrotor system after feedback linearization; the nonlinear part is a feedback linearization term which converts the nonlinear system into a linear one. The simulation results show that the attitude controller proposed in this paper is highly robust, and its control effect is better than that of the other two nonlinear controllers used for comparison. The nonlinear parts of these two controllers are the same as that of the proposed attitude controller; their linear parts are a PID (proportional-integral-derivative) controller tuned by Ziegler-Nichols rules and a PD controller tuned by GA (genetic algorithms), respectively. Moreover, this attitude controller is simple and easy to implement. PMID:25614879
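A minimal sketch of the classical closed-loop Ziegler-Nichols rules referenced above, mapping an experimentally found ultimate gain Ku and oscillation period Tu to PD (and, for the comparison controller, PID) gains. This states only the textbook table, not the paper's quadrotor-specific values.

```python
def ziegler_nichols(Ku, Tu, controller="PD"):
    """Classic closed-loop Ziegler-Nichols tuning.
    Ku: ultimate (critical) proportional gain, Tu: oscillation period (s)."""
    if controller == "P":
        return {"Kp": 0.50 * Ku}
    if controller == "PI":
        return {"Kp": 0.45 * Ku, "Ti": Tu / 1.2}
    if controller == "PD":
        return {"Kp": 0.80 * Ku, "Td": Tu / 8.0}
    if controller == "PID":
        return {"Kp": 0.60 * Ku, "Ti": Tu / 2.0, "Td": Tu / 8.0}
    raise ValueError("unknown controller type")
```

For example, a loop that reaches sustained oscillation at Ku = 12 with Tu = 0.5 s would get Kp = 9.6 and Td = 0.0625 s under the PD rule (illustrative numbers only).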
NASA Technical Reports Server (NTRS)
Cheyney, H., III; Arking, A.
1976-01-01
The equations of radiative transfer in anisotropically scattering media are reformulated as linear operator equations in a single independent variable. The resulting equations are suitable for solution by a variety of standard mathematical techniques. The operators appearing in the resulting equations are in general nonsymmetric; however, it is shown that every bounded linear operator equation can be embedded in a symmetric linear operator equation and a variational solution can be obtained in a straightforward way. For purposes of demonstration, a Rayleigh-Ritz variational method is applied to three problems involving simple phase functions. It is to be noted that the variational technique demonstrated is of general applicability and permits simple solutions for a wide range of otherwise difficult mathematical problems in physics.
NASA Astrophysics Data System (ADS)
Kong, Xiangxi; Zhang, Xueliang; Chen, Xiaozhe; Wen, Bangchun; Wang, Bo
2016-05-01
In this paper, phase and speed synchronization control of four eccentric rotors (ERs) driven by induction motors in a linear vibratory feeder with unknown time-varying load torques is studied. First, the electromechanical coupling model of the linear vibratory feeder is established by combining the induction motor model with the dynamic model of the system, which is a typical underactuated model. According to the characteristics of the linear vibratory feeder, the complex control problem of the underactuated electromechanical coupling model is converted into phase and speed synchronization control of the four ERs. In order to keep the four ERs operating synchronously with zero phase differences, phase and speed synchronization controllers are designed using an adaptive sliding mode control (ASMC) algorithm via a modified master-slave structure. The stability of the controllers is proved by the Lyapunov stability theorem. The proposed controllers are verified by simulation in Matlab/Simulink and compared with the conventional sliding mode control (SMC) algorithm. The results show that the proposed controllers can reject the time-varying load torques effectively and that the four ERs can operate synchronously with zero phase differences. Moreover, the control performance is better than that of the conventional SMC algorithm and the chattering phenomenon is attenuated. Furthermore, the effects of reference speed and parametric perturbations are discussed to show the strong robustness of the proposed controllers. Finally, experiments on a simple vibratory test bench are performed with the proposed controllers and without control, respectively, to further validate the effectiveness of the proposed controllers.
NASA Astrophysics Data System (ADS)
Kuroda, Daniel; Fufler, Kristen
Lithium-ion batteries have become ubiquitous in the portable energy storage industry, but efficiency issues still remain. Currently, most technological and scientific efforts are focused on the electrodes, with little attention paid to the electrolyte. For example, simple fundamental questions about the lithium ion solvation shell composition in commercially used electrolytes have not been answered. Using a combination of linear and non-linear IR spectroscopies and theoretical calculations, we have carried out a thorough investigation of the solvation structure and dynamics of the lithium ion in various linear and cyclic carbonates at common battery electrolyte concentrations. Our studies show that carbonates coordinate the lithium ion tetrahedrally. They also reveal that linear and cyclic carbonates have contrasting dynamics, in which cyclic carbonates present the most ordered structure. Finally, our experiments demonstrate that simple structural modifications in the linear carbonates significantly impact the microscopic interactions of the system. The stark differences in the solvation structure and dynamics among different carbonates reveal previously unknown details about the molecular level picture of these systems.
Nishiura, Hiroshi
2011-02-16
Real-time forecasting of epidemics, especially forecasting based on a likelihood approach, is understudied. This study aimed to develop a simple method that can be used for real-time epidemic forecasting. A discrete time stochastic model, accounting for demographic stochasticity and conditional measurement, was developed and applied as a case study to the weekly incidence of pandemic influenza (H1N1-2009) in Japan. By imposing a branching process approximation and by assuming the linear growth of cases within each reporting interval, the epidemic curve is predicted using only two parameters. The uncertainty bounds of the forecasts are computed using chains of conditional offspring distributions. The quality of the forecasts made before the epidemic peak appears largely to depend on obtaining valid parameter estimates. The forecasts of both weekly incidence and final epidemic size greatly improved at and after the epidemic peak, with all the observed data points falling within the uncertainty bounds. Real-time forecasting using the discrete time stochastic model, with its simple computation of the uncertainty bounds, was successful. Because of the simple model structure, the proposed model has the potential to additionally account for various types of heterogeneity, time-dependent transmission dynamics and epidemiological details. The impact of such complexities on forecasting should be explored when the data become available through disease surveillance.
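A minimal sketch of the kind of branching-process projection described above: a weekly growth factor is estimated from the observed counts, incidence is projected forward with Poisson offspring, and uncertainty bounds are read off the simulated chains. The weekly-generation-time simplification and all parameter choices here are assumptions for illustration, not the paper's model.

```python
import numpy as np

def forecast(weekly_cases, horizon=4, n_sims=2000, rng=None):
    """Project weekly incidence forward with a simple Poisson branching process.
    The growth factor R is a crude mean ratio of successive weekly counts."""
    rng = rng or np.random.default_rng()
    cases = np.asarray(weekly_cases, dtype=float)
    R = np.mean(cases[1:] / cases[:-1])          # crude growth-factor estimate
    sims = np.empty((n_sims, horizon))
    for s in range(n_sims):
        current = cases[-1]
        for k in range(horizon):
            current = rng.poisson(R * current)   # offspring of this week's cases
            sims[s, k] = current
    median = np.median(sims, axis=0)
    lo, hi = np.percentile(sims, [2.5, 97.5], axis=0)
    return R, median, (lo, hi)
```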
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chung, Hyekyun
Purpose: Cone-beam CT (CBCT) is a widely used imaging modality for image-guided radiotherapy. Most vendors provide CBCT systems that are mounted on a linac gantry. Thus, CBCT can be used to estimate the actual 3-dimensional (3D) position of moving respiratory targets in the thoracic/abdominal region using 2D projection images. The authors have developed a method for estimating the 3D trajectory of respiratory-induced target motion from CBCT projection images using interdimensional correlation modeling. Methods: Because the superior–inferior (SI) motion of a target can be easily analyzed on projection images of a gantry-mounted CBCT system, the authors investigated the interdimensional correlation of the SI motion with left–right and anterior–posterior (AP) movements while the gantry is rotating. A simple linear model and a state-augmented model were implemented and applied to the interdimensional correlation analysis, and their performance was compared. The parameters of the interdimensional correlation models were determined by least-square estimation of the 2D error between the actual and estimated projected target position. The method was validated using 160 3D tumor trajectories from 46 thoracic/abdominal cancer patients obtained during CyberKnife treatment. The authors’ simulations assumed two application scenarios: (1) retrospective estimation for the purpose of moving tumor setup used just after volumetric matching with CBCT; and (2) on-the-fly estimation for the purpose of real-time target position estimation during gating or tracking delivery, either for full-rotation volumetric-modulated arc therapy (VMAT) in 60 s or a stationary six-field intensity-modulated radiation therapy (IMRT) with a beam delivery time of 20 s. Results: For the retrospective CBCT simulations, the mean 3D root-mean-square error (RMSE) for all 4893 trajectory segments was 0.41 mm (simple linear model) and 0.35 mm (state-augmented model). In the on-the-fly simulations, prior projections over more than 60° appear to be necessary for reliable estimations. The mean 3D RMSE during beam delivery, after the simple linear model had been established with prior 90° projection data, was 0.42 mm for VMAT and 0.45 mm for IMRT. Conclusions: The proposed method does not require any internal/external correlation or statistical modeling to estimate the target trajectory and can be used for both retrospective image-guided radiotherapy with CBCT projection images and real-time target position monitoring for respiratory gating or tracking.
Simple scale interpolator facilitates reading of graphs
NASA Technical Reports Server (NTRS)
Fazio, A.; Henry, B.; Hood, D.
1966-01-01
Set of cards with scale divisions and a scale finder permits accurate reading of the coordinates of points on linear or logarithmic graphs plotted on rectangular grids. The set contains 34 different scales for linear plotting and 28 single cycle scales for log plots.
Design of a two-level power system linear state estimator
NASA Astrophysics Data System (ADS)
Yang, Tao
The availability of synchro-phasor data has raised the possibility of a linear state estimator if the inputs are only complex currents and voltages and if there are enough such measurements to meet observability and redundancy requirements. Moreover, the new digital substations can perform some of the computation at the substation itself, resulting in a more accurate two-level state estimator. The objective of this research is to develop a two-level linear state estimator processing synchro-phasor data and estimating the states at both the substation level and the control center level. Both the mathematical algorithms, which differ from those in the present state estimation procedure, and the layered architecture of databases, communications and application programs that are required to support this two-level linear state estimator are described in this dissertation. In addition, as the availability of phasor measurements at substations will increase gradually, this research also describes how the state estimator can be enhanced to handle both the traditional state estimator and the proposed linear state estimator simultaneously. This provides a way to immediately utilize the benefits in those parts of the system where such phasor measurements become available and provides a pathway to transition to the smart grid of the future. The design procedure of the two-level state estimator is applied to two study systems. The first study system is the IEEE-14 bus system. The second one is the 179 bus Western Electricity Coordinating Council (WECC) system. The static database for the substations is constructed from the power flow data of these systems and the real-time measurement database is produced by a power system dynamic simulation tool (TSAT). Time-skew problems that may be caused by communication delays are also considered and simulated. We used the Network Simulator (NS) tool to simulate a simple communication system and analyse its time delay performance. These time delays were too small to affect the results, especially since the measurement data are time-stamped and the state estimator for these small systems could be run at subsecond frequency. Keywords: State Estimation, Synchro-Phasor Measurement, Distributed System, Energy Control Center, Substation, Time-skew
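A minimal sketch of a linear (weighted-least-squares) state estimator with complex phasor measurements follows: when the measurements z are related to the complex bus-voltage state x through a known matrix H, the estimate is the solution of the normal equations. The 3-bus measurement model and admittance below are made-up placeholders, not the IEEE-14 or WECC data from the dissertation.

```python
import numpy as np

def linear_state_estimate(H, z, W):
    """Weighted least-squares estimate x_hat = (H^H W H)^-1 H^H W z for complex phasors."""
    Hh = H.conj().T
    return np.linalg.solve(Hh @ W @ H, Hh @ W @ z)

# Hypothetical 3-bus example: states are complex bus voltages; measurements are
# three voltage phasors plus one current phasor expressed through an assumed admittance.
H = np.array([[1, 0, 0],
              [0, 1, 0],
              [0, 0, 1],
              [2 - 4j, -2 + 4j, 0]])          # current-measurement row (assumed admittance)
x_true = np.array([1.0 + 0.0j, 0.98 - 0.02j, 0.97 - 0.05j])
z = H @ x_true + 0.001 * (np.random.randn(4) + 1j * np.random.randn(4))
W = np.eye(4)                                  # equal measurement weights
print(linear_state_estimate(H, z, W))
```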
Viscous bursting of suspended films
NASA Astrophysics Data System (ADS)
Debrégeas, G.; Martin, P.; Brochard-Wyart, F.
1995-11-01
Soap films break up by an inertial process. We present here the first observations on freely suspended films of long-chain polymers, where viscous effects are dominant and no surfactant is present. A hole is nucleated at time 0 and grows up to a radius R(t) at time t. A surprising feature is that the liquid from the hole is not collected into a rim (as it is in soap films): The liquid spreads out without any significant change of the film thickness. The radius R(t) grows exponentially with time, R~exp(t/τ) [while in soap films R(t) is linear]. The rise time τ~ηe/2γ where η is viscosity, e is thickness (in the micron range), and γ is surface tension. A simple model is developed to explain this growth law.
NASA Astrophysics Data System (ADS)
Chen, Chun-Chi; Hwang, Chorng-Sii; Lin, You-Ting; Liu, Keng-Chih
2015-12-01
This paper presents an all-digital CMOS pulse-shrinking mechanism suitable for time-to-digital converters (TDCs). A simple MOS capacitor is used as a pulse-shrinking cell to perform time attenuation for time resolving. Compared with a previous pulse-shrinking mechanism, the proposed mechanism provides an appreciably improved temporal resolution with high linearity. Furthermore, the use of a binary-weighted pulse-shrinking unit with scaled MOS capacitors is proposed for achieving a programmable resolution. A TDC involving the proposed mechanism was fabricated using a TSMC (Taiwan Semiconductor Manufacturing Company) 0.18-μm CMOS process, and it has a small area of nearly 0.02 mm2 and an integral nonlinearity error of ±0.8 LSB for a resolution of 24 ps.
NASA Astrophysics Data System (ADS)
Takagi, Yoshihiro; Yamada, Yoshifumi; Ishikawa, Kiyoshi; Shimizu, Seiji; Sakabe, Shuji
2005-09-01
A simple method for single-shot sub-picosecond optical pulse diagnostics has been demonstrated by imaging the time evolution of the optical mixing onto the beam cross section of the sum-frequency wave when the interrogating pulse passes over the tested pulse in the mixing crystal as a result of the combined effect of group-velocity difference and walk-off beam propagation. A high linearity of the time-to-space projection is deduced from the process solely dependent upon the spatial uniformity of the refractive indices. A snap profile of the accidental coincidence between asynchronous pulses from separate mode-locked lasers has been detected, which demonstrates the single-shot ability.
Mapping axonal density and average diameter using non-monotonic time-dependent gradient-echo MRI
NASA Astrophysics Data System (ADS)
Nunes, Daniel; Cruz, Tomás L.; Jespersen, Sune N.; Shemesh, Noam
2017-04-01
White Matter (WM) microstructures, such as axonal density and average diameter, are crucial to the normal function of the Central Nervous System (CNS) as they are closely related with axonal conduction velocities. Conversely, disruptions of these microstructural features may result in severe neurological deficits, suggesting that their noninvasive mapping could be an important step towards diagnosing and following pathophysiology. Whereas diffusion based MRI methods have been proposed to map these features, they typically entail the application of powerful gradients, which are rarely available in the clinic, or extremely long acquisition schemes to extract information from parameter-intensive models. In this study, we suggest that simple and time-efficient multi-gradient-echo (MGE) MRI can be used to extract the axon density from susceptibility-driven non-monotonic decay in the time-dependent signal. We show, both theoretically and with simulations, that a non-monotonic signal decay will occur for multi-compartmental microstructures - such as axons and extra-axonal spaces, which were here used as a simple model for the microstructure - and that, for axons parallel to the main magnetic field, the axonal density can be extracted. We then experimentally demonstrate in ex-vivo rat spinal cords that its different tracts - characterized by different microstructures - can be clearly contrasted using the MGE-derived maps. When the quantitative results are compared against ground-truth histology, they reflect the axonal fraction (though with a bias, as evident from Bland-Altman analysis), and the extra-axonal fraction can also be estimated. The results suggest that our model is oversimplified, yet at the same time they evidence the potential and usefulness of the approach for mapping underlying microstructures using a simple and time-efficient MRI sequence. We further show that a simple general-linear-model can predict the average axonal diameters from the four model parameters, and map these average axonal diameters in the spinal cords. While clearly further modelling and theoretical developments are necessary, we conclude that salient WM microstructural features can be extracted from simple, SNR-efficient multi-gradient echo MRI, and that this paves the way towards easier estimation of WM microstructure in vivo.
Lahera, Guillermo; Ruiz, Alicia; Brañas, Antía; Vicens, María; Orozco, Arantxa
Previous studies have linked processing speed with social cognition and functioning of patients with schizophrenia. A discriminant analysis is needed to determine the different components of this neuropsychological construct. This paper analyzes the impact of processing speed, reaction time and sustained attention on social functioning. 98 outpatients between 18 and 65 with DSM-5 diagnosis of schizophrenia, with a period of 3 months of clinical stability, were recruited. Sociodemographic and clinical data were collected, and the following variables were measured: processing speed (Trail Making Test [TMT], symbol coding [BACS], verbal fluency), simple and elective reaction time, sustained attention, recognition of facial emotions and global functioning. Processing speed (measured only through the BACS), sustained attention (CPT) and elective reaction time (but not simple) were associated with functioning. Recognizing facial emotions (FEIT) correlated significantly with scores on measures of processing speed (BACS, Animals, TMT), sustained attention (CPT) and reaction time. The linear regression model showed a significant relationship between functioning, emotion recognition (P=.015) and processing speed (P=.029). A deficit in processing speed and facial emotion recognition are associated with worse global functioning in patients with schizophrenia. Copyright © 2017 SEP y SEPB. Publicado por Elsevier España, S.L.U. All rights reserved.
Nikitas, P; Pappa-Louisi, A
2005-09-01
The original work carried out by Freiling and Drake in gradient liquid chromatography is rewritten in the current language of reversed-phase liquid chromatography. This allows for the rigorous derivation of the fundamental equation for gradient elution and the development of two alternative expressions of this equation, one of which is free from the constraint that the holdup time must be constant. In addition, the above derivation results in a very simple numerical solution of the various equations of gradient elution under any gradient profile. The theory was tested using eight catechol-related solutes in mobile phases modified with methanol, acetonitrile, or 2-propanol. It was found to give a satisfactory prediction of solute gradient retention behavior even when a simple linear description was used for the isocratic elution of these solutes.
Well-posedness, linear perturbations, and mass conservation for the axisymmetric Einstein equations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dain, Sergio; Ortiz, Omar E.; Facultad de Matematica, Astronomia y Fisica, FaMAF, Universidad Nacional de Cordoba, Instituto de Fisica Enrique Gaviola, IFEG, CONICET, Ciudad Universitaria
2010-02-15
For axially symmetric solutions of Einstein equations there exists a gauge which has the remarkable property that the total mass can be written as a conserved, positive definite, integral on the spacelike slices. The mass integral provides a nonlinear control of the variables along the whole evolution. In this gauge, Einstein equations reduce to a coupled hyperbolic-elliptic system which is formally singular at the axis. As a first step in analyzing this system of equations we study linear perturbations on a flat background. We prove that the linear equations reduce to a very simple system of equations which provide, through the mass formula, useful insight into the structure of the full system. However, the singular behavior of the coefficients at the axis makes the study of this linear system difficult from the analytical point of view. In order to understand the behavior of the solutions, we study the numerical evolution of them. We provide strong numerical evidence that the system is well-posed and that its solutions have the expected behavior. Finally, this linear system allows us to formulate a model problem which is physically interesting in itself, since it is connected with the linear stability of black hole solutions in axial symmetry. This model can contribute significantly to solve the nonlinear problem and at the same time it appears to be tractable.
Swept Impact Seismic Technique (SIST)
Park, C.B.; Miller, R.D.; Steeples, D.W.; Black, R.A.
1996-01-01
A coded seismic technique is developed that can result in a higher signal-to-noise ratio than a conventional single-pulse method does. The technique is cost-effective and time-efficient and therefore well suited for shallow-reflection surveys where high resolution and cost-effectiveness are critical. A low-power impact source transmits a few to several hundred high-frequency broad-band seismic pulses during several seconds of recording time according to a deterministic coding scheme. The coding scheme consists of a time-encoded impact sequence in which the rate of impact (cycles/s) changes linearly with time providing a broad range of impact rates. Impact times used during the decoding process are recorded on one channel of the seismograph. The coding concept combines the vibroseis swept-frequency and the Mini-Sosie random impact concepts. The swept-frequency concept greatly improves the suppression of correlation noise with much fewer impacts than normally used in the Mini-Sosie technique. The impact concept makes the technique simple and efficient in generating high-resolution seismic data especially in the presence of noise. The transfer function of the impact sequence simulates a low-cut filter with the cutoff frequency the same as the lowest impact rate. This property can be used to attenuate low-frequency ground-roll noise without using an analog low-cut filter or a spatial source (or receiver) array as is necessary with a conventional single-pulse method. Because of the discontinuous coding scheme, the decoding process is accomplished by a "shift-and-stacking" method that is much simpler and quicker than cross-correlation. The simplicity of the coding allows the mechanical design of the source to remain simple. Several different types of mechanical systems could be adapted to generate a linear impact sweep. In addition, the simplicity of the coding also allows the technique to be used with conventional acquisition systems, with only minor modifications.
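To make the coding and decoding concrete, the sketch below generates a linearly swept sequence of impact times and decodes a synthetic coded record by the shift-and-stack method. The sweep limits, sampling interval, wavelet, and reflector are all made-up illustration values, not survey parameters from the paper.

```python
import numpy as np

def impact_times(duration, rate_start, rate_end):
    """Impact times for a rate (impacts/s) that ramps linearly from rate_start to rate_end."""
    times, t = [], 0.0
    while t < duration:
        times.append(t)
        rate = rate_start + (rate_end - rate_start) * t / duration
        t += 1.0 / rate
    return np.array(times)

def shift_and_stack(record, times, dt, out_len):
    """Decode a coded record by shifting it back by each recorded impact time and stacking."""
    out = np.zeros(out_len)
    for t in times:
        i = int(round(t / dt))
        seg = record[i:i + out_len]
        out[:seg.size] += seg
    return out / len(times)

# Synthetic test: one reflector, 1 ms sampling, impact rate swept from 5 to 15 impacts/s.
dt, out_len = 0.001, 200
times = impact_times(duration=5.0, rate_start=5.0, rate_end=15.0)
wavelet = np.hanning(9)
earth = np.zeros(out_len); earth[60] = 1.0
trace = np.convolve(earth, wavelet)[:out_len]
record = np.zeros(int(5.0 / dt) + out_len)
for t in times:
    i = int(round(t / dt))
    record[i:i + out_len] += trace            # superposed responses to each impact
decoded = shift_and_stack(record, times, dt, out_len)
print(np.argmax(decoded))  # peak should appear near the reflector (sample 60 plus half the wavelet length)
```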
Niioka, Takenori; Uno, Tsukasa; Yasui-Furukori, Norio; Shimizu, Mikiko; Sugawara, Kazunobu; Tateishi, Tomonori
2006-10-01
The purpose of this study is to evaluate whether a simple formula using limited blood samples can predict the area under the plasma rabeprazole concentration-time curve (AUC) in co-administration with CYP inhibitors. A randomized double-blind placebo-controlled crossover study design in three phases was conducted at intervals of 2 weeks. Twenty-one healthy Japanese volunteers, including three CYP2C19 genotype groups, took a single oral 20-mg dose of rabeprazole after three 6-day pretreatments, i.e., clarithromycin 800 mg/day, fluvoxamine 50 mg/day, and placebo. Prediction formulas of the AUC were derived from pharmacokinetics data of 21 subjects in three phases using multiple linear regression analysis. Ten blood samples were collected over 24 h to calculate AUC. Plasma concentrations of rabeprazole were measured by an HPLC-assay (l.l.q.=1 ng/ml). The AUC analysis was based on all the data sets (n=63). The linear regression using two points (C3 and C6) could predict AUC(0-infinity) precisely, irrespective of CYP2C19 genotypes and CYP inhibitors (AUC(0-infinity)=1.39×C3+7.17×C6+344.14, r²=0.825, p<0.001). The present study demonstrated that the AUC of rabeprazole can be estimated by the simple formula using two-point concentrations. This formula can predict the AUC more accurately than an estimate based on CYP2C19 genotype, without requiring genotype determination, even though the AUC differs significantly between CYP2C19 genotypes. Therefore, this prediction formula might be useful to evaluate whether CYP2C19 genotypes really reflect the curative effect of rabeprazole.
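The reported two-point regression is simple enough to apply directly; a small helper is sketched below. The example concentrations are invented for illustration and are not data from the study, and the units follow those used in the original regression.

```python
def predict_auc(c3, c6):
    """Two-point AUC prediction for rabeprazole from the study's regression:
    AUC(0-inf) = 1.39*C3 + 7.17*C6 + 344.14, with C3 and C6 the plasma
    concentrations 3 h and 6 h post-dose (units as in the original study)."""
    return 1.39 * c3 + 7.17 * c6 + 344.14

# Illustrative (made-up) concentrations, not data from the study:
print(predict_auc(c3=400.0, c6=150.0))
```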
A generic sun-tracking algorithm for on-axis solar collector in mobile platforms
NASA Astrophysics Data System (ADS)
Lai, An-Chow; Chong, Kok-Keong; Lim, Boon-Han; Ho, Ming-Cheng; Yap, See-Hao; Heng, Chun-Kit; Lee, Jer-Vui; King, Yeong-Jin
2015-04-01
This paper proposes a novel dynamic sun-tracking algorithm which allows accurate tracking of the sun for both non-concentrated and concentrated photovoltaic systems located on mobile platforms to maximize solar energy extraction. The proposed algorithm takes not only the date, time, and geographical information, but also the dynamic changes of coordinates of the mobile platforms into account to calculate the sun position angle relative to ideal azimuth-elevation axes in real time using general sun-tracking formulas derived by Chong and Wong. The algorithm acquires data from open-loop sensors, i.e. global position system (GPS) and digital compass, which are readily available in many off-the-shelf portable gadgets, such as smart phone, to instantly capture the dynamic changes of coordinates of mobile platforms. Our experiments found that a highly accurate GPS is not necessary as the coordinate changes of practical mobile platforms are not fast enough to produce significant differences in the calculation of the incident angle. On the contrary, it is critical to accurately identify the quadrant and angle where the mobile platforms are moving toward in real time, which can be resolved by using digital compass. In our implementation, a noise filtering mechanism is found necessary to remove unexpected spikes in the readings of the digital compass to ensure stability in motor actuations and effectiveness in continuous tracking. Filtering mechanisms being studied include simple moving average and linear regression; the results showed that a compound function of simple moving average and linear regression produces a better outcome. Meanwhile, we found that a sampling interval is useful to avoid excessive motor actuations and power consumption while not sacrificing the accuracy of sun-tracking.
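A minimal sketch of the compound filtering idea for the compass readings is given below: a simple moving average followed by a linear-regression fit over the most recent samples, with the regression value at the latest time taken as the filtered heading. The window lengths and readings are illustrative assumptions, not the paper's tuned values, and a real implementation would also need to handle heading wraparound at 0/360 degrees.

```python
import numpy as np

def compound_filter(headings, ma_window=5, reg_window=10):
    """Smooth noisy compass headings with a simple moving average, then fit a
    linear regression over the most recent smoothed samples and return the
    fitted value at the latest time as the filtered heading."""
    h = np.asarray(headings, dtype=float)
    kernel = np.ones(ma_window) / ma_window
    smoothed = np.convolve(h, kernel, mode="valid")     # moving average
    recent = smoothed[-reg_window:]                      # most recent smoothed samples
    t = np.arange(recent.size)
    slope, intercept = np.polyfit(t, recent, 1)          # linear regression over the window
    return slope * (recent.size - 1) + intercept
    # Note: no 0/360-degree wraparound handling in this simplified sketch.

# Hypothetical noisy heading readings (degrees) containing one spike:
readings = [90.2, 90.5, 90.1, 95.0, 90.3, 90.4, 90.2, 90.6, 90.3, 90.5,
            90.4, 90.2, 90.5, 90.3, 90.4]
print(compound_filter(readings))
```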
User's manual for interfacing a leading edge, vortex rollup program with two linear panel methods
NASA Technical Reports Server (NTRS)
Desilva, B. M. E.; Medan, R. T.
1979-01-01
Sufficient instructions are provided for interfacing the Mangler-Smith, leading edge vortex rollup program with a vortex lattice (POTFAN) method and an advanced higher order, singularity linear analysis for computing the vortex effects for simple canard wing combinations.
Finite Element Based Structural Damage Detection Using Artificial Boundary Conditions
2007-09-01
C. (2005). Elementary Linear Algebra. New York: John Wiley and Sons. Avitable, Peter (2001, January). Experimental Modal Analysis, A Simple Non... variables under consideration. Frequency sensitivities are the basis for a linear approximation to compute the change in the natural frequencies of a... THEORY: The general problem statement for a nonlinear constrained optimization problem is to minimize the objective function f(x) subject to constraints.
Brown, A M
2001-06-01
The objective of this present study was to introduce a simple, easily understood method for carrying out non-linear regression analysis based on user input functions. While it is relatively straightforward to fit data with simple functions such as linear or logarithmic functions, fitting data with more complicated non-linear functions is more difficult. Commercial specialist programmes are available that will carry out this analysis, but these programmes are expensive and are not intuitive to learn. An alternative method described here is to use the SOLVER function of the ubiquitous spreadsheet programme Microsoft Excel, which employs an iterative least squares fitting routine to produce the optimal goodness of fit between data and function. The intent of this paper is to lead the reader through an easily understood step-by-step guide to implementing this method, which can be applied to any function in the form y=f(x), and is well suited to fast, reliable analysis of data in all fields of biology.
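The same iterative least-squares idea that SOLVER implements is available in open tools; the sketch below uses scipy.optimize.curve_fit on a user-defined function. The exponential model and the synthetic data are purely illustrative and are not taken from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

# User-defined function y = f(x): here an illustrative exponential decay.
def model(x, a, k, c):
    return a * np.exp(-k * x) + c

# Made-up data for demonstration.
x = np.linspace(0, 10, 25)
y = model(x, 3.0, 0.6, 0.5) + np.random.default_rng(0).normal(0, 0.05, x.size)

# Iterative least-squares fit (the same goodness-of-fit criterion SOLVER minimizes).
popt, pcov = curve_fit(model, x, y, p0=[1.0, 1.0, 0.0])
print("fitted parameters:", popt)
print("sum of squared residuals:", np.sum((y - model(x, *popt)) ** 2))
```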
A study of the limitations of linear theory methods as applied to sonic boom calculations
NASA Technical Reports Server (NTRS)
Darden, Christine M.
1990-01-01
Current sonic boom minimization theories have been reviewed to emphasize the capabilities and flexibilities of the methods. Flexibility is important because it is necessary for the designer to meet optimized area constraints while reducing the impact on vehicle aerodynamic performance. Preliminary comparisons of sonic booms predicted for two Mach 3 concepts illustrate the benefits of shaping. Finally, for very simple bodies of revolution, sonic boom predictions were made using two methods - a modified linear theory method and a nonlinear method - for signature shapes which were both farfield N-waves and midfield waves. Preliminary analysis on these simple bodies verified that current modified linear theory prediction methods become inadequate for predicting midfield signatures for Mach numbers above 3. The importance of impulse in the sonic boom disturbance and the importance of three-dimensional effects, which could not be simulated with the bodies of revolution, will determine the validity of current modified linear theory methods in predicting midfield signatures at lower Mach numbers.
An Alternative Derivation of the Energy Levels of the "Particle on a Ring" System
NASA Astrophysics Data System (ADS)
Vincent, Alan
1996-10-01
All acceptable wave functions must be continuous mathematical functions. This criterion limits the acceptable functions for a particle in a linear 1-dimensional box to sine functions. If, however, the linear box is bent round into a ring, acceptable wave functions are those which are continuous at the 'join'. On this model some acceptable linear functions become unacceptable for the ring and some unacceptable cosine functions become acceptable. This approach can be used to produce a straightforward derivation of the energy levels and wave functions of the particle on a ring. These simple wave mechanical systems can be used as models of linear and cyclic delocalised systems such as conjugated hydrocarbons or the benzene ring. The promotion energy of an electron can then be used to calculate the wavelength of absorption of uv light. The simple model gives results of the correct order of magnitude and shows that, as the chain length increases, the uv maximum moves to longer wavelengths, as found experimentally.
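For reference, a standard statement of the ring result (consistent with, though not identical to, the article's continuity-based route) is given below; m is the angular quantum number, I the moment of inertia of the electron of mass m_e on a ring of radius r.

```latex
\psi_m(\phi) = \frac{1}{\sqrt{2\pi}}\, e^{\,i m \phi}, \qquad
\psi_m(\phi + 2\pi) = \psi_m(\phi) \;\Rightarrow\; m = 0,\, \pm 1,\, \pm 2, \ldots
\qquad
E_m = \frac{m^2 \hbar^2}{2 I}, \qquad I = m_e r^2 .
```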
NASA Astrophysics Data System (ADS)
Deymier, P. A.; Runge, K.
2018-03-01
A Green's function-based numerical method is developed to calculate the phase of scattered elastic waves in a harmonic model of diatomic molecules adsorbed on the (001) surface of a simple cubic crystal. The phase properties of scattered waves depend on the configuration of the molecules. The configurations of adsorbed molecules on the crystal surface such as parallel chain-like arrays coupled via kinks are used to demonstrate not only linear but also non-linear dependency of the phase on the number of kinks along the chains. Non-linear behavior arises for scattered waves with frequencies in the vicinity of a diatomic molecule resonance. In the non-linear regime, the variation in phase with the number of kinks is formulated mathematically as unitary matrix operations leading to an analogy between phase-based elastic unitary operations and quantum gates. The advantage of elastic based unitary operations is that they are easily realizable physically and measurable.
Regression analysis of sparse asynchronous longitudinal data.
Cao, Hongyuan; Zeng, Donglin; Fine, Jason P
2015-09-01
We consider estimation of regression models for sparse asynchronous longitudinal observations, where time-dependent responses and covariates are observed intermittently within subjects. Unlike with synchronous data, where the response and covariates are observed at the same time point, with asynchronous data, the observation times are mismatched. Simple kernel-weighted estimating equations are proposed for generalized linear models with either time invariant or time-dependent coefficients under smoothness assumptions for the covariate processes which are similar to those for synchronous data. For models with either time invariant or time-dependent coefficients, the estimators are consistent and asymptotically normal but converge at slower rates than those achieved with synchronous data. Simulation studies evidence that the methods perform well with realistic sample sizes and may be superior to a naive application of methods for synchronous data based on an ad hoc last value carried forward approach. The practical utility of the methods is illustrated on data from a study on human immunodeficiency virus.
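For a linear model with time-invariant coefficients, the kernel-weighted estimating equation amounts to weighted least squares over all response-covariate time pairs within a subject, weighted by a kernel of the time mismatch. The sketch below shows that simplified single-subject version with a Gaussian kernel; the data, bandwidth, and covariate are illustrative assumptions, not the paper's estimator or study data.

```python
import numpy as np

def kernel_weighted_ls(resp_times, y, cov_times, X, h):
    """Kernel-weighted least squares for asynchronous data (single subject shown):
    every response time t_j is paired with every covariate time s_k, weighted by
    a Gaussian kernel K_h(t_j - s_k)."""
    A = np.zeros((X.shape[1], X.shape[1]))
    b = np.zeros(X.shape[1])
    for j, t in enumerate(resp_times):
        w = np.exp(-0.5 * ((cov_times - t) / h) ** 2) / (h * np.sqrt(2 * np.pi))
        A += (X * w[:, None]).T @ X
        b += (X * w[:, None]).T @ np.full(X.shape[0], y[j])
    return np.linalg.solve(A, b)

# Hypothetical subject: 4 response observations, 6 covariate observations.
resp_times = np.array([0.2, 0.5, 0.8, 1.1])
y = np.array([1.0, 1.4, 1.9, 2.2])
cov_times = np.array([0.1, 0.3, 0.6, 0.7, 1.0, 1.2])
covariate = np.array([0.5, 0.8, 1.2, 1.3, 1.8, 2.0])
X = np.column_stack([np.ones(6), covariate])   # intercept + one time-dependent covariate
print(kernel_weighted_ls(resp_times, y, cov_times, X, h=0.2))
```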
A Practical Model for Forecasting New Freshman Enrollment during the Application Period.
ERIC Educational Resources Information Center
Paulsen, Michael B.
1989-01-01
A simple and effective model for forecasting freshman enrollment during the application period is presented step by step. The model requires minimal and readily available information, uses a simple linear regression analysis on a personal computer, and provides updated monthly forecasts. (MSE)
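A minimal version of the idea can be sketched as follows: regress final freshman enrollment on the cumulative applications received by a given date over a few prior years, then forecast the current year from its application count to date. The figures below are hypothetical placeholders, not data from the report.

```python
import numpy as np

# Hypothetical history: cumulative applications received by 1 March and the
# final fall freshman enrollment for five previous years.
apps_by_march = np.array([1200, 1350, 1100, 1500, 1420])
final_enrollment = np.array([480, 530, 450, 585, 560])

# Simple linear regression (ordinary least squares) of enrollment on applications.
slope, intercept = np.polyfit(apps_by_march, final_enrollment, 1)

# Updated forecast for the current year once this March's count is known.
current_apps = 1380
forecast = slope * current_apps + intercept
print(round(forecast))
```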
Hansen, Ulf-Peter; Rauh, Oliver; Schroeder, Indra
2016-01-01
The calculation of flux equations or current-voltage relationships in reaction kinetic models with a high number of states can be very cumbersome. Here, a recipe based on an arrow scheme is presented, which yields a straightforward access to the minimum form of the flux equations and the occupation probability of the involved states in cyclic and linear reaction schemes. This is extremely simple for cyclic schemes without branches. If branches are involved, the effort of setting up the equations is a little bit higher. However, also here a straightforward recipe making use of so-called reserve factors is provided for implementing the branches into the cyclic scheme, thus enabling also a simple treatment of such cases.
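The arrow-scheme recipe yields closed-form flux expressions; as a numerical cross-check of such results, the steady-state occupation probabilities and cycle flux of a simple three-state cycle can also be obtained by solving the master equation directly, as sketched below with made-up rate constants (this is not the arrow-scheme procedure itself).

```python
import numpy as np

# Three-state cycle 1 -> 2 -> 3 -> 1 with forward rates k and backward rates kr (made-up values).
k = np.array([5.0, 3.0, 8.0])     # k12, k23, k31
kr = np.array([1.0, 0.5, 2.0])    # k21, k32, k13

# Rate matrix of the master equation dp/dt = A p (columns sum to zero).
A = np.array([
    [-(k[0] + kr[2]), kr[0],           k[2]],
    [k[0],           -(k[1] + kr[0]),  kr[1]],
    [kr[2],           k[1],           -(k[2] + kr[1])],
])

# Steady state: A p = 0 together with sum(p) = 1.
M = np.vstack([A, np.ones(3)])
rhs = np.array([0.0, 0.0, 0.0, 1.0])
p, *_ = np.linalg.lstsq(M, rhs, rcond=None)

# Net cyclic flux through any edge, e.g. 1 -> 2.
flux = k[0] * p[0] - kr[0] * p[1]
print(p, flux)
```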
Finding all solutions of nonlinear equations using the dual simplex method
NASA Astrophysics Data System (ADS)
Yamamura, Kiyotaka; Fujioka, Tsuyoshi
2003-03-01
Recently, an efficient algorithm has been proposed for finding all solutions of systems of nonlinear equations using linear programming. This algorithm is based on a simple test (termed the LP test) for nonexistence of a solution to a system of nonlinear equations using the dual simplex method. In this letter, an improved version of the LP test algorithm is proposed. By numerical examples, it is shown that the proposed algorithm could find all solutions of a system of 300 nonlinear equations in practical computation time.
Thermodynamic metrics and optimal paths.
Sivak, David A; Crooks, Gavin E
2012-05-11
A fundamental problem in modern thermodynamics is how a molecular-scale machine performs useful work, while operating away from thermal equilibrium without excessive dissipation. To this end, we derive a friction tensor that induces a Riemannian manifold on the space of thermodynamic states. Within the linear-response regime, this metric structure controls the dissipation of finite-time transformations, and bestows optimal protocols with many useful properties. We discuss the connection to the existing thermodynamic length formalism, and demonstrate the utility of this metric by solving for optimal control parameter protocols in a simple nonequilibrium model.
Health monitoring system for transmission shafts based on adaptive parameter identification
NASA Astrophysics Data System (ADS)
Souflas, I.; Pezouvanis, A.; Ebrahimi, K. M.
2018-05-01
A health monitoring system for a transmission shaft is proposed. The solution is based on the real-time identification of the physical characteristics of the transmission shaft, i.e. stiffness and damping coefficients, by using a physically oriented model and linear recursive identification. The efficacy of the suggested condition monitoring system is demonstrated on a prototype transient engine testing facility equipped with a transmission shaft capable of varying its physical properties. Simulation studies reveal that coupling shaft faults can be detected and isolated using the proposed condition monitoring system. In addition, the performance of various recursive identification algorithms is addressed. The results of this work recommend that the health status of engine dynamometer shafts can be monitored using a simple lumped-parameter shaft model and a linear recursive identification algorithm, which makes the concept practically viable.
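A minimal sketch of linear recursive identification for such a lumped-parameter model is given below, using recursive least squares with a forgetting factor. The assumed regression form (torque as a linear function of twist angle and twist rate) and the numerical values are illustrative placeholders, not the paper's rig model.

```python
import numpy as np

def rls_update(theta, P, phi, y, lam=0.99):
    """One recursive-least-squares step with forgetting factor lam.
    theta: parameter estimate, P: covariance, phi: regressor vector, y: measurement."""
    phi = phi.reshape(-1, 1)
    K = P @ phi / (lam + phi.T @ P @ phi)
    theta = theta + (K * (y - phi.T @ theta)).ravel()
    P = (P - K @ phi.T @ P) / lam
    return theta, P

# Assumed lumped-parameter shaft model: torque = k * twist + c * twist_rate.
rng = np.random.default_rng(0)
k_true, c_true = 1200.0, 8.0
theta = np.zeros(2)
P = np.eye(2) * 1e6
for _ in range(500):
    twist, twist_rate = rng.normal(0, 0.01), rng.normal(0, 0.5)
    torque = k_true * twist + c_true * twist_rate + rng.normal(0, 0.1)
    theta, P = rls_update(theta, P, np.array([twist, twist_rate]), torque)
print("estimated [stiffness, damping]:", theta)
```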
Factorization-based texture segmentation
Yuan, Jiangye; Wang, Deliang; Cheriyadat, Anil M.
2015-06-17
This study introduces a factorization-based approach that efficiently segments textured images. We use local spectral histograms as features, and construct an M × N feature matrix using M-dimensional feature vectors in an N-pixel image. Based on the observation that each feature can be approximated by a linear combination of several representative features, we factor the feature matrix into two matrices - one consisting of the representative features and the other containing the weights of representative features at each pixel used for linear combination. The factorization method is based on singular value decomposition and nonnegative matrix factorization. The method uses local spectral histograms to discriminate region appearances in a computationally efficient way and at the same time accurately localizes region boundaries. Finally, the experiments conducted on public segmentation data sets show the promise of this simple yet powerful approach.
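A compact illustration of the factorization idea follows: factor the feature matrix into representative features and per-pixel weights with nonnegative matrix factorization, then label each pixel by its dominant weight. Here sklearn's NMF stands in for the paper's SVD-initialized procedure, and the two-texture data are synthetic.

```python
import numpy as np
from sklearn.decomposition import NMF

# Synthetic "image" with two textures: N pixels, M-dimensional local-histogram features.
rng = np.random.default_rng(0)
n_pixels, m_features = 400, 12
rep_a = rng.random(m_features)                 # representative feature of texture A
rep_b = rng.random(m_features)                 # representative feature of texture B
labels_true = np.repeat([0, 1], n_pixels // 2)
F = np.where(labels_true[:, None] == 0, rep_a, rep_b) + 0.02 * rng.random((n_pixels, m_features))

# Factor F^T (M x N) into W (M x r) of representative features and H (r x N) of weights.
nmf = NMF(n_components=2, init="nndsvd", max_iter=500)
W = nmf.fit_transform(F.T)                     # columns ~ representative features
H = nmf.components_                            # per-pixel combination weights

segmentation = np.argmax(H, axis=0)            # label each pixel by its dominant representative feature
acc = max((segmentation == labels_true).mean(), (segmentation != labels_true).mean())  # label order is arbitrary
print(acc)
```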
NASA Astrophysics Data System (ADS)
Shi, Jinfei; Zhu, Songqing; Chen, Ruwen
2017-12-01
An order selection method based on multiple stepwise regressions is proposed for the General Expression of Nonlinear Autoregressive (GNAR) model, which converts the model order problem into a variable-selection problem for a multiple linear regression equation. The partial autocorrelation function is adopted to define the linear terms in the GNAR model. The result is set as the initial model, and the nonlinear terms are then introduced gradually. Statistics are chosen to assess how both the newly introduced and the previously included variables improve the model characteristics, and these statistics are used to decide which model variables to retain or eliminate. The optimal model is then obtained through data-fitting effect measurement or significance tests. The simulation and classic time-series data experiments show that the proposed method is simple, reliable and applicable to practical engineering.
From linear to nonlinear control means: a practical progression.
Gao, Zhiqiang
2002-04-01
With the rapid advance of digital control hardware, it is time to take the simple but effective proportional-integral-derivative (PID) control technology to the next level of performance and robustness. For this purpose, a nonlinear PID and active disturbance rejection framework are introduced in this paper. It complements the existing theory in that (1) it actively and systematically explores the use of nonlinear control mechanisms for better performance, even for linear plants; (2) it represents a control strategy that is rather independent of mathematical models of the plants, thus achieving inherent robustness and reducing design complexity. Stability analysis, as well as software/hardware test results, are presented. It is evident that the proposed framework lends itself well in seeking innovative solutions to practical problems while maintaining the simplicity and the intuitiveness of the existing technology.
A thin-walled pressurized sphere exposed to external general corrosion and nonuniform heating
NASA Astrophysics Data System (ADS)
Sedova, Olga S.; Pronina, Yulia G.; Kuchin, Nikolai L.
2018-05-01
A thin-walled spherical shell subjected to simultaneous action of internal and external pressure, nonuniform heating and outside mechanochemical corrosion is considered. It is assumed that the shell is homogeneous, isotropic and linearly elastic. The rate of corrosion is linearly dependent on the equivalent stress, which is the sum of mechanical and temperature stress components. The paper presents a new analytical solution, which takes into account the effect of the internal and external pressure values themselves, not only their difference. At the same time, the new solution has a rather simple form as compared to the results based on the solution to the Lame problem for a thick-walled sphere under pressure. The solution obtained can serve as a benchmark for numerical analysis and for a qualitative forecast of durability of the vessel.
Can we detect a nonlinear response to temperature in European plant phenology?
NASA Astrophysics Data System (ADS)
Jochner, Susanne; Sparks, Tim H.; Laube, Julia; Menzel, Annette
2016-10-01
Over a large temperature range, the statistical association between spring phenology and temperature is often regarded and treated as a linear function. There are suggestions that a sigmoidal relationship with definite upper and lower limits to leaf unfolding and flowering onset dates might be more realistic. We utilised European plant phenological records provided by the European phenology database PEP725 and gridded monthly mean temperature data for 1951-2012 calculated from the ENSEMBLES data set E-OBS (version 7.0). We analysed 568,456 observations of ten spring flowering or leafing phenophases derived from 3657 stations in 22 European countries in order to detect possible nonlinear responses to temperature. Linear response rates averaged for all stations ranged between -7.7 (flowering of hazel) and -2.7 days °C⁻¹ (leaf unfolding of beech and oak). A lower sensitivity at the cooler end of the temperature range was detected for most phenophases. However, a similar lower sensitivity at the warmer end was not that evident. For only ~14 % of the station time series (where a comparison between linear and nonlinear model was possible), nonlinear models described the relationship significantly better than linear models. Although in most cases simple linear models might be still sufficient to predict future changes, this linear relationship between phenology and temperature might not be appropriate when incorporating phenological data of very cold (and possibly very warm) environments. For these cases, extrapolations on the basis of linear models would introduce uncertainty in expected ecosystem changes.
NASA Astrophysics Data System (ADS)
Lindstrom, Michael
2017-06-01
Fluid instabilities arise in a variety of contexts and are often unwanted results of engineering imperfections. In one particular model for a magnetized target fusion reactor, a pressure wave is propagated in a cylindrical annulus comprised of a dense fluid before impinging upon a plasma and imploding it. Part of the success of the apparatus is a function of how axially-symmetric the final pressure pulse is upon impacting the plasma. We study a simple model for the implosion of the system to study how imperfections in the pressure imparted on the outer circumference grow due to geometric focusing. Our methodology entails linearizing the compressible Euler equations for mass and momentum conservation about a cylindrically symmetric problem and analysing the perturbed profiles at different mode numbers. The linearized system gives rise to singular shocks and through analysing the perturbation profiles at various times, we infer that high mode numbers are dampened through the propagation. We also study the Linear Klein-Gordon equation in the context of stability of linear cylindrical wave formation whereby highly oscillatory, bounded behaviour is observed in a far field solution.
Linear positioning laser calibration setup of CNC machine tools
NASA Astrophysics Data System (ADS)
Sui, Xiulin; Yang, Congjing
2002-10-01
The linear positioning laser calibration setup of CNC machine tools is capable of executing machine tool laser calibration and backlash compensation. Using this setup, hole locations on CNC machine tools will be correct and machine tool geometry will be evaluated and adjusted. Machine tool laser calibration and backlash compensation is a simple and straightforward process. First, the setup 'finds' the stroke limits of the axis, and the laser head is then brought into correct alignment. Second, the machine axis is moved to the other extreme and the laser head is aligned using rotation and elevation adjustments. Finally, the machine is moved to the start position and final alignment is verified. The stroke of the machine and the machine compensation interval dictate the amount of data required for each axis. These factors determine the amount of time required for a thorough compensation of the linear positioning accuracy. The Laser Calibrator System monitors the material temperature and the air density; this takes into consideration machine thermal growth and laser beam frequency. This linear positioning laser calibration setup can be used on CNC machine tools, CNC lathes, horizontal centers and vertical machining centers.
A general theory of linear cosmological perturbations: scalar-tensor and vector-tensor theories
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lagos, Macarena; Baker, Tessa; Ferreira, Pedro G.
We present a method for parametrizing linear cosmological perturbations of theories of gravity, around homogeneous and isotropic backgrounds. The method is sufficiently general and systematic that it can be applied to theories with any degrees of freedom (DoFs) and arbitrary gauge symmetries. In this paper, we focus on scalar-tensor and vector-tensor theories, invariant under linear coordinate transformations. In the case of scalar-tensor theories, we use our framework to recover the simple parametrizations of linearized Horndeski and ''Beyond Horndeski'' theories, and also find higher-derivative corrections. In the case of vector-tensor theories, we first construct the most general quadratic action for perturbations that leads to second-order equations of motion, which propagates two scalar DoFs. Then we specialize to the case in which the vector field is time-like (à la Einstein-Aether gravity), where the theory only propagates one scalar DoF. As a result, we identify the complete forms of the quadratic actions for perturbations, and the number of free parameters that need to be defined, to cosmologically characterize these two broad classes of theories.
The Elementary Operations of Human Vision Are Not Reducible to Template-Matching
Neri, Peter
2015-01-01
It is generally acknowledged that biological vision presents nonlinear characteristics, yet linear filtering accounts of visual processing are ubiquitous. The template-matching operation implemented by the linear-nonlinear cascade (linear filter followed by static nonlinearity) is the most widely adopted computational tool in systems neuroscience. This simple model achieves remarkable explanatory power while retaining analytical tractability, potentially extending its reach to a wide range of systems and levels in sensory processing. The extent of its applicability to human behaviour, however, remains unclear. Because sensory stimuli possess multiple attributes (e.g. position, orientation, size), the issue of applicability may be asked by considering each attribute one at a time in relation to a family of linear-nonlinear models, or by considering all attributes collectively in relation to a specified implementation of the linear-nonlinear cascade. We demonstrate that human visual processing can operate under conditions that are indistinguishable from linear-nonlinear transduction with respect to substantially different stimulus attributes of a uniquely specified target signal with associated behavioural task. However, no specific implementation of a linear-nonlinear cascade is able to account for the entire collection of results across attributes; a satisfactory account at this level requires the introduction of a small gain-control circuit, resulting in a model that no longer belongs to the linear-nonlinear family. Our results inform and constrain efforts at obtaining and interpreting comprehensive characterizations of the human sensory process by demonstrating its inescapably nonlinear nature, even under conditions that have been painstakingly fine-tuned to facilitate template-matching behaviour and to produce results that, at some level of inspection, do conform to linear filtering predictions. They also suggest that compliance with linear transduction may be the targeted outcome of carefully crafted nonlinear circuits, rather than default behaviour exhibited by basic components. PMID:26556758
A simple theory of motor protein kinetics and energetics. II.
Qian, H
2000-01-10
A three-state stochastic model of motor protein [Qian, Biophys. Chem. 67 (1997) pp. 263-267] is further developed to illustrate the relationship between the external load on an individual motor protein in aqueous solution with various ATP concentrations and its steady-state velocity. A wide variety of dynamic motor behaviors is obtained from this simple model. For the particular case of free-load translocation being the most unfavorable step within the hydrolysis cycle, the load-velocity curve is quasi-linear, V/Vmax = (cF/Fmax-c)/(1-c), in contrast to the hyperbolic relationship proposed by A.V. Hill for macroscopic muscle. Significant deviation from linearity is expected when the velocity is less than 10% of its maximal (free-load) value--a situation under which the processivity of motor diminishes and experimental observations are less certain. We then investigate the dependence of the load-velocity curve on ATP (ADP) concentration. It is shown that the free load Vmax exhibits a Michaelis-Menten like behavior, and the isometric Fmax increases linearly with ln([ATP]/[ADP]). However, the quasi-linear region is independent of the ATP concentration, yielding an apparently ATP-independent maximal force below the true isometric force. Finally, the heat production as a function of ATP concentration and external load is calculated. In simple terms and solved with elementary algebra, the present model provides an integrated picture of biochemical kinetics and mechanical energetics of motor proteins.
NASA Astrophysics Data System (ADS)
Ladiges, Daniel R.; Sader, John E.
2018-05-01
Nanomechanical resonators and sensors, operated in ambient conditions, often generate low-Mach-number oscillating rarefied gas flows. Cercignani [C. Cercignani, J. Stat. Phys. 1, 297 (1969), 10.1007/BF01007482] proposed a variational principle for the linearized Boltzmann equation, which can be used to derive approximate analytical solutions of steady (time-independent) flows. Here we extend and generalize this principle to unsteady oscillatory rarefied flows and thus accommodate resonating nanomechanical devices. This includes a mathematical approach that facilitates its general use and allows for systematic improvements in accuracy. This formulation is demonstrated for two canonical flow problems: oscillatory Couette flow and Stokes' second problem. Approximate analytical formulas giving the bulk velocity and shear stress, valid for arbitrary oscillation frequency, are obtained for Couette flow. For Stokes' second problem, a simple system of ordinary differential equations is derived which may be solved to obtain the desired flow fields. Using this framework, a simple and accurate formula is provided for the shear stress at the oscillating boundary, again for arbitrary frequency, which may prove useful in application. These solutions are easily implemented on any symbolic or numerical package, such as Mathematica or matlab, facilitating the characterization of flows produced by nanomechanical devices and providing insight into the underlying flow physics.
Bhatti, M M; Hanson, G D; Schultz, L
1998-03-01
The Bioanalytical Chemistry Department at the Madison facility of Covance Laboratories has developed and validated a simple and sensitive method for the simultaneous determination of phenytoin (PHT), carbamazepine (CBZ) and 10,11-carbamazepine epoxide (CBZ-E) in human plasma by high-performance liquid chromatography with 10,11-dihydrocarbamazepine as the internal standard. Acetonitrile was added to plasma samples containing PHT, CBZ and CBZ-E to precipitate the plasma proteins. After centrifugation, the acetonitrile supernatant was transferred to a clean tube and evaporated under N2. The dried sample extract was reconstituted in 0.4 ml of mobile phase and injected for analysis by high-performance liquid chromatography. Separation was achieved on a Spherisorb ODS2 analytical column with a mobile phase of 18:18:70 acetonitrile:methanol:potassium phosphate buffer. Detection was at 210 nm using an ultraviolet detector. The mean retention times of CBZ-E, PHT and CBZ were 5.8, 9.9 and 11.8 min, respectively. Peak height ratios were fit to a least squares linear regression algorithm with a 1/(concentration)² weighting. The method produces acceptable linearity, precision and accuracy to a minimum concentration of 0.050 micrograms ml⁻¹ in human plasma. It is also simple and convenient, with no observable matrix interferences.
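A small sketch of a 1/(concentration)² weighted linear calibration of peak-height ratio versus concentration, with back-calculation of an unknown sample, is shown below. The calibration points and the unknown ratio are invented illustration values, not data from the assay.

```python
import numpy as np

# Hypothetical calibration standards: concentration (ug/ml) vs peak-height ratio.
conc = np.array([0.05, 0.1, 0.5, 1.0, 5.0, 10.0, 20.0])
ratio = np.array([0.012, 0.024, 0.118, 0.242, 1.19, 2.41, 4.78])

# Weighted least-squares line ratio = a*conc + b with 1/conc**2 weighting of squared residuals.
# np.polyfit applies its weights to the residuals before squaring, so pass sqrt(1/conc**2) = 1/conc.
a, b = np.polyfit(conc, ratio, 1, w=1.0 / conc)

# Back-calculate an unknown sample from its measured peak-height ratio.
unknown_ratio = 0.95
print("estimated concentration:", (unknown_ratio - b) / a)
```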
Slackline dynamics and the Helmholtz-Duffing oscillator
NASA Astrophysics Data System (ADS)
Athanasiadis, Panos J.
2018-01-01
Slacklining is a new, rapidly expanding sport, and understanding its physics is paramount for maximizing fun and safety. Yet, compared to other sports, very little has been published so far on slackline dynamics. The equations of motion describing a slackline are fundamentally nonlinear, and assuming linear elasticity, they lead to a form of the Duffing equation. Following this approach, characteristic examples of slackline motion are simulated, including trickline bouncing, leash falls and longline surfing. The time-dependent solutions of the differential equations describing the system are acquired by numerical integration. A simple form of energy dissipation (linear drag) is added in some cases. It is recognized in this study that geometric nonlinearity is a fundamental aspect characterizing the dynamics of slacklines. Sports, and particularly slackline, is an excellent way of engaging young people with physics. A slackline is a simple yet insightful example of a nonlinear oscillator. It is very easy to model in the laboratory, as well as to rig and try on a university campus. For instructive purposes, its behaviour can be explored by numerically integrating the respective equations of motion. A form of the Duffing equation emerges naturally in the analysis and provides a powerful introduction to nonlinear dynamics. The material is suitable for graduate students and undergraduates with a background in classical mechanics and differential equations.
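For instructive purposes of the kind described above, a minimal numerical integration of a damped, driven Duffing-type oscillator is sketched below; the coefficients are generic placeholders rather than the slackline-specific form derived in the article.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Generic damped, driven Duffing oscillator: x'' + delta*x' + alpha*x + beta*x**3 = gamma*cos(omega*t).
delta, alpha, beta, gamma, omega = 0.2, 1.0, 1.0, 0.3, 1.2

def duffing(t, state):
    x, v = state
    return [v, -delta * v - alpha * x - beta * x ** 3 + gamma * np.cos(omega * t)]

sol = solve_ivp(duffing, (0.0, 100.0), [0.5, 0.0], max_step=0.01)
print(sol.y[0][-5:])   # displacement samples near the end of the run
```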
Amplitude Frequency Response Measurement: A Simple Technique
ERIC Educational Resources Information Center
Satish, L.; Vora, S. C.
2010-01-01
A simple method is described to combine a modern function generator and a digital oscilloscope to configure a setup that can directly measure the amplitude frequency response of a system. This is achieved by synchronously triggering both instruments, with the function generator operated in the "Linear-Sweep" frequency mode, while the oscilloscope…
Simple reaction time to the onset of time-varying sounds.
Schlittenlacher, Josef; Ellermeier, Wolfgang
2015-10-01
Although auditory simple reaction time (RT) is usually defined as the time elapsing between the onset of a stimulus and a recorded reaction, a sound cannot be specified by a single point in time. Therefore, the present work investigates how the period of time immediately after onset affects RT. By varying the stimulus duration between 10 and 500 msec, this critical duration was determined to fall between 32 and 40 milliseconds for a 1-kHz pure tone at 70 dB SPL. In a second experiment, the role of the buildup was further investigated by varying the rise time and its shape. The increment in RT for extending the rise time by a factor of ten was about 7 to 8 msec. There was no statistically significant difference in RT between a Gaussian and linear rise shape. A third experiment varied the modulation frequency and point of onset of amplitude-modulated tones, producing onsets at different initial levels with differently rapid increase or decrease immediately afterwards. The results of all three experiments were explained very well by a straightforward extension of the parallel grains model (Miller and Ulrich, Cogn. Psychol. 46, 101-151, 2003), a probabilistic race model employing many parallel channels. The extension of the model to time-varying sounds made the activation of such a grain depend on intensity as a function of time rather than a constant level.
Optimal control of LQR for discrete time-varying systems with input delays
NASA Astrophysics Data System (ADS)
Yin, Yue-Zhu; Yang, Zhong-Lian; Yin, Zhi-Xiang; Xu, Feng
2018-04-01
In this work, we consider the optimal control problem of linear quadratic regulation for discrete time-variant systems with single input and multiple input delays. An innovative and simple method to derive the optimal controller is given. The studied problem is first equivalently converted into a problem subject to a constraint condition. Finally, with the established duality, the problem is transformed into a static mathematical optimisation problem without input delays. The optimal control input solution that minimises the performance index function is derived by solving this optimisation problem with two methods. A numerical simulation example is carried out and its results show that our two approaches are both feasible and very effective.
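For orientation, the delay-free finite-horizon discrete-time LQR baseline that the converted problem ultimately resembles can be solved by the standard backward Riccati recursion, as sketched below; the system matrices are arbitrary placeholders and the code does not implement the paper's delay-handling duality argument.

```python
import numpy as np

def lqr_gains(A, B, Q, R, Qf, N):
    """Finite-horizon discrete-time LQR: backward Riccati recursion returning
    time-varying feedback gains K_t such that u_t = -K_t x_t."""
    P = Qf
    gains = []
    for _ in range(N):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
        gains.append(K)
    return gains[::-1]   # gains ordered from t = 0 to N-1

# Placeholder time-invariant double-integrator-like system (the paper treats
# time-varying systems with input delays).
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
Q = np.eye(2); R = np.array([[0.5]]); Qf = 10 * np.eye(2)
for K in lqr_gains(A, B, Q, R, Qf, N=3):
    print(K)
```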
Time-evolving bubbles in two-dimensional stokes flow
NASA Technical Reports Server (NTRS)
Tanveer, Saleh; Vasconcelos, Giovani L.
1994-01-01
A general class of exact solutions is presented for a time evolving bubble in a two-dimensional slow viscous flow in the presence of surface tension. These solutions can describe a bubble in a linear shear flow as well as an expanding or contracting bubble in an otherwise quiescent flow. In the case of expanding bubbles, the solutions have a simple behavior in the sense that for essentially arbitrary initial shapes the bubble will asymptote an expanding circle. Contracting bubbles, on the other hand, can develop narrow structures ('near-cusps') on the interface and may undergo 'break up' before all the bubble-fluid is completely removed. The mathematical structure underlying the existence of these exact solutions is also investigated.
The development of a Kalman filter clock predictor
NASA Technical Reports Server (NTRS)
Davis, John A.; Greenhall, Charles A.; Boudjemaa, Redoane
2005-01-01
A Kalman filter based clock predictor is developed, and its performance evaluated using both simulated and real data. The clock predictor is shown to possess a near-optimal Prediction Error Variance (PEV) when the underlying noise consists of one of the power law noise processes commonly encountered in time and frequency measurements. The predictor's performance in the presence of multiple noise processes is also examined. The relationship between the PEV obtained in the presence of multiple noise processes and those obtained for the individual component noise processes is examined. Comparisons are made with a simple linear clock predictor. The clock predictor is used to predict future values of the time offset between pairs of NPL's active hydrogen masers.
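A minimal sketch of Kalman-filter-based clock prediction is shown below, using a two-state (phase and frequency offset) clock model with a simplified diagonal process-noise matrix; the noise intensities, sampling interval, and synthetic offset series are placeholders, and the study's predictor additionally handles several power-law noise types.

```python
import numpy as np

def kalman_clock_predict(offsets, tau, q_phase, q_freq, r_meas, horizon_steps):
    """Filter time-offset measurements with a two-state clock model
    (phase x and frequency y, x_{k+1} = x_k + tau*y_k) and predict ahead."""
    F = np.array([[1.0, tau], [0.0, 1.0]])
    Q = np.array([[q_phase * tau, 0.0], [0.0, q_freq * tau]])   # simplified process noise
    H = np.array([[1.0, 0.0]])
    x = np.zeros(2); P = np.eye(2)
    for z in offsets:
        x = F @ x; P = F @ P @ F.T + Q                          # predict
        S = H @ P @ H.T + r_meas                                # update
        K = (P @ H.T) / S
        x = x + (K * (z - H @ x)).ravel()
        P = (np.eye(2) - K @ H) @ P
    for _ in range(horizon_steps):                              # pure prediction beyond last measurement
        x = F @ x; P = F @ P @ F.T + Q
    return x[0], P[0, 0]    # predicted offset and its variance

# Hypothetical offset series (seconds) sampled every tau = 3600 s.
offsets = 1e-9 * np.cumsum(np.random.default_rng(2).normal(1.0, 0.3, 48))
print(kalman_clock_predict(offsets, tau=3600.0, q_phase=1e-22, q_freq=1e-28,
                           r_meas=1e-18, horizon_steps=24))
```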
On the analytic lunar and solar perturbations of a near earth satellite
NASA Technical Reports Server (NTRS)
Estes, R. H.
1972-01-01
The disturbing function of the moon (sun) is expanded as a sum of products of two harmonic functions, one depending on the position of the satellite and the other on the position of the moon (sun). The harmonic functions depending on the position of the perturbing body are developed into trigonometric series with the ecliptic elements l, l', F, D, and Gamma of the lunar theory which are nearly linear with respect to time. Perturbations of the elements are in the form of trigonometric series with the ecliptic lunar elements and the equatorial elements omega and Omega of the satellite, so that analytic integration is simple and the results are accurate over a long period of time.
Exact solution of a ratchet with switching sawtooth potential
NASA Astrophysics Data System (ADS)
Saakian, David B.; Klümper, Andreas
2018-01-01
We consider the flashing potential ratchet model with general asymmetric potential. Using Bloch functions, we derive equations which allow for the calculation of both the ratchet's flux and higher moments of distribution for rather general potentials. We indicate how to derive the optimal transition rates for maximal velocity of the ratchet. We calculate explicitly the exact velocity of a ratchet with simple sawtooth potential from the solution of a system of 8 linear algebraic equations. Using Bloch functions, we derive the equations for the ratchet with potentials changing periodically with time. We also consider the case of the ratchet with evolution with two different potentials acting for some random periods of time.
A simple and rapid microplate assay for glycoprotein-processing glycosidases
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kang, M.S.; Zwolshen, J.H.; Harry, B.S.
1989-08-15
A simple and convenient microplate assay for glycosidases involved in glycoprotein-processing reactions is described. The assay is based on specific binding of high-mannose-type oligosaccharide substrates to concanavalin A-Sepharose, while monosaccharides liberated by enzymatic hydrolysis do not bind to concanavalin A-Sepharose. By the use of radiolabeled substrates ((3H)glucose for glucosidases and (3H)mannose for mannosidases), the radioactivity in the liberated monosaccharides can be determined as a measure of the enzymatic activity. This principle was employed in previously reported glycosidase assays, in which the substrate was separated from the product by concanavalin A-Sepharose column chromatography. That procedure is handicapped by the fact that it cannot be used for a large number of samples and is time consuming. We have simplified this procedure and adapted it to the use of a microplate (96-well plate), which allows a large number of samples to be processed in a short time. In this report we show that the assay is comparable to the column assay previously reported. It is linear with time and enzyme concentration and shows expected kinetics with castanospermine, a known inhibitor of alpha-glucosidase I.
Unequal-Arm Interferometry and Ranging in Space
NASA Technical Reports Server (NTRS)
Tinto, Massimo
2005-01-01
Space-borne interferometric gravitational wave detectors, sensitive in the low-frequency (millihertz) band, will fly in the next decade. In these detectors the spacecraft-to-spacecraft light-travel-times will necessarily be unequal, time-varying, and (due to aberration) have different time delays on up- and down-links. By using knowledge of the inter-spacecraft light-travel-times and their time evolution it is possible to cancel in post-processing the otherwise dominant laser phase noise and obtain a variety of interferometric data combinations sensitive to gravitational radiation. This technique, which has been named Time-Delay Interferometry (TDI), can be implemented with constellations of three or more formation-flying spacecraft that coherently track each other. As an example application we consider the Laser Interferometer Space Antenna (LISA) mission and show that TDI combinations can be synthesized by properly time-shifting and linearly combining the phase measurements performed on board the three spacecraft. Since TDI exactly suppresses the laser noises when the delays coincide with the light-travel-times, we then show that TDI can also be used for estimating the time-delays needed for its implementation. This is done by performing a post-processing non-linear minimization procedure, which provides an effective, powerful, and simple way for making measurements of the inter-spacecraft light-travel-times. This processing technique, named Time-Delay Interferometric Ranging (TDIR), is highly accurate in estimating the time-delays and allows TDI to be successfully implemented without the need of a dedicated ranging subsystem.
Selvaraj, P; Sakthivel, R; Kwon, O M
2018-06-07
This paper addresses the problem of finite-time synchronization of stochastic coupled neural networks (SCNNs) subject to Markovian switching, mixed time delay, and actuator saturation. In addition, coupling strengths of the SCNNs are characterized by mutually independent random variables. By utilizing a simple linear transformation, the problem of stochastic finite-time synchronization of SCNNs is converted into a mean-square finite-time stabilization problem of an error system. By choosing a suitable mode-dependent switched Lyapunov-Krasovskii functional, a new set of sufficient conditions is derived to guarantee the finite-time stability of the error system. Subsequently, with the help of an anti-windup control scheme, the actuator saturation risks can be mitigated. Moreover, the derived conditions help to optimize the estimation of the domain of attraction by enlarging the contractively invariant set. Furthermore, simulations are conducted to exhibit the efficiency of the proposed control scheme. Copyright © 2018 Elsevier Ltd. All rights reserved.
Xiong, Shan; Li, Jinglai; Zhu, Xiuqing; Wang, Xiaoying; Lü, Guiyuan; Zhang, Zhenqing
2014-03-01
A sensitive, simple and specific high performance liquid chromatography-electrospray ionization tandem mass spectrometry (LC-MS/MS) method was developed for the determination of morroniside in the plasma of beagles administered intragastric (ig) doses of morroniside. The method employed paeoniflorin as the internal standard, and samples were extracted by simple protein precipitation. The separation was achieved using an Inertsil ODS-SP column (50 mm x 2.1 mm, 5 microm) with mobile phases of 1 mmol/L sodium formate aqueous solution and acetonitrile (gradient elution) at a flow rate of 0.4 mL/min. The detection was accomplished by a mass spectrometer using multiple reaction monitoring (MRM) in positive mode. Pharmacokinetic parameters were fitted by the software DAS 2.0. The methodological study showed a good linear relationship over 2-5000 microg/L (r = 0.9966) with a limit of quantification of 2 microg/L. The precision, accuracy, mean recoveries and matrix effects satisfied the requirements for biological sample measurement. The method described above was successfully applied to the pharmacokinetic study of morroniside in the beagle plasma samples. The areas under the plasma concentration-time curves (AUC(0-infinity)) of morroniside after single ig administration of doses of 5, 15 and 45 mg/kg were (1631.20 +/- 238.50), (3984.05 +/- 750.38) and (10397.64 +/- 3156.34) microg/L x h. The relationship between dose and AUC showed good linearity. The pharmacokinetic property of morroniside was proposed to be linear pharmacokinetics.
Seasonal ENSO forecasting: Where does a simple model stand amongst other operational ENSO models?
NASA Astrophysics Data System (ADS)
Halide, Halmar
2017-01-01
We apply a simple linear multiple regression model called IndOzy for predicting ENSO at up to 7 seasonal lead times. The model uses five predictors, the past seasonal Niño 3.4 ENSO indices derived from chaos theory, and is rolling-validated to give a one-step-ahead forecast. The model skill was evaluated against data from the May-June-July (MJJ) 2003 season to November-December-January (NDJ) 2015/2016. Three skill measures were used for forecast verification: Pearson correlation, RMSE, and Euclidean distance. The skill of this simple model was then compared to those of the combined statistical and dynamical models compiled at the IRI (International Research Institute) website. It was found that the simple model was capable of producing a useful ENSO prediction only up to 3 seasonal leads, while the IRI statistical and dynamical model skills remained useful up to 4 and 6 seasonal leads, respectively. Even with its short-range seasonal prediction skill, however, the simple model still has the potential to give ENSO-derived tailored products such as probabilistic measures of precipitation and air temperature. Both meteorological conditions affect the presence of wild-land fire hot-spots in Sumatera and Kalimantan. It is suggested that to improve its long-range skill, the simple IndOzy model needs to incorporate a nonlinear model such as an artificial neural network technique.
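The abstract describes IndOzy as a multiple linear regression on five past seasonal Niño 3.4 values. A minimal sketch of that kind of lagged-predictor regression is shown below; the exact lag structure, lead handling and validation scheme of IndOzy are not reproduced, so treat the details as assumptions.

```python
import numpy as np

def fit_lagged_regression(nino34, n_lags=5, lead=1):
    """Fit y(t+lead) = b0 + sum_k b_k * y(t-k), k = 0..n_lags-1, by ordinary least squares.

    nino34 : 1-D array of seasonal Nino 3.4 index values
    """
    X, y = [], []
    for t in range(n_lags - 1, len(nino34) - lead):
        X.append(nino34[t - n_lags + 1:t + 1][::-1])   # the n_lags most recent seasons, newest first
        y.append(nino34[t + lead])
    X = np.column_stack([np.ones(len(X)), np.array(X)])
    coef, *_ = np.linalg.lstsq(X, np.array(y), rcond=None)
    return coef

def predict_next(coef, recent_values):
    """One-step-ahead forecast from the n_lags most recent index values (newest first)."""
    return coef[0] + coef[1:] @ np.asarray(recent_values)
```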
Richardson, Magnus J E
2007-08-01
Integrate-and-fire models are mainstays of the study of single-neuron response properties and emergent states of recurrent networks of spiking neurons. They also provide an analytical base for perturbative approaches that treat important biological details, such as synaptic filtering, synaptic conductance increase, and voltage-activated currents. Steady-state firing rates of both linear and nonlinear integrate-and-fire models, receiving fluctuating synaptic drive, can be calculated from the time-independent Fokker-Planck equation. The dynamic firing-rate response is less easy to extract, even at the first-order level of a weak modulation of the model parameters, but is an important determinant of neuronal response and network stability. For the linear integrate-and-fire model the response to modulations of current-based synaptic drive can be written in terms of hypergeometric functions. For the nonlinear exponential and quadratic models no such analytical forms for the response are available. Here it is demonstrated that a rather simple numerical method can be used to obtain the steady-state and dynamic response for both linear and nonlinear models to parameter modulation in the presence of current-based or conductance-based synaptic fluctuations. To complement the full numerical solution, generalized analytical forms for the high-frequency response are provided. A special case is also identified--time-constant modulation--for which the response to an arbitrarily strong modulation can be calculated exactly.
Zhao, Jing; Jin, Yan; Shin, Yujin; Jeong, Kyung Min; Lee, Jeongmi
2016-03-01
Here we describe a simple and sensitive analytical method for the enantioselective quantification of fluoxetine in mouse serum using ultra high performance liquid chromatography with quadrupole time-of-flight mass spectrometry. The sample preparation method included a simple deproteinization with acetonitrile in 50 μL of serum, followed by derivatization of the extracts in 50 μL of 2 mM 1R-(-)-menthyl chloroformate at 45 °C for 55 min. These conditions were statistically optimized through response surface methodology using a central composite design. Under the optimized conditions, neither racemization nor kinetic resolution occurred. The derivatized diastereomers were readily resolved on a conventional sub-2 μm C18 column under a simple gradient elution of aqueous methanol containing 0.1% formic acid. The established method was validated and found to be linear, precise, and accurate over the concentration range of 5.0-1000.0 ng/mL for both R and S enantiomers (r(2) > 0.993). Stability tests of the prepared samples at three different concentration levels showed that the R- and S-fluoxetine derivatives were relatively stable for 48 h. No significant matrix effects were observed. Last, the developed method was successfully used for enantiomeric analysis of real serum samples collected at a number of time points from mice administered with racemic fluoxetine. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Kwon, Sun-Myung; Shin, Ho-Sang
2015-08-14
A simple and convenient method to detect fluoride in biological samples was developed. This method was based on derivatization with 2-(bromomethyl)naphthalene, headspace solid phase microextraction (HS-SPME) in a vial, and gas chromatography-tandem mass spectrometric detection. The HS-SPME parameters were optimized as follows: selection of a CAR/PDMS fiber, 0.5% 2-(bromomethyl)naphthalene, 250 mg/L 15-crown-5-ether as a phase transfer catalyst, an extraction and derivatization temperature of 95 °C, a heating time of 20 min and a pH of 7.0. Under the established conditions, the lowest limits of detection were 9 and 11 μg/L in 1.0 mL of plasma and urine, respectively, and the intra- and inter-day relative standard deviations were less than 7.7% at concentrations of 0.1 and 1.0 mg/L. The calibration curves showed good linearity in plasma and urine with r=0.9990 and r=0.9992, respectively. This method is simple, amenable to automation and environmentally friendly. Copyright © 2015 Elsevier B.V. All rights reserved.
Inverse Modelling Problems in Linear Algebra Undergraduate Courses
ERIC Educational Resources Information Center
Martinez-Luaces, Victor E.
2013-01-01
This paper will offer an analysis from a theoretical point of view of mathematical modelling, applications and inverse problems of both causation and specification types. Inverse modelling problems give the opportunity to establish connections between theory and practice and to show this fact, a simple linear algebra example in two different…
Transportable Maps Software. Volume I.
1982-07-01
being collected at the beginning or end of the routine. This allows the interaction to be followed sequentially through its steps by anyone reading the...flow is either simple sequential, simple conditional (the equivalent of 'if-then-else'), simple iteration ('DO-loop'), or the non-linear recursion...input raster images to be in the form of sequential binary files with a SEGMENTED record type. The advantage of this form is that large logical records
Shaikh, Abdul S; Guo, Ruichen
2017-01-01
Phenytoin has very challenging pharmacokinetic properties. To prevent its toxicity and ensure efficacy, continuous therapeutic monitoring is required. It is hard to find a simple, accurate, rapid, easily available, economical and highly sensitive assay in one method for therapeutic monitoring of phenytoin. The present study is directed towards establishing and validating a simple, rapid, accurate, highly sensitive, novel and environmentally friendly liquid chromatography/mass spectrometry (LC/MS) method for offering rapid and reliable TDM results of phenytoin in epileptic patients to physicians and clinicians for making immediate and rational decisions. 27 epileptic patients with uncontrolled seizures or suspected of non-compliance or toxicity of phenytoin were selected and advised for TDM of phenytoin by neurologists of Qilu Hospital, Jinan, China. The LC/MS assay was used to perform the therapeutic monitoring of phenytoin. The Agilent 1100 LC/MS system was used for TDM. The mobile phase was a mixture of 5 mM ammonium acetate and methanol (35:65, v/v). A Diamonsil C18 column (150 mm × 4.6 mm, 5 μm) was used for the extraction of analytes from plasma. The samples were prepared with a simple one-step protein precipitation method. The technique was validated according to the guidelines of the International Conference on Harmonisation (ICH). The calibration curve demonstrated good linearity over the 0.2-20 µg/mL concentration range, with the linearity equation y = 0.0667855x + 0.00241785 and a correlation coefficient (R2) of 0.99928. The specificity, recovery, linearity, accuracy, precision and stability results were within the accepted limits. A concentration of 0.2 µg/mL was observed as the lower limit of quantitation (LLOQ), which is 12.5 times lower than that of the currently available enzyme-multiplied immunoassay technique (EMIT) for measurement of phenytoin in epilepsy patients. A rapid, simple, economical, precise, highly sensitive and novel LC/MS assay has been established, validated and applied successfully in the TDM of 27 epileptic patients. Alarmingly, the TDM results of all but two of these patients were outside the safe range; however, this needs further evaluation. Besides TDM, the stated method can also be applied in bioequivalence, pharmacokinetic, toxicokinetic and pharmacovigilance studies. Copyright © Bentham Science Publishers.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wynne, Adam S.
2011-05-05
In many application domains in science and engineering, data produced by sensors, instruments and networks is naturally processed by software applications structured as a pipeline. Pipelines comprise a sequence of software components that progressively process discrete units of data to produce a desired outcome. For example, in a Web crawler that is extracting semantics from text on Web sites, the first stage in the pipeline might be to remove all HTML tags to leave only the raw text of the document. The second step may parse the raw text to break it down into its constituent grammatical parts, such as nouns, verbs and so on. Subsequent steps may look for names of people or places, or interesting events or times, so documents can be sequenced on a time line. Each of these steps can be written as a specialized program that works in isolation from other steps in the pipeline. In many applications, simple linear software pipelines are sufficient. However, more complex applications require topologies that contain forks and joins, creating pipelines comprising branches where parallel execution is desirable. It is also increasingly common for pipelines to process very large files or high volume data streams which impose end-to-end performance constraints. Additionally, processes in a pipeline may have specific execution requirements and hence need to be distributed as services across a heterogeneous computing and data management infrastructure. From a software engineering perspective, these more complex pipelines become problematic to implement. While simple linear pipelines can be built using minimal infrastructure such as scripting languages, complex topologies and large, high volume data processing require suitable abstractions, run-time infrastructures and development tools to construct pipelines with the desired qualities-of-service and flexibility to evolve to handle new requirements. The above summarizes the reasons we created the MeDICi Integration Framework (MIF), which is designed for creating high-performance, scalable and modifiable software pipelines. MIF exploits a low friction, robust, open source middleware platform and extends it with component and service-based programmatic interfaces that make implementing complex pipelines simple. The MIF run-time automatically handles queues between pipeline elements in order to handle request bursts, and automatically executes multiple instances of pipeline elements to increase pipeline throughput. Distributed pipeline elements are supported using a range of configurable communications protocols, and the MIF interfaces provide efficient mechanisms for moving data directly between two distributed pipeline elements.
A framework for studying transient dynamics of population projection matrix models.
Stott, Iain; Townley, Stuart; Hodgson, David James
2011-09-01
Empirical models are central to effective conservation and population management, and should be predictive of real-world dynamics. Available modelling methods are diverse, but analysis usually focuses on long-term dynamics that are unable to describe the complicated short-term time series that can arise even from simple models following ecological disturbances or perturbations. Recent interest in such transient dynamics has led to diverse methodologies for their quantification in density-independent, time-invariant population projection matrix (PPM) models, but the fragmented nature of this literature has stifled the widespread analysis of transients. We review the literature on transient analyses of linear PPM models and synthesise a coherent framework. We promote the use of standardised indices, and categorise indices according to their focus on either convergence times or transient population density, and on either transient bounds or case-specific transient dynamics. We use a large database of empirical PPM models to explore relationships between indices of transient dynamics. This analysis promotes the use of population inertia as a simple, versatile and informative predictor of transient population density, but criticises the utility of established indices of convergence times. Our findings should guide further development of analyses of transient population dynamics using PPMs or other empirical modelling techniques. © 2011 Blackwell Publishing Ltd/CNRS.
Localized light waves: Paraxial and exact solutions of the wave equation (a review)
NASA Astrophysics Data System (ADS)
Kiselev, A. P.
2007-04-01
Simple explicit localized solutions are systematized over the whole space of a linear wave equation, which models the propagation of optical radiation in a linear approximation. Much attention has been paid to exact solutions (which date back to the Bateman findings) that describe wave beams (including Bessel-Gauss beams) and wave packets with a Gaussian localization with respect to the spatial variables and time. Their asymptotics with respect to free parameters and at large distances are presented. A similarity between these exact solutions and harmonic in time fields obtained in the paraxial approximation based on the Leontovich-Fock parabolic equation has been studied. Higher-order modes are considered systematically using the separation of variables method. The application of the Bateman solutions of the wave equation to the construction of solutions to equations with dispersion and nonlinearity and their use in wavelet analysis, as well as the summation of Gaussian beams, are discussed. In addition, solutions localized at infinity known as the Moses-Prosser “acoustic bullets”, as well as their harmonic in time counterparts, “X waves”, waves from complex sources, etc., have been considered. Everywhere possible, the most elementary mathematical formalism is used.
PROTEUS two-dimensional Navier-Stokes computer code, version 1.0. Volume 1: Analysis description
NASA Technical Reports Server (NTRS)
Towne, Charles E.; Schwab, John R.; Benson, Thomas J.; Suresh, Ambady
1990-01-01
A new computer code was developed to solve the two-dimensional or axisymmetric, Reynolds averaged, unsteady compressible Navier-Stokes equations in strong conservation law form. The thin-layer or Euler equations may also be solved. Turbulence is modeled using an algebraic eddy viscosity model. The objective was to develop a code for aerospace applications that is easy to use and easy to modify. Code readability, modularity, and documentation were emphasized. The equations are written in nonorthogonal body-fitted coordinates, and solved by marching in time using a fully-coupled alternating direction-implicit procedure with generalized first- or second-order time differencing. All terms are linearized using second-order Taylor series. The boundary conditions are treated implicitly, and may be steady, unsteady, or spatially periodic. Simple Cartesian or polar grids may be generated internally by the program. More complex geometries require an externally generated computational coordinate system. The documentation is divided into three volumes. Volume 1 is the Analysis Description, and describes in detail the governing equations, the turbulence model, the linearization of the equations and boundary conditions, the time and space differencing formulas, the ADI solution procedure, and the artificial viscosity models.
A new real-time guidance strategy for aerodynamic ascent flight
NASA Astrophysics Data System (ADS)
Yamamoto, Takayuki; Kawaguchi, Jun'ichiro
2007-12-01
Reusable launch vehicles are conceived to constitute the future space transportation system. If these vehicles use air-breathing propulsion and lift, taking off horizontally, their optimal steering exhibits completely different behavior from that of conventional rocket flight. In this paper, a new guidance strategy is proposed. The method derives from the optimality condition for steering, and an analysis concludes that the steering function takes a form comprising linear and logarithmic terms, which include only four parameters. Parameter optimization of this method shows that the acquired terminal horizontal velocity is almost the same as that obtained by direct numerical optimization, which supports the parameterized linear-logarithmic steering law. It is also shown that there exists a simple linear relation between the terminal states and the parameters to be corrected, which allows the parameters to be determined in real time so as to satisfy the terminal boundary conditions. The paper presents guidance results for practical application cases, showing that the guidance performs well and satisfies the specified terminal boundary conditions. The strategy built and presented here guarantees a robust solution in real time without any optimization process, and it is found to be quite practical.
Efficient multidimensional regularization for Volterra series estimation
NASA Astrophysics Data System (ADS)
Birpoutsoukis, Georgios; Csurcsia, Péter Zoltán; Schoukens, Johan
2018-05-01
This paper presents an efficient nonparametric time domain nonlinear system identification method. It is shown how truncated Volterra series models can be efficiently estimated without the need of long, transient-free measurements. The method is a novel extension of the regularization methods that have been developed for impulse response estimates of linear time invariant systems. To avoid the excessive memory needs in case of long measurements or large number of estimated parameters, a practical gradient-based estimation method is also provided, leading to the same numerical results as the proposed Volterra estimation method. Moreover, the transient effects in the simulated output are removed by a special regularization method based on the novel ideas of transient removal for Linear Time-Varying (LTV) systems. Combining the proposed methodologies, the nonparametric Volterra models of the cascaded water tanks benchmark are presented in this paper. The results for different scenarios varying from a simple Finite Impulse Response (FIR) model to a 3rd degree Volterra series with and without transient removal are compared and studied. It is clear that the obtained models capture the system dynamics when tested on a validation dataset, and their performance is comparable with the white-box (physical) models.
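The multidimensional Volterra regularization of the paper is not reproduced here, but the underlying idea (penalized least-squares estimation of an impulse response with a smoothness/decay prior, as developed for linear FIR models) can be sketched as follows; the kernel choice and hyperparameters are assumptions.

```python
import numpy as np

def regularized_fir(u, y, n_taps=50, lam=1.0, alpha=0.9):
    """Estimate a FIR impulse response g from input u and output y with a
    decay/smoothness-penalized least squares ('stable spline'-style) prior.

    lam and alpha are assumed regularization hyperparameters.
    """
    N = len(u)
    # regression matrix of lagged inputs: y[t] ~ sum_k g[k] * u[t-k]
    Phi = np.zeros((N, n_taps))
    for k in range(n_taps):
        Phi[k:, k] = u[:N - k]
    # prior covariance P_ij = alpha**max(i, j): exponentially decaying, correlated taps
    idx = np.arange(n_taps)
    P = alpha ** np.maximum.outer(idx, idx)
    # regularized least squares: g = (Phi'Phi + lam * P^-1)^-1 Phi' y
    g = np.linalg.solve(Phi.T @ Phi + lam * np.linalg.inv(P), Phi.T @ y)
    return g
```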
Scaling and efficiency determine the irreversible evolution of a market
Baldovin, F.; Stella, A. L.
2007-01-01
In setting up a stochastic description of the time evolution of a financial index, the challenge consists in devising a model compatible with all stylized facts emerging from the analysis of financial time series and providing a reliable basis for simulating such series. Based on constraints imposed by market efficiency and on an inhomogeneous-time generalization of standard simple scaling, we propose an analytical model which accounts simultaneously for empirical results like the linear decorrelation of successive returns, the power law dependence on time of the volatility autocorrelation function, and the multiscaling associated to this dependence. In addition, our approach gives a justification and a quantitative assessment of the irreversible character of the index dynamics. This irreversibility enters as a key ingredient in a novel simulation strategy of index evolution which demonstrates the predictive potential of the model.
Helicons in uniform fields. I. Wave diagnostics with hodograms
NASA Astrophysics Data System (ADS)
Urrutia, J. M.; Stenzel, R. L.
2018-03-01
The wave equation for whistler waves is well known and has been solved in Cartesian and cylindrical coordinates, yielding plane waves and cylindrical waves. In space plasmas, waves are usually assumed to be plane waves; in small laboratory plasmas, they are often assumed to be cylindrical "helicon" eigenmodes. Experimental observations fall in between both models. Real waves are usually bounded and may rotate like helicons. Such helicons are studied experimentally in a large laboratory plasma which is essentially a uniform, unbounded plasma. The waves are excited by loop antennas whose properties determine the field rotation and transverse dimensions. Both m = 0 and m = 1 helicon modes are produced and analyzed by measuring the wave magnetic field in three dimensional space and time. From Ampère's law and Ohm's law, the current density and electric field vectors are obtained. Hodograms for these vectors are produced. The sign ambiguity of the hodogram normal with respect to the direction of wave propagation is demonstrated. In general, electric and magnetic hodograms differ but both together yield the wave vector direction unambiguously. Vector fields of the hodogram normal yield the phase flow including phase rotation for helicons. Some helicons can have locally a linear polarization which is identified by the hodogram ellipticity. Alternatively the amplitude oscillation in time yields a measure for the wave polarization. It is shown that wave interference produces linear polarization. These observations emphasize that single point hodogram measurements are inadequate to determine the wave topology unless assuming plane waves. Observations of linear polarization indicate wave packets but not plane waves. A simple qualitative diagnostics for the wave polarization is the measurement of the magnetic field magnitude in time. Circular polarization has a constant amplitude; linear polarization results in amplitude modulations.
NASA Astrophysics Data System (ADS)
Takahashi, Takuya; Sugiura, Junnnosuke; Nagayama, Kuniaki
2002-05-01
To investigate the role hydration plays in the electrostatic interactions of proteins, the time-averaged electrostatic potential of the B1 domain of protein G in an aqueous solution was calculated with full atomic molecular dynamics simulations that explicitly consider every atom (i.e., an all atom model). This all atom calculated potential was compared with the potential obtained from an electrostatic continuum model calculation. In both cases, the charge-screening effect was fairly well formulated with an effective relative dielectric constant which increased linearly with increasing charge-charge distance. This simulated linear dependence agrees with the experimentally determined linear relation proposed by Pickersgill. Cut-off approximations for Coulomb interactions failed to reproduce this linear relation. Correlation between the all atom model and the continuum models was found to be better than the respective correlation calculated for linear fitting to the two models. This confirms that the continuum model is better at treating the complicated shapes of protein conformations than the simple linear fitting empirical model. We have tried a sigmoid fitting empirical model in addition to the linear one. When weights of all data were treated equally, the sigmoid model, which requires two fitting parameters, fits results of both the all atom and the continuum models less accurately than the linear model which requires only one fitting parameter. When potential values are chosen as weighting factors, the fitting error of the sigmoid model became smaller, and the slope of both linear fitting curves became smaller. This suggests the screening effect of an aqueous medium within a short range, where potential values are relatively large, is smaller than that expected from the linear fitting curve whose slope is almost 4. To investigate the linear increase of the effective relative dielectric constant, the Poisson equation of a low-dielectric sphere in a high-dielectric medium was solved and charges distributed near the molecular surface were indicated as leading to the apparent linearity.
A channel estimation scheme for MIMO-OFDM systems
NASA Astrophysics Data System (ADS)
He, Chunlong; Tian, Chu; Li, Xingquan; Zhang, Ce; Zhang, Shiqi; Liu, Chaowen
2017-08-01
In view of the trade-off between the performance of time-domain least squares (LS) channel estimation and its practical implementation complexity, a reduced-complexity pilot-based channel estimation method for multiple input multiple output-orthogonal frequency division multiplexing (MIMO-OFDM) is obtained. This approach transforms the MIMO-OFDM channel estimation problem into a simple single input single output-orthogonal frequency division multiplexing (SISO-OFDM) channel estimation problem, and therefore there is no need for large matrix pseudo-inverses, which greatly reduces the complexity of the algorithm. Simulation results show that the bit error rate (BER) performance of the obtained method with time orthogonal training sequences and linear minimum mean square error (LMMSE) criteria is better than that of the time-domain LS estimator and is nearly optimal.
Stability Limits of a PD Controller for a Flywheel Supported on Rigid Rotor and Magnetic Bearings
NASA Technical Reports Server (NTRS)
Kascak, Albert F.; Brown, Gerald V.; Jansen, Ralph H.; Dever, TImothy P.
2006-01-01
Active magnetic bearings are used to provide a long-life, low-loss suspension of a high-speed flywheel rotor. This paper describes a modeling effort used to understand the stability boundaries of the PD controller used to control the active magnetic bearings on a high speed test rig. Limits of stability are described in terms of allowable stiffness and damping values which result in stable levitation of the nonrotating rig. The small-signal stability limit for the system is defined as non-growth in the vibration amplitude of a small disturbance. A simple mass-force model was analyzed. The force resulting from the magnetic bearing was linearized to include a negative displacement stiffness and a current stiffness. The current stiffness was then used in a PD controller. The phase lag of the control loop was modeled by a simple time delay. The stability limits and the associated vibration frequencies were measured and compared to the theoretical values. The results show a region on the stiffness-versus-damping plot that has the same qualitative tendencies as the experimental measurements. The resulting stability model was then extended to a flywheel system. The rotor dynamics of the flywheel were modeled using a rigid rotor supported on magnetic bearings. The equations of motion were written for the center of mass and a small angle linearization of the rotations about the center of mass. The stability limits and the associated vibration frequencies were found as a function of nondimensional magnetic bearing stiffness and damping and nondimensional parameters of flywheel speed and time delay.
Farsa, Oldřich
2013-01-01
The log BB parameter is the logarithm of the ratio of a compound's equilibrium concentrations in the brain tissue versus the blood plasma. This parameter is a useful descriptor in assessing the ability of a compound to permeate the blood-brain barrier. The aim of this study was to develop a Hansch-type linear regression QSAR model that correlates the parameter log BB and the retention time of drugs and other organic compounds on a reversed-phase HPLC containing an embedded amide moiety. The retention time was expressed by the capacity factor log k'. The second aim was to estimate the brain's absorption of 2-(azacycloalkyl)acetamidophenoxyacetic acids, which are analogues of piracetam, nefiracetam, and meclofenoxate. Notably, these acids may be novel nootropics. Two simple regression models that relate log BB and log k' were developed from an assay performed using a reversed-phase HPLC that contained an embedded amide moiety. Both the quadratic and linear models yielded statistical parameters comparable to previously published models of log BB dependence on various structural characteristics. The models predict that four members of the substituted phenoxyacetic acid series have a strong chance of permeating the barrier and being absorbed in the brain. The results of this study show that a reversed-phase HPLC system containing an embedded amide moiety is a functional in vitro surrogate of the blood-brain barrier. These results suggest that racetam-type nootropic drugs containing a carboxylic moiety could be more poorly absorbed than analogues devoid of the carboxyl group, especially if the compounds penetrate the barrier by a simple diffusion mechanism.
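The regression model itself is simple. As an illustration only, the sketch below fits log BB against the HPLC capacity factor log k' and predicts log BB for a new compound; the calibration numbers are placeholders, not the study's data.

```python
import numpy as np

# hypothetical (log k', log BB) calibration pairs (placeholders, not the study's data)
log_k = np.array([-0.30, -0.10, 0.05, 0.20, 0.40, 0.55])
log_bb = np.array([-1.20, -0.70, -0.40, -0.10, 0.30, 0.55])

# simple linear regression: log BB = a * log k' + b
a, b = np.polyfit(log_k, log_bb, 1)

def predict_log_bb(log_k_new):
    """Predict log BB from a capacity factor measured on the amide-embedded column."""
    return a * np.asarray(log_k_new) + b

print(f"slope a = {a:.3f}, intercept b = {b:.3f}")
print("predicted log BB at log k' = 0.1:", predict_log_bb(0.1))
```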
NASA Astrophysics Data System (ADS)
Vincenzo, F.; Matteucci, F.; Spitoni, E.
2017-04-01
We present a theoretical method for solving the chemical evolution of galaxies by assuming an instantaneous recycling approximation for chemical elements restored by massive stars and the delay time distribution formalism for delayed chemical enrichment by Type Ia Supernovae. The galaxy gas mass assembly history, together with the assumed stellar yields and initial mass function, represents the starting point of this method. We derive a simple and general equation, which closely relates the Laplace transforms of the galaxy gas accretion history and star formation history, which can be used to simplify the problem of retrieving these quantities in the galaxy evolution models assuming a linear Schmidt-Kennicutt law. We find that - once the galaxy star formation history has been reconstructed from our assumptions - the differential equation for the evolution of the chemical element X can be suitably solved with classical methods. We apply our model to reproduce the [O/Fe] and [Si/Fe] versus [Fe/H] chemical abundance patterns as observed at the solar neighbourhood by assuming a decaying exponential infall rate of gas and different delay time distributions for Type Ia Supernovae; we also explore the effect of assuming a non-linear Schmidt-Kennicutt law, with the index of the power law being k = 1.4. Although approximate, we conclude that our model with the single-degenerate scenario for Type Ia Supernovae provides the best agreement with the observed set of data. Our method can be used by other complementary galaxy stellar population synthesis models to predict also the chemical evolution of galaxies.
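The derived equation is not quoted in the abstract. Under the usual simple-model assumptions (instantaneous recycling with return fraction R, a linear Schmidt-Kennicutt law psi = S M_gas, and M_gas(0) = 0), one plausible form of the stated Laplace-transform relation between the gas accretion history I(t) and the star formation history psi(t) is the following sketch, so that either history can be retrieved from the other by a single transform inversion.

```latex
\frac{dM_{\rm gas}}{dt} = I(t) - (1-R)\,\psi(t), \qquad \psi(t) = S\,M_{\rm gas}(t)
\quad\Longrightarrow\quad
\tilde{\psi}(s) = \frac{S\,\tilde{I}(s)}{s + (1-R)\,S}.
```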
Blood coagulation screening using a paper-based microfluidic lateral flow device.
Li, H; Han, D; Pauletti, G M; Steckl, A J
2014-10-21
A simple approach to the evaluation of blood coagulation using a microfluidic paper-based lateral flow assay (LFA) device for point-of-care (POC) and self-monitoring screening is reported. The device utilizes whole blood, without the need for prior separation of plasma from red blood cells (RBC). Experiments were performed using animal (rabbit) blood treated with trisodium citrate to prevent coagulation. CaCl2 solutions of varying concentrations are added to citrated blood, producing Ca(2+) ions to re-establish the coagulation cascade and mimic different blood coagulation abilities in vitro. Blood samples are dispensed into a paper-based LFA device consisting of sample pad, analytical membrane and wicking pad. The porous nature of the cellulose membrane separates the aqueous plasma component from the large blood cells. Since the viscosity of blood changes with its coagulation ability, the distance RBCs travel in the membrane in a given time can be related to the blood clotting time. The distance of the RBC front is found to decrease linearly with increasing CaCl2 concentration, with a travel rate decreasing from 3.25 mm min(-1) for no added CaCl2 to 2.2 mm min(-1) for 500 mM solution. Compared to conventional plasma clotting analyzers, the LFA device is much simpler and it provides a significantly larger linear range of measurement. Using the red colour of RBCs as a visible marker, this approach can be utilized to produce a simple and clear indicator of whether the blood condition is within the appropriate range for the patient's condition.
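The abstract quotes a travel rate that falls roughly linearly from about 3.25 mm/min (no added CaCl2) to about 2.2 mm/min (500 mM). The sketch below turns a measured RBC front distance back into an estimated CaCl2 concentration via that two-point linear calibration; using only these two quoted points is an assumption made for illustration.

```python
import numpy as np

# endpoints quoted in the abstract: travel rate (mm/min) vs added CaCl2 (mM)
calib_conc = np.array([0.0, 500.0])
calib_rate = np.array([3.25, 2.2])

# linear calibration: rate = m * conc + c
m, c = np.polyfit(calib_conc, calib_rate, 1)

def estimate_conc(front_distance_mm, read_time_min):
    """Invert the linear calibration: estimate CaCl2 (mM) from the RBC front distance."""
    rate = front_distance_mm / read_time_min
    return (rate - c) / m

# example: a front that moved 2.8 mm in 1 minute
print(f"estimated CaCl2 ~ {estimate_conc(2.8, 1.0):.0f} mM")
```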
A study of helicopter stability and control including blade dynamics
NASA Technical Reports Server (NTRS)
Zhao, Xin; Curtiss, H. C., Jr.
1988-01-01
A linearized model of rotorcraft dynamics has been developed through the use of symbolic automatic equation generating techniques. The dynamic model has been formulated in a unique way such that it can be used to analyze a variety of rotor/body coupling problems including a rotor mounted on a flexible shaft with a number of modes as well as free-flight stability and control characteristics. Direct comparison of the time response to longitudinal, lateral and directional control inputs at various trim conditions shows that the linear model yields good to very good correlation with flight test. In particular it is shown that a dynamic inflow model is essential to obtain good time response correlation, especially for the hover trim condition. It also is shown that the main rotor wake interaction with the tail rotor and fixed tail surfaces is a significant contributor to the response at translational flight trim conditions. A relatively simple model for the downwash and sidewash at the tail surfaces based on flat vortex wake theory is shown to produce good agreement. Then, the influence of rotor flap and lag dynamics on automatic control system feedback gain limitations is investigated with the model. It is shown that the blade dynamics, especially lagging dynamics, can severely limit the usable values of the feedback gain for simple feedback control and that multivariable optimal control theory is a powerful tool for designing high-gain augmentation control systems. The frequency-shaped optimal control design can offer much better flight dynamic characteristics and a stability margin for the feedback system without the need to model the lagging dynamics.
Investigations in site response from ground motion observations in vertical arrays
NASA Astrophysics Data System (ADS)
Baise, Laurie Gaskins
The aim of the research is to improve the understanding of earthquake site response and to improve the techniques available to investigate issues in this field. Vertical array ground motion data paired with the empirical transfer function (ETF) methodology is shown to accurately characterize site response. This manuscript draws on methods developed in the field of signal processing and statistical time series analysis to parameterize the ETF as an autoregressive moving-average (ARMA) system which is justified theoretically, historically, and by example. Site response is evaluated at six sites in California, Japan, and Taiwan using ETF estimates, correlation analysis, and full waveform modeling. Correlation analysis is proposed as a required data quality evaluation imperative to any subsequent site response analysis. ETF estimates and waveform modeling are used to decipher the site response at sites with simple and complex geologic structure, which provide simple time-invariant and time-variant methods for evaluating both linear site transfer functions and nonlinear site response for sites experiencing liquefaction of the soils. The Treasure and Yerba Buena Island sites, however, require 2-D waveform modeling to accurately evaluate the effects of the shallow sedimentary basin. ETFs are used to characterize the Port Island site and corresponding shake table tests before, during, and after liquefaction. ETFs derived from the shake table tests were demonstrated to consistently predict the linear field ground response below 16 m depth and the liquefied behavior above 15 m depth. The liquefied interval response was demonstrated to gradually return to pre-liquefied conditions within several weeks of the 1995 Hyogo-ken Nanbu earthquake. Both the site's and the shake table test's response were shown to be effectively linear up to 0.5 g in the native materials below 16 m depth. The effective linearity of the site response at GVDA, Chiba, and Lotung up to 0.1 g, 0.33 g, and 0.49 g, respectively, further confirms that site response in the field may be more linear than expected from laboratory tests. Strong motions were predicted at these sites with normalized mean square error less than 0.10 using ETFs generated from weak motions. The Treasure Island site response was shown to be dominated by surface waves propagating in the shallow sediments of the San Francisco Bay. Low correlation of the ground motions recorded on rock at Yerba Buena Island and in rock beneath the Treasure Island site intimates that the Yerba Buena site is an inappropriate reference site for Treasure Island site response studies. Accurate simulation of the Treasure Island site response was achieved using a 2-D velocity structure comprised of a 100 m uniform soil basin (Vs = 400 m/s) over a weathered rock veneer (Vs = 1.5 km/s) to 200 m depth.
Meteorological adjustment of yearly mean values for air pollutant concentration comparison
NASA Technical Reports Server (NTRS)
Sidik, S. M.; Neustadter, H. E.
1976-01-01
Using multiple linear regression analysis, models which estimate mean concentrations of Total Suspended Particulate (TSP), sulfur dioxide, and nitrogen dioxide as a function of several meteorologic variables, two rough economic indicators, and a simple trend in time are studied. Meteorologic data were obtained and do not include inversion heights. The goodness of fit of the estimated models is partially reflected by the squared coefficient of multiple correlation which indicates that, at the various sampling stations, the models accounted for about 23 to 47 percent of the total variance of the observed TSP concentrations. If the resulting model equations are used in place of simple overall means of the observed concentrations, there is about a 20 percent improvement in either: (1) predicting mean concentrations for specified meteorological conditions; or (2) adjusting successive yearly averages to allow for comparisons devoid of meteorological effects. An application to source identification is presented using regression coefficients of wind velocity predictor variables.
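A minimal sketch of the kind of multiple linear regression and meteorological adjustment described above follows; the predictor set and reference conditions are placeholders, not the study's variable list.

```python
import numpy as np

def fit_adjustment_model(tsp, met):
    """Ordinary least squares fit of observed concentrations on meteorological predictors.

    tsp : 1-D array of observed concentrations (e.g. TSP)
    met : 2-D array, one column per predictor (e.g. wind speed, humidity, time trend);
          the predictor set here is a placeholder, not the study's exact variable list.
    """
    X = np.column_stack([np.ones(len(tsp)), met])
    beta, *_ = np.linalg.lstsq(X, tsp, rcond=None)
    fitted = X @ beta
    r2 = 1.0 - np.sum((tsp - fitted) ** 2) / np.sum((tsp - tsp.mean()) ** 2)
    return beta, r2

def adjusted_annual_mean(beta, reference_met):
    """Yearly mean adjusted to a fixed set of reference meteorological conditions."""
    return beta[0] + beta[1:] @ np.asarray(reference_met)
```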
A Complex-Valued Firing-Rate Model That Approximates the Dynamics of Spiking Networks
Schaffer, Evan S.; Ostojic, Srdjan; Abbott, L. F.
2013-01-01
Firing-rate models provide an attractive approach for studying large neural networks because they can be simulated rapidly and are amenable to mathematical analysis. Traditional firing-rate models assume a simple form in which the dynamics are governed by a single time constant. These models fail to replicate certain dynamic features of populations of spiking neurons, especially those involving synchronization. We present a complex-valued firing-rate model derived from an eigenfunction expansion of the Fokker-Planck equation and apply it to the linear, quadratic and exponential integrate-and-fire models. Despite being almost as simple as a traditional firing-rate description, this model can reproduce firing-rate dynamics due to partial synchronization of the action potentials in a spiking model, and it successfully predicts the transition to spike synchronization in networks of coupled excitatory and inhibitory neurons. PMID:24204236
[Health for All-Italia: an indicator system on health].
Burgio, Alessandra; Crialesi, Roberta; Loghi, Marzia
2003-01-01
The Health for All - Italia information system collects health data from several sources. It is intended to be a cornerstone for the achievement of an overview about health in Italy. Health is analyzed at different levels, ranging from health services, health needs and lifestyles to the demographic, social, economic and environmental contexts. The software associated with the database allows users to display statistical data in graphs and tables and to carry out simple statistical analyses. It is therefore possible to view the indicators' time series, make simple projections and compare the various indicators over the years for each territorial unit. This is possible by means of tables, graphs (histograms, line graphs, frequencies, linear regression with calculation of correlation coefficients, etc.) and maps. These charts can be exported to other programs (e.g. Word, Excel, PowerPoint), or they can be directly printed in color or black and white.
NASA Technical Reports Server (NTRS)
Smith, Samantha A.; DelGenio, Anthony D.
1999-01-01
Ways to determine the turbulence intensity and the horizontal variability in cirrus clouds have been investigated using FIRE-II aircraft, radiosonde and radar data. Higher turbulence intensities were found within some, but not all, of the neutrally stratified layers. It was also demonstrated that the stability of cirrus layers with high extinction values decreases in time, possibly as a result of radiative destabilization. However, these features could not be directly related to each other in any simple manner. A simple linear relationship was observed between the amount of horizontal variability in the ice water content and its average value. This was also true for the extinction and ice crystal number concentrations. A relationship was also suggested between the variability in cloud depth and the environmental stability across the depth of the cloud layer, which requires further investigation.
Long-term forecasting of internet backbone traffic.
Papagiannaki, Konstantina; Taft, Nina; Zhang, Zhi-Li; Diot, Christophe
2005-09-01
We introduce a methodology to predict when and where link additions/upgrades have to take place in an Internet protocol (IP) backbone network. Using simple network management protocol (SNMP) statistics, collected continuously since 1999, we compute aggregate demand between any two adjacent points of presence (PoPs) and look at its evolution at time scales larger than 1 h. We show that IP backbone traffic exhibits visible long term trends, strong periodicities, and variability at multiple time scales. Our methodology relies on the wavelet multiresolution analysis (MRA) and linear time series models. Using wavelet MRA, we smooth the collected measurements until we identify the overall long-term trend. The fluctuations around the obtained trend are further analyzed at multiple time scales. We show that the largest amount of variability in the original signal is due to its fluctuations at the 12-h time scale. We model inter-PoP aggregate demand as a multiple linear regression model, consisting of the two identified components. We show that this model accounts for 98% of the total energy in the original signal, while explaining 90% of its variance. Weekly approximations of those components can be accurately modeled with low-order autoregressive integrated moving average (ARIMA) models. We show that forecasting the long term trend and the fluctuations of the traffic at the 12-h time scale yields accurate estimates for at least 6 months in the future.
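The paper's pipeline (wavelet MRA, a multiple linear regression on the identified components, and ARIMA models of their weekly approximations) is more elaborate than can be shown here. The numpy-only sketch below captures only the two-component idea, a long-term linear trend plus a periodic intra-day fluctuation, and everything about it should be read as an assumption-laden simplification rather than the authors' method.

```python
import numpy as np

def two_component_forecast(traffic, samples_per_day=24, horizon=7 * 24):
    """Rough stand-in for a long-term-trend + intra-day-fluctuation traffic model:
    a linear trend fitted to daily means, plus the average daily (12 h-periodic) profile.
    The wavelet MRA and ARIMA modelling of the paper are not reproduced here.
    """
    traffic = np.asarray(traffic, dtype=float)
    n_days = len(traffic) // samples_per_day
    daily = traffic[:n_days * samples_per_day].reshape(n_days, samples_per_day)

    # long-term trend: straight line through the daily means
    slope, intercept = np.polyfit(np.arange(n_days), daily.mean(axis=1), 1)

    # mean intra-day profile (captures the dominant periodic fluctuation)
    profile = daily.mean(axis=0) - daily.mean()

    # extrapolate both components over the forecast horizon
    t = np.arange(n_days * samples_per_day, n_days * samples_per_day + horizon)
    trend = intercept + slope * (t / samples_per_day)
    return trend + profile[t % samples_per_day]
```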
Dynamic Monitoring of Cleanroom Fallout Using an Air Particle Counter
NASA Technical Reports Server (NTRS)
Perry, Radford
2011-01-01
The particle fallout limitations and periodic allocations for the James Webb Space Telescope are very stringent. Standard prediction methods are complicated by non-linearity and monitoring methods that are insufficiently responsive. A method for dynamically predicting the particle fallout in a cleanroom using air particle counter data was determined by numerical correlation. This method provides a simple linear correlation to both time and air quality, which can be monitored in real time. The summation of effects provides the program better understanding of the cleanliness and assists in the planning of future activities. Definition of fallout rates within a cleanroom during assembly and integration of contamination-sensitive hardware, such as the James Webb Space Telescope, is essential for budgeting purposes. Balancing the activity levels for assembly and test with the particle accumulation rate is paramount. The current approach to predicting particle fallout in a cleanroom assumes a constant air quality based on the rated class of a cleanroom, with adjustments for projected work or exposure times. Actual cleanroom class can also depend on the number of personnel present and the type of activities. A linear correlation of air quality and normalized particle fallout was determined numerically. An air particle counter (standard cleanroom equipment) can be used to monitor the air quality on a real-time basis and determine the "class" of the cleanroom (per FED-STD-209 or ISO-14644). The correlation function provides an area coverage coefficient per class-hour of exposure. The prediction of particle accumulations provides scheduling inputs for activity levels and cleanroom class requirements.
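The correlation itself is not given in the abstract, so the sketch below only illustrates the bookkeeping: summing class-hours of exposure from an air-particle-counter log and converting them to a predicted area coverage with an assumed coefficient and an assumed class weighting.

```python
def fallout_coverage(class_hours, coeff_per_class_hour=1.0e-6):
    """Accumulate predicted area coverage from a log of (ISO class, exposure hours).

    class_hours          : iterable of (iso_class, hours) tuples from the particle counter log
    coeff_per_class_hour : assumed area-coverage coefficient per class-hour;
                           a placeholder, not the correlation reported in the paper.
    """
    # weight exposure by the numeric ISO class (a placeholder weighting for illustration)
    return sum(iso_class * hours * coeff_per_class_hour for iso_class, hours in class_hours)

# example log: 6 h at ISO 7 during integration, 18 h at ISO 6 overnight
log = [(7, 6.0), (6, 18.0)]
print(f"predicted added coverage ~ {fallout_coverage(log):.2e} (fraction of surface area)")
```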
Abbes, Ilham Ben; Richard, Pierre-Yves; Lefebvre, Marie-Anne; Guilhem, Isabelle; Poirier, Jean-Yves
2013-05-01
Most closed-loop insulin delivery systems rely on model-based controllers to control the blood glucose (BG) level. Simple models of glucose metabolism, which allow easy design of the control law, are limited in their parametric identification from raw data. New control models and controllers issued from them are needed. A proportional integral derivative controller with a double phase lead was proposed. Its design was based on a linearization of a new nonlinear control model of the glucose-insulin system in type 1 diabetes mellitus (T1DM) patients validated with the University of Virginia/Padova T1DM metabolic simulator. A 36 h scenario, including six unannounced meals, was tested in nine virtual adults. A database from a previous trial was used to compare the performance of our controller with previously published results. The scenario was repeated 25 times for each adult in order to take continuous glucose monitoring noise into account. The primary outcome was the time BG levels were in the target range (70-180 mg/dl). Blood glucose values were in the target range for 77% of the time and below 50 mg/dl and above 250 mg/dl for 0.8% and 0.3% of the time, respectively. The low blood glucose index and high blood glucose index were 1.65 and 3.33, respectively. The linear controller presented, based on the linearization of a new easily identifiable nonlinear model, achieves good glucose control with low exposure to hypoglycemia and hyperglycemia. © 2013 Diabetes Technology Society.
NASA Astrophysics Data System (ADS)
Bruno, Delia Evelina; Barca, Emanuele; Goncalves, Rodrigo Mikosz; de Araujo Queiroz, Heithor Alexandre; Berardi, Luigi; Passarella, Giuseppe
2018-01-01
In this paper, the Evolutionary Polynomial Regression data modelling strategy has been applied to study small scale, short-term coastal morphodynamics, given its capability to treat a wide database of known information non-linearly. Simple linear and multilinear regression models were also applied, to achieve a balance between the computational load and the reliability of the estimations of the three models. In fact, even though it is easy to imagine that the more complex the model, the more the prediction improves, sometimes a "slight" worsening of the estimations can be accepted in exchange for the time saved in data organization and computational load. The models' outcomes were validated through a detailed statistical error analysis, which revealed a slightly better estimation by the polynomial model with respect to the multilinear model, as expected. On the other hand, even though the data organization was identical for the two models, the multilinear one required a simpler simulation setting and a faster run time. Finally, the most reliable evolutionary polynomial regression model was used to make some conjectures about how the uncertainty increases as the extrapolation time of the estimation is extended. The overlap between the confidence band of the mean of the known coast position and the prediction band of the estimated position can be a good index of the weakness in producing reliable estimations when the extrapolation time increases too much. The proposed models and tests have been applied to a coastal sector located near Torre Colimena in the Apulia region, south Italy.
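As a toy illustration of comparing a simple linear fit against a higher-order (polynomial) fit on a shoreline-position series and extrapolating both, consider the sketch below; the data, polynomial degree and time units are hypothetical, and the Evolutionary Polynomial Regression machinery itself is not reproduced.

```python
import numpy as np

def compare_fits(t, position, t_new, degree=3):
    """Fit a simple linear model and a low-order polynomial to a shoreline-position series
    and extrapolate both: a toy stand-in for the linear / multilinear / EPR comparison
    in the paper (the EPR machinery itself is not reproduced here)."""
    lin = np.polyfit(t, position, 1)
    poly = np.polyfit(t, position, degree)     # assumed polynomial degree
    t_new = np.asarray(t_new, dtype=float)
    return np.polyval(lin, t_new), np.polyval(poly, t_new)

# hypothetical shoreline positions (m, relative to a baseline), one survey per year
t = np.arange(11.0)                            # years since the first survey
pos = np.array([12.0, 11.4, 11.1, 10.3, 10.0, 9.1, 8.9, 8.0, 7.6, 7.1, 6.4])
lin_pred, poly_pred = compare_fits(t, pos, [12.0, 15.0, 20.0])
print("linear extrapolation:    ", lin_pred)
print("polynomial extrapolation:", poly_pred)
```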
GRACE Accelerometer data transplant
NASA Astrophysics Data System (ADS)
Bandikova, T.; McCullough, C. M.; Kruizinga, G. L. H.
2017-12-01
The Gravity Recovery and Climate Experiment (GRACE) has recently celebrated its 15th anniversary. The aging of the satellites brings along new challenges for both mission operation and science data delivery. Since September 2016, the accelerometer (ACC) onboard GRACE-B has been permanently turned off in order to reduce the battery load. The absence of the information about the non-gravitational forces acting on the spacecraft dramatically decreases the accuracy of the monthly gravity field solutions. The missing GRACE-B accelerometer data, however, can be recovered from the GRACE-A accelerometer measurement with satisfactory accuracy. In the current GRACE data processing, a simple ACC data transplant is used which includes only attitude and time correction. The full ACC data transplant, however, requires not only the attitude and time correction, but also modeling of the residual accelerations due to thruster firings, which is the most challenging part. The residual linear accelerations ("thruster spikes") are caused by thruster imperfections such as misalignment of a thruster pair, force imbalance or differences in reaction time. The thruster spikes are one of the most dominant high-frequency signals in the ACC measurement. The shape and amplitude of the thruster spikes are unique for each thruster pair, for each firing duration (30 ms - 1000 ms), for each x, y, z component of the ACC linear acceleration, and for each spacecraft. In our approach, the thruster spike model is an analytical function obtained by inverse Laplace transform of the ACC transfer function. The model shape parameters (amplitude, width and time delay) are estimated using a least-squares method. The ACC data transplant is validated for days when ACC data from both satellites were available. The fully transplanted data fits the original GRACE-B measurement very well. The full ACC data transplant results in significantly reduced high frequency noise compared to the simple ACC transplant (i.e. without thruster spike modeling). The full ACC data transplant is a promising solution, which will allow GRACE to deliver high quality science data despite the serious problems related to satellite aging.
Linearization of the Bradford protein assay.
Ernst, Orna; Zor, Tsaffrir
2010-04-12
Determination of microgram quantities of protein in the Bradford Coomassie brilliant blue assay is accomplished by measurement of absorbance at 590 nm. This most common assay enables rapid and simple protein quantification in cell lysates, cellular fractions, or recombinant protein samples, for the purpose of normalization of biochemical measurements. However, an intrinsic nonlinearity compromises the sensitivity and accuracy of this method. It is shown that under standard assay conditions, the ratio of the absorbance measurements at 590 nm and 450 nm is strictly linear with protein concentration. This simple procedure increases the accuracy and improves the sensitivity of the assay about 10-fold, permitting quantification down to 50 ng of bovine serum albumin. Furthermore, the interference commonly introduced by detergents that are used to create the cell lysates is greatly reduced by the new protocol. A linear equation developed on the basis of mass action and Beer's law perfectly fits the experimental data.
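To make the linearization concrete, a minimal sketch of a ratio-based calibration is shown below; the absorbance values are invented and the single-point inversion is an assumption, not the published protocol.

    import numpy as np

    # Hypothetical calibration data for BSA standards; the numbers are illustrative only
    protein_ug = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 8.0])      # protein per assay, micrograms
    a590 = np.array([0.05, 0.12, 0.19, 0.33, 0.58, 1.02])      # absorbance at 590 nm
    a450 = np.array([0.48, 0.46, 0.44, 0.41, 0.36, 0.29])      # absorbance at 450 nm

    ratio = a590 / a450                        # quantity reported to be linear in concentration
    slope, intercept = np.polyfit(protein_ug, ratio, 1)

    def protein_from_ratio(a590_sample, a450_sample):
        # Invert the linear calibration to estimate the protein content of an unknown
        return (a590_sample / a450_sample - intercept) / slope

    print(protein_from_ratio(0.40, 0.40))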
Quantum monodromy and quantum phase transitions in floppy molecules
NASA Astrophysics Data System (ADS)
Larese, Danielle
2012-10-01
A simple algebraic Hamiltonian has been used to explore the vibrational and rotational spectra of the skeletal bending modes of HCNO, BrCNO, NCNCS, and other "floppy" (quasi-linear or quasi-bent) molecules. These molecules have large-amplitude, low-energy bending modes and champagne-bottle potential surfaces, making them good candidates for observing quantum phase transitions (QPT). We describe the geometric phase transitions from bent to linear in these and other non-rigid molecules, quantitatively analyzing the spectroscopic signatures of ground-state QPT, excited-state QPT, and quantum monodromy. The algebraic framework is ideal for this work because of its small calculational effort yet robust results. Although these methods have historically found success with tri- and four-atomic molecules, we now address five-atomic and simple branched molecules such as CH3NCO and GeH3NCO. Extraction of potential functions is completed for several molecules, resulting in predictions of barriers to linearity and equilibrium bond angles.
Ball-morph: definition, implementation, and comparative evaluation.
Whited, Brian; Rossignac, Jaroslaw Jarek
2011-06-01
We define b-compatibility for planar curves and propose three ball morphing techniques between pairs of b-compatible curves. Ball-morphs use the automatic ball-map correspondence, proposed by Chazal et al., from which we derive different vertex trajectories (linear, circular, and parabolic). All three morphs are symmetric, meeting both curves with the same angle, which is a right angle for the circular and parabolic. We provide simple constructions for these ball-morphs and compare them to each other and other simple morphs (linear-interpolation, closest-projection, curvature-interpolation, Laplace-blending, and heat-propagation) using six cost measures (travel-distance, distortion, stretch, local acceleration, average squared mean curvature, and maximum squared mean curvature). The results depend heavily on the input curves. Nevertheless, we found that the linear ball-morph has consistently the shortest travel-distance and the circular ball-morph has the least amount of distortion.
Bhaumik, Basabi; Mathur, Mona
2003-01-01
We present a model for the development of orientation selectivity in layer IV simple cells. Receptive field (RF) development in the model is determined by axonal growth and retraction in the geniculocortical pathway, guided by diffusive cooperation and resource-limited competition. The simulated cortical RFs resemble experimental RFs. The receptive field model is incorporated in a three-layer visual pathway model consisting of retina, LGN and cortex. We have studied the effect of activity-dependent synaptic scaling on the orientation tuning of cortical cells. The mean value of hwhh (half width at half the height of maximum response) in simulated cortical cells is 58 degrees when we consider only the linear excitatory contribution from the LGN. We observe a mean improvement of 22.8 degrees in the tuning response due to non-linear spiking mechanisms that include the effects of the threshold voltage and the synaptic scaling factor.
2011-01-01
Background Real-time forecasting of epidemics, especially those based on a likelihood-based approach, is understudied. This study aimed to develop a simple method that can be used for real-time epidemic forecasting. Methods A discrete time stochastic model, accounting for demographic stochasticity and conditional measurement, was developed and applied as a case study to the weekly incidence of pandemic influenza (H1N1-2009) in Japan. By imposing a branching process approximation and by assuming the linear growth of cases within each reporting interval, the epidemic curve is predicted using only two parameters. The uncertainty bounds of the forecasts are computed using chains of conditional offspring distributions. Results The quality of the forecasts made before the epidemic peak appears largely to depend on obtaining valid parameter estimates. The forecasts of both weekly incidence and final epidemic size greatly improved at and after the epidemic peak, with all the observed data points falling within the uncertainty bounds. Conclusions Real-time forecasting using the discrete time stochastic model with its simple computation of the uncertainty bounds was successful. Because of the simple model structure, the proposed model has the potential to additionally account for various types of heterogeneity, time-dependent transmission dynamics and epidemiological details. The impact of such complexities on forecasting should be explored when the data become available as part of disease surveillance. PMID:21324153
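A crude, hedged sketch of the forecasting idea (project incidence forward with a Poisson branching approximation and read uncertainty bounds off simulated chains) is given below; it is not the authors' likelihood-based estimator, and the growth factor and case counts are invented.

    import numpy as np

    rng = np.random.default_rng(1)

    def forecast_incidence(last_week_cases, growth_factor, weeks_ahead, n_sims=5000):
        # Crude branching-process projection: next week's cases ~ Poisson(growth_factor * current cases).
        # Returns the 2.5th, 50th and 97.5th percentiles of the simulated chains for each week ahead.
        sims = np.empty((n_sims, weeks_ahead))
        for s in range(n_sims):
            cases = last_week_cases
            for w in range(weeks_ahead):
                cases = rng.poisson(growth_factor * cases)
                sims[s, w] = cases
        return np.percentile(sims, [2.5, 50.0, 97.5], axis=0)

    lower, median, upper = forecast_incidence(last_week_cases=120, growth_factor=1.3, weeks_ahead=4)
    print("median forecast:", median, " 95% bounds:", lower, upper)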
Experimental demonstration of time- and mode-division multiplexed passive optical network
NASA Astrophysics Data System (ADS)
Ren, Fang; Li, Juhao; Tang, Ruizhi; Hu, Tao; Yu, Jinyi; Mo, Qi; He, Yongqi; Chen, Zhangyuan; Li, Zhengbin
2017-07-01
A time- and mode-division multiplexed passive optical network (TMDM-PON) architecture is proposed, in which each optical network unit (ONU) communicates with the optical line terminal (OLT) independently by utilizing both different time slots and switched optical linearly polarized (LP) spatial modes. A combination of a mode multiplexer/demultiplexer (MUX/DEMUX) and a simple N × 1 optical switch is employed to select the specific LP mode in each ONU. A mode-insensitive power splitter is used for signal broadcast/combination between the OLT and the ONUs. We theoretically propose a dynamic mode and time slot assignment scheme for TMDM-PON based on inter-ONU priority rating, in which the variation tendencies of the time delay and packet-loss ratio are investigated by simulation. Moreover, we experimentally demonstrate 2-mode TMDM-PON transmission over 10 km of few-mode fiber (FMF) with a 10-Gb/s on-off keying (OOK) signal and direct detection.
Spectral reconstruction analysis for enhancing signal-to-noise in time-resolved spectroscopies
NASA Astrophysics Data System (ADS)
Wilhelm, Michael J.; Smith, Jonathan M.; Dai, Hai-Lung
2015-09-01
We demonstrate a new spectral analysis for the enhancement of the signal-to-noise ratio (SNR) in time-resolved spectroscopies. Unlike the simple linear average which produces a single representative spectrum with enhanced SNR, this Spectral Reconstruction analysis (SRa) improves the SNR (by a factor of ca. 0.6√n) for all n experimentally recorded time-resolved spectra. SRa operates by eliminating noise in the temporal domain, thereby attenuating noise in the spectral domain, as follows: Temporal profiles at each measured frequency are fit to a generic mathematical function that best represents the temporal evolution; spectra at each time are then reconstructed with data points from the fitted profiles. The SRa method is validated with simulated control spectral data sets. Finally, we apply SRa to two distinct experimentally measured sets of time-resolved IR emission spectra: (1) UV photolysis of carbonyl cyanide and (2) UV photolysis of vinyl cyanide.
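A minimal sketch of the SRa idea follows, assuming a single exponential decay as the generic temporal profile (the actual choice of fitting function is problem specific); the synthetic data are for illustration only.

    import numpy as np
    from scipy.optimize import curve_fit

    def decay(t, a, tau, c):
        # Generic temporal profile assumed here: single exponential decay plus offset
        return a * np.exp(-t / tau) + c

    def spectral_reconstruction(spectra, t):
        # spectra: 2-D array (n_times, n_freqs) of noisy time-resolved spectra.
        # Fit the temporal profile at every frequency, then rebuild all spectra from the fits,
        # so that noise is removed in the temporal domain.
        recon = np.empty_like(spectra)
        for j in range(spectra.shape[1]):
            p0 = (spectra[0, j], t[-1] / 3.0, spectra[-1, j])
            popt, _ = curve_fit(decay, t, spectra[:, j], p0=p0, maxfev=10000)
            recon[:, j] = decay(t, *popt)
        return recon

    # Synthetic check: decaying spectra plus noise
    t = np.linspace(0.0, 10.0, 50)
    true = np.outer(np.exp(-t / 3.0), np.linspace(1.0, 0.2, 40))
    noisy = true + np.random.default_rng(2).normal(0.0, 0.05, true.shape)
    cleaned = spectral_reconstruction(noisy, t)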
NMR diffusion simulation based on conditional random walk.
Gudbjartsson, H; Patz, S
1995-01-01
The authors introduce here a new, very fast, simulation method for free diffusion in a linear magnetic field gradient, which is an extension of the conventional Monte Carlo (MC) method or the convolution method described by Wong et al. (in 12th SMRM, New York, 1993, p.10). In earlier NMR-diffusion simulation methods, such as the finite difference method (FD), the Monte Carlo method, and the deterministic convolution method, the outcome of the calculations depends on the simulation time step. In the authors' method, however, the results are independent of the time step, although, in the convolution method the step size has to be adequate for spins to diffuse to adjacent grid points. By always selecting the largest possible time step the computation time can therefore be reduced. Finally the authors point out that in simple geometric configurations their simulation algorithm can be used to reduce computation time in the simulation of restricted diffusion.
Modes and emergent time scales of embayed beach dynamics
NASA Astrophysics Data System (ADS)
Ratliff, Katherine M.; Murray, A. Brad
2014-10-01
In this study, we use a simple numerical model (the Coastline Evolution Model) to explore alongshore transport-driven shoreline dynamics within generalized embayed beaches (neglecting cross-shore effects). Using principal component analysis (PCA), we identify two primary orthogonal modes of shoreline behavior that describe shoreline variation about its unchanging mean position: the rotation mode, which has been previously identified and describes changes in the mean shoreline orientation, and a newly identified breathing mode, which represents changes in shoreline curvature. Wavelet analysis of the PCA mode time series reveals characteristic time scales of these modes (typically years to decades) that emerge within even a statistically constant white-noise wave climate (without changes in external forcing), suggesting that these time scales can arise from internal system dynamics. The time scales of both modes increase linearly with shoreface depth, suggesting that the embayed beach sediment transport dynamics exhibit a diffusive scaling.
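The mode decomposition described above can be illustrated with a generic SVD-based PCA of a shoreline position matrix; this is a sketch of the analysis step only, not the Coastline Evolution Model, and the interpretation of the leading modes as rotation and breathing is the paper's, not guaranteed by the code.

    import numpy as np

    def shoreline_modes(positions):
        # positions: 2-D array (n_times, n_alongshore) of cross-shore shoreline position.
        # Returns spatial patterns (EOFs), their amplitude time series, and variance fractions.
        anomalies = positions - positions.mean(axis=0)        # variation about the mean shoreline
        u, s, vt = np.linalg.svd(anomalies, full_matrices=False)
        eofs = vt                     # rows are orthogonal spatial modes
        amplitudes = u * s            # columns are the corresponding mode time series
        variance_fraction = s ** 2 / np.sum(s ** 2)
        return eofs, amplitudes, variance_fraction

    # With embayed-beach output, the first two EOFs would then be inspected (e.g., with a
    # wavelet transform of their amplitude time series) for rotation- and breathing-like behavior.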
Linear prediction data extrapolation superresolution radar imaging
NASA Astrophysics Data System (ADS)
Zhu, Zhaoda; Ye, Zhenru; Wu, Xiaoqing
1993-05-01
Range resolution and cross-range resolution of range-doppler imaging radars are related to the effective bandwidth of the transmitted signal and the angle through which the object rotates relative to the radar line of sight (RLOS) during the coherent processing time, respectively. In this paper, the linear prediction data extrapolation discrete Fourier transform (LPDEDFT) superresolution imaging method is investigated for the purpose of surpassing the limitation imposed by conventional FFT range-doppler processing and improving the resolution capability of range-doppler imaging radar. The LPDEDFT superresolution imaging method, which is conceptually simple, consists of extrapolating observed data beyond the observation windows by means of linear prediction, and then performing the conventional IDFT of the extrapolated data. Live data from a metalized scale-model B-52 aircraft mounted on a rotating platform in a microwave anechoic chamber and from a flying Boeing-727 aircraft were processed. It is concluded that, compared to the conventional Fourier method, LPDEDFT can achieve either higher resolution for the same effective bandwidth of transmitted signals and total rotation angle of the object, or equal-quality images from a smaller bandwidth and total angle.
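A hedged sketch of the generic linear-prediction-plus-FFT idea is shown below; the model order, extrapolation direction, and windowing choices of the actual LPDEDFT processing are not reproduced, and the toy signal is invented.

    import numpy as np

    def lp_extrapolate(x, order, n_extra):
        # Fit a forward linear predictor x[n] ~ sum_k a[k] * x[n-k] by least squares
        # and use it to extrapolate n_extra samples beyond the observed window.
        n = len(x)
        rows = np.array([x[i:i + order][::-1] for i in range(n - order)])  # past windows
        targets = x[order:]                                                # next samples
        a, *_ = np.linalg.lstsq(rows, targets, rcond=None)
        extended = list(x)
        for _ in range(n_extra):
            extended.append(float(np.dot(a, extended[-1:-order - 1:-1])))
        return np.asarray(extended)

    # Toy example: two closely spaced tones are easier to separate after extrapolation + FFT
    n = 64
    t = np.arange(n)
    sig = np.cos(2 * np.pi * 0.20 * t) + np.cos(2 * np.pi * 0.22 * t)
    spectrum_short = np.abs(np.fft.rfft(sig, 512))                                  # zero-padded original window
    spectrum_lp = np.abs(np.fft.rfft(lp_extrapolate(sig, order=16, n_extra=192), 512))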
Optical recognition of statistical patterns
NASA Astrophysics Data System (ADS)
Lee, S. H.
1981-12-01
Optical implementation of the Fukunaga-Koontz transform (FKT) and the Least-Squares Linear Mapping Technique (LSLMT) is described. The FKT is a linear transformation which performs image feature extraction for a two-class image classification problem. The LSLMT performs a transform from large dimensional feature space to small dimensional decision space for separating multiple image classes by maximizing the interclass differences while minimizing the intraclass variations. The FKT and the LSLMT were optically implemented by utilizing a coded phase optical processor. The transform was used for classifying birds and fish. After the F-K basis functions were calculated, those most useful for classification were incorporated into a computer generated hologram. The output of the optical processor, consisting of the squared magnitude of the F-K coefficients, was detected by a T.V. camera, digitized, and fed into a micro-computer for classification. A simple linear classifier based on only two F-K coefficients was able to separate the images into two classes, indicating that the F-K transform had chosen good features. Two advantages of optically implementing the FKT and LSLMT are parallel and real time processing.
Optical recognition of statistical patterns
NASA Technical Reports Server (NTRS)
Lee, S. H.
1981-01-01
Optical implementation of the Fukunaga-Koontz transform (FKT) and the Least-Squares Linear Mapping Technique (LSLMT) is described. The FKT is a linear transformation which performs image feature extraction for a two-class image classification problem. The LSLMT performs a transform from large dimensional feature space to small dimensional decision space for separating multiple image classes by maximizing the interclass differences while minimizing the intraclass variations. The FKT and the LSLMT were optically implemented by utilizing a coded phase optical processor. The transform was used for classifying birds and fish. After the F-K basis functions were calculated, those most useful for classification were incorporated into a computer generated hologram. The output of the optical processor, consisting of the squared magnitude of the F-K coefficients, was detected by a T.V. camera, digitized, and fed into a micro-computer for classification. A simple linear classifier based on only two F-K coefficients was able to separate the images into two classes, indicating that the F-K transform had chosen good features. Two advantages of optically implementing the FKT and LSLMT are parallel and real time processing.
An unsteady lifting surface method for single rotation propellers
NASA Technical Reports Server (NTRS)
Williams, Marc H.
1990-01-01
The mathematical formulation of a lifting surface method for evaluating the steady and unsteady loads induced on single rotation propellers by blade vibration and inflow distortion is described. The scheme is based on 3-D linearized compressible aerodynamics and presumes that all disturbances are simple harmonic in time. This approximation leads to a direct linear integral relation between the normal velocity on the blade (which is determined from the blade geometry and motion) and the distribution of pressure difference across the blade. This linear relation is discretized by breaking the blade up into subareas (panels) on which the pressure difference is treated as approximately constant, and constraining the normal velocity at one (control) point on each panel. The piece-wise constant loads can then be determined by Gaussian elimination. The resulting blade loads can be used in performance, stability and forced response predictions for the rotor. Mathematical and numerical aspects of the method are examined. A selection of results obtained from the method is presented. The appendices include various details of the derivation that were felt to be secondary to the main development in Section 1.
NASA Astrophysics Data System (ADS)
Lin, Zhi; Wang, Yi; Xu, Bin; Xu, Huiying; Cai, Zhiping
2016-01-01
We report on a diode-end-pumped simultaneous multiple wavelength Nd:YVO4 laser. Dual-wavelength lasing is achieved at a π-polarized 1064 nm emission line and a σ-polarized 1066 nm emission line with a total maximum output power of 1.38 W. Moreover, tri-wavelength laser emission at the π-polarized 1064 nm emission line and the σ-polarized 1062 and 1066 nm emission lines can also be obtained with a total maximum output power of about 1.23 W, for the first time to our knowledge. The operation of such simultaneous dual- and tri-wavelength lasers is realized solely by employing a simple glass etalon to modulate the intracavity losses of these potential lasing wavelengths instead of an intracavity polarizer, which therefore makes a very compact two-mirror linear cavity and simultaneous orthogonal lasing possible. Such orthogonal linearly polarized multi-wavelength laser sources could be especially promising in THz wave generation and in efficient nonlinear frequency conversion to visible lasers.
Pebdani, Arezou Amiri; Shabani, Ali Mohammad Haji; Dadfarnia, Shayessteh; Khodadoust, Saeid
2015-08-05
A simple solid phase microextraction method based on a molecularly imprinted polymer sorbent in a hollow fiber (MIP-HF-SPME) combined with a fiber-optic linear array spectrophotometer has been applied for the extraction and determination of diclofenac in environmental and biological samples. The effects of different parameters such as pH, extraction time, type and volume of the organic solvent, stirring rate and donor phase volume on the extraction efficiency of diclofenac were investigated and optimized. Under the optimal conditions, the calibration graph was linear (r² = 0.998) in the range of 3.0-85.0 μg L⁻¹ with a detection limit of 0.7 μg L⁻¹ for preconcentration of 25.0 mL of sample, and the relative standard deviation (n=6) was less than 5%. This method was applied successfully for the extraction and determination of diclofenac in different matrices (water, urine and plasma), and accuracy was examined through recovery experiments. Copyright © 2015 Elsevier B.V. All rights reserved.
Linear thermal circulator based on Coriolis forces.
Li, Huanan; Kottos, Tsampikos
2015-02-01
We show that the presence of a Coriolis force in a rotating linear lattice imposes a nonreciprocal propagation of the phononic heat carriers. Using this effect we propose the concept of Coriolis linear thermal circulator which can control the circulation of a heat current. A simple model of three coupled harmonic masses on a rotating platform permits us to demonstrate giant circulating rectification effects for moderate values of the angular velocities of the platform.
Hewitt, Angela L; Popa, Laurentiu S; Pasalar, Siavash; Hendrix, Claudia M; Ebner, Timothy J
2011-11-01
Encoding of movement kinematics in Purkinje cell simple spike discharge has important implications for hypotheses of cerebellar cortical function. Several outstanding questions remain regarding representation of these kinematic signals. It is uncertain whether kinematic encoding occurs in unpredictable, feedback-dependent tasks or kinematic signals are conserved across tasks. Additionally, there is a need to understand the signals encoded in the instantaneous discharge of single cells without averaging across trials or time. To address these questions, this study recorded Purkinje cell firing in monkeys trained to perform a manual random tracking task in addition to circular tracking and center-out reach. Random tracking provides for extensive coverage of kinematic workspaces. Direction and speed errors are significantly greater during random than circular tracking. Cross-correlation analyses comparing hand and target velocity profiles show that hand velocity lags target velocity during random tracking. Correlations between simple spike firing from 120 Purkinje cells and hand position, velocity, and speed were evaluated with linear regression models including a time constant, τ, as a measure of the firing lead/lag relative to the kinematic parameters. Across the population, velocity accounts for the majority of simple spike firing variability (63 ± 30% of R²adj), followed by position (28 ± 24% of R²adj) and speed (11 ± 19% of R²adj). Simple spike firing often leads hand kinematics. Comparison of regression models based on averaged vs. nonaveraged firing and kinematics reveals lower R²adj values for nonaveraged data; however, regression coefficients and τ values are highly similar. Finally, for most cells, model coefficients generated from random tracking accurately estimate simple spike firing in either circular tracking or center-out reach. These findings imply that the cerebellum controls movement kinematics, consistent with a forward internal model that predicts upcoming limb kinematics.
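A simplified sketch of the lead/lag regression idea (scan candidate lags, regress firing on the shifted kinematic signal, keep the lag with the largest R²) is given below; the study's multi-parameter models, binning, and units are not reproduced, and the toy signals are invented.

    import numpy as np

    def best_lag_r2(firing, kinematic, lags):
        # Regress firing on the kinematic signal shifted by each candidate lag (in samples)
        # and return the lag with the largest R^2. Positive lag means firing leads the kinematics.
        r2 = []
        for lag in lags:
            if lag > 0:
                y, x = firing[:-lag], kinematic[lag:]
            elif lag < 0:
                y, x = firing[-lag:], kinematic[:lag]
            else:
                y, x = firing, kinematic
            A = np.column_stack([np.ones_like(x), x])
            coef, *_ = np.linalg.lstsq(A, y, rcond=None)
            resid = y - A @ coef
            r2.append(1.0 - resid.var() / y.var())
        r2 = np.asarray(r2)
        return int(lags[np.argmax(r2)]), r2

    # Toy check: a firing signal constructed to lead a sinusoidal "velocity" by 5 samples
    rng = np.random.default_rng(3)
    vel = np.sin(np.linspace(0.0, 20.0, 400))
    fire = np.roll(vel, -5) + 0.1 * rng.normal(size=400)
    best, _ = best_lag_r2(fire, vel, np.arange(-20, 21))
    print("best lag (samples):", best)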
Application of modern control theory to the design of optimum aircraft controllers
NASA Technical Reports Server (NTRS)
Power, L. J.
1973-01-01
The synthesis procedure presented is based on the solution of the output regulator problem of linear optimal control theory for time-invariant systems. By this technique, solution of the matrix Riccati equation leads to a constant linear feedback control law for an output regulator which will maintain a plant in a particular equilibrium condition in the presence of impulse disturbances. Two simple algorithms are presented that can be used in an automatic synthesis procedure for the design of maneuverable output regulators requiring only selected state variables for feedback. The first algorithm is for the construction of optimal feedforward control laws that can be superimposed upon a Kalman output regulator and that will drive the output of a plant to a desired constant value on command. The second algorithm is for the construction of optimal Luenberger observers that can be used to obtain feedback control laws for the output regulator requiring measurement of only part of the state vector. This algorithm constructs observers which have minimum response time under the constraint that the magnitude of the gains in the observer filter be less than some arbitrary limit.
Malachowski, George C; Clegg, Robert M; Redford, Glen I
2007-12-01
A novel approach is introduced for modelling linear dynamic systems composed of exponentials and harmonics. The method improves the speed of current numerical techniques up to 1000-fold for problems that have solutions of multiple exponentials plus harmonics and decaying components. Such signals are common in fluorescence microscopy experiments. Selective constraints of the parameters being fitted are allowed. This method, using discrete Chebyshev transforms, will correctly fit large volumes of data using a noniterative, single-pass routine that is fast enough to analyse images in real time. The method is applied to fluorescence lifetime imaging data in the frequency domain with varying degrees of photobleaching over the time of total data acquisition. The accuracy of the Chebyshev method is compared to a simple rapid discrete Fourier transform (equivalent to least-squares fitting) that does not take the photobleaching into account. The method can be extended to other linear systems composed of different functions. Simulations are performed and applications are described showing the utility of the method, in particular in the area of fluorescence microscopy.
A Linear Kernel for Co-Path/Cycle Packing
NASA Astrophysics Data System (ADS)
Chen, Zhi-Zhong; Fellows, Michael; Fu, Bin; Jiang, Haitao; Liu, Yang; Wang, Lusheng; Zhu, Binhai
Bounded-Degree Vertex Deletion is a fundamental problem in graph theory that has new applications in computational biology. In this paper, we address a special case of Bounded-Degree Vertex Deletion, the Co-Path/Cycle Packing problem, which asks to delete as few vertices as possible such that the graph of the remaining (residual) vertices is composed of disjoint paths and simple cycles. The problem falls into the well-known class of 'node-deletion problems with hereditary properties', is hence NP-complete and unlikely to admit a polynomial time approximation algorithm with approximation factor smaller than 2. In the framework of parameterized complexity, we present a kernelization algorithm that produces a kernel with at most 37k vertices, improving on the super-linear kernel of Fellows et al.'s general theorem for Bounded-Degree Vertex Deletion. Using this kernel, and the method of bounded search trees, we devise an FPT algorithm that runs in time O*(3.24^k). On the negative side, we show that the problem is APX-hard and unlikely to have a kernel smaller than 2k by a reduction from Vertex Cover.
An evolutive real-time source inversion based on a linear inverse formulation
NASA Astrophysics Data System (ADS)
Sanchez Reyes, H. S.; Tago, J.; Cruz-Atienza, V. M.; Metivier, L.; Contreras Zazueta, M. A.; Virieux, J.
2016-12-01
Finite source inversion is a stepping stone to unveiling earthquake rupture. It is used in ground motion prediction, and its results shed light on the seismic cycle for a better tectonic understanding. It is not yet used for quasi-real-time analysis. Nowadays, significant progress has been made on approaches to earthquake imaging, thanks to new data acquisition and methodological advances. However, most of these techniques are posterior procedures applied once seismograms are available. Incorporating source parameter estimation into early warning systems would require updating the source build-up while data are still being recorded. In order to move toward this dynamic estimation, we developed a kinematic source inversion formulated in the time domain, for which seismograms are linearly related to the slip distribution on the fault through convolutions with Green's functions previously estimated and stored (Perton et al., 2016). These convolutions are performed in the time domain as we progressively increase the time window of records at each station specifically. The selected unknowns are the spatio-temporal slip-rate distribution, which keeps the forward problem linear with respect to the unknowns, as promoted by Fan and Shearer (2014). Through the spatial extension of the expected rupture zone, we progressively build up the slip rate when adding new data by assuming rupture causality. This formulation is based on the adjoint-state method for efficiency (Plessix, 2006). The inverse problem is non-unique and, in most cases, underdetermined. While standard regularization terms are used for stabilizing the inversion, we avoid strategies based on parameter reduction, which would lead to an unwanted non-linear relationship between parameters and seismograms for our progressive build-up. Rise time, rupture velocity and other quantities can be extracted later as attributes from the slip-rate inversion we perform. Satisfactory results are obtained on a synthetic example (Figure 1) proposed by the Source Inversion Validation project (Mai et al. 2011). A real case application is currently being explored. Our specific formulation, combined with simple prior information, as well as the numerical results obtained so far, yields interesting perspectives for a real-time implementation.
Modeling of aircraft unsteady aerodynamic characteristics. Part 1: Postulated models
NASA Technical Reports Server (NTRS)
Klein, Vladislav; Noderer, Keith D.
1994-01-01
A short theoretical study of aircraft aerodynamic model equations with unsteady effects is presented. The aerodynamic forces and moments are expressed in terms of indicial functions or internal state variables. The first representation leads to aircraft integro-differential equations of motion; the second preserves the state-space form of the model equations. The formulation of unsteady aerodynamics is applied in two examples. The first example deals with a one-degree-of-freedom harmonic motion about one of the aircraft body axes. In the second example, the equations for longitudinal short-period motion are developed. In these examples, only linear aerodynamic terms are considered. The indicial functions are postulated as simple exponentials and the internal state variables are governed by linear, time-invariant, first-order differential equations. It is shown that both approaches to the modeling of unsteady aerodynamics lead to identical models.
Generic approach to access barriers in dehydrogenation reactions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yu, Liang; Vilella, Laia; Abild-Pedersen, Frank
The introduction of linear energy correlations, which explicitly relate adsorption energies of reaction intermediates and activation energies in heterogeneous catalysis, has proven to be a key component in the computational search for new and promising catalysts. A simple linear approach to estimate activation energies still requires a significant computational effort. To simplify this process and at the same time incorporate the need for enhanced complexity of reaction intermediates, we generalize a recently proposed approach that evaluates transition state energies based entirely on bond-order conservation arguments. Here, we show that similar variation of the local electronic structure along the reaction coordinate introduces a set of general functions that accurately defines the transition state energy and are transferable to other reactions with similar bonding nature. With such an approach, more complex reaction intermediates can be targeted with an insignificant increase in computational effort and without loss of accuracy.
Ding, Aidong Adam; Hsieh, Jin-Jian; Wang, Weijing
2015-01-01
Bivariate survival analysis has wide applications. In the presence of covariates, most literature focuses on studying their effects on the marginal distributions. However covariates can also affect the association between the two variables. In this article we consider the latter issue by proposing a nonstandard local linear estimator for the concordance probability as a function of covariates. Under the Clayton copula, the conditional concordance probability has a simple one-to-one correspondence with the copula parameter for different data structures including those subject to independent or dependent censoring and dependent truncation. The proposed method can be used to study how covariates affect the Clayton association parameter without specifying marginal regression models. Asymptotic properties of the proposed estimators are derived and their finite-sample performances are examined via simulations. Finally, for illustration, we apply the proposed method to analyze a bone marrow transplant data set.
Nanoscale swimmers: hydrodynamic interactions and propulsion of molecular machines
NASA Astrophysics Data System (ADS)
Sakaue, T.; Kapral, R.; Mikhailov, A. S.
2010-06-01
Molecular machines execute nearly regular cyclic conformational changes as a result of ligand binding and product release. This cyclic conformational dynamics is generally non-reciprocal, so that under time reversal a different sequence of machine conformations is visited. Since such changes occur in a solvent, coupling to solvent hydrodynamic modes will generally result in self-propulsion of the molecular machine. These effects are investigated for a class of coarse-grained models of protein machines consisting of a set of beads interacting through pair-wise additive potentials. Hydrodynamic effects are incorporated through a configuration-dependent mobility tensor, and expressions for the propulsion linear and angular velocities, as well as the stall force, are obtained. In the limit where conformational changes are small so that linear response theory is applicable, it is shown that propulsion is exponentially small; thus, propulsion is a nonlinear phenomenon. The results are illustrated by computations on a simple model molecular machine.
CHARACTERISTICS OF THERMOLUMINESCENCE LiF:Mg,Cu,Ag NANOPHOSPHOR.
Yahyaabadi, A; Torkzadeh, F; Rezaei-Ochbelagh, D
2018-04-23
A nanophosphor of LiF:Mg,Cu,Ag was prepared by planetary ball milling for the first time in the laboratory. The size and shape of the nanophosphor were confirmed by XRD and SEM, which showed that it was cubic in shape and ~53 nm in size. The thermoluminescence (TL) characteristics of this nanophosphor were then investigated. It was found that the optimum annealing condition was 250°C for 10 min. The TL sensitivity of the prepared nanopowder was less than that of its micropowder counterpart and the TL glow curve structure exhibited several peaks. The LiF:Mg,Cu,Ag nanophosphor exhibited a linear response over a range of doses from 1 Gy to ~10 kGy. From this study, it appears that LiF:Mg,Cu,Ag nanophosphor is a good candidate for dosimetry because of its linearity over a range of doses, low tendency to fade, good repeatability and simple glow curve structure.
Generic approach to access barriers in dehydrogenation reactions
Yu, Liang; Vilella, Laia; Abild-Pedersen, Frank
2018-03-08
The introduction of linear energy correlations, which explicitly relate adsorption energies of reaction intermediates and activation energies in heterogeneous catalysis, has proven to be a key component in the computational search for new and promising catalysts. A simple linear approach to estimate activation energies still requires a significant computational effort. To simplify this process and at the same time incorporate the need for enhanced complexity of reaction intermediates, we generalize a recently proposed approach that evaluates transition state energies based entirely on bond-order conservation arguments. Here, we show that similar variation of the local electronic structure along the reaction coordinate introduces a set of general functions that accurately defines the transition state energy and are transferable to other reactions with similar bonding nature. With such an approach, more complex reaction intermediates can be targeted with an insignificant increase in computational effort and without loss of accuracy.
Quasi-one-dimensional arrangement of silver nanoparticles templated by cellulose microfibrils.
Wu, Min; Kuga, Shigenori; Huang, Yong
2008-09-16
We demonstrate a simple, facile approach to the deposition of silver nanoparticles on the surface of cellulose microfibrils with a quasi-one-dimensional arrangement. The process involves the generation of aldehyde groups by oxidizing the surface of the cellulose microfibrils and then the assembly of silver nanoparticles on the surface by means of the silver mirror reaction. The linear nature of the microfibrils and the relatively uniform surface chemical modification result in a uniform linear distribution of silver particles along the microfibrils. The effects of various reaction parameters, such as the reaction time for the reduction process and the starting materials employed, have been investigated by transmission electron microscopy (TEM) and ultraviolet-visible spectroscopy. Additionally, the products were examined for their electric current-voltage characteristics; the results show that these materials had an electric conductivity of approximately 5 S/cm, which differs from both the oxidized cellulose and bulk silver materials by many orders of magnitude.
Blocked Force and Loading Calculations for LaRC THUNDER Actuators
NASA Technical Reports Server (NTRS)
Campbell, Joel F.
2007-01-01
An analytic approach is developed to predict the performance of LaRC Thunder actuators under load and under blocked conditions. The problem is treated with the von Karman non-linear analysis combined with a simple Rayleigh-Ritz calculation. From this, the shape and displacement under load combined with voltage are calculated. A method is found to calculate the blocked force vs voltage and the spring force vs distance. It is found that under certain conditions, the blocked force and displacement are almost linear with voltage. It is also found that the spring force is multivalued and has at least one bifurcation point. This bifurcation point is where the device collapses under load and locks to a different bending solution. This occurs at a particular critical load. It is shown that this other bending solution has a reduced amplitude and is proportional to the original amplitude times the square of the aspect ratio.
Pole-placement Predictive Functional Control for under-damped systems with real numbers algebra.
Zabet, K; Rossiter, J A; Haber, R; Abdullah, M
2017-11-01
This paper presents the new algorithm of PP-PFC (Pole-placement Predictive Functional Control) for stable, linear under-damped higher-order processes. It is shown that while conventional PFC aims to get first-order exponential behavior, this is not always straightforward with significant under-damped modes and hence a pole-placement PFC algorithm is proposed which can be tuned more precisely to achieve the desired dynamics, but exploits complex number algebra and linear combinations in order to deliver guarantees of stability and performance. Nevertheless, practical implementation is easier by avoiding complex number algebra and hence a modified formulation of the PP-PFC algorithm is also presented which utilises just real numbers while retaining the key attributes of simple algebra, coding and tuning. The potential advantages are demonstrated with numerical examples and real-time control of a laboratory plant. Copyright © 2017 ISA. All rights reserved.
Identifying fMRI Model Violations with Lagrange Multiplier Tests
Cassidy, Ben; Long, Christopher J; Rae, Caroline; Solo, Victor
2013-01-01
The standard modeling framework in Functional Magnetic Resonance Imaging (fMRI) is predicated on assumptions of linearity, time invariance and stationarity. These assumptions are rarely checked because doing so requires specialised software, although failure to do so can lead to bias and mistaken inference. Identifying model violations is an essential but largely neglected step in standard fMRI data analysis. Using Lagrange Multiplier testing methods we have developed simple and efficient procedures for detecting model violations such as non-linearity, non-stationarity and validity of the common Double Gamma specification for hemodynamic response. These procedures are computationally cheap and can easily be added to a conventional analysis. The test statistic is calculated at each voxel and displayed as a spatial anomaly map which shows regions where a model is violated. The methodology is illustrated with a large number of real data examples. PMID:22542665
Popa, Laurentiu S.; Hewitt, Angela L.; Ebner, Timothy J.
2012-01-01
The cerebellum has been implicated in processing motor errors required for online control of movement and motor learning. The dominant view is that Purkinje cell complex spike discharge signals motor errors. This study investigated whether errors are encoded in the simple spike discharge of Purkinje cells in monkeys trained to manually track a pseudo-randomly moving target. Four task error signals were evaluated based on cursor movement relative to target movement. Linear regression analyses based on firing residuals ensured that the modulation with a specific error parameter was independent of the other error parameters and kinematics. The results demonstrate that simple spike firing in lobules IV–VI is significantly correlated with position, distance and directional errors. Independent of the error signals, the same Purkinje cells encode kinematics. The strongest error modulation occurs at feedback timing. However, in 72% of cells, at least one of the R² temporal profiles resulting from regressing firing against individual errors exhibits two peak R² values. For these bimodal profiles, the first peak is at a negative τ (lead) and the second peak at a positive τ (lag), implying that Purkinje cells encode both prediction and feedback about an error. For the majority of the bimodal profiles, the signs of the regression coefficients or preferred directions reverse at the times of the peaks. The sign reversal results in opposing simple spike modulation for the predictive and feedback components. Dual error representations may provide the signals needed to generate sensory prediction errors used to update a forward internal model. PMID:23115173
"Math in a Can": Teaching Mathematics and Engineering Design
ERIC Educational Resources Information Center
Narode, Ronald B.
2011-01-01
Using an apparently simple problem, "Design a cylindrical can that will hold a liter of milk," this paper demonstrates how engineering design may facilitate the teaching of the following ideas to secondary students: linear and non-linear relationships; basic geometry of circles, rectangles, and cylinders; unit measures of area and volume;…
The Multifaceted Variable Approach: Selection of Method in Solving Simple Linear Equations
ERIC Educational Resources Information Center
Tahir, Salma; Cavanagh, Michael
2010-01-01
This paper presents a comparison of the solution strategies used by two groups of Year 8 students as they solved linear equations. The experimental group studied algebra following a multifaceted variable approach, while the comparison group used a traditional approach. Students in the experimental group employed different solution strategies,…
A Simple and Convenient Method of Multiple Linear Regression to Calculate Iodine Molecular Constants
ERIC Educational Resources Information Center
Cooper, Paul D.
2010-01-01
A new procedure using a student-friendly least-squares multiple linear-regression technique utilizing a function within Microsoft Excel is described that enables students to calculate molecular constants from the vibronic spectrum of iodine. This method is advantageous pedagogically as it calculates molecular constants for ground and excited…
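The article itself uses a spreadsheet function; a hedged Python equivalent of a least-squares multiple linear regression is sketched below, assuming the common textbook form in which vibronic band energies are regressed on (v'+1/2) and (v'+1/2)² to recover ωe' and ωe'xe'. The constants and band positions are synthetic, and the article's exact model may differ.

    import numpy as np

    # Synthetic band positions generated from assumed constants (cm^-1), for illustration only
    vprime = np.arange(20, 41).astype(float)
    we, wexe, offset = 125.0, 0.75, 15600.0
    energy = offset + we * (vprime + 0.5) - wexe * (vprime + 0.5) ** 2

    # Multiple linear regression of band energy on (v'+1/2) and (v'+1/2)^2
    X = np.column_stack([np.ones_like(vprime), vprime + 0.5, (vprime + 0.5) ** 2])
    coef, *_ = np.linalg.lstsq(X, energy, rcond=None)
    print("omega_e' ~", round(coef[1], 3), "  omega_e'x_e' ~", round(-coef[2], 3))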
Fitting program for linear regressions according to Mahon (1996)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Trappitsch, Reto G.
2018-01-09
This program takes the user's input data and fits a linear regression to it using the prescription presented by Mahon (1996). Compared to the commonly used York fit, this method has the correct prescription for measurement error propagation. This software should facilitate the proper fitting of measurements with a simple interface.
Testing hypotheses for differences between linear regression lines
Stanley J. Zarnoch
2009-01-01
Five hypotheses are identified for testing differences between simple linear regression lines. The distinctions between these hypotheses are based on a priori assumptions and illustrated with full and reduced models. The contrast approach is presented as an easy and complete method for testing for overall differences between the regressions and for making pairwise...
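One of the standard full-versus-reduced-model comparisons (a single common line against separate intercepts and slopes) can be sketched as below; this is an illustration of the general F-test idea, not the five specific hypotheses of the article.

    import numpy as np
    from scipy import stats

    def compare_lines(x1, y1, x2, y2):
        # F-test of a common regression line (reduced model) against separate
        # intercepts and slopes for the two groups (full model).
        def sse(X, y):
            coef, *_ = np.linalg.lstsq(X, y, rcond=None)
            r = y - X @ coef
            return float(r @ r), X.shape[1]

        x = np.concatenate([x1, x2])
        y = np.concatenate([y1, y2])
        g = np.concatenate([np.zeros(len(x1)), np.ones(len(x2))])   # group indicator

        sse_red, p_red = sse(np.column_stack([np.ones(len(x)), x]), y)
        sse_full, p_full = sse(np.column_stack([np.ones(len(x)), x, g, g * x]), y)

        df1, df2 = p_full - p_red, len(y) - p_full
        F = ((sse_red - sse_full) / df1) / (sse_full / df2)
        return F, stats.f.sf(F, df1, df2)

    # Usage: F, p = compare_lines(x_a, y_a, x_b, y_b); a small p suggests the lines differ.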
Revisiting the Scale-Invariant, Two-Dimensional Linear Regression Method
ERIC Educational Resources Information Center
Patzer, A. Beate C.; Bauer, Hans; Chang, Christian; Bolte, Jan; Sülzle, Detlev
2018-01-01
The scale-invariant way to analyze two-dimensional experimental and theoretical data with statistical errors in both the independent and dependent variables is revisited by using what we call the triangular linear regression method. This is compared to the standard least-squares fit approach by applying it to typical simple sets of example data…
NASA Astrophysics Data System (ADS)
Motes, Keith R.; Olson, Jonathan P.; Rabeaux, Evan J.; Dowling, Jonathan P.; Olson, S. Jay; Rohde, Peter P.
2015-05-01
Quantum number-path entanglement is a resource for supersensitive quantum metrology and in particular provides for sub-shot-noise or even Heisenberg-limited sensitivity. However, such number-path entanglement has been thought to be resource intensive to create in the first place—typically requiring either very strong nonlinearities, or nondeterministic preparation schemes with feedforward, which are difficult to implement. Very recently, arising from the study of quantum random walks with multiphoton walkers, as well as the study of the computational complexity of passive linear optical interferometers fed with single-photon inputs, it has been shown that such passive linear optical devices generate a superexponentially large amount of number-path entanglement. A logical question to ask is whether this entanglement may be exploited for quantum metrology. We answer that question here in the affirmative by showing that a simple, passive, linear-optical interferometer—fed with only uncorrelated, single-photon inputs, coupled with simple, single-mode, disjoint photodetection—is capable of significantly beating the shot-noise limit. Our result implies a pathway forward to practical quantum metrology with readily available technology.
Motes, Keith R; Olson, Jonathan P; Rabeaux, Evan J; Dowling, Jonathan P; Olson, S Jay; Rohde, Peter P
2015-05-01
Quantum number-path entanglement is a resource for supersensitive quantum metrology and in particular provides for sub-shot-noise or even Heisenberg-limited sensitivity. However, such number-path entanglement has been thought to be resource intensive to create in the first place--typically requiring either very strong nonlinearities, or nondeterministic preparation schemes with feedforward, which are difficult to implement. Very recently, arising from the study of quantum random walks with multiphoton walkers, as well as the study of the computational complexity of passive linear optical interferometers fed with single-photon inputs, it has been shown that such passive linear optical devices generate a superexponentially large amount of number-path entanglement. A logical question to ask is whether this entanglement may be exploited for quantum metrology. We answer that question here in the affirmative by showing that a simple, passive, linear-optical interferometer--fed with only uncorrelated, single-photon inputs, coupled with simple, single-mode, disjoint photodetection--is capable of significantly beating the shot-noise limit. Our result implies a pathway forward to practical quantum metrology with readily available technology.
Hassan, A K
2015-01-01
In this work, O/W emulsion sets were prepared using different concentrations of two nonionic surfactants. The two surfactants, Tween 80 (HLB=15.0) and Span 80 (HLB=4.3), were used in a fixed proportion of 0.55:0.45, respectively. The HLB value of the surfactant blend was fixed at 10.185. The surfactant blend concentration ranged from 3% up to 19%. For each O/W emulsion set, the conductivity was measured at room temperature (25±2°), 40, 50, 60, 70 and 80°. Applying simple linear least-squares regression analysis to the temperature-conductivity data determines the effective surfactant blend concentration required for preparing the most stable O/W emulsion. These results were confirmed by physical stability centrifugation testing and by measurements of the phase inversion temperature range. The results indicated that the relation representing the most stable O/W emulsion has the strongest direct linear relationship between temperature and conductivity. This relationship is linear up to 80°. This work proves that the most stable O/W emulsion is determined by finding the maximum R² value when the simple linear least-squares regression method is applied to the temperature-conductivity data up to 80°; in addition, the true maximum slope is given by the equation with the maximum R² value. Because the conditions would change in a more complex formulation, the method for determining the effective surfactant blend concentration was verified by applying it to a more complex formulation, a 2% O/W miconazole nitrate cream, and the results indicate its reproducibility.
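A hedged sketch of the selection rule described above (fit conductivity against temperature for each blend concentration and keep the concentration with the largest R²) follows; the data values are invented placeholders.

    import numpy as np

    def r_squared(x, y):
        # Simple linear least-squares fit; returns R^2 and the slope
        slope, intercept = np.polyfit(x, y, 1)
        resid = y - (slope * x + intercept)
        return 1.0 - resid.var() / y.var(), slope

    # Temperature (deg C) and measured conductivity per blend concentration; values are made up
    temps = np.array([25.0, 40.0, 50.0, 60.0, 70.0, 80.0])
    conductivity = {
        "3%":  np.array([110, 150, 175, 190, 230, 250], dtype=float),
        "11%": np.array([120, 168, 205, 240, 278, 315], dtype=float),
        "19%": np.array([130, 160, 215, 225, 290, 310], dtype=float),
    }

    scores = {c: r_squared(temps, k) for c, k in conductivity.items()}
    best = max(scores, key=lambda c: scores[c][0])
    print("most stable blend concentration by max R^2:", best, scores[best])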
Thomas, J.N.; Masci, F; Love, Jeffrey J.
2015-01-01
Several recently published reports have suggested that semi-stationary linear-cloud formations might be causally precursory to earthquakes. We examine the report of Guangmeng and Jie (2013), who claim to have predicted the 2012 M 6.0 earthquake in the Po Valley of northern Italy after seeing a satellite photograph (a digital image) showing a linear-cloud formation over the eastern Apennine Mountains of central Italy. From inspection of 4 years of satellite images we find numerous examples of linear-cloud formations over Italy. A simple test shows no obvious statistical relationship between the occurrence of these cloud formations and earthquakes that occurred in and around Italy. All of the linear-cloud formations we have identified in satellite images, including that which Guangmeng and Jie (2013) claim to have used to predict the 2012 earthquake, appear to be orographic – formed by the interaction of moisture-laden wind flowing over mountains. Guangmeng and Jie (2013) have not clearly stated how linear-cloud formations can be used to predict the size, location, and time of an earthquake, and they have not published an account of all of their predictions (including any unsuccessful predictions). We are skeptical of the validity of the claim by Guangmeng and Jie (2013) that they have managed to predict any earthquakes.
Exponents of non-linear clustering in scale-free one-dimensional cosmological simulations
NASA Astrophysics Data System (ADS)
Benhaiem, David; Joyce, Michael; Sicard, François
2013-03-01
One-dimensional versions of dissipationless cosmological N-body simulations have been shown to share many qualitative behaviours of the three-dimensional problem. Their interest lies in the fact that they can resolve a much greater range of time and length scales, and admit exact numerical integration. We use such models here to study how non-linear clustering depends on initial conditions and cosmology. More specifically, we consider a family of models which, like the three-dimensional Einstein-de Sitter (EdS) model, lead for power-law initial conditions to self-similar clustering characterized in the strongly non-linear regime by power-law behaviour of the two-point correlation function. We study how the corresponding exponent γ depends on the initial conditions, characterized by the exponent n of the power spectrum of initial fluctuations, and on a single parameter κ controlling the rate of expansion. The space of initial conditions/cosmology divides very clearly into two parts: (1) a region in which γ depends strongly on both n and κ and where it agrees very well with a simple generalization of the so-called stable clustering hypothesis in three dimensions; and (2) a region in which γ is more or less independent of both the spectrum and the expansion of the universe. The boundary in (n, κ) space dividing the `stable clustering' region from the `universal' region is very well approximated by a `critical' value of the predicted stable clustering exponent itself. We explain how this division of the (n, κ) space can be understood as a simple physical criterion which might indeed be expected to control the validity of the stable clustering hypothesis. We compare and contrast our findings to results in three dimensions, and discuss in particular the light they may throw on the question of `universality' of non-linear clustering in this context.
Can we detect a nonlinear response to temperature in European plant phenology?
Jochner, Susanne; Sparks, Tim H; Laube, Julia; Menzel, Annette
2016-10-01
Over a large temperature range, the statistical association between spring phenology and temperature is often regarded and treated as a linear function. There are suggestions that a sigmoidal relationship with definite upper and lower limits to leaf unfolding and flowering onset dates might be more realistic. We utilised European plant phenological records provided by the European phenology database PEP725 and gridded monthly mean temperature data for 1951-2012 calculated from the ENSEMBLES data set E-OBS (version 7.0). We analysed 568,456 observations of ten spring flowering or leafing phenophases derived from 3657 stations in 22 European countries in order to detect possible nonlinear responses to temperature. Linear response rates averaged for all stations ranged between -7.7 (flowering of hazel) and -2.7 days °C⁻¹ (leaf unfolding of beech and oak). A lower sensitivity at the cooler end of the temperature range was detected for most phenophases. However, a similar lower sensitivity at the warmer end was not that evident. For only ∼14 % of the station time series (where a comparison between linear and nonlinear model was possible), nonlinear models described the relationship significantly better than linear models. Although in most cases simple linear models might be still sufficient to predict future changes, this linear relationship between phenology and temperature might not be appropriate when incorporating phenological data of very cold (and possibly very warm) environments. For these cases, extrapolations on the basis of linear models would introduce uncertainty in expected ecosystem changes.
Quality control methods for linear accelerator radiation and mechanical axes alignment.
Létourneau, Daniel; Keller, Harald; Becker, Nathan; Amin, Md Nurul; Norrlinger, Bernhard; Jaffray, David A
2018-06-01
The delivery accuracy of highly conformal dose distributions generated using intensity modulation and collimator, gantry, and couch degrees of freedom is directly affected by the quality of the alignment between the radiation beam and the mechanical axes of a linear accelerator. For this purpose, quality control (QC) guidelines recommend a tolerance of ±1 mm for the coincidence of the radiation and mechanical isocenters. Traditional QC methods for assessment of radiation and mechanical axes alignment (based on pointer alignment) are time consuming and complex tasks that provide limited accuracy. In this work, an automated test suite based on an analytical model of the linear accelerator motions was developed to streamline the QC of radiation and mechanical axes alignment. The proposed method used the automated analysis of megavoltage images of two simple task-specific phantoms acquired at different linear accelerator settings to determine the coincidence of the radiation and mechanical isocenters. The sensitivity and accuracy of the test suite were validated by introducing actual misalignments on a linear accelerator between the radiation axis and the mechanical axes using both beam steering and mechanical adjustments of the gantry and couch. The validation demonstrated that the new QC method can detect sub-millimeter misalignment between the radiation axis and the three mechanical axes of rotation. A displacement of the radiation source of 0.2 mm using beam steering parameters was easily detectable with the proposed collimator rotation axis test. Mechanical misalignments of the gantry and couch rotation axes of the same magnitude (0.2 mm) were also detectable using the new gantry and couch rotation axis tests. For the couch rotation axis, the phantom and test design allow detection of both translational and tilt misalignments with the radiation beam axis. For the collimator rotation axis, the test can isolate the misalignment between the beam radiation axis and the mechanical collimator rotation axis from the impact of field size asymmetry. The test suite can be performed in a reasonable time (30-35 min) due to simple phantom setup, prescription-based beam delivery, and automated image analysis. As well, it provides a clear description of the relationship between axes. After testing the sensitivity of the test suite to beam steering and mechanical errors, the results of the test suite were used to reduce the misalignment errors of the linac to less than 0.7-mm radius for all axes. The proposed test suite offers sub-millimeter assessment of the coincidence of the radiation and mechanical isocenters and the test automation reduces complexity with improved efficiency. The test suite results can be used to optimize the linear accelerator's radiation to mechanical isocenter alignment by beam steering and mechanical adjustment of gantry and couch. © 2018 American Association of Physicists in Medicine.
Statistical characterization of the nonlinear noise in 2.8 Tbit/s PDM-16QAM CO-OFDM system.
Wang, Zhe; Qiao, Yaojun; Xu, Yanfei; Ji, Yuefeng
2013-07-29
We show for the first time, through comprehensive simulations of both uncompensated transmission (UT) and dispersion-managed transmission (DMT) systems, that the statistical distribution of the nonlinear interference (NLI) within the polarization multiplexed 16-state quadrature amplitude modulation (PM-16QAM) Coherent Optical OFDM (CO-OFDM) system deviates from a Gaussian distribution in the absence of amplified spontaneous emission (ASE) noise. We also observe that the variance of the NLI noise appears to depend in a simple linear way on both the launch power and the logarithm of the transmission distance.
1.34 µm picosecond self-mode-locked Nd:GdVO4 watt-level laser
NASA Astrophysics Data System (ADS)
Han, Ming; Peng, Jiying; Li, Zuohan; Cao, Qiuyuan; Yuan, Ruixia
2017-01-01
With a simple linear configuration, a diode-pumped, self-mode-locked Nd:GdVO4 laser at 1.34 µm is experimentally demonstrated for the first time. Based on the aberrationless theory of self-focusing and the thermal lensing effect, and by designing and optimizing the resonator, a pulse width as short as 9.1 ps is generated at a repetition rate of 2.0 GHz with an average output power of 2.51 W. The optical conversion efficiency and the slope efficiency for the stable mode-locked operation are approximately 16.7% and 19.2%, respectively.
The adaptive observer. [Liapunov synthesis, single-input single-output, and reduced observers]
NASA Technical Reports Server (NTRS)
Carroll, R. L.
1973-01-01
An adaptive means is described for determining the state of a linear time-invariant differential system having unknown parameters, providing simple generation of the state from available measurements for use in systems where the criteria defining acceptable state behavior mandate a control that depends on unavailable measurements. A single-input single-output adaptive observer and a reduced adaptive observer are developed. The basic ideas behind both the adaptive observer and the nonadaptive observer are examined. The Liapunov synthesis technique is surveyed and then applied to the adaptive algorithm for the adaptive observer.
Terahertz spectral detection of potassium sorbate in milk powder
NASA Astrophysics Data System (ADS)
Li, Pengpeng; Zhang, Yuan; Ge, Hongyi
2017-02-01
The spectral characteristics of potassium sorbate in milk powder in the range 0.2-2.0 THz have been measured with THz time-domain spectroscopy (THz-TDS). Its absorption and refraction spectra were obtained at room temperature in a nitrogen atmosphere. The results showed that potassium sorbate has an obvious characteristic absorption peak at 0.98 THz. A simple linear regression (SLR) model was used to analyze the content of potassium sorbate in milk powder. The results showed that the absorption coefficient increases as the potassium sorbate content of the mixture increases. The research is important for food quality and safety testing.
Choo, Richard; Klotz, Laurence; Deboer, Gerrit; Danjoux, Cyril; Morton, Gerard C
2004-08-01
To assess the prostate specific antigen (PSA) doubling time of untreated, clinically localized, low-to-intermediate grade prostate carcinoma. A prospective single-arm cohort study has been in progress since November 1995 to assess the feasibility of a watchful-observation protocol with selective delayed intervention for clinically localized, low-to-intermediate grade prostate adenocarcinoma. The PSA doubling time was estimated from a linear regression of ln(PSA) against time, assuming a simple exponential growth model. As of March 2003, 231 patients had at least 6 months of follow-up (median 45) and at least three PSA measurements (median 8, range 3-21). The distribution of the doubling time was: < 2 years, 26 patients; 2-5 years, 65; 5-10 years, 42; 10-20 years, 26; 20-50 years, 16; >50 years, 56. The median doubling time was 7.0 years; 42% of men had a doubling time of >10 years. The doubling time of untreated clinically localized, low-to-intermediate grade prostate cancer varies widely.
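The doubling-time estimate described above reduces to an ordinary least-squares fit of ln(PSA) against time under the exponential growth model, with the doubling time given by ln(2) divided by the fitted slope. A minimal sketch in Python, using made-up follow-up values rather than study data:

import numpy as np

# Hypothetical follow-up for one patient: years since enrolment and PSA (ng/mL).
t = np.array([0.0, 0.5, 1.2, 2.0, 3.1, 4.0])
psa = np.array([4.1, 4.4, 4.9, 5.3, 6.2, 6.8])

# Fit ln(PSA) = a + b*t, i.e. assume simple exponential growth PSA(t) = PSA0 * exp(b*t).
b, a = np.polyfit(t, np.log(psa), 1)

# Doubling time Td satisfies exp(b*Td) = 2.
doubling_time = np.log(2) / b
print(f"Estimated PSA doubling time: {doubling_time:.1f} years")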
Advanced statistics: linear regression, part II: multiple linear regression.
Marill, Keith A
2004-01-01
The applications of simple linear regression in medical research are limited, because in most situations, there are multiple relevant predictor variables. Univariate statistical techniques such as simple linear regression use a single predictor variable, and they often may be mathematically correct but clinically misleading. Multiple linear regression is a mathematical technique used to model the relationship between multiple independent predictor variables and a single dependent outcome variable. It is used in medical research to model observational data, as well as in diagnostic and therapeutic studies in which the outcome is dependent on more than one factor. Although the technique generally is limited to data that can be expressed with a linear function, it benefits from a well-developed mathematical framework that yields unique solutions and exact confidence intervals for regression coefficients. Building on Part I of this series, this article acquaints the reader with some of the important concepts in multiple regression analysis. These include multicollinearity, interaction effects, and an expansion of the discussion of inference testing, leverage, and variable transformations to multivariate models. Examples from the first article in this series are expanded on using a primarily graphic, rather than mathematical, approach. The importance of the relationships among the predictor variables and the dependence of the multivariate model coefficients on the choice of these variables are stressed. Finally, concepts in regression model building are discussed.
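As a concrete companion to the discussion above, a minimal sketch of fitting a two-predictor multiple linear regression by ordinary least squares; the variable names and data are synthetic placeholders, not the article's clinical examples:

import numpy as np

rng = np.random.default_rng(0)
n = 200
age = rng.uniform(20, 80, n)    # first predictor
dose = rng.uniform(0, 10, n)    # second predictor
outcome = 2.0 + 0.05 * age + 1.5 * dose + rng.normal(0, 1, n)

# Design matrix with an intercept column; least squares gives the coefficient estimates.
X = np.column_stack([np.ones(n), age, dose])
coef, *_ = np.linalg.lstsq(X, outcome, rcond=None)
print("intercept, b_age, b_dose:", np.round(coef, 3))

Confidence intervals, multicollinearity diagnostics and the other topics discussed in the article would be layered on top of this basic fit.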
Inference on periodicity of circadian time series.
Costa, Maria J; Finkenstädt, Bärbel; Roche, Véronique; Lévi, Francis; Gould, Peter D; Foreman, Julia; Halliday, Karen; Hall, Anthony; Rand, David A
2013-09-01
Estimation of the period length of time-course data from cyclical biological processes, such as those driven by the circadian pacemaker, is crucial for inferring the properties of the biological clock found in many living organisms. We propose a methodology for period estimation based on spectrum resampling (SR) techniques. Simulation studies show that SR is superior and more robust to non-sinusoidal and noisy cycles than a currently used routine based on Fourier approximations. In addition, a simple fit to the oscillations using linear least squares is available, together with a non-parametric test for detecting changes in period length which allows for period estimates with different variances, as frequently encountered in practice. The proposed methods are motivated by and applied to various data examples from chronobiology.
Enhancements of Bayesian Blocks; Application to Large Light Curve Databases
NASA Technical Reports Server (NTRS)
Scargle, Jeff
2015-01-01
Bayesian Blocks are optimal piecewise linear representations (step function fits) of light-curves. The simple algorithm implementing this idea, using dynamic programming, has been extended to include more data modes and fitness metrics, multivariate analysis, and data on the circle (Studies in Astronomical Time Series Analysis. VI. Bayesian Block Representations, Scargle, Norris, Jackson and Chiang 2013, ApJ, 764, 167), as well as new results on background subtraction and refinement of the procedure for precise timing of transient events in sparse data. Example demonstrations will include exploratory analysis of the Kepler light curve archive in a search for "star-tickling" signals from extraterrestrial civilizations. (The Cepheid Galactic Internet, Learned, Kudritzki, Pakvasa, and Zee, 2008, arXiv: 0809.0339; Walkowicz et al., in progress).
Chicken barn climate and hazardous volatile compounds control using simple linear regression and PID
NASA Astrophysics Data System (ADS)
Abdullah, A. H.; Bakar, M. A. A.; Shukor, S. A. A.; Saad, F. S. A.; Kamis, M. S.; Mustafa, M. H.; Khalid, N. S.
2016-07-01
The hazardous volatile compounds from chicken manure in a chicken barn are potentially a health threat to the farm animals and workers. Ammonia (NH3) and hydrogen sulphide (H2S) produced in the chicken barn are influenced by climate changes. An electronic nose (e-nose) is used to sample the barn's air, temperature and humidity data. Simple linear regression is used to identify the correlations between temperature and humidity, humidity and ammonia, and ammonia and hydrogen sulphide. MATLAB Simulink software was used to analyse the sampled data with a PID controller. Results show that a PID controller tuned with the Ziegler-Nichols technique can improve the system's ability to control the climate in the chicken barn.
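A minimal sketch of the two ingredients named above - a simple linear regression between two monitored variables and a discrete PID loop - using made-up barn readings; the gains, set-point and humidity-ammonia values are illustrative assumptions, not values from the study:

import numpy as np

# Simple linear regression relating ammonia (ppm) to humidity (%) from sampled e-nose data (made-up).
humidity = np.array([60, 65, 70, 75, 80, 85])
ammonia = np.array([8.0, 9.1, 10.3, 11.6, 12.4, 13.9])
slope, intercept = np.polyfit(humidity, ammonia, 1)
print(f"ammonia ~ {slope:.2f} * humidity + {intercept:.2f}")

# Discrete PID step driving ventilation toward an ammonia set-point (gains are illustrative).
def pid_step(error, state, kp=2.0, ki=0.1, kd=0.5, dt=1.0):
    integral, prev_error = state
    integral += error * dt
    derivative = (error - prev_error) / dt
    return kp * error + ki * integral + kd * derivative, (integral, error)

setpoint, state = 10.0, (0.0, 0.0)
for reading in [13.2, 12.5, 11.4, 10.6, 10.1]:
    fan_cmd, state = pid_step(reading - setpoint, state)
    print(f"ammonia={reading:.1f} ppm -> fan command {fan_cmd:+.2f}")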
A simple white noise analysis of neuronal light responses.
Chichilnisky, E J
2001-05-01
A white noise technique is presented for estimating the response properties of spiking visual system neurons. The technique is simple, robust, efficient and well suited to simultaneous recordings from multiple neurons. It provides a complete and easily interpretable model of light responses even for neurons that display a common form of response nonlinearity that precludes classical linear systems analysis. A theoretical justification of the technique is presented that relies only on elementary linear algebra and statistics. Implementation is described with examples. The technique and the underlying model of neural responses are validated using recordings from retinal ganglion cells, and in principle are applicable to other neurons. Advantages and disadvantages of the technique relative to classical approaches are discussed.
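The abstract does not spell out the estimator, but the white-noise analysis it describes is commonly implemented as a spike-triggered average of the stimulus; a minimal sketch under that assumption, with a synthetic stimulus and synthetic spikes:

import numpy as np

rng = np.random.default_rng(1)
T, lag = 20000, 30                       # number of time bins and filter length in bins
stimulus = rng.normal(0, 1, T)           # white-noise stimulus

# Synthetic neuron: linear filter followed by a rectifying, Poisson-like spike generator.
true_filter = np.exp(-np.arange(lag) / 5.0) * np.sin(np.arange(lag) / 3.0)
drive = np.convolve(stimulus, true_filter, mode="full")[:T]
spikes = rng.poisson(0.1 * np.clip(drive, 0, None))

# Spike-triggered average: mean stimulus segment preceding each spike, weighted by spike count.
sta = np.zeros(lag)
for t in range(lag, T):
    if spikes[t]:
        sta += spikes[t] * stimulus[t - lag + 1 : t + 1]
sta /= spikes[lag:].sum()

Up to a scale factor set by the spiking nonlinearity, the recovered sta should resemble true_filter reversed in time - an estimate of the neuron's linear receptive field.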
ERIC Educational Resources Information Center
Argoti, A.; Fan, L. T.; Cruz, J.; Chou, S. T.
2008-01-01
The stochastic simulation of chemical reactions, specifically, a simple reversible chemical reaction obeying the first-order, i.e., linear, rate law, has been presented by Martinez-Urreaga and his collaborators in this journal. The current contribution is intended to complement and augment their work in two aspects. First, the simple reversible…
Exploring Measurement Error with Cookies: A Real and Virtual Approach via Interactive Excel
ERIC Educational Resources Information Center
Sinex, Scott A; Gage, Barbara A.; Beck, Peggy J.
2007-01-01
A simple, guided-inquiry investigation using stacked sandwich cookies is employed to develop a simple linear mathematical model and to explore measurement error by incorporating errors as part of the investigation. Both random and systematic errors are presented. The model and errors are then investigated further by engaging with an interactive…
An Anharmonic Solution to the Equation of Motion for the Simple Pendulum
ERIC Educational Resources Information Center
Johannessen, Kim
2011-01-01
An anharmonic solution to the differential equation describing the oscillations of a simple pendulum at large angles is discussed. The solution is expressed in terms of functions not involving the Jacobi elliptic functions. In the derivation, a sinusoidal expression, including a linear and a Fourier sine series in the argument, has been applied.…
The Double-Well Potential in Quantum Mechanics: A Simple, Numerically Exact Formulation
ERIC Educational Resources Information Center
Jelic, V.; Marsiglio, F.
2012-01-01
The double-well potential is arguably one of the most important potentials in quantum mechanics, because the solution contains the notion of a state as a linear superposition of "classical" states, a concept which has become very important in quantum information theory. It is therefore desirable to have solutions to simple double-well potentials…
A Simple and Effective Protein Folding Activity Suitable for Large Lectures
ERIC Educational Resources Information Center
White, Brian
2006-01-01
This article describes a simple and inexpensive hands-on simulation of protein folding suitable for use in large lecture classes. This activity uses a minimum of parts, tools, and skill to simulate some of the fundamental principles of protein folding. The major concepts targeted are that proteins begin as linear polypeptides and fold to…
Development of a rapid, simple assay of plasma total carotenoids
2012-01-01
Background Plasma total carotenoids can be used as an indicator of risk of chronic disease. Laboratory analysis of individual carotenoids by high performance liquid chromatography (HPLC) is time consuming, expensive, and not amenable to use beyond a research laboratory. The aim of this research is to establish a rapid, simple, and inexpensive spectrophotometric assay of plasma total carotenoids that has a very strong correlation with HPLC carotenoid profile analysis. Results Plasma total carotenoids from 29 volunteers ranged in concentration from 1.2 to 7.4 μM, as analyzed by HPLC. A linear correlation was found between the absorbance at 448 nm of an alcohol / heptane extract of the plasma and plasma total carotenoids analyzed by HPLC, with a Pearson correlation coefficient of 0.989. The average coefficient of variation for the spectrophotometric assay was 6.5% for the plasma samples. The limit of detection was about 0.3 μM and was linear up to about 34 μM without dilution. Correlations between the integrals of the absorption spectra in the range of carotenoid absorption and total plasma carotenoid concentration gave similar results to the absorbance correlation. Spectrophotometric assay results also agreed with the calculated expected absorbance based on published extinction coefficients for the individual carotenoids, with a Pearson correlation coefficient of 0.988. Conclusion The spectrophotometric assay of total carotenoids strongly correlated with HPLC analysis of carotenoids of the same plasma samples and expected absorbance values based on extinction coefficients. This rapid, simple, inexpensive assay, when coupled with the carotenoid health index, may be useful for nutrition intervention studies, population cohort studies, and public health interventions. PMID:23006902
Transport and time lag of chlorofluorocarbon gases in the unsaturated zone, Rabis Creek, Denmark
Engesgaard, Peter; Højberg, Anker L.; Hinsby, Klaus; Jensen, Karsten H.; Laier, Troels; Larsen, Flemming; Busenberg, Eurybiades; Plummer, Niel
2004-01-01
Transport of chlorofluorocarbon (CFC) gases through the unsaturated zone to the water table is affected by gas diffusion, air–water exchange (solubility), sorption to the soil matrix, advective–dispersive transport in the water phase, and, in some cases, anaerobic degradation. In deep unsaturated zones, this may lead to a time lag between entry of gases at the land surface and recharge to groundwater. Data from a Danish field site were used to investigate how time lag is affected by variations in water content and to explore the use of simple analytical solutions to calculate time lag. Numerical simulations demonstrate that either degradation or sorption of CFC-11 takes place, whereas CFC-12 and CFC-113 are nonreactive. Water flow did not appreciably affect transport. An analytical solution for the period with a linear increase in atmospheric CFC concentrations (approximately early 1970s to early 1990s) was used to calculate CFC profiles and time lags. We compared the analytical results with numerical simulations. The time lags in the 15-m-deep unsaturated zone increase from 4.2 to between 5.2 and 6.1 yr and from 3.4 to 3.9 yr for CFC-11 and CFC-12, respectively, when simulations change from use of an exponential to a linear increase in atmospheric concentrations. The CFC concentrations at the water table before the early 1990s can be estimated by displacing the atmospheric input function by these fixed time lags. A sensitivity study demonstrates conditions under which a time lag in the unsaturated zone becomes important. The most critical parameter is the tortuosity coefficient. The analytical approach is valid for the low range of tortuosity coefficients (τ = 0.1–0.4) and unsaturated zones greater than approximately 20 m in thickness. In these cases the CFC distribution may still be from either the exponential or linear phase. In other cases, the use of numerical models, as described in our work and elsewhere, is an option.
NASA Astrophysics Data System (ADS)
Albert, Carlo; Ulzega, Simone; Stoop, Ruedi
2016-04-01
Measured time-series of both precipitation and runoff are known to exhibit highly non-trivial statistical properties. For making reliable probabilistic predictions in hydrology, it is therefore desirable to have stochastic models with output distributions that share these properties. When parameters of such models have to be inferred from data, we also need to quantify the associated parametric uncertainty. For non-trivial stochastic models, however, this latter step is typically very demanding, both conceptually and numerically, and almost never done in hydrology. Here, we demonstrate that methods developed in statistical physics make a large class of stochastic differential equation (SDE) models amenable to a full-fledged Bayesian parameter inference. For concreteness we demonstrate these methods by means of a simple yet non-trivial toy SDE model. We consider a natural catchment that can be described by a linear reservoir, at the scale of observation. All the neglected processes are assumed to happen at much shorter time-scales and are therefore modeled with a Gaussian white noise term, the standard deviation of which is assumed to scale linearly with the system state (water volume in the catchment). Even for constant input, the outputs of this simple non-linear SDE model show a wealth of desirable statistical properties, such as fat-tailed distributions and long-range correlations. Standard algorithms for Bayesian inference fail for models of this kind because their likelihood functions are extremely high-dimensional, intractable integrals over all possible model realizations. The use of Kalman filters is illegitimate due to the non-linearity of the model. Particle filters could be used but become increasingly inefficient with a growing number of data points. Hamiltonian Monte Carlo algorithms allow us to translate this inference problem into the problem of simulating the dynamics of a statistical mechanics system, giving us access to the most sophisticated methods developed in the statistical physics community over the last few decades. We demonstrate that such methods, along with automatic differentiation algorithms, allow us to perform a full-fledged Bayesian inference, for a large class of SDE models, in a highly efficient and largely automated manner. Furthermore, our algorithm is highly parallelizable. For our toy model, discretized with a few hundred points, a full Bayesian inference can be performed in a matter of seconds on a standard PC.
Ghandeharioun, H; Rezaeitalab, F; Lotfi, R
2016-01-01
This study carefully evaluates the association of different respiration-related events with each other and with simple nocturnal features in obstructive sleep apnea-hypopnea syndrome (OSAS). The events include apneas, hypopneas, respiratory event-related arousals (RERAs) and snores. We conducted a statistical study on 158 adults who underwent polysomnography between July 2012 and May 2014. To monitor relevance, along with linear statistical strategies such as analysis of variance and bootstrapping of the correlation coefficient standard error, the non-linear method of mutual information is also applied to illuminate vague results of the linear techniques. Based on normalized mutual information weights (NMIW), indices of apnea are 1.3 times more relevant to AHI values than those of hypopnea. NMIW for the number of blood oxygen desaturations below 95% is considerable (0.531). The next most relevant feature is the respiratory arousal index, with an NMIW of 0.501. Snore indices (0.314) and BMI (0.203) take the next places. Based on NMIW values, snoring events are nearly one-third (29.9%) more dependent on hypopneas than on RERAs. 1. The more severe the OSAS is, the more frequently the apneic events happen. 2. An association of snore with hypopnea/RERA is revealed, which is routinely ignored in regression-based OSAS modeling. 3. The statistical dependencies of oximetry features can potentially lead to home-based screening of OSAS. 4. The poor ESS-AHI relevance in the database under study indicates its inadequacy for OSA diagnosis compared to oximetry. 5. Based on the poor RERA-snore/ESS relevance, a detailed history of the symptoms plus polysomnography is suggested for accurate diagnosis of RERAs. Copyright © 2015 Sociedade Portuguesa de Pneumologia. Published by Elsevier España, S.L.U. All rights reserved.
Chaos and Forecasting - Proceedings of the Royal Society Discussion Meeting
NASA Astrophysics Data System (ADS)
Tong, Howell
1995-04-01
The Table of Contents for the full book PDF is as follows: * Preface * Orthogonal Projection, Embedding Dimension and Sample Size in Chaotic Time Series from a Statistical Perspective * A Theory of Correlation Dimension for Stationary Time Series * On Prediction and Chaos in Stochastic Systems * Locally Optimized Prediction of Nonlinear Systems: Stochastic and Deterministic * A Poisson Distribution for the BDS Test Statistic for Independence in a Time Series * Chaos and Nonlinear Forecastability in Economics and Finance * Paradigm Change in Prediction * Predicting Nonuniform Chaotic Attractors in an Enzyme Reaction * Chaos in Geophysical Fluids * Chaotic Modulation of the Solar Cycle * Fractal Nature in Earthquake Phenomena and its Simple Models * Singular Vectors and the Predictability of Weather and Climate * Prediction as a Criterion for Classifying Natural Time Series * Measuring and Characterising Spatial Patterns, Dynamics and Chaos in Spatially-Extended Dynamical Systems and Ecologies * Non-Linear Forecasting and Chaos in Ecology and Epidemiology: Measles as a Case Study
Simulation of linear mechanical systems
NASA Technical Reports Server (NTRS)
Sirlin, S. W.
1993-01-01
A dynamics and controls analyst is typically presented with a structural dynamics model and must perform various input/output tests and design control laws. The required time/frequency simulations need to be done many times as models change and control designs evolve. This paper examines some simple ways that open and closed loop frequency and time domain simulations can be done using the special structure of the system equations usually available. Routines were developed to run under Pro-Matlab in a mixture of the Pro-Matlab interpreter and FORTRAN (using the .mex facility). These routines are often orders of magnitude faster than trying the typical 'brute force' approach of using built-in Pro-Matlab routines such as bode. This makes the analyst's job easier since not only does an individual run take less time, but much larger models can be attacked, often allowing the whole model reduction step to be eliminated.
Relating Time-Dependent Acceleration and Height Using an Elevator
NASA Astrophysics Data System (ADS)
Kinser, Jason M.
2015-04-01
A simple experiment in relating a time-dependent linear acceleration function to height is explored through the use of a smartphone and an elevator. Given acceleration as a function of time, a(t), the velocity and position functions are determined through integration as in v(t) = ∫ a(t) dt (1) and x(t) = ∫ v(t) dt (2). Mobile devices such as smartphones or tablets have accelerometers that capture slowly evolving acceleration with respect to time and can deliver those measurements as a CSV file. A recent example measured the oscillations of the elevator as it starts its motion. In the application presented here the mobile device is used to estimate the height of the elevator ride. By estimating the functional form of the acceleration of an elevator ride, it is possible to estimate the height of the ride through Eqs. (1) and (2).
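A minimal sketch of the double integration in Eqs. (1) and (2), applied to an accelerometer trace exported as CSV; the file name and column layout are assumptions about the export format, not details from the article:

import numpy as np

# Load time (s) and vertical acceleration (m/s^2) from the phone's CSV export
# (column order is assumed here; real apps differ).
data = np.loadtxt("elevator.csv", delimiter=",", skiprows=1)
t, a = data[:, 0], data[:, 1]

# Remove the gravity/sensor offset estimated while the elevator is still at rest.
a = a - a[:50].mean()

# v(t) = integral of a dt and x(t) = integral of v dt, approximated by cumulative trapezoids.
dt = np.diff(t)
v = np.concatenate([[0.0], np.cumsum(0.5 * (a[1:] + a[:-1]) * dt)])
x = np.concatenate([[0.0], np.cumsum(0.5 * (v[1:] + v[:-1]) * dt)])
print(f"Estimated height of the ride: {x[-1]:.2f} m")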
Zhang, T; Yang, M; Xiao, X; Feng, Z; Li, C; Zhou, Z; Ren, Q; Li, X
2014-03-01
Many infectious diseases exhibit repetitive or regular behaviour over time. Time-domain approaches, such as the seasonal autoregressive integrated moving average model, are often utilized to examine the cyclical behaviour of such diseases. The limitations of time-domain approaches include over-differencing and over-fitting; furthermore, the use of these approaches is inappropriate when the assumption of linearity may not hold. In this study, we implemented a simple and efficient procedure based on the fast Fourier transformation (FFT) approach to evaluate the epidemic dynamics of scarlet fever incidence (2004-2010) in China. This method demonstrated good internal and external validities and overcame some shortcomings of time-domain approaches. The procedure also elucidated the cycling behaviour in terms of environmental factors. We concluded that, under appropriate circumstances of data structure, spectral analysis based on the FFT approach may be applicable for the study of oscillating diseases.
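A minimal sketch of the FFT step: locate the dominant cycle in a monthly incidence series from the peak of its power spectrum; the synthetic counts below merely stand in for the scarlet fever data:

import numpy as np

rng = np.random.default_rng(2)
months = np.arange(84)                               # 7 years of monthly counts, as in 2004-2010
incidence = 100 + 40 * np.sin(2 * np.pi * months / 12) + rng.normal(0, 10, 84)

# Power spectrum of the mean-removed series; the peak frequency gives the dominant period.
power = np.abs(np.fft.rfft(incidence - incidence.mean())) ** 2
freqs = np.fft.rfftfreq(len(incidence), d=1.0)       # cycles per month
peak = freqs[np.argmax(power[1:]) + 1]               # skip the zero-frequency bin
print(f"Dominant period: {1 / peak:.1f} months")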
NASA Technical Reports Server (NTRS)
Knox, C. E.; Cannon, D. G.
1980-01-01
A simple flight management descent algorithm designed to improve the accuracy of delivering an airplane in a fuel-conservative manner to a metering fix at a time designated by air traffic control was developed and flight tested. This algorithm provides a three dimensional path with terminal area time constraints (four dimensional) for an airplane to make an idle thrust, clean configured (landing gear up, flaps zero, and speed brakes retracted) descent to arrive at the metering fix at a predetermined time, altitude, and airspeed. The descent path was calculated for a constant Mach/airspeed schedule from linear approximations of airplane performance with considerations given for gross weight, wind, and nonstandard pressure and temperature effects. The flight management descent algorithm is described. The results of the flight tests flown with the Terminal Configured Vehicle airplane are presented.
Critical Fluctuations in Cortical Models Near Instability
Aburn, Matthew J.; Holmes, C. A.; Roberts, James A.; Boonstra, Tjeerd W.; Breakspear, Michael
2012-01-01
Computational studies often proceed from the premise that cortical dynamics operate in a linearly stable domain, where fluctuations dissipate quickly and show only short memory. Studies of human electroencephalography (EEG), however, have shown significant autocorrelation at time lags on the scale of minutes, indicating the need to consider regimes where non-linearities influence the dynamics. Statistical properties such as increased autocorrelation length, increased variance, power law scaling, and bistable switching have been suggested as generic indicators of the approach to bifurcation in non-linear dynamical systems. We study temporal fluctuations in a widely-employed computational model (the Jansen–Rit model) of cortical activity, examining the statistical signatures that accompany bifurcations. Approaching supercritical Hopf bifurcations through tuning of the background excitatory input, we find a dramatic increase in the autocorrelation length that depends sensitively on the direction in phase space of the input fluctuations and hence on which neuronal subpopulation is stochastically perturbed. Similar dependence on the input direction is found in the distribution of fluctuation size and duration, which show power law scaling that extends over four orders of magnitude at the Hopf bifurcation. We conjecture that the alignment in phase space between the input noise vector and the center manifold of the Hopf bifurcation is directly linked to these changes. These results are consistent with the possibility of statistical indicators of linear instability being detectable in real EEG time series. However, even in a simple cortical model, we find that these indicators may not necessarily be visible even when bifurcations are present because their expression can depend sensitively on the neuronal pathway of incoming fluctuations. PMID:22952464
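A minimal sketch of the kind of indicator discussed above: the lag-1 autocorrelation of a noisy linear relaxation process grows as the system is tuned toward instability. The AR(1) surrogate below is a generic stand-in, not the Jansen-Rit model itself:

import numpy as np

rng = np.random.default_rng(4)

def lag1_autocorr(x):
    x = x - x.mean()
    return (x[:-1] @ x[1:]) / (x @ x)

# AR(1) surrogate x[t+1] = phi * x[t] + noise; phi -> 1 mimics the approach to a bifurcation.
for phi in (0.5, 0.9, 0.99):
    x = np.zeros(10000)
    for t in range(9999):
        x[t + 1] = phi * x[t] + rng.normal()
    print(f"phi = {phi:.2f}: lag-1 autocorrelation = {lag1_autocorr(x):.3f}")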
Vossoughi, Mehrdad; Ayatollahi, S M T; Towhidi, Mina; Ketabchi, Farzaneh
2012-03-22
The summary measure approach (SMA) is sometimes the only applicable tool for the analysis of repeated measurements in medical research, especially when the number of measurements is relatively large. This study aimed to describe techniques based on summary measures for the analysis of linear trend repeated measures data and then to compare performances of SMA, linear mixed model (LMM), and unstructured multivariate approach (UMA). Practical guidelines based on the least squares regression slope and mean of response over time for each subject were provided to test time, group, and interaction effects. Through Monte Carlo simulation studies, the efficacy of SMA vs. LMM and traditional UMA, under different types of covariance structures, was illustrated. All the methods were also employed to analyze two real data examples. Based on the simulation and example results, it was found that the SMA completely dominated the traditional UMA and performed convincingly close to the best-fitting LMM in testing all the effects. However, the LMM was not often robust and led to non-sensible results when the covariance structure for errors was misspecified. The results emphasized discarding the UMA which often yielded extremely conservative inferences as to such data. It was shown that summary measure is a simple, safe and powerful approach in which the loss of efficiency compared to the best-fitting LMM was generally negligible. The SMA is recommended as the first choice to reliably analyze the linear trend data with a moderate to large number of measurements and/or small to moderate sample sizes.
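A minimal sketch of the summary-measure idea: reduce each subject's repeated measurements to a least-squares slope and then compare the slopes between groups; the data are synthetic and the two-sample t-test simply stands in for the group-effect test described above:

import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
times = np.arange(6)                        # common measurement occasions

def subject_slopes(n_subjects, true_slope):
    slopes = []
    for _ in range(n_subjects):
        y = 10 + true_slope * times + rng.normal(0, 1, len(times))
        slope, _ = np.polyfit(times, y, 1)  # per-subject summary measure
        slopes.append(slope)
    return np.array(slopes)

slopes_a = subject_slopes(20, true_slope=0.5)   # group A
slopes_b = subject_slopes(20, true_slope=1.0)   # group B

# Testing the group (and, analogously, interaction) effect reduces to comparing summary measures.
t_stat, p_value = stats.ttest_ind(slopes_a, slopes_b)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")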
NASA Astrophysics Data System (ADS)
Bhattacharjee, Sudip; Swamy, Aravind Krishna; Daniel, Jo S.
2012-08-01
This paper presents a simple and practical approach to obtain the continuous relaxation and retardation spectra of asphalt concrete directly from the complex (dynamic) modulus test data. The spectra thus obtained are continuous functions of relaxation and retardation time. The major advantage of this method is that the continuous form is directly obtained from the master curves which are readily available from the standard characterization tests of linearly viscoelastic behavior of asphalt concrete. The continuous spectrum method offers efficient alternative to the numerical computation of discrete spectra and can be easily used for modeling viscoelastic behavior. In this research, asphalt concrete specimens have been tested for linearly viscoelastic characterization. The linearly viscoelastic test data have been used to develop storage modulus and storage compliance master curves. The continuous spectra are obtained from the fitted sigmoid function of the master curves via the inverse integral transform. The continuous spectra are shown to be the limiting case of the discrete distributions. The continuous spectra and the time-domain viscoelastic functions (relaxation modulus and creep compliance) computed from the spectra matched very well with the approximate solutions. It is observed that the shape of the spectra is dependent on the master curve parameters. The continuous spectra thus obtained can easily be implemented in material mix design process. Prony-series coefficients can be easily obtained from the continuous spectra and used in numerical analysis such as finite element analysis.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gandhi, P.; Dhillon, V. S.; Durant, M.
2010-07-15
In a fast multi-wavelength timing study of black hole X-ray binaries (BHBs), we have discovered correlated optical and X-ray variability in the low/hard state of two sources: GX 339-4 and SWIFT J1753.5-0127. After XTE J1118+480, these are the only BHBs currently known to show rapid (sub-second) aperiodic optical flickering. Our simultaneous VLT/Ultracam and RXTE data reveal intriguing patterns with characteristic peaks, dips and lags down to very short timescales. Simple linear reprocessing models can be ruled out as the origin of the rapid, aperiodic optical power in both sources. A magnetic energy release model with fast interactions between the disk, jet and corona can explain the complex correlation patterns. We also show that in both the optical and X-ray light curves, the absolute source variability r.m.s. amplitude linearly increases with flux, and that the flares have a log-normal distribution. The implication is that variability at both wavelengths is not due to local fluctuations alone, but rather arises as a result of coupling of perturbations over a wide range of radii and timescales. These 'optical and X-ray rms-flux relations' thus provide new constraints to connect the outer and inner parts of the accretion flow, and the jet.
Rapid iterative reanalysis for automated design
NASA Technical Reports Server (NTRS)
Bhatia, K. G.
1973-01-01
A method for iterative reanalysis in automated structural design is presented for a finite-element analysis using the direct stiffness approach. A basic feature of the method is that the generalized stiffness and inertia matrices are expressed as functions of structural design parameters, and these generalized matrices are expanded in Taylor series about the initial design. Only the linear terms are retained in the expansions. The method is approximate because it uses static condensation, modal reduction, and the linear Taylor series expansions. The exact linear representation of the expansions of the generalized matrices is also described and a basis for the present method is established. Results of applications of the present method to the recalculation of the natural frequencies of two simple platelike structural models are presented and compared with results obtained by using a commonly applied analysis procedure used as a reference. In general, the results are in good agreement. A comparison of the computer times required for the use of the present method and the reference method indicated that the present method required substantially less time for reanalysis. Although the results presented are for relatively small-order problems, the present method will become more efficient relative to the reference method as the problem size increases. An extension of the present method to static reanalysis is described, and a basis for unifying the static and dynamic reanalysis procedures is presented.
Simple Expressions for the Design of Linear Tapers in Overmoded Corrugated Waveguides
Schaub, S. C.; Shapiro, M. A.; Temkin, R. J.
2015-08-16
In this paper, simple analytical formulae are presented for the design of linear tapers with very low mode conversion loss in overmoded corrugated waveguides. For tapers from waveguide radius a2 to a1, with a1 < a2, the optimum taper length scales with a1a2/λ, where λ is the wavelength of radiation. The fractional loss of the HE11 mode in an optimized taper is 0.0293(a2 - a1)^4/(a1^2 a2^2). These formulae are accurate when a2 ≲ 2a1. Slightly more complex formulae, accurate for a2 ≤ 4a1, are also presented in this paper. The loss in an overmoded corrugated linear taper is less than 1% when a2 ≤ 2.12a1 and less than 0.1% when a2 ≤ 1.53a1. The present analytic results have been benchmarked against a rigorous mode matching code and have been found to be very accurate. The results for linear tapers are compared with the analogous expressions for parabolic tapers. Finally, parabolic tapers may provide lower loss, but linear tapers with moderate values of a2/a1 may be attractive because of their simplicity of fabrication.
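Taking the fractional-loss expression as reconstructed above - 0.0293(a2 - a1)^4/(a1^2 a2^2), which reproduces the quoted 1% and 0.1% thresholds at a2 = 2.12a1 and a2 = 1.53a1 but should still be checked against the published paper - a minimal sketch evaluating it for candidate tapers:

# Fractional HE11 loss of an optimized linear taper, using the expression as
# reconstructed in the abstract above (verify against the published formula).
def taper_loss(a1, a2):
    return 0.0293 * (a2 - a1) ** 4 / (a1 ** 2 * a2 ** 2)

a1 = 0.0159            # input radius in metres (illustrative only)
for ratio in (1.53, 2.0, 2.12):
    a2 = ratio * a1
    print(f"a2/a1 = {ratio:.2f}: fractional loss = {taper_loss(a1, a2):.4f}")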
1985-10-01
The exponential characteristic of a p-n junction is used to provide exponential (frequency/voltage) linearization, of the kind required in several oscillators, in a simple, thermally stable, wideband circuit.
The Pendulum: A Paradigm for the Linear Oscillator
ERIC Educational Resources Information Center
Newburgh, Ronald
2004-01-01
The simple pendulum is a model for the linear oscillator. The usual mathematical treatment of the problem begins with a differential equation that one solves with the techniques of the differential calculus, a formal process that tends to obscure the physics. In this paper we begin with a kinematic description of the motion obtained by experiment…
NASA Astrophysics Data System (ADS)
Domínguez, Efraín; Angarita, Hector; Rosmann, Thomas; Mendez, Zulma; Angulo, Gustavo
2013-04-01
A viable quantitative hydrological forecasting service is a combination of technological elements, personnel and knowledge, working together to establish a stable operational cycle of forecast emission, dissemination and assimilation; hence, the process of establishing such a system usually requires significant resources and time to reach an adequate level of development and integration in order to produce forecasts with acceptable levels of performance. Here we present the results of this process for the recently implemented Operational Forecast Service for the Betania Hydropower Reservoir (SPHEB), located in the Upper Magdalena River Basin (Colombia). The current scope of the SPHEB includes forecasting of water levels and discharge for the three main streams flowing into the reservoir, for lead times between +1 and +57 hours, and +1 and +10 days. The core of the SPHEB is the Flexible, Adaptive, Simple and Transient Time forecasting approach, namely FAST-T. This comprises a set of data structures, a mathematical kernel, and distributed computing and network infrastructure designed to provide seamless real-time operational forecasts and automatic model adjustment in case of failures in data transmission or assimilation. Among FAST-T's main features are: autonomous evaluation and detection of the most relevant information for the later configuration of forecasting models; an adaptively linearized mathematical kernel, the optimal adaptive linear combination (OALC), which provides a computationally simple and efficient algorithm for real-time applications; and finally, a meta-model catalog containing prioritized forecast models for given stream conditions. The SPHEB is at present fed by the fraction of the basin's hydrological monitoring network that has telemetric capabilities via NOAA-GOES satellites (8 stations, approximately 47%), with data availability of about 90% at one-hour intervals. However, there is a dense network of 'conventional' hydro-meteorological stations - read manually once or twice per day - that, despite not being ideal in the context of a real-time system, improve model performance significantly and are therefore entered into the system by manual input. At its current configuration, the SPHEB performance objectives are fulfilled for 90% of the forecasts with lead times up to +2 days and +15 hours (using the S/σΔ predictability criterion of the Russian Hydrometeorological Center), and the average accuracy is in the range 70-99% (r2 criterion). However, longer lead times are at present not satisfactory in terms of forecast accuracy.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dubrovsky, V. G.; Topovsky, A. V.
New exact solutions, nonstationary and stationary, of the Veselov-Novikov (VN) equation in the forms of simple nonlinear and linear superpositions of an arbitrary number N of exact special solutions u^(n), n = 1, ..., N are constructed via the Zakharov and Manakov ∂-dressing method. Simple nonlinear superpositions are represented up to a constant by the sums of solutions u^(n) and calculated by ∂-dressing on a nonzero energy level of the first auxiliary linear problem, i.e., the 2D stationary Schroedinger equation. It is remarkable that in the zero energy limit simple nonlinear superpositions convert to linear ones in the form of the sums of special solutions u^(n). It is shown that the sums u = u^(k1) + ... + u^(km), 1 ≤ k1 < k2 < ... < km ≤ N of arbitrary subsets of these solutions are also exact solutions of the VN equation. The presented exact solutions include superpositions of special line solitons and also superpositions of plane-wave-type singular periodic solutions. By construction these exact solutions also represent new exact transparent potentials of the 2D stationary Schroedinger equation and can serve as model potentials for electrons in planar structures of modern electronics.
Interrelations between random walks on diagrams (graphs) with and without cycles.
Hill, T L
1988-05-01
Three topics are discussed. A discrete-state, continuous-time random walk with one or more absorption states can be studied by a presumably new method: some mean properties, including the mean time to absorption, can be found from a modified diagram (graph) in which each absorption state is replaced by a one-way cycle back to the starting state. The second problem is a random walk on a diagram (graph) with cycles. The walk terminates on completion of the first cycle. This walk can be replaced by an equivalent walk on a modified diagram with absorption. This absorption diagram can in turn be replaced by another modified diagram with one-way cycles back to the starting state, just as in the first problem. The third problem, important in biophysics, relates to a long-time continuous walk on a diagram with cycles. This diagram can be transformed (in two steps) to a modified, more-detailed diagram with one-way cycles only. Thus, the one-way cycle fluxes of the original diagram can be found from the state probabilities of the modified diagram. These probabilities can themselves be obtained by simple matrix inversion (the probabilities are determined by linear algebraic steady-state equations). Thus, a simple method is now available to find one-way cycle fluxes exactly (previously Monte Carlo simulation was required to find these fluxes, with attendant fluctuations, for diagrams of any complexity). An incidental benefit of the above procedure is that it provides a simple proof of the one-way cycle flux relation J_n± = Π_n± σ_n/σ, where n is any cycle of the original diagram.
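A minimal sketch of the linear-algebra step mentioned above: steady-state probabilities of a small continuous-time walk obtained by solving the linear balance equations directly; the 3-state rate matrix is an arbitrary illustration, not one of Hill's diagrams:

import numpy as np

# Transition-rate matrix Q for a 3-state continuous-time walk (illustrative rates).
# Q[i, j] is the rate from state i to state j; diagonal entries make each row sum to zero.
Q = np.array([[-3.0,  2.0,  1.0],
              [ 1.0, -2.0,  1.0],
              [ 2.0,  1.0, -3.0]])

# Steady state: p Q = 0 with sum(p) = 1. Replace one balance equation by the
# normalization condition and solve the resulting linear system.
A = np.vstack([Q.T[:-1], np.ones(3)])
b = np.array([0.0, 0.0, 1.0])
p = np.linalg.solve(A, b)
print("Steady-state probabilities:", np.round(p, 4))

One-way cycle fluxes then follow from these probabilities and the rates along each cycle, as described in the paper.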
Regression analysis of sparse asynchronous longitudinal data
Cao, Hongyuan; Zeng, Donglin; Fine, Jason P.
2015-01-01
Summary We consider estimation of regression models for sparse asynchronous longitudinal observations, where time-dependent responses and covariates are observed intermittently within subjects. Unlike with synchronous data, where the response and covariates are observed at the same time point, with asynchronous data, the observation times are mismatched. Simple kernel-weighted estimating equations are proposed for generalized linear models with either time invariant or time-dependent coefficients under smoothness assumptions for the covariate processes which are similar to those for synchronous data. For models with either time invariant or time-dependent coefficients, the estimators are consistent and asymptotically normal but converge at slower rates than those achieved with synchronous data. Simulation studies evidence that the methods perform well with realistic sample sizes and may be superior to a naive application of methods for synchronous data based on an ad hoc last value carried forward approach. The practical utility of the methods is illustrated on data from a study on human immunodeficiency virus. PMID:26568699
Pattern formation in individual-based systems with time-varying parameters
NASA Astrophysics Data System (ADS)
Ashcroft, Peter; Galla, Tobias
2013-12-01
We study the patterns generated in finite-time sweeps across symmetry-breaking bifurcations in individual-based models. Similar to the well-known Kibble-Zurek scenario of defect formation, large-scale patterns are generated when model parameters are varied slowly, whereas fast sweeps produce a large number of small domains. The symmetry breaking is triggered by intrinsic noise, originating from the discrete dynamics at the microlevel. Based on a linear-noise approximation, we calculate the characteristic length scale of these patterns. We demonstrate the applicability of this approach in a simple model of opinion dynamics, a model in evolutionary game theory with a time-dependent fitness structure, and a model of cell differentiation. Our theoretical estimates are confirmed in simulations. In further numerical work, we observe a similar phenomenon when the symmetry-breaking bifurcation is triggered by population growth.
Pistonesi, Marcelo F; Di Nezio, María S; Centurión, María E; Lista, Adriana G; Fragoso, Wallace D; Pontes, Márcio J C; Araújo, Mário C U; Band, Beatriz S Fernández
2010-12-15
In this study, a novel, simple, and efficient spectrofluorimetric method to determine directly and simultaneously five phenolic compounds (hydroquinone, resorcinol, phenol, m-cresol and p-cresol) in air samples is presented. For this purpose, variable selection by the successive projections algorithm (SPA) is used in order to obtain simple multiple linear regression (MLR) models based on a small subset of wavelengths. For comparison, partial least squares (PLS) regression is also employed on the full spectrum. The concentrations of the calibration matrix ranged from 0.02 to 0.2 mg L(-1) for hydroquinone, from 0.05 to 0.6 mg L(-1) for resorcinol, and from 0.05 to 0.4 mg L(-1) for phenol, m-cresol and p-cresol; incidentally, such ranges are in accordance with Argentinean environmental legislation. To verify the accuracy of the proposed method, a recovery study on real air samples from a smoking environment was carried out with satisfactory results (94-104%). The advantage of the proposed method is that it requires only spectrofluorimetric measurements of the samples and chemometric modeling for the simultaneous determination of the five phenols. With it, air is simply sampled and no sample pre-treatment is needed (i.e., separation steps and derivatization reagents are avoided), which means a great saving of time. Copyright © 2010 Elsevier B.V. All rights reserved.