Valuation of financial models with non-linear state spaces
NASA Astrophysics Data System (ADS)
Webber, Nick
2001-02-01
A common assumption in valuation models for derivative securities is that the underlying state variables take values in a linear state space. We discuss numerical implementation issues in an interest rate model with a simple non-linear state space, formulating and comparing Monte Carlo, finite difference and lattice numerical solution methods. We conclude that, at least in low dimensional spaces, non-linear interest rate models may be viable.
NASA Astrophysics Data System (ADS)
Milani, G.; Bertolesi, E.
2017-07-01
A simple quasi-analytical holonomic homogenization approach for the non-linear analysis of in-plane loaded masonry walls is presented. The elementary cell (REV) is discretized with 24 constant-stress triangular elastic elements (bricks) and non-linear interfaces (mortar). A holonomic behavior with softening is assumed for the mortar. It is shown how the mechanical problem in the unit cell is characterized by very few displacement variables and how the homogenized stress-strain behavior can be evaluated semi-analytically.
Gras, Laure-Lise; Mitton, David; Crevier-Denoix, Nathalie; Laporte, Sébastien
2012-01-01
Most recent finite element models that represent muscles are generic or subject-specific models that use complex constitutive laws. Identifying the parameters of such complex constitutive laws can be an important limitation for subject-specific approaches. The aim of this study was to assess the possibility of modelling muscle behaviour in compression with a parametric model and a simple constitutive law. A quasi-static compression test was performed on the muscles of dogs. A parametric finite element model was designed using a linear elastic constitutive law. A multivariate analysis was performed to assess the effects of geometry on muscle response. An inverse method was used to identify Young's modulus. The non-linear response of the muscles was obtained using a subject-specific geometry and a linear elastic law. Thus, a simple muscle model can be used to obtain a biofidelic biomechanical response.
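The inverse-method step described above can be illustrated with a minimal sketch: a one-dimensional linear-elastic compression model and a grid search over candidate moduli. All constants below are illustrative assumptions, not values from the study, which used a full finite element model.

```python
def model_force(E, area, L0, dL):
    """Linear elastic compression force for a uniform specimen (small-strain sketch)."""
    return E * area * dL / L0

def identify_E(dLs, F_meas, area, L0, E_grid):
    """Inverse method: pick the modulus E minimising the squared force misfit."""
    def cost(E):
        return sum((model_force(E, area, L0, d) - f) ** 2
                   for d, f in zip(dLs, F_meas))
    return min(E_grid, key=cost)

# Synthetic "measurements" from E_true = 0.12 MPa (soft-tissue scale, assumed)
area, L0, E_true = 400.0, 30.0, 0.12            # mm^2, mm, MPa
dLs = [1.0, 2.0, 3.0, 4.0]                       # imposed displacements, mm
F_meas = [model_force(E_true, area, L0, d) * 1.02 for d in dLs]  # 2% systematic error
E_grid = [0.01 * k for k in range(1, 51)]
E_hat = identify_E(dLs, F_meas, area, L0, E_grid)
```

In the actual study the forward model was a parametric finite element mesh and the optimiser more capable; the grid search here only illustrates the inverse-identification idea.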
A simple method for identifying parameter correlations in partially observed linear dynamic models.
Li, Pu; Vu, Quoc Dong
2015-12-14
Parameter estimation represents one of the most significant challenges in systems biology. This is because biological models commonly contain a large number of parameters among which there may be functional interrelationships, thus leading to the problem of non-identifiability. Although identifiability analysis has been extensively studied by analytical as well as numerical approaches, systematic methods for remedying practically non-identifiable models have rarely been investigated. We propose a simple method for identifying pairwise correlations and higher order interrelationships of parameters in partially observed linear dynamic models. This is done by deriving the output sensitivity matrix and analysing the linear dependencies of its columns. Consequently, analytical relations between the identifiability of the model parameters and the initial conditions as well as the input functions can be obtained. In the case of structural non-identifiability, identifiable combinations can be obtained by solving the resulting homogeneous linear equations. In the case of practical non-identifiability, experimental conditions (i.e. initial conditions and constant control signals) can be provided which are necessary for remedying the non-identifiability and for unique parameter estimation. It is noted that the approach does not consider noisy data. In this way, the practical non-identifiability issue, which is common in linear biological models, can be remedied. Several linear compartment models, including an insulin receptor dynamics model, are used to illustrate the application of the proposed approach. Both structural and practical identifiability of partially observed linear dynamic models can be clarified by the proposed method. The result of this method provides important information for experimental design to remedy the practical non-identifiability where applicable.
The derivation of the method is straightforward and thus the algorithm can be easily implemented into a software package.
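The core idea, detecting linear dependencies among the columns of an output sensitivity matrix via its null space, can be sketched in a few lines. The matrix below is a toy example (column 3 is the sum of columns 1 and 2), not a biological model.

```python
import numpy as np

def dependent_parameter_combinations(S, tol=1e-10):
    """Return a basis for the null space of the output sensitivity
    matrix S (rows: time points, columns: parameters).  Each null-space
    vector gives coefficients of a linear relation among the columns,
    i.e. a candidate non-identifiable parameter combination."""
    U, s, Vt = np.linalg.svd(S)
    rank = int(np.sum(s > tol * s[0]))
    return Vt[rank:].T            # columns span the null space

# Toy sensitivity matrix: column 3 = column 1 + column 2.
t = np.linspace(0.0, 1.0, 20)
S = np.column_stack([t, t ** 2, t + t ** 2])
N = dependent_parameter_combinations(S)
# One null-space direction exists, so one parameter combination is
# non-identifiable from this output.
```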
Masurel, R J; Gelineau, P; Lequeux, F; Cantournet, S; Montes, H
2017-12-27
In this paper we focus on the role of dynamical heterogeneities in the non-linear response of polymers in the glass transition domain. We start from a simple coarse-grained model that assumes a random distribution of the initial local relaxation times and that quantitatively describes the linear viscoelasticity of a polymer in the glass transition regime. We extend this model to non-linear mechanics assuming a local Eyring stress dependence of the relaxation times. Implementing the model in a finite element mechanics code, we derive the mechanical properties and the local mechanical fields at the beginning of the non-linear regime. The model predicts a narrowing of the distribution of relaxation times and the storage, as internal stress, of part of the mechanical energy transferred to the material during stretching in this temperature range. We show that the stress field is not spatially correlated under and after loading and follows a Gaussian distribution. In addition, the strain field exhibits shear bands, but the strain distribution is narrow. Hence, most of the mechanical quantities can be calculated analytically, in a very good approximation, with the simple assumption that the strain rate is constant.
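The Eyring stress dependence of local relaxation times can be sketched as follows. All numerical values are assumed for illustration, and the assignment of higher local stress to slower (glassier) regions is a deliberate simplification of the heterogeneous stress field.

```python
import math

def eyring_tau(tau0, sigma, v_over_kT):
    """Eyring-type stress dependence: the local relaxation time tau0 is
    accelerated by a factor exp(-sigma * v / kT) under local stress sigma.
    v_over_kT lumps the activation volume over thermal energy (assumed)."""
    return tau0 * math.exp(-sigma * v_over_kT)

# Two sites from a broad initial distribution: one slow, one fast.
tau_slow, tau_fast = 1e3, 1e-1                 # seconds (illustrative)
spread_before = tau_slow / tau_fast            # four decades of heterogeneity

# Assumption for illustration: stiffer (slow) regions bear more local stress.
sigma_slow, sigma_fast = 50e6, 10e6            # Pa (assumed)
v_over_kT = 1e-7                               # 1/Pa (assumed)
spread_after = (eyring_tau(tau_slow, sigma_slow, v_over_kT)
                / eyring_tau(tau_fast, sigma_fast, v_over_kT))
# The ratio of slow to fast relaxation times shrinks under load: the
# distribution of relaxation times narrows, as the model predicts.
```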
Cosmological N-body simulations with generic hot dark matter
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brandbyge, Jacob; Hannestad, Steen, E-mail: jacobb@phys.au.dk, E-mail: sth@phys.au.dk
2017-10-01
We have calculated the non-linear effects of generic fermionic and bosonic hot dark matter components in cosmological N-body simulations. For sub-eV masses, the non-linear power spectrum suppression caused by thermal free-streaming resembles the one seen for massive neutrinos, whereas for masses larger than 1 eV, the non-linear relative suppression of power is smaller than in linear theory. We furthermore find that in the non-linear regime, one can map fermionic to bosonic models by performing a simple transformation.
Kinetics of DSB rejoining and formation of simple chromosome exchange aberrations
NASA Technical Reports Server (NTRS)
Cucinotta, F. A.; Nikjoo, H.; O'Neill, P.; Goodhead, D. T.
2000-01-01
PURPOSE: To investigate the role of kinetics in the processing of DNA double strand breaks (DSB), and the formation of simple chromosome exchange aberrations following X-ray exposures to mammalian cells, based on an enzymatic approach. METHODS: Using computer simulations based on a biochemical approach, rate-equations that describe the processing of DSB through the formation of a DNA-enzyme complex were formulated. A second model that allows for competition between two processing pathways was also formulated. The formation of simple exchange aberrations was modelled as misrepair during the recombination of single DSB with undamaged DNA. Non-linear coupled differential equations corresponding to biochemical pathways were solved numerically by fitting to experimental data. RESULTS: When mediated by a DSB repair enzyme complex, the processing of single DSB showed a complex behaviour that gives the appearance of fast and slow components of rejoining. This is due to the time delay caused by the action time of enzymes in bimolecular reactions. It is shown that the kinetic and dose responses of simple chromosome exchange aberrations are well described by a recombination model of DSB interacting with undamaged DNA when aberration formation increases with a linear dose dependence. Competition between two or more recombination processes is shown to lead to the formation of simple exchange aberrations with a dose dependence similar to that of a linear-quadratic model. CONCLUSIONS: Using a minimal number of assumptions, the kinetics and dose responses observed experimentally for DSB rejoining and the formation of simple chromosome exchange aberrations are shown to be consistent with kinetic models based on enzymatic reaction approaches. A non-linear dose response for simple exchange aberrations is possible in a model of recombination of DNA containing a DSB with undamaged DNA when two or more pathways compete for DSB repair.
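A minimal sketch of the rate-equation scheme for DSB processing through a DNA-enzyme complex, integrated with a simple Euler step. Rate constants and the enzyme pool size are illustrative assumptions, not the fitted values from the paper.

```python
# Sketch: B = unrejoined DSBs, E = free repair enzyme, C = DSB-enzyme complex.
#   dB/dt = -k1*B*E + km1*C
#   dC/dt =  k1*B*E - (km1 + k2)*C,   with E = E_tot - C conserved.

def simulate(B0=100.0, E_tot=10.0, k1=0.05, km1=0.01, k2=0.2,
             dt=0.01, T=60.0):
    B, C, t = B0, 0.0, 0.0
    unrepaired = []
    while t <= T:
        E = E_tot - C                      # free enzyme
        dB = -k1 * B * E + km1 * C
        dC = k1 * B * E - (km1 + k2) * C
        B += dB * dt
        C += dC * dt
        t += dt
        unrepaired.append(B + C)           # total unrepaired breaks
    return unrepaired

remaining = simulate()
# The total declines at rate k2*C: this ramps up while the complex forms,
# stays near-constant while the enzyme is saturated, then slows in a
# Michaelis-like tail -- giving the appearance of fast and slow components.
```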
Linear analysis of auto-organization in Hebbian neural networks.
Carlos Letelier, J; Mpodozis, J
1995-01-01
The self-organization of neurotopies where neural connections follow Hebbian dynamics is framed in terms of linear operator theory. A general and exact equation describing the time evolution of the overall synaptic strength connecting two neural laminae is derived. This linear matricial equation, which is similar to the equations used to describe oscillating systems in physics, is modified by the introduction of non-linear terms, in order to capture self-organizing (or auto-organizing) processes. The behavior of a simple and small system, that contains a non-linearity that mimics a metabolic constraint, is analyzed by computer simulations. The emergence of a simple "order" (or degree of organization) in this low-dimensionality model system is discussed.
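The linear Hebbian matrix dynamics with a non-linearity mimicking a metabolic constraint can be sketched as follows. The correlation operator and initial weights below are assumed toy values; the normalization step stands in for the metabolic limit on total synaptic strength.

```python
import numpy as np

# Symmetric inter-laminar correlation operator (illustrative values).
C = np.array([[2., 1., 0., 0.],
              [1., 2., 1., 0.],
              [0., 1., 2., 1.],
              [0., 0., 1., 2.]])
W = np.array([1.0, 0.2, -0.3, 0.4])       # initial synaptic strengths
dt = 0.01
for _ in range(4000):
    W = W + dt * (C @ W)                  # linear Hebbian growth dW/dt = C W
    W = W / np.linalg.norm(W)             # metabolic constraint: bounded total strength

# Under the constraint, the linear growth self-organizes onto the
# principal eigenvector of the correlation operator.
v_top = np.linalg.eigh(C)[1][:, -1]
overlap = abs(W @ v_top)
```

This is essentially power iteration: the "order" that emerges is alignment with the dominant correlation mode, which is one simple way to read the auto-organization discussed above.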
Non-Linear Approach in Kinesiology Should Be Preferred to the Linear--A Case of Basketball.
Trninić, Marko; Jeličić, Mario; Papić, Vladan
2015-07-01
In kinesiology, medicine, biology and psychology, in which research focuses on dynamical self-organized systems, complex connections exist between variables. The non-linear nature of complex systems is discussed and explained using the example of non-linear anthropometric predictors of performance in basketball. Previous studies interpreted relations between anthropometric features and measures of effectiveness in basketball by (a) using linear correlation models, and by (b) including all basketball athletes in the same sample of participants regardless of their playing position. In this paper the significance and character of linear and non-linear relations between simple anthropometric predictors (AP) and performance criteria consisting of situation-related measures of effectiveness (SE) in basketball were determined and evaluated. The sample of participants consisted of top-level junior basketball players divided into three groups according to their playing time (8 minutes or more per game) and playing position: guards (N = 42), forwards (N = 26) and centers (N = 40). Linear (general model) and non-linear (general model) regression models were calculated simultaneously and separately for each group. The conclusion is clear: non-linear regression models are frequently superior to linear correlations when interpreting the actual logic of association among research variables.
Ding, Junjie; Wang, Yi; Lin, Weiwei; Wang, Changlian; Zhao, Limei; Li, Xingang; Zhao, Zhigang; Miao, Liyan; Jiao, Zheng
2015-03-01
Valproic acid (VPA) exhibits a non-linear pharmacokinetic profile owing to protein-binding saturation. VPA clearance can be described as a simple power function of the total daily dose, which may partially explain the non-linearity of the pharmacokinetic profile; however, this relationship may be confounded by the therapeutic drug monitoring effect. The aim of this study was to develop a population pharmacokinetic model for VPA based on protein-binding saturation in pediatric patients with epilepsy. A total of 1,107 VPA serum trough concentrations at steady state were collected from 902 epileptic pediatric patients aged from 3 weeks to 14 years at three hospitals. The population pharmacokinetic model was developed using NONMEM® software. The ability of three candidate models (the simple power exponent model, the dose-dependent maximum effect [DDE] model, and the protein-binding model) to describe the non-linear pharmacokinetic profile of VPA was investigated, and potential covariates were screened using a stepwise approach. Bootstrap, normalized prediction distribution errors and external evaluations using two independent studies were performed to determine the stability and predictive performance of the candidate models. The age-dependent exponent model described the effects of body weight and age on clearance well. Co-medication with carbamazepine was identified as a significant covariate. The DDE model best fitted the aim of this study, although there were no obvious differences in predictive performance. The condition number was less than 500, and the precision of the parameter estimates was better than 30 %, indicating the stability and validity of the final model. The DDE model successfully described the non-linear pharmacokinetics of VPA. Furthermore, the proposed population pharmacokinetic model of VPA can be used to design rational dosage regimens to achieve desirable serum concentrations.
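The protein-binding saturation mechanism can be sketched with a single-site saturable binding model: the mass balance is a quadratic in the free concentration. The binding constants below are assumed for illustration, not the study's estimates.

```python
import math

def free_conc(c_total, bmax, kd):
    """Free concentration for one saturable protein binding site:
        c_total = c_free + bmax * c_free / (kd + c_free)
    which rearranges to the quadratic
        c_free^2 + (kd + bmax - c_total) * c_free - kd * c_total = 0
    whose positive root is returned."""
    b = kd + bmax - c_total
    return (-b + math.sqrt(b * b + 4.0 * kd * c_total)) / 2.0

# Illustrative, assumed constants (mg/L scale; not fitted values):
bmax, kd = 80.0, 6.0
f1 = free_conc(50.0, bmax, kd) / 50.0    # free fraction at lower total conc.
f2 = free_conc(100.0, bmax, kd) / 100.0  # free fraction at higher total conc.
# The free fraction rises with total concentration, so the apparent
# clearance of total drug is concentration-dependent even when the
# clearance of free drug is constant -- the source of the non-linearity.
```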
Nonlinear multiplicative dendritic integration in neuron and network models
Zhang, Danke; Li, Yuanqing; Rasch, Malte J.; Wu, Si
2013-01-01
Neurons receive inputs from thousands of synapses distributed across dendritic trees of complex morphology. It is known that dendritic integration of excitatory and inhibitory synapses can be highly non-linear in reality and can depend heavily on the exact location and spatial arrangement of inhibitory and excitatory synapses on the dendrite. Despite this, most neuron models used in artificial neural networks today still only describe the voltage potential of a single somatic compartment and assume a simple linear summation of all individual synaptic inputs. Here we suggest a new biophysically motivated derivation of a single compartment model that integrates the non-linear effects of shunting inhibition, where an inhibitory input on the route of an excitatory input to the soma cancels or "shunts" the excitatory potential. In particular, our integration of non-linear dendritic processing into the neuron model follows a simple multiplicative rule, suggested recently by experiments, and allows for strict mathematical treatment of network effects. Using our new formulation, we further devised a spiking network model in which inhibitory neurons act as global shunting gates, and show that the network exhibits persistent activity in a low firing regime. PMID:23658543
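The multiplicative rule can be caricatured in a few lines: each on-path inhibitory input scales the excitatory drive by a gating factor rather than subtracting from it. This is a simplified sketch with unitless gains, not the paper's full biophysical derivation.

```python
def soma_potential(excitation, shunt_gates):
    """Multiplicative shunting rule (sketch): each inhibitory input on the
    path to the soma scales the excitatory drive by a factor (1 - g),
    with gating strength g in [0, 1], instead of subtracting from it."""
    v = excitation
    for g in shunt_gates:
        v *= (1.0 - g)
    return v

v_linear = 2.0 - 0.5                  # subtractive (linear summation) comparison
v_shunt = soma_potential(2.0, [0.5])  # multiplicative shunting, g = 0.5
v_full = soma_potential(2.0, [1.0])   # full shunt, g = 1
# With g = 1 the excitatory input is cancelled entirely, regardless of
# its magnitude -- unlike linear subtraction, which depends on both terms.
```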
Asymptotic Linear Spectral Statistics for Spiked Hermitian Random Matrices
NASA Astrophysics Data System (ADS)
Passemier, Damien; McKay, Matthew R.; Chen, Yang
2015-07-01
Using the Coulomb Fluid method, this paper derives central limit theorems (CLTs) for linear spectral statistics of three "spiked" Hermitian random matrix ensembles. These include Johnstone's spiked model (i.e., central Wishart with spiked correlation), non-central Wishart with rank-one non-centrality, and a related class of non-central matrices. For a generic linear statistic, we derive simple and explicit CLT expressions as the matrix dimensions grow large. For all three ensembles under consideration, we find that the primary effect of the spike is to introduce a correction term to the asymptotic mean of the linear spectral statistic, which we characterize with simple formulas. The utility of our proposed framework is demonstrated through application to three different linear statistics problems: the classical likelihood ratio test for a population covariance, the capacity analysis of multi-antenna wireless communication systems with a line-of-sight transmission path, and a classical multiple sample significance testing problem.
A new adaptive multiple modelling approach for non-linear and non-stationary systems
NASA Astrophysics Data System (ADS)
Chen, Hao; Gong, Yu; Hong, Xia
2016-07-01
This paper proposes a novel adaptive multiple modelling algorithm for non-linear and non-stationary systems. This simple modelling paradigm comprises K candidate sub-models which are all linear. With data available in an online fashion, the performance of all candidate sub-models is monitored based on the most recent data window, and the M best sub-models are selected from the K candidates. The weight coefficients of the selected sub-models are adapted via the recursive least squares (RLS) algorithm, while the coefficients of the remaining sub-models are unchanged. These M model predictions are then optimally combined to produce the multi-model output. We propose to minimise the mean square error based on a recent data window and apply a sum-to-one constraint to the combination parameters, leading to a closed-form solution, so that maximal computational efficiency can be achieved. In addition, at each time step, the model prediction is chosen from either the resultant multiple model or the best sub-model, whichever performs best. Simulation results are given in comparison with some typical alternatives, including the linear RLS algorithm and a number of online non-linear approaches, in terms of modelling performance and time consumption.
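The closed-form, sum-to-one constrained combination of sub-model predictions can be sketched via a Lagrange multiplier. This is a generic formulation consistent with the description above; variable names and the tiny ridge term are ours.

```python
import numpy as np

def combine_sum_to_one(P, y, ridge=1e-8):
    """Closed-form weights minimising ||y - P w||^2 subject to sum(w) = 1.
    P: (T, M) matrix of M sub-model predictions over the recent window;
    y: (T,) observed outputs.  Stationarity of the Lagrangian gives
    A w = P'y + lam * 1 with A = P'P, and the constraint fixes lam."""
    M = P.shape[1]
    A = P.T @ P + ridge * np.eye(M)        # small ridge for numerical stability
    ones = np.ones(M)
    w_ls = np.linalg.solve(A, P.T @ y)     # unconstrained least squares
    Ainv1 = np.linalg.solve(A, ones)
    lam = (1.0 - ones @ w_ls) / (ones @ Ainv1)
    return w_ls + lam * Ainv1

# Two sub-models: one biased high, one biased low.
y = np.sin(0.1 * np.arange(50))
P = np.column_stack([y + 0.2, y - 0.1])
w = combine_sum_to_one(P, y)
# The exact constrained optimum here is w = (1/3, 2/3), which removes
# the constant bias entirely.
```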
Parameter and Structure Inference for Nonlinear Dynamical Systems
NASA Technical Reports Server (NTRS)
Morris, Robin D.; Smelyanskiy, Vadim N.; Millonas, Mark
2006-01-01
A great many systems can be modeled in the non-linear dynamical systems framework as ẋ = f(x) + ξ(t), where f(·) is the potential function for the system and ξ(t) is the excitation noise. Modeling the potential using a set of basis functions, we derive the posterior for the basis coefficients. A more challenging problem is to determine the set of basis functions that are required to model a particular system. We show that, using the Bayesian Information Criterion (BIC) to rank models together with a beam search technique, we can accurately determine the structure of simple non-linear dynamical system models, and the structure of the coupling between non-linear dynamical systems where the individual systems are known. This last case has important ecological applications.
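The basis-coefficient fit and BIC ranking can be sketched for a scalar system with synthetic data from dx/dt = x - x^3. The candidate basis sets, noise level, and least-squares (rather than fully Bayesian) fit are illustrative simplifications.

```python
import numpy as np

def bic_for_basis(x, dxdt, powers):
    """Fit dx/dt ~ sum_k c_k x^p_k over the chosen monomial basis by least
    squares and return BIC = n*log(RSS/n) + k*log(n) (lower is better)."""
    Phi = np.column_stack([x ** p for p in powers])
    c, *_ = np.linalg.lstsq(Phi, dxdt, rcond=None)
    rss = np.sum((dxdt - Phi @ c) ** 2)
    n, k = len(x), len(powers)
    return n * np.log(rss / n) + k * np.log(n)

# Synthetic observations of dx/dt = x - x^3 plus small noise.
rng = np.random.default_rng(2)
x = rng.uniform(-2, 2, 400)
dxdt = x - x ** 3 + rng.normal(0, 0.05, 400)

candidates = [(1,), (1, 2), (1, 3), (1, 2, 3)]   # candidate basis power sets
scores = {p: bic_for_basis(x, dxdt, p) for p in candidates}
best = min(scores, key=scores.get)
# BIC recovers the true structure {x, x^3}: the extra x^2 term is
# penalised, and smaller bases cannot fit the cubic component.
```

A beam search, as in the paper, would grow such candidate sets greedily while keeping only the best-scoring few at each step.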
NASA Astrophysics Data System (ADS)
Goswami, M.; O'Connor, K. M.; Shamseldin, A. Y.
The "Galway Real-Time River Flow Forecasting System" (GFFS) is a software package developed at the Department of Engineering Hydrology of the National University of Ireland, Galway, Ireland. It is based on a selection of lumped black-box and conceptual rainfall-runoff models, all developed in Galway, consisting primarily of both the non-parametric (NP) and parametric (P) forms of two black-box-type rainfall-runoff models, namely, the Simple Linear Model (SLM-NP and SLM-P) and the seasonally-based Linear Perturbation Model (LPM-NP and LPM-P), together with the non-parametric wetness-index-based Linearly Varying Gain Factor Model (LVGFM), the black-box Artificial Neural Network (ANN) Model, and the conceptual Soil Moisture Accounting and Routing (SMAR) Model. Comprised of the above suite of models, the system enables the user to calibrate each model individually, initially without updating, and it is capable also of producing combined (i.e. consensus) forecasts using the Simple Average Method (SAM), the Weighted Average Method (WAM), or the Artificial Neural Network Method (NNM). The updating of each model output is achieved using one of four different techniques, namely, simple Auto-Regressive (AR) updating, Linear Transfer Function (LTF) updating, Artificial Neural Network updating (NNU), and updating by the Non-linear Auto-Regressive Exogenous-input method (NARXM). The models exhibit a considerable range of variation in degree of complexity of structure, with corresponding degrees of complication in objective function evaluation. Operating in continuous river-flow simulation and updating modes, these models and techniques have been applied to two Irish catchments, namely, the Fergus and the Brosna. A number of performance evaluation criteria have been used to comparatively assess the model discharge forecast efficiency.
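The consensus-forecast idea (SAM and WAM) can be sketched as follows on synthetic flows. The least-squares weighting shown is one common formulation of a weighted average and may differ in detail from the GFFS implementation.

```python
import numpy as np

def sam(forecasts):
    """Simple Average Method: equal-weight consensus of sub-model outputs."""
    return forecasts.mean(axis=1)

def wam(forecasts, observed):
    """Weighted Average Method: weights fitted by least squares on a
    calibration period (a common formulation; GFFS details may differ)."""
    w, *_ = np.linalg.lstsq(forecasts, observed, rcond=None)
    return forecasts @ w, w

# Synthetic observed flow and two biased sub-model forecasts.
t = np.arange(100)
q_obs = 10 + 5 * np.sin(0.2 * t)
F = np.column_stack([1.2 * q_obs + 1.0, 0.9 * q_obs - 0.5])
q_sam = sam(F)
q_wam, w = wam(F, q_obs)
# The fitted weights can cancel complementary biases that a plain
# average cannot, so the WAM consensus tracks the observations closely.
```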
The halo model in a massive neutrino cosmology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Massara, Elena; Villaescusa-Navarro, Francisco; Viel, Matteo, E-mail: emassara@sissa.it, E-mail: villaescusa@oats.inaf.it, E-mail: viel@oats.inaf.it
2014-12-01
We provide a quantitative analysis of the halo model in the context of massive neutrino cosmologies. We discuss all the ingredients necessary to model the non-linear matter and cold dark matter power spectra and compare with the results of N-body simulations that incorporate massive neutrinos. Our neutrino halo model is able to capture the non-linear behavior of matter clustering with a ∼20% accuracy up to very non-linear scales of k = 10 h/Mpc (which would be affected by baryon physics). The largest discrepancies arise in the range k = 0.5-1 h/Mpc, where the 1-halo and 2-halo terms are comparable, and are present also in a massless neutrino cosmology. However, at scales k < 0.2 h/Mpc our neutrino halo model agrees with the results of N-body simulations at the level of 8% for total neutrino masses of < 0.3 eV. We also model the neutrino non-linear density field as a sum of a linear and clustered component and predict the neutrino power spectrum and the cold dark matter-neutrino cross-power spectrum up to k = 1 h/Mpc with ∼30% accuracy. For masses below 0.15 eV the neutrino halo model captures the neutrino-induced suppression, cast in terms of matter power ratios between massive and massless scenarios, with a 2% agreement with the results of N-body/neutrino simulations. Finally, we provide a simple application of the halo model: the computation of the clustering of galaxies, in massless and massive neutrino cosmologies, using a simple Halo Occupation Distribution scheme and our halo model extension.
SNDR enhancement in noisy sinusoidal signals by non-linear processing elements
NASA Astrophysics Data System (ADS)
Martorell, Ferran; McDonnell, Mark D.; Abbott, Derek; Rubio, Antonio
2007-06-01
We investigate the possibility of building linear amplifiers capable of enhancing the Signal-to-Noise and Distortion Ratio (SNDR) of sinusoidal input signals using simple non-linear elements. Other works have proven that it is possible to enhance the Signal-to-Noise Ratio (SNR) by using limiters. In this work we study a soft limiter non-linear element with and without hysteresis. We show that the SNDR of sinusoidal signals can be enhanced by 0.94 dB using a wideband soft limiter and up to 9.68 dB using a wideband soft limiter with hysteresis. These results indicate that linear amplifiers could be constructed using non-linear circuits with hysteresis. This paper presents mathematical descriptions for the non-linear elements using statistical parameters. Using these models, the input-output SNDR enhancement is obtained by optimizing the non-linear transfer function parameters to maximize the output SNDR.
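A sketch of a wideband soft limiter and a projection-based SNDR estimate: the output is decomposed into the component correlated with the reference sinusoid, and everything else counts as noise-plus-distortion. This is one common idealisation; the paper's exact non-linear element, hysteresis model, and SNDR definition may differ, and whether clipping helps depends on the signal and noise amplitudes.

```python
import numpy as np

def soft_limiter(x, a=1.0):
    """Piecewise-linear soft limiter: identity for |x| <= a, clipped beyond."""
    return np.clip(x, -a, a)

def sndr_db(signal_ref, y):
    """SNDR from the projection of y onto the reference sinusoid: the
    uncorrelated residual is treated as noise plus distortion."""
    gain = np.dot(y, signal_ref) / np.dot(signal_ref, signal_ref)
    resid = y - gain * signal_ref
    return 10.0 * np.log10(np.sum((gain * signal_ref) ** 2) / np.sum(resid ** 2))

rng = np.random.default_rng(3)
t = np.linspace(0.0, 1.0, 20000, endpoint=False)
s = 0.8 * np.sin(2 * np.pi * 50 * t)            # clean reference sinusoid
x = s + rng.normal(0, 0.4, t.size)              # noisy input
sndr_in = sndr_db(s, x)                         # about 3 dB for these amplitudes
sndr_out = sndr_db(s, soft_limiter(x))          # limiter truncates noise excursions
```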
A linear model fails to predict orientation selectivity of cells in the cat visual cortex.
Volgushev, M; Vidyasagar, T R; Pei, X
1996-01-01
1. Postsynaptic potentials (PSPs) evoked by visual stimulation in simple cells in the cat visual cortex were recorded using in vivo whole-cell technique. Responses to small spots of light presented at different positions over the receptive field and responses to elongated bars of different orientations centred on the receptive field were recorded. 2. To test whether a linear model can account for orientation selectivity of cortical neurones, responses to elongated bars were compared with responses predicted by a linear model from the receptive field map obtained from flashing spots. 3. The linear model faithfully predicted the preferred orientation, but not the degree of orientation selectivity or the sharpness of orientation tuning. The ratio of optimal to non-optimal responses was always underestimated by the model. 4. Thus non-linear mechanisms, which can include suppression of non-optimal responses and/or amplification of optimal responses, are involved in the generation of orientation selectivity in the primary visual cortex. PMID:8930828
An evaluation of bias in propensity score-adjusted non-linear regression models.
Wan, Fei; Mitra, Nandita
2018-03-01
Propensity score methods are commonly used to adjust for observed confounding when estimating the conditional treatment effect in observational studies. One popular method, covariate adjustment of the propensity score in a regression model, has been empirically shown to be biased in non-linear models. However, no compelling underlying theoretical reason has been presented. We propose a new framework to investigate bias and consistency of propensity score-adjusted treatment effects in non-linear models that uses a simple geometric approach to forge a link between the consistency of the propensity score estimator and the collapsibility of non-linear models. Under this framework, we demonstrate that adjustment of the propensity score in an outcome model results in the decomposition of observed covariates into the propensity score and a remainder term. Omission of this remainder term from a non-collapsible regression model leads to biased estimates of the conditional odds ratio and conditional hazard ratio, but not for the conditional rate ratio. We further show, via simulation studies, that the bias in these propensity score-adjusted estimators increases with larger treatment effect size, larger covariate effects, and increasing dissimilarity between the coefficients of the covariates in the treatment model versus the outcome model.
Multiphysics modeling of non-linear laser-matter interactions for optically active semiconductors
NASA Astrophysics Data System (ADS)
Kraczek, Brent; Knap, Jaroslaw
Development of photonic devices for sensors and communications has been significantly enhanced by computational modeling. We present a new computational method for modeling laser propagation in optically-active semiconductors within the paraxial wave approximation (PWA). Light propagation is modeled using the Streamline-Upwind/Petrov-Galerkin finite element method (FEM). The material response enters through the non-linear polarization, which serves as the right-hand side of the FEM calculation. Maxwell's equations for classical light propagation within the PWA can be written solely in terms of the electric field, producing a wave equation that is a form of the advection-diffusion-reaction equations (ADREs). This allows adaptation of the computational machinery developed for solving ADREs in fluid dynamics to light-propagation modeling. The non-linear polarization is incorporated through a flexible framework that allows multiple treatments of carrier-carrier interactions (e.g. relaxation-time-based or Monte Carlo) to enter, as appropriate to the material type. We demonstrate the approach using a simple carrier-carrier model approximating the response of GaN. Supported by ARL Materials Enterprise.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Analytis, G.T.
1995-09-01
A non-linear one-group space-dependent neutronic model for a finite one-dimensional core is coupled with a simple BWR feed-back model. In agreement with results obtained by the authors who originally developed the point-kinetics version of this model, we shall show numerically that stochastic reactivity excitations may result in limit-cycles and eventually in a chaotic behaviour, depending on the magnitude of the feed-back coefficient K. In the framework of this simple space-dependent model, the effect of the non-linearities on the different spatial harmonics is studied and the importance of the space-dependent effects is exemplified and assessed in terms of the importance of the higher harmonics. It is shown that under certain conditions, when limit-cycle-type oscillations develop, the neutron spectra may exhibit strong space-dependent effects.
NASA Astrophysics Data System (ADS)
Deymier, P. A.; Runge, K.
2018-03-01
A Green's function-based numerical method is developed to calculate the phase of scattered elastic waves in a harmonic model of diatomic molecules adsorbed on the (001) surface of a simple cubic crystal. The phase properties of the scattered waves depend on the configuration of the molecules. Configurations of adsorbed molecules on the crystal surface, such as parallel chain-like arrays coupled via kinks, are used to demonstrate not only a linear but also a non-linear dependence of the phase on the number of kinks along the chains. Non-linear behavior arises for scattered waves with frequencies in the vicinity of a diatomic molecule resonance. In the non-linear regime, the variation in phase with the number of kinks is formulated mathematically as unitary matrix operations, leading to an analogy between phase-based elastic unitary operations and quantum gates. The advantage of elastic-based unitary operations is that they are easily realizable physically and measurable.
Tewarie, P.; Bright, M.G.; Hillebrand, A.; Robson, S.E.; Gascoyne, L.E.; Morris, P.G.; Meier, J.; Van Mieghem, P.; Brookes, M.J.
2016-01-01
Understanding the electrophysiological basis of resting state networks (RSNs) in the human brain is a critical step towards elucidating how inter-areal connectivity supports healthy brain function. In recent years, the relationship between RSNs (typically measured using haemodynamic signals) and electrophysiology has been explored using functional Magnetic Resonance Imaging (fMRI) and magnetoencephalography (MEG). Significant progress has been made, with similar spatial structure observable in both modalities. However, there is a pressing need to understand this relationship beyond simple visual similarity of RSN patterns. Here, we introduce a mathematical model to predict fMRI-based RSNs using MEG. Our unique model, based upon a multivariate Taylor series, incorporates both phase and amplitude based MEG connectivity metrics, as well as linear and non-linear interactions within and between neural oscillations measured in multiple frequency bands. We show that including non-linear interactions, multiple frequency bands and cross-frequency terms significantly improves fMRI network prediction. This shows that fMRI connectivity is not only the result of direct electrophysiological connections, but is also driven by the overlap of connectivity profiles between separate regions. Our results indicate that a complete understanding of the electrophysiological basis of RSNs goes beyond simple frequency-specific analysis, and further exploration of non-linear and cross-frequency interactions will shed new light on distributed network connectivity, and its perturbation in pathology. PMID:26827811
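The benefit of adding non-linear and cross-frequency terms to a linear MEG-to-fMRI mapping can be illustrated on synthetic edge-wise connectivity data. The generative coefficients and band structure below are assumptions purely for illustration, not the paper's fitted model.

```python
import numpy as np

rng = np.random.default_rng(4)
n_edges = 500
alpha = rng.normal(size=n_edges)    # alpha-band MEG connectivity per edge (synthetic)
beta = rng.normal(size=n_edges)     # beta-band MEG connectivity per edge (synthetic)
# Synthetic "fMRI" connectivity containing a genuine cross-frequency term:
fmri = 0.5 * alpha + 0.3 * beta + 0.4 * alpha * beta + rng.normal(0, 0.1, n_edges)

def fit_r2(X, y):
    """Least-squares fit with intercept; returns variance explained."""
    X1 = np.column_stack([np.ones(len(y)), X])
    coef, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ coef
    return 1.0 - resid.var() / y.var()

r2_linear = fit_r2(np.column_stack([alpha, beta]), fmri)
r2_taylor = fit_r2(np.column_stack([alpha, beta,
                                    alpha ** 2, beta ** 2,
                                    alpha * beta]), fmri)
# The Taylor-like expansion with quadratic and cross-frequency terms
# recovers the cross-band interaction that the linear model cannot.
```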
Non-linear modelling and control of semi-active suspensions with variable damping
NASA Astrophysics Data System (ADS)
Chen, Huang; Long, Chen; Yuan, Chao-Chun; Jiang, Hao-Bin
2013-10-01
Electro-hydraulic dampers can provide a variable damping force that is modulated by varying the command current; furthermore, they offer advantages such as low power consumption, rapid response, low cost, and simple hardware. However, accurate characterisation of the non-linear force-velocity (f-v) properties in pre-yield and of force saturation in post-yield is still required. Meanwhile, traditional linear or quarter-vehicle models contain various non-linearities. The development of a multi-body dynamics model is very complex, and therefore SIMPACK was used, with suitable improvements, for model development and numerical simulations. A semi-active suspension was built based on a belief-desire-intention (BDI) agent model framework. Vehicle handling dynamics were analysed, and a co-simulation analysis was conducted in SIMPACK and MATLAB to evaluate the BDI-agent controller. The design effectively improved ride comfort, handling stability, and driving safety. A rapid control prototype was built based on dSPACE to conduct a real-vehicle test. The test and simulation results were consistent, which verified the simulation.
Induction of Chromosomal Aberrations at Fluences of Less Than One HZE Particle per Cell Nucleus
NASA Technical Reports Server (NTRS)
Hada, Megumi; Chappell, Lori J.; Wang, Minli; George, Kerry A.; Cucinotta, Francis A.
2014-01-01
The assumption of a linear dose response used to describe the biological effects of high-LET radiation is fundamental in radiation protection methodologies. We investigated the dose response for chromosomal aberrations for exposures corresponding to less than one particle traversal per cell nucleus by high energy and charge (HZE) nuclei. Human fibroblast and lymphocyte cells were irradiated with several low doses of <0.1 Gy, and several higher doses of up to 1 Gy, with O (77 keV/μm), Si (99 keV/μm), Fe (175 keV/μm), Fe (195 keV/μm) or Fe (240 keV/μm) particles. Chromosomal aberrations at first mitosis were scored using fluorescence in situ hybridization (FISH) with chromosome-specific paints for chromosomes 1, 2 and 4 and DAPI staining of background chromosomes. Non-linear regression was used to evaluate possible linear and non-linear dose response models based on these data. Dose responses for simple exchanges in human fibroblasts irradiated under confluent culture conditions were best fit by non-linear models motivated by a non-targeted effect (NTE). For human lymphocytes irradiated in blood tubes, an NTE model fit best for O particles, while a linear response model fit best for Si and Fe particles. Additional evidence for NTE was found in low-dose experiments measuring gamma-H2AX foci, a marker of double strand breaks (DSB), and in split-dose experiments with human fibroblasts. Our results suggest that simple exchanges in normal human fibroblasts have an important NTE contribution at low particle fluence. The current and prior experimental studies provide important evidence against the linear dose response assumption used in radiation protection for HZE particles and other high-LET radiation at the relevant range of low doses.
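The model comparison described above can be sketched as fitting a non-targeted-effect (NTE) dose response, where a dose-independent term switches on at any non-zero fluence, alongside a linear term. The doses and parameter values below are illustrative, not the experimental data, and the simple functional form is an assumption for the sketch.

```python
import numpy as np
from scipy.optimize import curve_fit

doses = np.array([0.0, 0.02, 0.05, 0.1, 0.2, 0.5, 1.0])  # Gy (illustrative)

def nte(D, c0, alpha, eta):
    # background + linear term + non-targeted term active at any D > 0
    return c0 + alpha * D + eta * (D > 0)

true = (0.01, 0.25, 0.05)      # hypothetical aberrations per cell
y = nte(doses, *true)          # noise-free synthetic "data"

popt, _ = curve_fit(nte, doses, y, p0=(0.0, 0.1, 0.0))
```

Because the model is linear in its parameters, the fit recovers them exactly on noise-free data; with real scored aberrations one would compare this fit against a purely linear model, as in the study.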
Ridge Regression for Interactive Models.
ERIC Educational Resources Information Center
Tate, Richard L.
1988-01-01
An exploratory study of the value of ridge regression for interactive models is reported. Assuming that the linear terms in a simple interactive model are centered to eliminate non-essential multicollinearity, a variety of common models, representing both ordinal and disordinal interactions, are shown to have "orientations" that are…
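The centering step mentioned in the abstract can be illustrated with a small ridge fit: the linear predictors are mean-centered before the product term is formed, removing the "non-essential" multicollinearity, and the ridge coefficients are obtained in closed form. The data, coefficients and penalty are synthetic, illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)

# Center the linear terms before forming the product term; this removes
# the non-essential multicollinearity between x1, x2 and x1*x2
x1c, x2c = x1 - x1.mean(), x2 - x2.mean()
inter = x1c * x2c
X = np.column_stack([x1c, x2c, inter - inter.mean()])

# Synthetic interactive-model response with known coefficients
y = 1.0 + 2.0 * x1c - 1.0 * x2c + 0.5 * inter + rng.normal(scale=0.1, size=n)

def ridge(X, y, lam):
    # Closed-form ridge estimate on centered data (intercept = y.mean())
    A = X.T @ X + lam * np.eye(X.shape[1])
    return np.linalg.solve(A, X.T @ (y - y.mean()))

beta = ridge(X, y, lam=1.0)
```

With centered, roughly orthogonal columns, a modest ridge penalty shrinks the estimates only slightly, so the known coefficients are recovered closely.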
How does non-linear dynamics affect the baryon acoustic oscillation?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sugiyama, Naonori S.; Spergel, David N., E-mail: nao.s.sugiyama@gmail.com, E-mail: dns@astro.princeton.edu
2014-02-01
We study the non-linear behavior of the baryon acoustic oscillation in the power spectrum and the correlation function by decomposing the dark matter perturbations into short- and long-wavelength modes. The evolution of the dark matter fluctuations can be described as a global coordinate transformation, caused by the long-wavelength displacement vector, acting on short-wavelength matter perturbations undergoing non-linear growth. Using this feature, we investigate the well-known cancellation of the high-k solutions in the standard perturbation theory. While the standard perturbation theory naturally satisfies the cancellation of the high-k solutions, some of the recently proposed improved perturbation theories do not guarantee the cancellation. We show that this cancellation clarifies the success of the standard perturbation theory at the 2-loop order in describing the amplitude of the non-linear power spectrum even at high-k regions. We propose an extension of the standard 2-loop level perturbation theory model of the non-linear power spectrum that more accurately models the non-linear evolution of the baryon acoustic oscillation than the standard perturbation theory. The model consists of simple and intuitive parts: the non-linear evolution of the smoothed power spectrum without the baryon acoustic oscillations, and the non-linear evolution of the baryon acoustic oscillations due to the large-scale velocity of dark matter and the gravitational attraction between dark matter particles. Our extended model predicts the smoothing parameter of the baryon acoustic oscillation peak at z = 0.35 as ∼7.7 Mpc/h and describes the small non-linear shift in the peak position due to the galaxy random motions.
Bhaumik, Basabi; Mathur, Mona
2003-01-01
We present a model for the development of orientation selectivity in layer IV simple cells. Receptive field (RF) development in the model is determined by diffusive cooperation and resource-limited competition, which guide axonal growth and retraction in the geniculocortical pathway. The simulated cortical RFs resemble experimental RFs. The receptive field model is incorporated in a three-layer visual pathway model consisting of retina, LGN and cortex. We have studied the effect of activity-dependent synaptic scaling on the orientation tuning of cortical cells. The mean value of HWHH (half-width at half-height of the maximum response) in simulated cortical cells is 58 degrees when we consider only the linear excitatory contribution from the LGN. We observe a mean improvement of 22.8 degrees in the tuning response due to non-linear spiking mechanisms that include the effects of threshold voltage and the synaptic scaling factor.
Assessing the performance of eight real-time updating models and procedures for the Brosna River
NASA Astrophysics Data System (ADS)
Goswami, M.; O'Connor, K. M.; Bhattarai, K. P.; Shamseldin, A. Y.
2005-10-01
The flow forecasting performance of eight updating models, incorporated in the Galway River Flow Modelling and Forecasting System (GFMFS), was assessed using daily data (rainfall, evaporation and discharge) of the Irish Brosna catchment (1207 km2), considering their one to six days lead-time discharge forecasts. The Perfect Forecast of Input over the Forecast Lead-time scenario was adopted, where required, in place of actual rainfall forecasts. The eight updating models were: (i) the standard linear Auto-Regressive (AR) model, applied to the forecast errors (residuals) of a simulation (non-updating) rainfall-runoff model; (ii) the Neural Network Updating (NNU) model, also using such residuals as input; (iii) the Linear Transfer Function (LTF) model, applied to the simulated and the recently observed discharges; (iv) the Non-linear Auto-Regressive eXogenous-Input Model (NARXM), also a neural network-type structure, but having wide options of using recently observed values of one or more of the three data series, together with non-updated simulated outflows, as inputs; (v) the Parametric Simple Linear Model (PSLM), of LTF-type, using recent rainfall and observed discharge data; (vi) the Parametric Linear perturbation Model (PLPM), also of LTF-type, using recent rainfall and observed discharge data, (vii) n-AR, an AR model applied to the observed discharge series only, as a naïve updating model; and (viii) n-NARXM, a naive form of the NARXM, using only the observed discharge data, excluding exogenous inputs. The five GFMFS simulation (non-updating) models used were the non-parametric and parametric forms of the Simple Linear Model and of the Linear Perturbation Model, the Linearly-Varying Gain Factor Model, the Artificial Neural Network Model, and the conceptual Soil Moisture Accounting and Routing (SMAR) model. 
As the SMAR model performance was found to be the best among these models, in terms of the Nash-Sutcliffe R2 value, both in calibration and in verification, the simulated outflows of this model only were selected for the subsequent exercise of producing updated discharge forecasts. All eight forms of updating models for producing lead-time discharge forecasts were found to be capable of producing relatively good lead-1 (one-day-ahead) forecasts, with R2 values of nearly 90% or above. However, for longer lead times, only three updating models, viz. NARXM, LTF, and NNU, were found to be suitable, with lead-6 R2 values of about 90% or higher. Graphical comparisons were made of the lead-time forecasts for the two largest floods, one in the calibration period and the other in the verification period.
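The error-updating idea behind model (i) above can be sketched as follows: fit an AR(1) coefficient to the simulation residuals, add the propagated error to the simulated flow to obtain a lead-1 forecast, and score both with the Nash-Sutcliffe R2. The series and coefficients are synthetic stand-ins, not the Brosna data.

```python
import numpy as np

rng = np.random.default_rng(2)
T = 500
sim = 10 + np.sin(np.arange(T) / 20)    # simulation-mode model output
e = np.zeros(T)
for t in range(1, T):                   # AR(1)-structured model residuals
    e[t] = 0.8 * e[t - 1] + rng.normal(scale=0.1)
obs = sim + e

def nash_sutcliffe(q_obs, q_mod):
    return 1 - np.sum((q_obs - q_mod) ** 2) / np.sum((q_obs - q_obs.mean()) ** 2)

# Fit the AR(1) coefficient on residuals, then issue lead-1 updated forecasts
resid = obs - sim
rho = np.dot(resid[1:], resid[:-1]) / np.dot(resid[:-1], resid[:-1])
fc_updated = sim[1:] + rho * resid[:-1]

r2_sim = nash_sutcliffe(obs[1:], sim[1:])
r2_upd = nash_sutcliffe(obs[1:], fc_updated)
```

Because the AR correction exploits the persistence of the simulation error, the updated lead-1 forecast scores strictly better than the raw simulation, which is the pattern the study reports for all eight updating models at lead-1.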
From Spiking Neuron Models to Linear-Nonlinear Models
Ostojic, Srdjan; Brunel, Nicolas
2011-01-01
Neurons transform time-varying inputs into action potentials emitted stochastically at a time-dependent rate. The mapping from current input to output firing rate is often represented with the help of phenomenological models such as the linear-nonlinear (LN) cascade, in which the output firing rate is estimated by applying to the input successively a linear temporal filter and a static non-linear transformation. These simplified models leave out the biophysical details of action potential generation. It is not a priori clear to what extent the input-output mapping of biophysically more realistic, spiking neuron models can be reduced to a simple linear-nonlinear cascade. Here we investigate this question for the leaky integrate-and-fire (LIF), exponential integrate-and-fire (EIF) and conductance-based Wang-Buzsáki models in the presence of background synaptic activity. We exploit available analytic results for these models to determine the corresponding linear filter and static non-linearity in a parameter-free form. We show that the obtained functions are identical to the linear filter and static non-linearity determined using standard reverse correlation analysis. We then quantitatively compare the output of the corresponding linear-nonlinear cascade with numerical simulations of spiking neurons, systematically varying the parameters of the input signal and background noise. We find that the LN cascade provides accurate estimates of the firing rates of spiking neurons in most of parameter space. For the EIF and Wang-Buzsáki models, we show that the LN cascade can be reduced to a firing rate model, the timescale of which we determine analytically. Finally, we introduce an adaptive timescale rate model in which the timescale of the linear filter depends on the instantaneous firing rate. This model leads to highly accurate estimates of instantaneous firing rates. PMID:21283777
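The LN cascade itself is simple to state in code: a causal linear filter followed by a static non-linearity mapping the filtered signal to a firing rate. The exponential kernel, its time constant and the rectified-linear non-linearity below are generic placeholders, not the parameter-free filters derived in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
dt, T = 0.001, 2.0
t = np.arange(0, T, dt)
I = rng.normal(size=t.size)            # input current (arbitrary units)

# Linear stage: causal exponential filter with time constant tau
tau = 0.02
kernel = np.exp(-np.arange(0, 5 * tau, dt) / tau)
kernel /= kernel.sum()
u = np.convolve(I, kernel)[: t.size]   # keep the causal part only

# Static non-linearity: rectified-linear mapping to a firing rate (Hz)
rate = 50.0 * np.maximum(u, 0.0)
```

Reverse correlation on a spiking model would estimate `kernel` and the non-linearity from data; here both are simply assumed to show the structure of the cascade.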
Simple estimation of linear 1+1 D tsunami run-up
NASA Astrophysics Data System (ADS)
Fuentes, M.; Campos, J. A.; Riquelme, S.
2016-12-01
An analytical expression is derived for the linear run-up of any given initial wave generated over a sloping bathymetry. Due to the simplicity of the linear formulation, complex transformations are unnecessary, because the shoreline motion is obtained directly in terms of the initial wave. This analytical result not only supports the invariance of the maximum run-up between the linear and non-linear theories, but also provides the time evolution of the shoreline motion and velocity. The results exhibit good agreement with the non-linear theory. The present formulation also allows computing the shoreline motion numerically from a customised initial waveform, including non-smooth functions. This is useful for numerical tests, laboratory experiments or realistic cases in which the initial disturbance might be retrieved from seismic data rather than a theoretical model. It is also shown that the real case studied is consistent with the field observations.
Theory of advection-driven long range biotic transport
USDA-ARS?s Scientific Manuscript database
We propose a simple mechanistic model to examine the effects of advective flow on the spread of fungal diseases spread by wind-blown spores. The model is defined by a set of two coupled non-linear partial differential equations for spore densities. One equation describes the long-distance advectiv...
Simple analytical model of a thermal diode
NASA Astrophysics Data System (ADS)
Kaushik, Saurabh; Kaushik, Sachin; Marathe, Rahul
2018-05-01
Recently, much attention has been given to the manipulation of heat by constructing thermal devices such as thermal diodes, transistors and logic gates. Many of the proposed models have an asymmetry that leads to the desired effect. The presence of non-linear interactions among the particles is also essential. However, such models lack analytical understanding. Here we propose a simple, analytically solvable model of a thermal diode. Our model consists of classical spins in contact with multiple heat baths and constant external magnetic fields. Interestingly, the magnetic field is the only parameter required to achieve heat rectification.
NASA Astrophysics Data System (ADS)
Tahani, Masoud; Askari, Amir R.
2014-09-01
In spite of the fact that pull-in instability of electrically actuated nano/micro-beams has been investigated by many researchers to date, no explicit formula has yet been presented that can predict the pull-in voltage based on a geometrically non-linear, distributed-parameter model. The objective of the present paper is to introduce a simple and accurate formula to predict this value for a fully clamped, electrostatically actuated nano/micro-beam. To this end, a non-linear Euler-Bernoulli beam model is employed, which accounts for the axial residual stress, the geometric non-linearity of mid-plane stretching, the distributed electrostatic force and the van der Waals (vdW) attraction. The non-linear boundary value governing equation of equilibrium is non-dimensionalized and solved iteratively through a single-term Galerkin-based reduced order model (ROM). The solutions are validated through direct comparison with experimental and other existing results reported in previous studies. Pull-in instability under electrical and vdW loads is also investigated using universal graphs. Based on the results of these graphs, the non-dimensional pull-in and vdW parameters, which are defined in the text, vary linearly versus the other dimensionless parameters of the problem. Using this fact, some linear equations are presented to predict the pull-in voltage, the maximum allowable length, the so-called detachment length, and the minimum allowable gap for a nano/micro-system. These linear equations are also reduced to a couple of universal pull-in formulas for systems with a small initial gap. The accuracy of the universal pull-in formulas is also validated by comparing their results with available experimental findings and some previous geometrically linear, closed-form results published in the literature.
NASA Technical Reports Server (NTRS)
Voorhies, Coerte V.
1993-01-01
The problem of estimating a steady fluid velocity field near the top of Earth's core which induces the secular variation (SV) indicated by models of the observed geomagnetic field is examined in the source-free mantle/frozen-flux core (SFM/FFC) approximation. This inverse problem is non-linear because solutions of the forward problem are deterministically chaotic. The SFM/FFC approximation is inexact, and neither the models nor the observations they represent are either complete or perfect. A method is developed for solving the non-linear inverse motional induction problem posed by the hypothesis of (piecewise, statistically) steady core surface flow and the supposition of a complete initial geomagnetic condition. The method features iterative solution of the weighted, linearized least-squares problem and admits optional biases favoring surficially geostrophic flow and/or spatially simple flow. Two types of weights are advanced: radial field weights for fitting the evolution of the broad-scale portion of the radial field component near Earth's surface implied by the models, and generalized weights for fitting the evolution of the broad-scale portion of the scalar potential specified by the models.
Design of Linear Control System for Wind Turbine Blade Fatigue Testing
NASA Astrophysics Data System (ADS)
Toft, Anders; Roe-Poulsen, Bjarke; Christiansen, Rasmus; Knudsen, Torben
2016-09-01
This paper proposes a linear method for wind turbine blade fatigue testing at Siemens Wind Power. The setup consists of a blade, an actuator (motor and load mass) that acts on the blade with a sinusoidal moment, and a distribution of strain gauges to measure the blade flexure. Depending on the frequency of the sinusoidal input, the blade will start oscillating with a given gain; hence the objective of the fatigue test is to make the blade oscillate with a controlled amplitude. The system currently in use is based on frequency control, which involves some non-linearities that make the system difficult to control. To make a linear controller, a different approach has been chosen, namely a controller which regulates not the input frequency but the input amplitude. A non-linear mechanical model for the blade and the motor has been constructed. This model has been simplified based on the desired output, namely the amplitude of the blade. Furthermore, the model has been linearised to make it suitable for linear analysis and control design methods. The controller is designed based on the simplified, linearised model, and its gain parameter is determined using pole placement. The model variants have been simulated in the MATLAB toolbox Simulink, which shows that the controller designed on the basis of the simple model performs adequately with the non-linear model. Moreover, the developed controller solves the robustness issue found in the existing solution and also reduces the energy needed for actuation, as it always operates at the blade eigenfrequency.
A single-degree-of-freedom model for non-linear soil amplification
Erdik, Mustafa Ozder
1979-01-01
For proper understanding of soil behavior during earthquakes and assessment of a realistic surface motion, studies of the large-strain dynamic response of non-linear hysteretic soil systems are indispensable. Most of the presently available studies are based on the assumption that the response of a soil deposit is mainly due to the upward propagation of horizontally polarized shear waves from the underlying bedrock. Equivalent-linear procedures, currently in common use in non-linear soil response analysis, provide a simple approach and have compared favorably with actual recorded motions in some particular cases. Strain compatibility in these equivalent-linear approaches is maintained by selecting values of the shear moduli and damping ratios in accordance with the average soil strains, in an iterative manner. Truly non-linear constitutive models with complete strain compatibility have also been employed. The equivalent-linear approaches often raise some doubt as to the reliability of their results concerning the system response in high-frequency regions, where these methods may underestimate the surface motion by as much as a factor of two or more. Although such studies are sound in their methods of analysis, they inevitably provide applications pertaining only to a few specific soil systems, and do not lead to general conclusions about soil behavior. This report attempts to provide a general picture of the soil response through the use of a single-degree-of-freedom non-linear hysteretic model. Although the investigation is based on a specific type of non-linearity and a set of dynamic soil properties, the method described is not limited to these assumptions and is equally applicable to other types of non-linearity and other soil parameters.
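The equivalent-linear procedure described above can be sketched as a fixed-point iteration: pick a shear modulus from a modulus-reduction curve at the current strain, recompute the strain, and repeat until the two are compatible. The hyperbolic curve and all numbers below are hypothetical placeholders, not site data.

```python
# Equivalent-linear iteration (sketch): select a strain-compatible shear
# modulus from a modulus-reduction curve, recompute strain, and repeat.

def g_over_gmax(gamma):
    # Hypothetical hyperbolic modulus-reduction curve (reference strain 1e-3)
    return 1.0 / (1.0 + gamma / 1e-3)

gmax = 50e6        # Pa, hypothetical small-strain shear modulus
tau_drive = 20e3   # Pa, hypothetical driving shear stress

gamma = tau_drive / gmax            # initial guess at small strain
for _ in range(50):
    g = gmax * g_over_gmax(gamma)   # modulus consistent with current strain
    gamma_new = tau_drive / g       # strain consistent with that modulus
    if abs(gamma_new - gamma) < 1e-12:
        break
    gamma = gamma_new
```

For these numbers the map is a contraction and the iteration converges quickly to the strain-compatible solution; real analyses iterate on effective strain and damping over a full ground-motion record in the same spirit.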
NASA Astrophysics Data System (ADS)
Albert, Carlo; Ulzega, Simone; Stoop, Ruedi
2016-04-01
Measured time series of both precipitation and runoff are known to exhibit highly non-trivial statistical properties. For making reliable probabilistic predictions in hydrology, it is therefore desirable to have stochastic models with output distributions that share these properties. When the parameters of such models have to be inferred from data, we also need to quantify the associated parametric uncertainty. For non-trivial stochastic models, however, this latter step is typically very demanding, both conceptually and numerically, and almost never done in hydrology. Here, we demonstrate that methods developed in statistical physics make a large class of stochastic differential equation (SDE) models amenable to full-fledged Bayesian parameter inference. For concreteness, we demonstrate these methods by means of a simple yet non-trivial toy SDE model. We consider a natural catchment that, at the scale of observation, can be described by a linear reservoir. All the neglected processes are assumed to happen at much shorter time scales and are therefore modeled with a Gaussian white-noise term, the standard deviation of which is assumed to scale linearly with the system state (the water volume in the catchment). Even for constant input, the outputs of this simple non-linear SDE model show a wealth of desirable statistical properties, such as fat-tailed distributions and long-range correlations. Standard algorithms for Bayesian inference fail for models of this kind, because their likelihood functions are extremely high-dimensional intractable integrals over all possible model realizations. The use of Kalman filters is illegitimate due to the non-linearity of the model. Particle filters could be used but become increasingly inefficient with a growing number of data points.
Hamiltonian Monte Carlo algorithms allow us to translate this inference problem into the problem of simulating the dynamics of a statistical mechanics system, giving us access to the most sophisticated methods developed in the statistical physics community over the last few decades. We demonstrate that such methods, along with automatic differentiation algorithms, allow us to perform full-fledged Bayesian inference, for a large class of SDE models, in a highly efficient and largely automated manner. Furthermore, our algorithm is highly parallelizable. For our toy model, discretized with a few hundred points, a full Bayesian inference can be performed in a matter of seconds on a standard PC.
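The toy model described above, a linear reservoir with constant input and state-proportional white noise, can be simulated directly with an Euler-Maruyama scheme. This sketch shows the forward model only, not the Hamiltonian Monte Carlo inference, and all parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
dt, n = 0.01, 20000
k, r, sigma = 10.0, 1.0, 0.1    # retention time, constant input, noise scale

V = np.empty(n)
V[0] = k * r                     # start at the deterministic steady state
for t in range(1, n):
    drift = r - V[t - 1] / k                       # linear reservoir
    noise = sigma * V[t - 1] * np.sqrt(dt) * rng.normal()  # state-scaled
    V[t] = max(V[t - 1] + drift * dt + noise, 0.0)  # volume stays non-negative

Q = V / k                        # outflow of the linear reservoir
```

Even with constant input `r`, the multiplicative noise produces skewed, heavy-tailed fluctuations in the outflow while its long-run mean stays near the deterministic value, which is the qualitative behaviour the abstract describes.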
The linear–non-linear frontier for the Goldstone Higgs
Gavela, M. B.; Kanshin, K.; Machado, P. A. N.; ...
2016-12-01
The minimal $SO(5)/SO(4)$ sigma model is used as a template for the ultraviolet completion of scenarios in which the Higgs particle is a low-energy remnant of some high-energy dynamics, enjoying a (pseudo) Nambu-Goldstone boson ancestry. Varying the $\sigma$ mass allows one to sweep from the perturbative regime to the customary non-linear implementations. The low-energy benchmark effective non-linear Lagrangian for bosons and fermions is obtained, determining as well the operator coefficients including linear corrections. At first order in the latter, three effective bosonic operators emerge which are independent of the explicit soft breaking assumed. The Higgs couplings to vector bosons and fermions turn out to be quite universal: the linear corrections are proportional to the explicit symmetry-breaking parameters. Furthermore, we define an effective Yukawa operator which allows a simple parametrization and comparison of different heavy-fermion ultraviolet completions. In addition, one particular fermionic completion is explored in detail, obtaining the corresponding leading low-energy fermionic operators.
NASA Astrophysics Data System (ADS)
Jeong, Soon-Jong; Kim, Min-Soo; Lee, Dae-Su; Song, Jae-Sung; Cho, Kyung-Ho
2013-12-01
We investigated the piezoelectric properties and the generation of voltage and power under mechanical compressive loads for three types of piezoelectric ceramics: 0.2Pb(Mg1/3Nb2/3)O3-0.8Pb(Zr0.475Ti0.525)O3 (soft PZT), 0.1Pb(Mg1/3Sb2/3)O3-0.9Pb(Zr0.475Ti0.525)O3 (hard PZT) and [0.675Pb(Mg1/3Nb2/3)O3-0.35PbTiO3]+5 wt% BaTiO3 (textured PMNT). The piezoelectric d33 coefficients of all specimens increased with increasing compressive load. The generated voltage and power showed a linear relation and a square relation to the applied stress, respectively. These results were larger than those calculated using the simple piezoelectric equation, owing to the non-linear characteristics of the ceramics, so they were evaluated with a simple model based on a non-linear relation.
A necessary condition for dispersal driven growth of populations with discrete patch dynamics.
Guiver, Chris; Packman, David; Townley, Stuart
2017-07-07
We revisit the question of when can dispersal-induced coupling between discrete sink populations cause overall population growth? Such a phenomenon is called dispersal driven growth and provides a simple explanation of how dispersal can allow populations to persist across discrete, spatially heterogeneous, environments even when individual patches are adverse or unfavourable. For two classes of mathematical models, one linear and one non-linear, we provide necessary conditions for dispersal driven growth in terms of the non-existence of a common linear Lyapunov function, which we describe. Our approach draws heavily upon the underlying positive dynamical systems structure. Our results apply to both discrete- and continuous-time models. The theory is illustrated with examples and both biological and mathematical conclusions are drawn. Copyright © 2017 The Authors. Published by Elsevier Ltd.. All rights reserved.
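The phenomenon can be illustrated numerically: two stage-structured patches, each a sink in isolation (spectral abscissa below zero), become unstable overall once coupled by dispersal, and the incompatible Lyapunov-vector constraints for these matrices rule out a common linear Lyapunov function. The matrices and dispersal rate below are illustrative constructions, not taken from the paper.

```python
import numpy as np

# Two structured patches, each a sink in isolation, coupled by symmetric
# dispersal d between matching stages (Metzler/compartmental structure).
A1 = np.array([[-1.0, 10.0],
               [ 0.0, -1.0]])   # patch 1: stage 2 strongly feeds stage 1
A2 = np.array([[-1.0,  0.0],
               [10.0, -1.0]])   # patch 2: stage 1 strongly feeds stage 2
d = 1.0
I2 = np.eye(2)
M = np.block([[A1 - d * I2, d * I2],
              [d * I2, A2 - d * I2]])

abscissa = lambda A: np.max(np.linalg.eigvals(A).real)
print(abscissa(A1), abscissa(A2), abscissa(M))
```

Both isolated patches decay (abscissa -1), yet the coupled system grows: the dispersal closes a positive feedback loop through the two patches' complementary stage structures. A common linear Lyapunov vector v > 0 would need v2 > 10 v1 (for A1) and v1 > 10 v2 (for A2) simultaneously, which is impossible, consistent with the necessary condition in the paper.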
Scarneciu, Camelia C; Sangeorzan, Livia; Rus, Horatiu; Scarneciu, Vlad D; Varciu, Mihai S; Andreescu, Oana; Scarneciu, Ioan
2017-01-01
This study aimed at assessing the incidence of pulmonary hypertension (PH) in newly diagnosed hyperthyroid patients and at finding a simple model showing the complex functional relation between pulmonary hypertension in hyperthyroidism and the factors causing it. The 53 hyperthyroid patients (H-group) were evaluated mainly by an echocardiographic method and compared with 35 euthyroid patients (E-group) and 25 healthy people (C-group). In order to identify the factors causing pulmonary hypertension, the statistical method of comparing arithmetic means was used. The functional relation between the two random variables (PAPs and each of the factors determining it within our research study) can be expressed by a linear or a non-linear function. By applying the linear regression method, described by a first-degree equation, the line of regression (linear model) was determined; by applying the non-linear regression method, described by a second-degree equation, a parabola-type curve of regression (non-linear or polynomial model) was determined. The two models were compared and validated by calculating the determination coefficient (criterion 1), comparing the residuals (criterion 2), applying the AIC criterion (criterion 3) and using the F-test (criterion 4). Of the H-group, 47% had pulmonary hypertension that was completely reversible upon reaching euthyroidism. The factors causing pulmonary hypertension were identified: previously known factors were the level of free thyroxine, pulmonary vascular resistance and cardiac output; new factors identified in this study were the pretreatment period, age and systolic blood pressure. According to the four criteria and to clinical judgment, we consider the polynomial model (graphically, parabola-type) better than the linear one.
The better model, showing the functional relation between pulmonary hypertension in hyperthyroidism and the factors identified in this study, is given by a second-degree polynomial equation whose graphical representation is a parabola.
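The linear-versus-polynomial comparison via the determination coefficient (criterion 1) can be reproduced with `numpy.polyfit` on synthetic data with a genuinely curved response. The data and coefficients below are illustrative, not the patient measurements; with nested models, the quadratic fit always attains at least the linear fit's in-sample R2 and clearly exceeds it when real curvature is present.

```python
import numpy as np

rng = np.random.default_rng(4)
x = np.linspace(0, 10, 60)                     # stand-in predictor
y = 30 + 2 * x - 0.15 * x**2 + rng.normal(scale=0.5, size=x.size)  # curved response

def fit_r2(deg):
    # Determination coefficient of a degree-`deg` polynomial fit
    coef = np.polyfit(x, y, deg)
    resid = y - np.polyval(coef, x)
    return 1 - resid.var() / y.var()

r2_linear, r2_poly = fit_r2(1), fit_r2(2)
print(r2_linear, r2_poly)
```

In practice one would also weigh the extra parameter via AIC or an F-test, as the study does in criteria 3 and 4, rather than relying on R2 alone.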
Non-linear principal component analysis applied to Lorenz models and to North Atlantic SLP
NASA Astrophysics Data System (ADS)
Russo, A.; Trigo, R. M.
2003-04-01
A non-linear generalisation of Principal Component Analysis (PCA), denoted Non-Linear Principal Component Analysis (NLPCA), is introduced and applied to the analysis of three data sets. NLPCA allows for the detection and characterisation of low-dimensional non-linear structure in multivariate data sets. The method is implemented using a 5-layer feed-forward neural network introduced originally in the chemical engineering literature (Kramer, 1991). The method is described and details of its implementation are addressed. NLPCA is first applied to a data set sampled from the Lorenz attractor (1963). It is found that the NLPCA approximations are more representative of the data than the corresponding PCA approximations. The same methodology was then applied to the less well known Lorenz attractor (1984); however, the results obtained were not as good as those attained with the famous 'Butterfly' attractor. Further work with this model is underway in order to assess whether NLPCA techniques can be more representative of the data characteristics than the corresponding PCA approximations. The application of NLPCA to relatively 'simple' dynamical systems, such as those proposed by Lorenz, is well understood. However, the application of NLPCA to a large climatic data set is much more challenging. Here, we have applied NLPCA to the sea level pressure (SLP) field for the entire North Atlantic area, and the results show a slight increase in the associated explained variance. Finally, directions for future work are presented.
Metric versus observable operator representation, higher spin models
NASA Astrophysics Data System (ADS)
Fring, Andreas; Frith, Thomas
2018-02-01
We elaborate further on the metric representation that is obtained by transferring the time-dependence from a Hermitian Hamiltonian to the metric operator in a related non-Hermitian system. We provide further insight into the procedure of how to employ the time-dependent Dyson relation and the quasi-Hermiticity relation to solve time-dependent Hermitian Hamiltonian systems. By solving both equations separately, we argue here that it is in general easier to solve the former. We solve the mutually related time-dependent Schrödinger equation for Hermitian and non-Hermitian spin 1/2, 1 and 3/2 models with time-independent and time-dependent metric, respectively. In all models, the overdetermined coupled system of equations for the Dyson map can be decoupled by algebraic manipulations and reduces to simple linear differential equations and an equation that can be converted into the non-linear Ermakov-Pinney equation.
Linear complementarity formulation for 3D frictional sliding problems
Kaven, Joern; Hickman, Stephen H.; Davatzes, Nicholas C.; Mutlu, Ovunc
2012-01-01
Frictional sliding on quasi-statically deforming faults and fractures can be modeled efficiently using a linear complementarity formulation. We review the formulation in two dimensions and expand the formulation to three-dimensional problems including problems of orthotropic friction. This formulation accurately reproduces analytical solutions to static Coulomb friction sliding problems. The formulation accounts for opening displacements that can occur near regions of non-planarity even under large confining pressures. Such problems are difficult to solve owing to the coupling of relative displacements and tractions; thus, many geomechanical problems tend to neglect these effects. Simple test cases highlight the importance of including friction and allowing for opening when solving quasi-static fault mechanics models. These results also underscore the importance of considering the effects of non-planarity in modeling processes associated with crustal faulting.
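To make the linear complementarity structure concrete: an LCP asks for z ≥ 0 with w = Mz + q ≥ 0 and z·w = 0 (each component is either sliding, z > 0 with zero residual, or locked, z = 0 with positive residual). The sketch below is illustrative only; the matrix M and vector q are made up, not taken from the paper, and for tiny problems the complementarity conditions can be checked by enumerating active sets rather than with a production solver such as Lemke's method.

```python
# Minimal LCP sketch: find z >= 0 with w = M z + q >= 0 and z . w = 0.
# M and q are illustrative; tiny problems can be solved by enumerating
# which components of z are allowed to be nonzero (the "active set").
import itertools
import numpy as np

def solve_lcp_enum(M, q):
    n = len(q)
    for support in itertools.chain.from_iterable(
            itertools.combinations(range(n), k) for k in range(n + 1)):
        z = np.zeros(n)
        s = list(support)
        if s:
            try:
                # on the support, w_s = 0  =>  M_ss z_s = -q_s
                z[s] = np.linalg.solve(M[np.ix_(s, s)], -q[s])
            except np.linalg.LinAlgError:
                continue
        w = M @ z + q
        if (z >= -1e-12).all() and (w >= -1e-12).all():
            return z, w
    return None

M = np.array([[2.0, 1.0], [1.0, 2.0]])
q = np.array([-1.0, -1.0])
z, w = solve_lcp_enum(M, q)
# complementarity holds: z = [1/3, 1/3], w = [0, 0]
```

Enumeration scales exponentially and is only a teaching device; real frictional-contact codes use pivoting or interior-point LCP solvers.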
THE RESPONSE OF DRUG EXPENDITURE TO NON-LINEAR CONTRACT DESIGN: EVIDENCE FROM MEDICARE PART D*
Einav, Liran; Finkelstein, Amy; Schrimpf, Paul
2016-01-01
We study the demand response to non-linear price schedules using data on insurance contracts and prescription drug purchases in Medicare Part D. We exploit the kink in individuals’ budget set created by the famous “donut hole,” where insurance becomes discontinuously much less generous on the margin, to provide descriptive evidence of the drug purchase response to a price increase. We then specify and estimate a simple dynamic model of drug use that allows us to quantify the spending response along the entire non-linear budget set. We use the model for counterfactual analysis of the increase in spending from “filling” the donut hole, as will be required by 2020 under the Affordable Care Act. In our baseline model, which considers spending decisions within a single year, we estimate that “filling” the donut hole will increase annual drug spending by about $150, or about 8 percent. About one-quarter of this spending increase reflects “anticipatory” behavior, coming from beneficiaries whose spending prior to the policy change would leave them short of reaching the donut hole. We also present descriptive evidence of cross-year substitution of spending by individuals who reach the kink, which motivates a simple extension to our baseline model that allows – in a highly stylized way – for individuals to engage in such cross-year substitution. Our estimates from this extension suggest that a large share of the $150 drug spending increase could be attributed to cross-year substitution, and the net increase could be as little as $45 per year. PMID:26769984
Karev, Georgy P; Wolf, Yuri I; Koonin, Eugene V
2003-10-12
The distributions of many genome-associated quantities, including the membership of paralogous gene families, can be approximated with power laws. We are interested in developing mathematical models of genome evolution that adequately account for the shape of these distributions and describe the evolutionary dynamics of their formation. We show that simple stochastic models of genome evolution lead to power-law asymptotics of the protein domain family size distribution. These models, called Birth, Death and Innovation Models (BDIM), represent a special class of balanced birth-and-death processes in which domain duplication and deletion rates are asymptotically equal up to the second order. The simplest, linear BDIM shows an excellent fit to the observed distributions of domain family size in diverse prokaryotic and eukaryotic genomes. However, the stochastic version of the linear BDIM explored here predicts that the actual size of large paralogous families is reached on an unrealistically long timescale. We show that the introduction of non-linearity, which might be interpreted as interaction of a particular order between individual family members, allows the model to achieve genome evolution rates that are much more compatible with current estimates of the rates of individual duplication/loss events.
NASA Astrophysics Data System (ADS)
Milani, Gabriele; Olivito, Renato S.; Tralli, Antonio
2014-10-01
The buckling behavior of slender unreinforced masonry (URM) walls subjected to axial compression and out-of-plane lateral loads is investigated through a combined experimental and numerical homogenized approach. After a preliminary analysis performed on a unit cell meshed by means of elastic FEs and non-linear interfaces, the macroscopic moment-curvature diagrams so obtained are implemented at the structural level, discretizing masonry by means of rigid triangular elements and non-linear interfaces. The non-linear incremental response of the structure is obtained through a specific quadratic programming routine. In parallel, a wide experimental campaign is conducted on walls in two-way bending, with the double aim of both validating the numerical model and investigating the behavior of walls that may not be reduced to simple cantilevers or simply supported beams. The panels investigated are dry-joint, reduced-scale square walls simply supported at the base and on a vertical edge, exhibiting the classical Rondelet's mechanism. The results obtained are compared with those provided by the numerical model.
A constitutive model for the warp-weft coupled non-linear behavior of knitted biomedical textiles.
Yeoman, Mark S; Reddy, Daya; Bowles, Hellmut C; Bezuidenhout, Deon; Zilla, Peter; Franz, Thomas
2010-11-01
Knitted textiles have been used in medical applications due to their high flexibility and low tendency to fray. Their mechanics have, however, received limited attention. A constitutive model for soft tissue using a strain energy function was extended, by including shear and increasing the number and order of coefficients, to represent the non-linear warp-weft coupled mechanics of coarse textile knits under uniaxial tension. The constitutive relationship was implemented in a commercial finite element package. The model and its implementation were verified and validated for uniaxial tension and simple shear using patch tests and physical test data from uniaxial tensile tests of four very different knitted fabric structures. A genetic algorithm with a step-wise increase in resolution and a linear reduction in the range of the search space was developed for the optimization of the fabric model coefficients. The numerically predicted stress-strain curves exhibited the non-linear stiffening characteristic of fabrics. For three fabrics, the predicted mechanics correlated well with physical data, at least in one principal direction (warp or weft), and moderately in the other direction. The model exhibited limitations in approximating the linear elastic behavior of the fourth fabric. With proposals to address this limitation and to incorporate time-dependent changes in fabric mechanics associated with tissue ingrowth, the constitutive model offers a tool for the design of tissue-regenerative knitted textile implants.
Simplified large African carnivore density estimators from track indices.
Winterbach, Christiaan W; Ferreira, Sam M; Funston, Paul J; Somers, Michael J
2016-01-01
The range, population size and trend of large carnivores are important parameters to assess their status globally and to plan conservation strategies. One can use linear models to assess population size and trends of large carnivores from track-based surveys on suitable substrates. The conventional approach of a linear model with intercept may not pass through zero, but may fit the data better than a linear model through the origin. We assess whether a linear regression through the origin is more appropriate than a linear regression with intercept to model large African carnivore densities and track indices. We performed simple linear regression with intercept and simple linear regression through the origin, and used the confidence interval for β in the linear model y = αx + β, the Standard Error of Estimate, the Mean Square Residual and the Akaike Information Criterion to evaluate the models. The Lion on Clay and Low Density on Sand models with intercept were not significant (P > 0.05). The other four models with intercept and the six models through the origin were all significant (P < 0.05). The models using linear regression with intercept all included zero in the confidence interval for β, and the null hypothesis that β = 0 could not be rejected. All models showed that the linear model through the origin provided a better fit than the linear model with intercept, as indicated by the Standard Error of Estimate and Mean Square Residuals. The Akaike Information Criterion showed that linear models through the origin were better and that none of the linear models with intercept had substantial support. Our results show that linear regression through the origin is justified over the more typical linear regression with intercept for all models we tested. A general model can be used to estimate large carnivore densities from track densities across species and study areas.
The formula observed track density = 3.26 × carnivore density can be used to estimate densities of large African carnivores using track counts on sandy substrates in areas where carnivore densities are 0.27 carnivores/100 km² or higher. To improve the current models, we need independent data to validate the models and data to test for a non-linear relationship between track indices and true density at low densities.
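The two competing models can be compared in a few lines. The sketch below uses simulated track-count data whose "true" slope mirrors the paper's 3.26 formula (the data points themselves are made up, not the survey data), and contrasts the through-origin estimator with the ordinary fit with intercept.

```python
# Sketch: linear regression through the origin (y = a x) versus with
# intercept (y = a x + b) on synthetic track-density data.  The true slope
# 3.26 echoes the paper's formula; the samples are simulated.
import numpy as np

rng = np.random.default_rng(0)
carnivore_density = rng.uniform(0.3, 3.0, size=40)            # carnivores / 100 km^2
track_density = 3.26 * carnivore_density + rng.normal(0, 0.3, size=40)

# Through the origin: least squares gives a = (x.y) / (x.x)
a_origin = (carnivore_density @ track_density) / (carnivore_density @ carnivore_density)

# With intercept: ordinary least squares
a_int, b_int = np.polyfit(carnivore_density, track_density, 1)

sse_origin = np.sum((track_density - a_origin * carnivore_density) ** 2)
sse_int = np.sum((track_density - (a_int * carnivore_density + b_int)) ** 2)
```

Because the intercept model nests the through-origin model, its raw sum of squared errors is never larger; the paper's point is that once the extra parameter is penalized (AIC) and β is found indistinguishable from zero, the through-origin model is preferred.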
FDATMOS16 non-linear partitioning and organic volatility distributions in urban aerosols
Madronich, Sasha; Kleinman, Larry; Conley, Andrew; ...
2015-12-17
Gas-to-particle partitioning of organic aerosols (OA) is represented in most models by Raoult’s law, and depends on the existing mass of particles into which organic gases can dissolve. This raises the possibility of a non-linear response of particle-phase OA to the emissions of precursor volatile organic compounds (VOCs) that contribute to this partitioning mass. Implications for air quality management are evident: a strong non-linear dependence would suggest that reductions in VOC emissions would have a more-than-proportionate benefit in lowering ambient OA concentrations. Chamber measurements on simple VOC mixtures generally confirm the non-linear scaling between OA and VOCs, usually stated as a mass-dependence of the measured OA yields. However, for realistic ambient conditions including urban settings, no single component dominates the composition of the organic particles, and deviations from linearity are presumed to be small. Here we re-examine the linearity question using volatility spectra from several sources: (1) chamber studies of selected aerosols, (2) volatility inferred for aerosols sampled in two megacities, Mexico City and Paris, and (3) an explicit chemistry model (GECKO-A). These few available volatility distributions suggest that urban OA may be only slightly super-linear, with most values of the sensitivity exponent in the range 1.1-1.3, substantially lower than seen in chambers for some specific aerosols. Furthermore, the rather low values suggest that OA concentrations in megacities are not an inevitable convergence of non-linear effects, but can be addressed (much like in smaller urban areas) by proportionate reductions in emissions.
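A back-of-envelope reading of the sensitivity exponent: if OA scales roughly as E^s with emissions E, then the exponents reported here (s ≈ 1.1-1.3) make a given fractional emission cut only slightly more effective than proportionate. The numbers below are illustrative, not from the paper.

```python
# Effect of a 20% precursor-emission cut under OA ~ E**s for a few
# illustrative sensitivity exponents s (s = 1 is exactly proportionate).
for s in (1.0, 1.1, 1.3):
    remaining = 0.8 ** s          # fraction of OA left after the 20% cut
    print(f"s = {s}: OA reduced to {remaining:.3f} of its original value")
```

For s = 1.3 the cut leaves about 75% of the OA instead of the proportionate 80%, which is the sense in which urban OA is "only slightly super-linear."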
NASA Astrophysics Data System (ADS)
Falvo, Cyril
2018-02-01
The theory of linear and non-linear infrared response of vibrational Holstein polarons in one-dimensional lattices is presented in order to identify the spectral signatures of self-trapping phenomena. Using a canonical transformation, the optical response is computed from the small polaron point of view, which is valid in the anti-adiabatic limit. Two types of phonon baths are considered: optical phonons and acoustical phonons, and simple expressions are derived for the infrared response. It is shown that for the case of optical phonons, the linear response can directly probe the polaron density of states. The model is used to interpret the experimental spectrum of crystalline acetanilide in the C=O range. For the case of acoustical phonons, it is shown that two bound states can be observed in the two-dimensional infrared spectrum at low temperature. At high temperature, analysis of the time-dependence of the two-dimensional infrared spectrum indicates that bath-mediated correlations slow down spectral diffusion. The model is used to interpret the experimental linear spectroscopy of model α-helix and β-sheet polypeptides. This work shows that the Davydov Hamiltonian cannot explain the observations in the NH stretching range.
NASA Astrophysics Data System (ADS)
Rostami, M.; Zeitlin, V.
2017-12-01
We show how the properties of the Mars polar vortex can be understood in the framework of a simple shallow-water type model obtained by vertical averaging of the adiabatic “primitive” equations, and “improved” by inclusion of thermal relaxation and convective fluxes due to the phase transitions of CO2, the major constituent of the Martian atmosphere. We perform a stability analysis of the vortex, show that the corresponding mean zonal flow is unstable, and simulate numerically the non-linear saturation of the instability. We show in this way that, while non-linear adiabatic saturation of the instability tends to reorganize the vortex, the diabatic effects prevent this, and thus provide an explanation of the vortex's form and longevity.
An instrument to measure mechanical up-conversion phenomena in metals in the elastic regime
NASA Astrophysics Data System (ADS)
Vajente, G.; Quintero, E. A.; Ni, X.; Arai, K.; Gustafson, E. K.; Robertson, N. A.; Sanchez, E. J.; Greer, J. R.; Adhikari, R. X.
2016-06-01
Crystalline materials, such as metals, are known to exhibit deviation from a simple linear relation between strain and stress when the latter exceeds the yield stress. In addition, it has been shown that metals respond to varying external stress in a discontinuous way in this regime, exhibiting discrete releases of energy. This crackling noise has been extensively studied both experimentally and theoretically when the metals are operating in the plastic regime. In our study, we focus on the behavior of metals in the elastic regime, where the stresses are well below the yield stress. We describe an instrument that aims to characterize non-linear mechanical noise in metals when stressed in the elastic regime. In macroscopic systems, this phenomenon is expected to manifest as a non-stationary noise modulated by external disturbances applied to the material, a form of mechanical up-conversion of noise. The main motivation for this work is the case of maraging steel components (cantilevers and wires) in the suspension systems of terrestrial gravitational wave detectors. Such instruments are planned to reach very ambitious displacement sensitivities, and therefore mechanical noise in the cantilevers could prove to be a limiting factor for the detectors' final sensitivities, mainly due to non-linear up-conversion of low frequency residual seismic motion to the frequencies of interest for the gravitational wave observations. We describe here the experimental setup, with a target sensitivity of 10⁻¹⁵ m/√Hz in the frequency range of 10-1000 Hz, a simple phenomenological model of the non-linear mechanical noise, and the analysis method that is inspired by this model.
A Simple Model for Nonlinear Confocal Ultrasonic Beams
NASA Astrophysics Data System (ADS)
Zhang, Dong; Zhou, Lin; Si, Li-Sheng; Gong, Xiu-Fen
2007-01-01
A confocally and coaxially arranged pair of focused transmitter and receiver represents one of the best geometries for medical ultrasonic imaging and non-invasive detection. We develop a simple theoretical model for describing the nonlinear propagation of a confocal ultrasonic beam in biological tissues. On the basis of the parabolic approximation and the quasi-linear approximation, the nonlinear Khokhlov-Zabolotskaya-Kuznetsov (KZK) equation is solved by using the angular spectrum approach. The Gaussian superposition technique is applied to simplify the solution, and an analytical solution for the second harmonics in the confocal ultrasonic beam is presented. Measurements are performed to examine the validity of the theoretical model. This work provides a preliminary model for acoustic nonlinear microscopy.
Non-Debye approximation of the specific heat of solids
NASA Astrophysics Data System (ADS)
Bhattacharjee, Ruma; Das, Anamika; Sarkar, A.
2018-05-01
A simple non-Debye frequency spectrum is proposed. The normalized frequency spectrum is compared to the Debye spectrum. The proposed spectrum provides a good account of the low-frequency phonon density of states, which gives a linear temperature variation at low temperature, in contrast to the Debye T³ law. The proposed model also provides a good account of the excess specific heat of nanostructured solids.
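The claimed low-temperature scalings can be checked numerically: integrating the Einstein-function weighting over a Debye spectrum (g ∝ ω²) gives C ∝ T³, while a flat low-frequency spectrum gives C ∝ T. The units and spectra below are dimensionless illustrations (ħ = k_B = 1, cutoff frequency 1), not the paper's proposed spectrum.

```python
# Specific heat C(T) = ∫ g(w) x^2 e^x / (e^x - 1)^2 dw with x = w/T,
# evaluated by a trapezoidal sum for two illustrative spectra.
import numpy as np

w = np.linspace(1e-6, 1.0, 200_001)

def heat_capacity(g, T):
    x = w / T
    f = g * x**2 * np.exp(x) / np.expm1(x)**2   # Einstein-function weighting
    return np.sum((f[:-1] + f[1:]) * np.diff(w)) / 2

g_debye = 3 * w**2            # Debye-like spectrum: expect C ~ T^3
g_flat = np.ones_like(w)      # flat low-frequency spectrum: expect C ~ T

T1, T2 = 0.02, 0.04           # both well below the cutoff
exp_debye = np.log(heat_capacity(g_debye, T2) / heat_capacity(g_debye, T1)) / np.log(2)
exp_flat = np.log(heat_capacity(g_flat, T2) / heat_capacity(g_flat, T1)) / np.log(2)
# exp_debye ~ 3 (Debye T^3 law), exp_flat ~ 1 (linear in T)
```

The fitted exponents come out very close to 3 and 1 respectively, illustrating why a non-Debye low-frequency tail in g(ω) produces a linear-in-T contribution to the specific heat.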
Identifying fMRI Model Violations with Lagrange Multiplier Tests
Cassidy, Ben; Long, Christopher J; Rae, Caroline; Solo, Victor
2013-01-01
The standard modeling framework in Functional Magnetic Resonance Imaging (fMRI) is predicated on assumptions of linearity, time invariance and stationarity. These assumptions are rarely checked because doing so requires specialised software, although failure to do so can lead to bias and mistaken inference. Identifying model violations is an essential but largely neglected step in standard fMRI data analysis. Using Lagrange Multiplier testing methods we have developed simple and efficient procedures for detecting model violations such as non-linearity, non-stationarity and validity of the common Double Gamma specification for hemodynamic response. These procedures are computationally cheap and can easily be added to a conventional analysis. The test statistic is calculated at each voxel and displayed as a spatial anomaly map which shows regions where a model is violated. The methodology is illustrated with a large number of real data examples. PMID:22542665
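The Lagrange multiplier idea can be illustrated on ordinary regression data: fit the linear model, then regress its residuals on an augmented design that includes a candidate non-linear term; n times the R² of that auxiliary regression is asymptotically χ² under the null of no violation. The data and the specific quadratic alternative below are simulated stand-ins, not fMRI data or the paper's exact test statistics.

```python
# Sketch of an LM-style test for unmodelled non-linearity.  Fit y on [1, x],
# then regress the residuals on [1, x, x^2]; LM = n * R^2 of the auxiliary
# regression is asymptotically chi-square(1) under the null of linearity.
import numpy as np

rng = np.random.default_rng(42)
n = 500
x = rng.uniform(-2, 2, n)

def lm_stat(y):
    X = np.column_stack([np.ones(n), x])
    resid = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]
    Xa = np.column_stack([np.ones(n), x, x**2])          # auxiliary regressors
    fit = Xa @ np.linalg.lstsq(Xa, resid, rcond=None)[0]
    r2 = 1 - np.sum((resid - fit) ** 2) / np.sum((resid - resid.mean()) ** 2)
    return n * r2

lm_linear = lm_stat(x + rng.normal(0, 1, n))                   # truly linear data
lm_quadratic = lm_stat(x + 0.5 * x**2 + rng.normal(0, 1, n))   # hidden curvature
```

With genuine curvature the statistic blows up far past the 5% χ²(1) cutoff of 3.84, while for linear data it stays of order one; the same logic, applied voxel-wise, yields the anomaly maps described in the abstract.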
Wang, Yubo; Veluvolu, Kalyana C
2017-06-14
It is often difficult to analyze biological signals because of their nonlinear and non-stationary characteristics. This necessitates the use of time-frequency decomposition methods for analyzing the subtle changes in these signals that are often connected to an underlying phenomenon. This paper presents a new approach to analyze the time-varying characteristics of such signals by employing a simple truncated Fourier series model, namely the band-limited multiple Fourier linear combiner (BMFLC). In contrast to earlier designs, we first identify the sparsity imposed on the signal model in order to reformulate the model as a sparse linear regression model. The coefficients of the proposed model are then estimated by a convex optimization algorithm. The performance of the proposed method was analyzed with benchmark test signals. An energy ratio metric is employed to quantify the spectral performance, and the results show that the proposed Sparse-BMFLC method has a high mean energy ratio (0.9976) and outperforms existing methods such as the short-time Fourier transform (STFT), the continuous wavelet transform (CWT) and the BMFLC Kalman smoother. Furthermore, the proposed method provides an overall improvement of 6.22% in reconstruction error.
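The BMFLC idea is to model the signal as a truncated Fourier series over a narrow frequency band and estimate the combiner weights. The sketch below uses plain least squares as a stand-in for the paper's sparse convex-optimization estimator, and a synthetic two-tone signal with made-up frequencies; it shows only the dictionary construction and reconstruction step.

```python
# Band-limited Fourier dictionary (BMFLC-style) fitted by least squares.
# Sample rate, band and test signal are illustrative choices.
import numpy as np

fs = 100.0                                   # sample rate (Hz)
t = np.arange(0, 2.0, 1 / fs)
signal = np.sin(2 * np.pi * 7.3 * t) + 0.5 * np.sin(2 * np.pi * 9.1 * t)

freqs = np.arange(6.0, 12.0, 0.1)            # band-limited frequency grid
dictionary = np.hstack([np.sin(2 * np.pi * np.outer(t, freqs)),
                        np.cos(2 * np.pi * np.outer(t, freqs))])
weights, *_ = np.linalg.lstsq(dictionary, signal, rcond=None)

reconstruction = dictionary @ weights
error = np.linalg.norm(signal - reconstruction) / np.linalg.norm(signal)
```

A sparse solver (e.g. an l1-penalized regression, as in the paper) would additionally concentrate the weights on the few grid frequencies actually present, which is what makes the representation interpretable for tracking time-varying components.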
Relating Stellar Cycle Periods to Dynamo Calculations
NASA Technical Reports Server (NTRS)
Tobias, S. M.
1998-01-01
Stellar magnetic activity in slowly rotating stars is often cyclic, with the period of the magnetic cycle depending critically on the rotation rate and the convective turnover time of the star. Here we show that the interpretation of this law from dynamo models is not a simple task. It is demonstrated that the period is (unsurprisingly) sensitive to the precise type of non-linearity employed. Moreover, the calculation of the wave-speed of plane-wave solutions does not (as was previously supposed) give an indication of the magnetic period in a more realistic dynamo model, as the changes in length-scale of solutions are not easily captured by this approach. Progress can be made, however, by considering a realistic two-dimensional model, in which the radial length-scale of waves is included. We show that it is possible in this case to derive a more robust relation between cycle period and dynamo number. For all the non-linearities considered in the most realistic model, the magnetic cycle period is a decreasing function of |D| (the amplitude of the dynamo number). However, discriminating between different non-linearities is difficult in this case, and care must therefore be taken before advancing explanations for the magnetic periods of stars.
Non-Linear Dynamics of Saturn's Rings
NASA Astrophysics Data System (ADS)
Esposito, L. W.
2016-12-01
Non-linear processes can explain why Saturn's rings are so active and dynamic. Ring systems differ from simple linear systems in two significant ways: 1. They are systems of granular material, where particle-to-particle collisions dominate; thus a kinetic, not a fluid, description is needed. Stresses are strikingly inhomogeneous and fluctuations are large compared to equilibrium. 2. They are strongly forced by resonances, which drive a non-linear response that pushes the system across thresholds leading to persistent states. Some of this non-linearity is captured in a simple predator-prey model: periodic forcing from the moon causes streamline crowding, which damps the relative velocity. About a quarter phase later, the aggregates stir the system to higher relative velocity, and the limit cycle repeats each orbit, with relative velocity ranging from nearly zero to a multiple of the orbit average. Summary of halo results: a predator-prey model for ring dynamics produces transient structures like 'straw' that can explain the halo morphology and spectroscopy. Cyclic velocity changes cause perturbed regions to reach higher collision speeds at some orbital phases, which preferentially removes small regolith particles; surrounding particles diffuse back too slowly to erase the effect, giving the halo morphology. This requires energetic collisions (v ≈ 10 m/s, with throw distances of about 200 km, implying objects of scale R ≈ 20 km). Transform to the Duffing equation: with the coordinate transformation z = M^(2/3), the predator-prey equations can be combined to form a single second-order differential equation with harmonic resonance forcing. Ring dynamics and history implications: moon-triggered clumping explains both small and large particles at resonances. We calculate the stationary size distribution using a cell-to-cell mapping procedure that converts the phase-plane trajectories to a Markov chain. Approximating the Markov chain as an asymmetric random walk with reflecting boundaries determines the power-law index, using results of numerical simulations in the tidal environment. Aggregates can explain many dynamic aspects of the rings and can renew rings by shielding and recycling the material within them, depending on how long the mass is sequestered. We can ask: are Saturn's rings a chaotic non-linear driven system?
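The forced limit-cycle picture can be illustrated with a generic predator-prey (Lotka-Volterra) pair driven by a weak periodic term. The equations and every parameter value below are illustrative placeholders, not the authors' ring model; the point is only that periodic forcing of such a pair yields bounded, repeating oscillations rather than equilibrium.

```python
# Toy forced predator-prey integration (explicit Euler, small step).
# x plays the role of "prey" (aggregate mass), y of "predator" (velocity
# dispersion); the cosine term mimics periodic moon-like forcing.
import math

dt, steps = 0.001, 40000
x, y = 1.2, 0.8
xs = []
for k in range(steps):
    t = k * dt
    forcing = 0.2 * math.cos(2.0 * t)        # weak periodic forcing
    dx = x * (1.0 + forcing - y)             # prey grows, suppressed by predator
    dy = y * (x - 1.0)                       # predator grows on prey
    x += dx * dt
    y += dy * dt
    xs.append(x)
# x oscillates around 1 in a bounded, repeating cycle
```

A more faithful treatment would use the authors' specific growth and stirring terms and a symplectic or adaptive integrator, but even this sketch reproduces the qualitative behaviour the abstract describes: the driven pair never settles, cycling once per forcing period.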
Non-Linear Dynamics of Saturn’s Rings
NASA Astrophysics Data System (ADS)
Esposito, Larry W.
2015-11-01
Non-linear processes can explain why Saturn's rings are so active and dynamic. Ring systems differ from simple linear systems in two significant ways: 1. They are systems of granular material, where particle-to-particle collisions dominate; thus a kinetic, not a fluid, description is needed. We find that stresses are strikingly inhomogeneous and fluctuations are large compared to equilibrium. 2. They are strongly forced by resonances, which drive a non-linear response, pushing the system across thresholds that lead to persistent states. Some of this non-linearity is captured in a simple predator-prey model: periodic forcing from the moon causes streamline crowding, which damps the relative velocity and allows aggregates to grow. About a quarter phase later, the aggregates stir the system to higher relative velocity and the limit cycle repeats each orbit. Summary of halo results: a predator-prey model for ring dynamics produces transient structures like 'straw' that can explain the halo structure and spectroscopy. This requires energetic collisions (v ≈ 10 m/s, with throw distances of about 200 km, implying objects of scale R ≈ 20 km). Transform to the Duffing equation: with the coordinate transformation z = M^(2/3), the predator-prey equations can be combined to form a single second-order differential equation with harmonic resonance forcing. Ring dynamics and history implications: moon-triggered clumping at perturbed regions in Saturn's rings creates both high velocity dispersion and large aggregates at these distances, explaining both small and large particles observed there. We calculate the stationary size distribution using a cell-to-cell mapping procedure that converts the phase-plane trajectories to a Markov chain. Approximating the Markov chain as an asymmetric random walk with reflecting boundaries allows us to determine the power-law index from results of numerical simulations in the tidal environment surrounding Saturn.
Aggregates can explain many dynamic aspects of the rings and can renew rings by shielding and recycling the material within them, depending on how long the mass is sequestered. We can ask: Are Saturn’s rings a chaotic non-linear driven system?
A non-modal analytical method to predict turbulent properties applied to the Hasegawa-Wakatani model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Friedman, B., E-mail: friedman11@llnl.gov; Lawrence Livermore National Laboratory, Livermore, California 94550; Carter, T. A.
2015-01-15
Linear eigenmode analysis often fails to describe turbulence in model systems that have non-normal linear operators and thus nonorthogonal eigenmodes, which can cause fluctuations to transiently grow faster than expected from eigenmode analysis. When combined with energetically conservative nonlinear mode mixing, transient growth can lead to sustained turbulence even in the absence of eigenmode instability. Since linear operators ultimately provide the turbulent fluctuations with energy, it is useful to define a growth rate that takes into account non-modal effects, allowing for prediction of energy injection, transport levels, and possibly even turbulent onset in the subcritical regime. We define such a non-modal growth rate using a relatively simple model of the statistical effect that the nonlinearities have on cross-phases and amplitude ratios of the system state variables. In particular, we model the nonlinearities as delta-function-like, periodic forces that randomize the state variables once every eddy turnover time. Furthermore, we estimate the eddy turnover time to be the inverse of the least stable eigenmode frequency or growth rate, which allows for prediction without nonlinear numerical simulation. We test this procedure on the 2D and 3D Hasegawa-Wakatani model [A. Hasegawa and M. Wakatani, Phys. Rev. Lett. 50, 682 (1983)] and find that the non-modal growth rate is a good predictor of energy injection rates, especially in the strongly non-normal, fully developed turbulence regime.
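The transient-growth mechanism behind non-modal analysis is easy to demonstrate with a textbook non-normal matrix (not the Hasegawa-Wakatani operator): both eigenvalues are negative, yet the norm of the evolved state transiently exceeds its initial value because the eigenvectors are not orthogonal.

```python
# Minimal illustration of non-modal (transient) growth for a 2x2
# non-normal system dx/dt = A x with modally stable eigenvalues -1, -2.
import numpy as np

A = np.array([[-1.0, 5.0],
              [0.0, -2.0]])

def propagator(t):
    # closed-form exp(A t) for this upper-triangular A:
    # off-diagonal = A01 * (e^{l1 t} - e^{l2 t}) / (l1 - l2)
    off = 5.0 * (np.exp(-t) - np.exp(-2.0 * t))
    return np.array([[np.exp(-t), off],
                     [0.0, np.exp(-2.0 * t)]])

v0 = np.array([0.0, 1.0])
growth_early = np.linalg.norm(propagator(0.5) @ v0)   # > 1: transient growth
growth_late = np.linalg.norm(propagator(10.0) @ v0)   # -> 0: eventual decay
```

Eigenmode analysis alone would predict monotone decay from the start; the transient amplification shown here is exactly the energy-injection channel that the non-modal growth rate of the abstract is built to capture.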
Linear and Non-Linear Visual Feature Learning in Rat and Humans
Bossens, Christophe; Op de Beeck, Hans P.
2016-01-01
The visual system processes visual input in a hierarchical manner in order to extract relevant features that can be used in tasks such as invariant object recognition. Although typically investigated in primates, recent work has shown that rats can be trained in a variety of visual object and shape recognition tasks. These studies did not pinpoint the complexity of the features used by these animals. Many tasks might be solved by using a combination of relatively simple features which tend to be correlated. Alternatively, rats might extract complex features or feature combinations which are nonlinear with respect to those simple features. In the present study, we address this question by starting from a small stimulus set for which one stimulus-response mapping involves a simple linear feature to solve the task while another mapping needs a well-defined nonlinear combination of simpler features related to shape symmetry. We verified computationally that the nonlinear task cannot be trivially solved by a simple V1-model. We show how rats are able to solve the linear feature task but are unable to acquire the nonlinear feature. In contrast, humans are able to use the nonlinear feature and are even faster in uncovering this solution as compared to the linear feature. The implications for the computational capabilities of the rat visual system are discussed. PMID:28066201
Exponents of non-linear clustering in scale-free one-dimensional cosmological simulations
NASA Astrophysics Data System (ADS)
Benhaiem, David; Joyce, Michael; Sicard, François
2013-03-01
One-dimensional versions of dissipationless cosmological N-body simulations have been shown to share many qualitative behaviours of the three-dimensional problem. Their interest lies in the fact that they can resolve a much greater range of time and length scales, and admit exact numerical integration. We use such models here to study how non-linear clustering depends on initial conditions and cosmology. More specifically, we consider a family of models which, like the three-dimensional Einstein-de Sitter (EdS) model, lead for power-law initial conditions to self-similar clustering characterized in the strongly non-linear regime by power-law behaviour of the two-point correlation function. We study how the corresponding exponent γ depends on the initial conditions, characterized by the exponent n of the power spectrum of initial fluctuations, and on a single parameter κ controlling the rate of expansion. The space of initial conditions/cosmology divides very clearly into two parts: (1) a region in which γ depends strongly on both n and κ and where it agrees very well with a simple generalization of the so-called stable clustering hypothesis in three dimensions; and (2) a region in which γ is more or less independent of both the spectrum and the expansion of the universe. The boundary in (n, κ) space dividing the `stable clustering' region from the `universal' region is very well approximated by a `critical' value of the predicted stable clustering exponent itself. We explain how this division of the (n, κ) space can be understood as a simple physical criterion which might indeed be expected to control the validity of the stable clustering hypothesis. We compare and contrast our findings to results in three dimensions, and discuss in particular the light they may throw on the question of `universality' of non-linear clustering in this context.
USDA-ARS?s Scientific Manuscript database
The fuzzy logic algorithm has the ability to describe knowledge in a descriptive human-like manner in the form of simple rules using linguistic variables, and provides a new way of modeling uncertain or naturally fuzzy hydrological processes like non-linear rainfall-runoff relationships. Fuzzy infe...
NASA Technical Reports Server (NTRS)
Torres-Pomales, Wilfredo
2015-01-01
This report documents a case study on the application of Reliability Engineering techniques to achieve an optimal balance between performance and robustness by tuning the functional parameters of a complex non-linear control system. For complex systems with intricate and non-linear patterns of interaction between system components, analytical derivation of a mathematical model of system performance and robustness in terms of functional parameters may not be feasible or cost-effective. The demonstrated approach is simple, structured, effective, repeatable, and cost and time efficient. This general approach is suitable for a wide range of systems.
Application of the Lienard-Wiechert solution to a lightning return stroke model
NASA Technical Reports Server (NTRS)
Meneghini, R.
1983-01-01
The electric and magnetic fields associated with the lightning return stroke are expressed as a convolution of the current waveform shape and the fields generated by a moving charge of amplitude one (i.e., the Lienard-Wiechert solution for a unit charge). The representation can be used to compute the fields produced by a current waveform of non-uniform velocity that propagates along a filament of arbitrary, but finite, curvature. To study numerically the effects of linear charge acceleration and channel curvature, two simple channel models are used: the linear and the hyperbolic.
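The convolution structure of this representation can be sketched numerically. The waveform and unit-response kernel below are hypothetical placeholders, not the paper's actual channel geometry or Lienard-Wiechert kernel; the point is only that the field of an arbitrary waveform follows by convolving it with the field of a unit charge.

```python
import numpy as np

dt = 1e-7                       # time step, s (illustrative)
t = np.arange(0.0, 2e-5, dt)

# Hypothetical current waveform: double-exponential return-stroke shape.
i_t = 30e3 * (np.exp(-t / 5e-6) - np.exp(-t / 5e-7))

# Hypothetical unit-charge field response (non-negative decaying kernel,
# normalized so that it integrates to one).
g_t = np.exp(-t / 2e-6)
g_t /= g_t.sum() * dt

# Field as the convolution of the waveform shape with the unit response.
E_t = np.convolve(i_t, g_t)[: t.size] * dt
```

Because the kernel is a normalized weighting, the convolved field is a smoothed, causal version of the current waveform.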
More memory under evolutionary learning may lead to chaos
NASA Astrophysics Data System (ADS)
Diks, Cees; Hommes, Cars; Zeppini, Paolo
2013-02-01
We show that an increase of memory of past strategy performance in a simple agent-based innovation model, with agents switching between costly innovation and cheap imitation, can be quantitatively stabilising while at the same time qualitatively destabilising. As memory in the fitness measure increases, the amplitude of price fluctuations decreases, but at the same time a bifurcation route to chaos may arise. The core mechanism leading to the chaotic behaviour in this model with strategy switching is that the map obtained for the system with memory is a convex combination of an increasing linear function and a decreasing non-linear function.
CDP++.Italian: Modelling Sublexical and Supralexical Inconsistency in a Shallow Orthography
Perry, Conrad; Ziegler, Johannes C.; Zorzi, Marco
2014-01-01
Most models of reading aloud have been constructed to explain data in relatively complex orthographies like English and French. Here, we created an Italian version of the Connectionist Dual Process Model of Reading Aloud (CDP++) to examine the extent to which the model could predict data in a language which has relatively simple orthography-phonology relationships but is relatively complex at a suprasegmental (word stress) level. We show that the model exhibits good quantitative performance and accounts for key phenomena observed in naming studies, including some apparently contradictory findings. These effects include stress regularity and stress consistency, both of which have been especially important in studies of word recognition and reading aloud in Italian. Overall, the results of the model compare favourably to an alternative connectionist model that can learn non-linear spelling-to-sound mappings. This suggests that CDP++ is currently the leading computational model of reading aloud in Italian, and that its simple linear learning mechanism adequately captures the statistical regularities of the spelling-to-sound mapping both at the segmental and supra-segmental levels. PMID:24740261
Advanced statistics: linear regression, part I: simple linear regression.
Marill, Keith A
2004-01-01
Simple linear regression is a mathematical technique used to model the relationship between a single independent predictor variable and a single dependent outcome variable. In this, the first of a two-part series exploring concepts in linear regression analysis, the four fundamental assumptions and the mechanics of simple linear regression are reviewed. The most common technique used to derive the regression line, the method of least squares, is described. The reader will be acquainted with other important concepts in simple linear regression, including: variable transformations, dummy variables, relationship to inference testing, and leverage. Simplified clinical examples with small datasets and graphic models are used to illustrate the points. This will provide a foundation for the second article in this series: a discussion of multiple linear regression, in which there are multiple predictor variables.
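The least-squares mechanics and the leverage concept reviewed in the article can be sketched in a few lines (the data are illustrative, in the spirit of the article's simplified clinical examples):

```python
import numpy as np

# Illustrative data: single predictor x, single outcome y.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([3.1, 4.9, 7.2, 8.8, 11.0])

# Method of least squares: slope and intercept minimizing the sum of
# squared residuals have a closed-form solution.
x_bar, y_bar = x.mean(), y.mean()
slope = np.sum((x - x_bar) * (y - y_bar)) / np.sum((x - x_bar) ** 2)
intercept = y_bar - slope * x_bar

# Leverage of each observation (diagonal of the hat matrix): points far
# from x_bar pull the fitted line more strongly.
leverage = 1.0 / x.size + (x - x_bar) ** 2 / np.sum((x - x_bar) ** 2)
```

For these data the fit is y ≈ 1.09 + 1.97 x, and the leverages sum to 2, the number of fitted parameters.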
Estimating linear temporal trends from aggregated environmental monitoring data
Erickson, Richard A.; Gray, Brian R.; Eager, Eric A.
2017-01-01
Trend estimates are often used as part of environmental monitoring programs. These trends inform managers (e.g., are desired species increasing or undesired species decreasing?). Data collected from environmental monitoring programs are often aggregated (i.e., averaged), which confounds sampling and process variation. State-space models allow sampling and process variation to be separated. We used simulated time-series to compare linear trend estimates from three state-space models, a simple linear regression model, and an auto-regressive model. We also compared the performance of these five models in estimating trends from a long-term monitoring program. We specifically estimated trends for two species of fish and four species of aquatic vegetation from the Upper Mississippi River system. We found that the simple linear regression had the best performance of all the given models because it was best able to recover parameters and had consistent numerical convergence. Conversely, the simple linear regression did the worst job of estimating populations in a given year. The state-space models did not estimate trends well, but estimated population sizes best when the models converged. We found that a simple linear regression performed better than more complex autoregressive and state-space models when used to analyze aggregated environmental monitoring data.
Wear-caused deflection evolution of a slide rail, considering linear and non-linear wear models
NASA Astrophysics Data System (ADS)
Kim, Dongwook; Quagliato, Luca; Park, Donghwi; Murugesan, Mohanraj; Kim, Naksoo; Hong, Seokmoo
2017-05-01
The research presented in this paper details an experimental-numerical approach for the quantitative correlation between wear and end-point deflection in a slide rail. Focusing on slide rails utilized in white-goods applications, the aim is to evaluate the number of cycles the slide rail can operate, under different load conditions, before it should be replaced due to unacceptable end-point deflection. In this paper, two formulations are utilized to describe the wear: the Archard model for the linear wear and the Lemaitre damage model for the non-linear wear. The linear wear gradually reduces the surface of the slide rail, whereas the non-linear one accounts for surface element deletion (i.e. due to pitting). To determine the constants to use in the wear models, a simple tension test and a sliding wear test, utilizing a purpose-designed experimental machine, have been carried out. A full slide rail model simulation has been implemented in ABAQUS, including both linear and non-linear wear models, and the results have been compared with those of real rails under different load conditions, provided by the rail manufacturer. The comparison between numerically estimated and real rail results proved the reliability of the developed numerical model, limiting the error to a ±10% range. The proposed approach allows prediction of the displacement-versus-cycle curves, parametrized for different loads, and, based on a chosen failure criterion, of the lifetime of the rail.
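The linear (Archard) part of the wear description can be sketched as follows. All constants here are hypothetical placeholders, not the paper's fitted values, and the deflection-to-wear mapping is reduced to a simple depth threshold for illustration:

```python
# Archard's linear wear law: worn volume V = k * F * s / H, with k the
# wear coefficient, F the normal load, s the sliding distance, and H
# the hardness of the softer surface.
def archard_wear_depth(k, load_n, stroke_m, hardness_pa, area_m2, cycles):
    sliding_distance = 2.0 * stroke_m * cycles   # out-and-back per cycle
    volume = k * load_n * sliding_distance / hardness_pa
    return volume / area_m2                      # mean depth over contact area

# Hypothetical slide-rail parameters (placeholders, not fitted values).
k, load, stroke, hardness, area = 1e-4, 200.0, 0.45, 2.0e9, 1e-4

def cycles_to_threshold(depth_limit_m):
    """Smallest cycle count at which wear depth reaches the limit."""
    n = 1
    while archard_wear_depth(k, load, stroke, hardness, area, n) < depth_limit_m:
        n += 1
    return n
```

Since the law is linear in sliding distance, doubling the cycle count doubles the predicted wear depth; a chosen failure threshold then maps directly to a predicted lifetime in cycles.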
NASA Astrophysics Data System (ADS)
Rahimi, Zaher; Sumelka, Wojciech; Yang, Xiao-Jun
2017-11-01
The application of fractional calculus makes fractional models (FMs) more flexible than integer-order models, since they encompass both integer and non-integer operators and can therefore represent a wider class of physical behaviour. In the present work, a new fractional nonlocal model is proposed, which has a simple form and, owing to the simple form of its numerical solution, can be applied to different problems. The model is then used to derive the governing equations of motion for the Timoshenko beam theory (TBT) and the Euler-Bernoulli beam theory (EBT). Next, free vibration of simply-supported (S-S) Timoshenko and Euler-Bernoulli beams is investigated. The Galerkin weighted residual method is used to solve the non-linear governing equations.
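How a non-integer operator enters such models can be sketched with the Grünwald-Letnikov discretization of a fractional derivative (a generic textbook scheme, not the specific nonlocal operator proposed in the paper):

```python
import numpy as np

def gl_weights(alpha, n):
    """Grünwald-Letnikov coefficients w_k = (-1)^k * C(alpha, k),
    via the standard recursion w_k = w_{k-1} * (1 - (alpha + 1) / k)."""
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = w[k - 1] * (1.0 - (alpha + 1.0) / k)
    return w

def gl_derivative(f, alpha, h):
    """Order-alpha Grünwald-Letnikov derivative of samples f, step h."""
    w = gl_weights(alpha, f.size)
    return np.array([np.dot(w[: m + 1], f[m::-1]) for m in range(f.size)]) / h ** alpha
```

For alpha = 1 the weights collapse to (1, -1, 0, 0, ...), i.e. the ordinary first difference, which is the sense in which fractional models "encompass" the integer operators.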
Non-Linear Analysis of Mode II Fracture in the End Notched Flexure Beam
NASA Astrophysics Data System (ADS)
Rizov, V.
2016-03-01
An analysis is carried out of fracture in the End Notched Flexure (ENF) beam configuration, taking into account the material non-linearity. For this purpose, the J-integral approach is applied. A non-linear model, based on the classical beam theory, is used. The mechanical behaviour of the ENF configuration is described by the Ramberg-Osgood stress-strain curve. It is assumed that the material possesses the same properties in tension and compression. The influence of the material constants in the Ramberg-Osgood stress-strain equation on the fracture behaviour is evaluated. The effect of the crack length on the J-integral value is investigated, too. The analytical approach developed in the present paper is very useful for parametric analyses, since the simple formulae obtained capture the essentials of the non-linear fracture in the ENF configuration.
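The Ramberg-Osgood law used for the ENF material can be sketched as follows (one common parameterization of the law; the constants are illustrative, not the paper's values):

```python
def ramberg_osgood_strain(sigma, E, K, n):
    """Total strain = elastic part + power-law (plastic) part.
    sigma: stress [Pa], E: Young's modulus [Pa],
    K, n: hardening constants of the power-law term."""
    return sigma / E + (sigma / K) ** n

# Illustrative constants (hypothetical, not fitted to any material).
E, K, n = 70e9, 900e6, 5.0

eps_low = ramberg_osgood_strain(10e6, E, K, n)    # ~linear regime
eps_high = ramberg_osgood_strain(700e6, E, K, n)  # non-linear regime
```

At low stress the power-law term is negligible and the response is effectively linear elastic; near K the non-linear term dominates, which is what makes the J-integral analysis sensitive to the Ramberg-Osgood constants.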
Propagating synchrony in feed-forward networks
Jahnke, Sven; Memmesheimer, Raoul-Martin; Timme, Marc
2013-01-01
Coordinated patterns of precisely timed action potentials (spikes) emerge in a variety of neural circuits but their dynamical origin is still not well understood. One hypothesis states that synchronous activity propagating through feed-forward chains of groups of neurons (synfire chains) may dynamically generate such spike patterns. Additionally, synfire chains offer the possibility to enable reliable signal transmission. So far, mostly densely connected chains, often with all-to-all connectivity between groups, have been theoretically and computationally studied. Yet, such prominent feed-forward structures have not been observed experimentally. Here we analytically and numerically investigate under which conditions diluted feed-forward chains may exhibit synchrony propagation. In addition to conventional linear input summation, we study the impact of non-linear, non-additive summation accounting for the effect of fast dendritic spikes. The non-linearities promote synchronous inputs to generate precisely timed spikes. We identify how non-additive coupling relaxes the conditions on connectivity such that it enables synchrony propagation at connectivities substantially lower than required for linearly coupled chains. Although the analytical treatment is based on a simple leaky integrate-and-fire neuron model, we show how to generalize our methods to biologically more detailed neuron models and verify our results by numerical simulations with, e.g., Hodgkin Huxley type neurons. PMID:24298251
Cosmic velocity-gravity relation in redshift space
NASA Astrophysics Data System (ADS)
Colombi, Stéphane; Chodorowski, Michał J.; Teyssier, Romain
2007-02-01
We propose a simple way to estimate the parameter β ≃ Ω^0.6/b from 3D galaxy surveys, where Ω is the non-relativistic matter-density parameter of the Universe and b is the bias between the galaxy distribution and the total matter distribution. Our method consists of measuring the relation between the cosmological velocity and gravity fields, and thus requires peculiar velocity measurements. The relation is measured directly in redshift space, so there is no need to reconstruct the density field in real space. In linear theory, the radial components of the gravity and velocity fields in redshift space are expected to be tightly correlated, with a slope that, in the distant observer approximation, is a simple function of β. We test this relation extensively using controlled numerical experiments based on a cosmological N-body simulation. To perform the measurements, we propose a new and rather simple adaptive interpolation scheme to estimate the velocity and the gravity field on a grid. One of the most striking results is that non-linear effects, including `fingers of God', affect mainly the tails of the joint probability distribution function (PDF) of the velocity and gravity fields: the 1-1.5 σ region around the maximum of the PDF is dominated by the linear theory regime, both in real and redshift space. This is understood explicitly by using the spherical collapse model as a proxy of non-linear dynamics. Applications of the method to real galaxy catalogues are discussed, including a preliminary investigation on homogeneous (volume-limited) `galaxy' samples extracted from the simulation with simple prescriptions based on halo and substructure identification, to quantify the effects of the bias between the galaxy distribution and the total matter distribution, as well as the effects of shot noise.
Neophytou, Andreas M; Picciotto, Sally; Brown, Daniel M; Gallagher, Lisa E; Checkoway, Harvey; Eisen, Ellen A; Costello, Sadie
2018-02-13
Prolonged exposures can have complex relationships with health outcomes, as timing, duration, and intensity of exposure are all potentially relevant. Summary measures such as cumulative exposure or average intensity of exposure may not fully capture these relationships. We applied penalized and unpenalized distributed lag non-linear models (DLNMs) with flexible exposure-response and lag-response functions in order to examine the association between crystalline silica exposure and mortality from lung cancer and non-malignant respiratory disease in a cohort study of 2,342 California diatomaceous earth workers, followed 1942-2011. We also assessed associations using simple measures of cumulative exposure assuming linear exposure-response and constant lag-response. Measures of association from DLNMs were generally higher than from simpler models. Rate ratios from penalized DLNMs corresponding to average daily exposures of 0.4 mg/m3 during lag years 31-50 prior to the age of observed cases were 1.47 (95% confidence interval (CI) 0.92, 2.35) for lung cancer and 1.80 (95% CI: 1.14, 2.85) for non-malignant respiratory disease. Rate ratios from the simpler models for the same exposure scenario were 1.15 (95% CI: 0.89-1.48) and 1.23 (95% CI: 1.03-1.46) respectively. Longitudinal cohort studies of prolonged exposures and chronic health outcomes should explore methods allowing for flexibility and non-linearities in the exposure-lag-response. © The Author(s) 2018. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health.
A diffusion model of protected population on bilocal habitat with generalized resource
NASA Astrophysics Data System (ADS)
Vasilyev, Maxim D.; Trofimtsev, Yuri I.; Vasilyeva, Natalya V.
2017-11-01
A model of population distribution in a two-dimensional area divided by an ecological barrier, i.e. the boundary of a natural reserve, is considered. The distribution of the population is defined by diffusion, directed migrations and areal resource. An exchange of specimens occurs between the two parts of the habitat. The mathematical model is presented in the form of a boundary value problem for a system of non-linear parabolic equations with variable parameters of diffusion and growth function. Splitting of the space variables, the sweep method and simple iteration were used for the numerical solution of the system. A set of programs was coded in Python. Numerical simulation results for the two-dimensional unsteady non-linear problem are analyzed in detail. The influence of migration flow coefficients and of the natural birth/death ratio function on the distributions of population densities is investigated. The results of the research allow description of the conditions for the stable and sustainable existence of populations in a bilocal habitat containing protected and non-protected zones.
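The diffusion core of such a model can be reduced to a minimal one-dimensional sketch in Python (the abstract's own implementation language): an explicit finite-difference step with no-flux (reflective) boundaries, which conserves total population when growth and migration are switched off. All parameter values are illustrative.

```python
import numpy as np

def diffusion_step(u, D, dx, dt, growth=None):
    """One explicit finite-difference step of u_t = D u_xx + f(u)
    with no-flux (reflective) boundaries."""
    r = D * dt / dx ** 2
    assert r <= 0.5, "explicit scheme stability condition"
    g = np.concatenate(([u[0]], u, [u[-1]]))       # ghost cells: zero flux
    u_new = u + r * (g[2:] - 2.0 * g[1:-1] + g[:-2])
    if growth is not None:
        u_new = u_new + dt * growth(u)             # e.g. birth/death term
    return u_new

# Illustrative initial density concentrated inside a "protected zone".
u = np.zeros(100)
u[40:60] = 1.0
for _ in range(500):
    u = diffusion_step(u, D=1.0, dx=1.0, dt=0.25)
```

With growth switched on (e.g. a logistic `growth=lambda u: a * u * (1 - u)`) and different diffusion parameters on the two sides of the barrier, this step is the building block of the splitting scheme the abstract describes.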
NASA Astrophysics Data System (ADS)
Frehner, Marcel; Amschwand, Dominik; Gärtner-Roer, Isabelle
2016-04-01
Rockglaciers consist of unconsolidated rock fragments (silt/sand-rock boulders) with interstitial ice; hence their creep behavior (i.e., rheology) may deviate from the simple and well-known flow-laws for pure ice. Here we constrain the non-linear viscous flow law that governs rockglacier creep based on geomorphological observations. We use the Murtèl rockglacier (upper Engadin valley, SE Switzerland) as a case study, for which high-resolution digital elevation models (DEM), time-lapse borehole deformation data, and geophysical soundings exist that reveal the exterior and interior architecture and dynamics of the landform. Rockglaciers often feature a prominent furrow-and-ridge topography. For the Murtèl rockglacier, Frehner et al. (2015) reproduced the wavelength, amplitude, and distribution of the furrow-and-ridge morphology using a linear viscous (Newtonian) flow model. Arenson et al. (2002) presented borehole deformation data, which highlight the basal shear zone at about 30 m depth and a curved deformation profile above the shear zone. Similarly, the furrow-and-ridge morphology also exhibits a curved geometry in map view. Hence, the surface morphology and the borehole deformation data together describe a curved 3D geometry, which is close to, but not quite parabolic. We use a high-resolution DEM to quantify the curved geometry of the Murtèl furrow-and-ridge morphology. We then calculate theoretical 3D flow geometries using different non-linear viscous flow laws. By comparing them to the measured curved 3D geometry (i.e., both surface morphology and borehole deformation data), we can determine the most adequate flow-law that fits the natural data best. Linear viscous models result in perfectly parabolic flow geometries; non-linear creep leads to localized deformation at the sides and bottom of the rockglacier while the deformation in the interior and top are less intense. In other words, non-linear creep results in non-parabolic flow geometries. 
Both the linear (power-law exponent, n=1) and strongly non-linear models (n=10) do not match the measured data well. However, the moderately non-linear models (n=2-3) match the data quite well indicating that the creep of the Murtèl rockglacier is governed by a moderately non-linear viscous flow law with a power-law exponent close to the one of pure ice. Our results are crucial for improving existing numerical models of rockglacier flow that currently use simplified (i.e., linear viscous) flow-laws. References: Arenson L., Hoelzle M., and Springman S., 2002: Borehole deformation measurements and internal structure of some rock glaciers in Switzerland, Permafrost and Periglacial Processes 13, 117-135. Frehner M., Ling A.H.M., and Gärtner-Roer I., 2015: Furrow-and-ridge morphology on rockglaciers explained by gravity-driven buckle folding: A case study from the Murtèl rockglacier (Switzerland), Permafrost and Periglacial Processes 26, 57-66.
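The effect of the power-law exponent on the flow geometry can be sketched with the standard shallow-ice style velocity profile for a power-law viscous layer deforming under its own weight (a generic 1D profile, not the full 3D Murtèl model):

```python
import numpy as np

def velocity_profile(z_hat, n):
    """Normalized downslope velocity u(z)/u_surface for a power-law
    viscous layer; z_hat = height above the base divided by thickness
    (0 = bed, 1 = surface), n = power-law exponent."""
    return 1.0 - (1.0 - z_hat) ** (n + 1.0)

z = np.linspace(0.0, 1.0, 101)
u_linear = velocity_profile(z, 1.0)     # Newtonian: parabolic profile
u_moderate = velocity_profile(z, 3.0)   # moderately non-linear
u_strong = velocity_profile(z, 10.0)    # shear localized near the base
```

As n grows, deformation concentrates in a thin basal shear zone and the upper part of the profile becomes plug-like, which is the qualitative signature used above to discriminate between the n = 1, n = 2-3, and n = 10 cases.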
NASA Technical Reports Server (NTRS)
Scargle, Jeffrey D.
1990-01-01
While chaos arises only in nonlinear systems, standard linear time series models are nevertheless useful for analyzing data from chaotic processes. This paper introduces such a model, the chaotic moving average. This time-domain model is based on the theorem that any chaotic process can be represented as the convolution of a linear filter with an uncorrelated process called the chaotic innovation. A technique, minimum phase-volume deconvolution, is introduced to estimate the filter and innovation. The algorithm measures the quality of a model using the volume covered by the phase-portrait of the innovation process. Experiments on synthetic data demonstrate that the algorithm accurately recovers the parameters of simple chaotic processes. Though tailored for chaos, the algorithm can detect both chaos and randomness, distinguish them from each other, and separate them if both are present. It can also recover nonminimum-delay pulse shapes in non-Gaussian processes, both random and chaotic.
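The convolution structure of the chaotic moving average can be sketched as follows. This is a minimal illustration using a logistic-map innovation and a fixed minimum-phase filter; the paper's minimum phase-volume deconvolution, which *estimates* the filter from data, is more involved.

```python
import numpy as np
from scipy.signal import lfilter

# Chaotic innovation: iterates of the logistic map at full chaos.
x = np.empty(500)
x[0] = 0.3
for i in range(1, x.size):
    x[i] = 4.0 * x[i - 1] * (1.0 - x[i - 1])

# Chaotic moving average: convolve the innovation with a linear,
# minimum-phase filter b (its zero lies inside the unit circle).
b = [1.0, 0.5]
y = lfilter(b, [1.0], x)

# Deconvolution: a minimum-phase filter can be inverted stably,
# recovering the innovation from the observed series.
x_rec = lfilter([1.0], b, y)
```

Minimum phase is what makes the inversion stable here; the algorithm in the paper searches for the filter whose deconvolved innovation has minimal phase-portrait volume.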
Kelvin-Voigt model of wave propagation in fragmented geomaterials with impact damping
NASA Astrophysics Data System (ADS)
Khudyakov, Maxim; Pasternak, Elena; Dyskin, Arcady
2017-04-01
When a wave propagates through real materials, energy dissipation occurs. In homogeneous materials, the loss of energy can be accounted for by simple viscous models. However, a reliable model representing the effect in fragmented geomaterials has not been established yet, chiefly because of the mechanism by which vibrations are transmitted between the elements (fragments) of these materials. It is hypothesised that the fragments strike against each other as they oscillate, and that the impacts lead to the energy loss. We assume that the energy loss is well represented by the restitution coefficient. The principal element of this concept is the interaction of two adjacent blocks. We model it by a simple linear oscillator (a mass on an elastic spring) with an additional condition: each time the system travels through the neutral point, where the displacement is equal to zero, the velocity is reduced by multiplying it by the restitution coefficient, which characterises an impact of the fragments. This additional condition renders the system non-linear. We show that the behaviour of such a model, averaged over times much larger than the system period, can approximately be represented by a conventional linear oscillator with linear damping characterised by a damping coefficient expressible through the restitution coefficient. Based on this, wave propagation at times considerably greater than the resonance period of oscillations of the neighbouring blocks can be modelled using the Kelvin-Voigt model. The wave velocities and the dispersion relations are obtained.
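The impact-damping concept can be sketched with a minimal simulation (illustrative parameters): a linear oscillator whose velocity is multiplied by the restitution coefficient each time the displacement crosses the neutral point.

```python
def simulate(e=0.8, omega=1.0, dt=1e-3, t_end=30.0):
    """Mass-spring oscillator; velocity is scaled by the restitution
    coefficient e at every zero crossing of the displacement."""
    x, v = 1.0, 0.0
    amplitudes = []
    t = 0.0
    while t < t_end:
        # semi-implicit (symplectic) Euler step of x'' = -omega^2 x
        v -= omega ** 2 * x * dt
        x_new = x + v * dt
        if x * x_new < 0.0:                    # crossed the neutral point
            v *= e                             # impact of adjacent fragments
            amplitudes.append(abs(v) / omega)  # estimate of the next peak
        x = x_new
        t += dt
    return amplitudes

amps = simulate()
```

Each half-period the amplitude shrinks by the factor e, i.e. exponentially in time, which is exactly the envelope of a linearly damped oscillator; this is the averaging argument behind the Kelvin-Voigt representation.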
Hypo-Elastic Model for Lung Parenchyma
DOE Office of Scientific and Technical Information (OSTI.GOV)
Freed, Alan D.; Einstein, Daniel R.
2012-03-01
A simple elastic isotropic constitutive model for the spongy tissue in lung is derived from the theory of hypoelasticity. The model is shown to exhibit a pressure-dependent behavior that has been interpreted by some as indicating extensional anisotropy. In contrast, we show that this behavior arises naturally from an analysis of isotropic hypoelastic invariants, and is a likely result of non-linearity, not anisotropy. The response of the model is determined analytically for several boundary value problems used for material characterization. These responses give insight into both the material behavior as well as admissible bounds on parameters. The model is characterized against published experimental data for dog lung. Future work includes non-elastic model behavior.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Milani, Gabriele, E-mail: milani@stru.polimi.it; Olivito, Renato S.; Tralli, Antonio
2014-10-06
The buckling behavior of slender unreinforced masonry (URM) walls subjected to axial compression and out-of-plane lateral loads is investigated through a combined experimental and numerical homogenized approach. After a preliminary analysis performed on a unit cell meshed by means of elastic FEs and non-linear interfaces, the macroscopic moment-curvature diagrams so obtained are implemented at a structural level, discretizing masonry by means of rigid triangular elements and non-linear interfaces. The non-linear incremental response of the structure is accounted for by a specific quadratic programming routine. In parallel, a wide experimental campaign is conducted on walls in two-way bending, with the double aim of both validating the numerical model and investigating the behavior of walls that may not be reduced to simple cantilevers or simply supported beams. The panels investigated are dry-joint, reduced-scale square walls simply supported at the base and on a vertical edge, exhibiting the classical Rondelet’s mechanism. The results obtained are compared with those provided by the numerical model.
The role of model dynamics in ensemble Kalman filter performance for chaotic systems
Ng, G.-H.C.; McLaughlin, D.; Entekhabi, D.; Ahanin, A.
2011-01-01
The ensemble Kalman filter (EnKF) is susceptible to losing track of observations, or 'diverging', when applied to large chaotic systems such as atmospheric and ocean models. Past studies have demonstrated the adverse impact of sampling error during the filter's update step. We examine how system dynamics affect EnKF performance, and whether the absence of certain dynamic features in the ensemble may lead to divergence. The EnKF is applied to a simple chaotic model, and ensembles are checked against singular vectors of the tangent linear model, corresponding to short-term growth, and Lyapunov vectors, corresponding to long-term growth. Results show that the ensemble strongly aligns itself with the subspace spanned by unstable Lyapunov vectors. Furthermore, the filter avoids divergence only if the full linearized long-term unstable subspace is spanned. However, short-term dynamics also become important as non-linearity in the system increases. Non-linear movement prevents errors in the long-term stable subspace from decaying indefinitely. If these errors then undergo linear intermittent growth, a small ensemble may fail to properly represent all important modes, causing filter divergence. A combination of long and short-term growth dynamics is thus critical to EnKF performance. These findings can help in developing practical robust filters based on model dynamics. © 2011 The Authors. Tellus A © 2011 John Wiley & Sons A/S.
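The filter's update step discussed above can be sketched for a directly observed scalar state (a generic perturbed-observation EnKF analysis, not the paper's full chaotic-model experiment):

```python
import numpy as np

rng = np.random.default_rng(42)

def enkf_update(ensemble, y_obs, obs_var):
    """Perturbed-observation EnKF analysis for a directly observed
    scalar state. Perturbations are centred so that the analysis mean
    satisfies the Kalman update exactly."""
    p_f = np.var(ensemble, ddof=1)          # forecast (sample) variance
    k = p_f / (p_f + obs_var)               # Kalman gain
    perturb = rng.normal(0.0, np.sqrt(obs_var), ensemble.size)
    perturb -= perturb.mean()               # centre the perturbations
    return ensemble + k * (y_obs + perturb - ensemble)

prior = rng.normal(0.0, 1.0, 50)            # forecast ensemble
posterior = enkf_update(prior, y_obs=2.0, obs_var=0.25)
```

The sampling error the paper discusses enters through `p_f`: if the finite ensemble fails to span an unstable direction, the sample variance along it is underestimated, the gain collapses, and the filter stops correcting errors in that direction.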
Linear-algebraic bath transformation for simulating complex open quantum systems
Huh, Joonsuk; Mostame, Sarah; Fujita, Takatoshi; ...
2014-12-02
In studying open quantum systems, the environment is often approximated as a collection of non-interacting harmonic oscillators, a configuration also known as the star-bath model. It is also well known that the star-bath can be transformed into a nearest-neighbor interacting chain of oscillators. The chain-bath model has been widely used in renormalization group approaches. The transformation can be obtained by recursion relations or orthogonal polynomials. Based on a simple linear algebraic approach, we propose a bath partition strategy to reduce the system-bath coupling strength. As a result, the non-interacting star-bath is transformed into a set of weakly coupled multiple parallel chains. Furthermore, the transformed bath model allows complex problems to be practically implemented on quantum simulators, and it can also be employed in various numerical simulations of open quantum dynamics.
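The basic star-to-chain step can be sketched with plain linear algebra (a generic orthogonal tridiagonalization of a toy coupling matrix; the paper's partitioning into multiple parallel chains goes further):

```python
import numpy as np
from scipy.linalg import hessenberg

rng = np.random.default_rng(0)

# Star-bath: the system (index 0) couples to every bath oscillator,
# and the bath oscillators do not couple to one another.
n_bath = 6
freqs = rng.uniform(0.5, 2.0, n_bath)       # bath mode frequencies
coupl = rng.uniform(0.1, 0.4, n_bath)       # system-bath couplings
H = np.diag(np.concatenate(([1.0], freqs)))
H[0, 1:] = coupl
H[1:, 0] = coupl

# Householder tridiagonalization: for a real symmetric matrix, the
# Hessenberg form is tridiagonal -- a nearest-neighbor chain-bath.
H_chain, Q = hessenberg(H, calc_q=True)
```

The orthogonal transform preserves the spectrum, so the chain-bath is physically equivalent to the star-bath; only the coupling topology changes.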
NASA Astrophysics Data System (ADS)
López-Ruiz, F. F.; Guerrero, J.; Aldaya, V.; Cossío, F.
2012-08-01
Using a quantum version of the Arnold transformation of classical mechanics, all quantum dynamical systems whose classical equations of motion are non-homogeneous linear second-order ordinary differential equations (LSODE), including systems with friction linear in velocity such as the damped harmonic oscillator, can be related to the quantum free-particle dynamical system. This implies that symmetries and simple computations in the free particle can be exported to the LSODE-system. The quantum Arnold transformation is given explicitly for the damped harmonic oscillator, and an algebraic connection between the Caldirola-Kanai model for the damped harmonic oscillator and the Bateman system will be sketched out.
NASA Astrophysics Data System (ADS)
Droghei, Riccardo; Salusti, Ettore
2013-04-01
Control of drilling parameters, such as fluid pressure, mud weight, and salt concentration, is essential to avoid instabilities when drilling through shale sections. To investigate shale deformation, fundamental for deep oil drilling and hydraulic fracturing for gas extraction ("fracking"), a non-linear model of mechanical and chemo-poroelastic interactions among fluid, solute and the solid matrix is here discussed. The two equations of this model describe the isothermal evolution of fluid pressure and solute density in a fluid-saturated porous rock. Their solutions are quick non-linear Burgers-like solitary waves, potentially destructive for deep operations. In such an analysis, the effect of diffusion, which can play a particular role in fracking, is investigated. Then, following Civan (1998), both diffusive and shock waves are applied to the filtration of fine particles due to such quick transients, their effect on the adjacent rocks, and the resulting time-delayed evolution. Notice how time delays in simple porous media dynamics have recently been analyzed using a fractional derivative approach. To make a tentative comparison of these two deeply different methods, in our model we insert fractional time derivatives, i.e. a kind of time-average of the fluid-rock interactions. Then the delaying effect of fine-particle filtration is compared with the fractional model time delays. All this can be seen as an empirical check of these fractional models.
NASA Astrophysics Data System (ADS)
Birkel, C.; Paroli, R.; Spezia, L.; Tetzlaff, D.; Soulsby, C.
2012-12-01
In this paper we present a novel model framework using the class of Markov Switching Autoregressive Models (MSARMs) to examine catchments as complex stochastic systems that exhibit non-stationary, non-linear and non-Normal rainfall-runoff and solute dynamics. MSARMs are pairs of stochastic processes, one observed and one unobserved, or hidden. We model the unobserved process as a finite state Markov chain and assume that the observed process, given the hidden Markov chain, is conditionally autoregressive, which means that the current observation depends on its recent past (system memory). The model is fully embedded in a Bayesian analysis based on Markov chain Monte Carlo (MCMC) algorithms for model selection and uncertainty assessment, whereby the autoregressive order and the dimension of the hidden Markov chain state-space are essentially self-selected. The hidden states of the Markov chain represent unobserved levels of variability in the observed process that may result from complex interactions between hydroclimatic variability on the one hand and catchment characteristics affecting water and solute storage on the other. To deal with non-stationarity, additional meteorological and hydrological time series along with a periodic component can be included in the MSARMs as covariates. This extension allows identification of potential underlying drivers of temporal rainfall-runoff and solute dynamics. We applied the MSARM framework to streamflow and conservative tracer (deuterium and oxygen-18) time series from an intensively monitored 2.3 km² experimental catchment in eastern Scotland. Statistical time series analysis, in the form of MSARMs, suggested that the streamflow and isotope tracer time series are not controlled by simple linear rules.
MSARMs showed that the dependence of current observations on past inputs, often seen in transport models in the form of long-tailed travel time and residence time distributions, can be efficiently explained by non-stationarity of the system input (climatic variability) and/or by the complexity of catchment storage characteristics. The statistical model is also capable of reproducing short (event) and longer-term (inter-event), wet and dry dynamical "hydrological states". These reflect the non-linear transport mechanisms of flow pathways induced by transient climatic and hydrological variables and modified by catchment characteristics. We conclude that MSARMs are a powerful tool for analyzing the temporal dynamics of hydrological data, allowing for explicit integration of non-stationary, non-linear and non-Normal characteristics.
Parametric resonance in the early Universe—a fitting analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Figueroa, Daniel G.; Torrentí, Francisco, E-mail: daniel.figueroa@cern.ch, E-mail: f.torrenti@csic.es
Particle production via parametric resonance in the early Universe is a non-perturbative, non-linear and out-of-equilibrium phenomenon. Although it is a well-studied topic, whenever a new scenario exhibits parametric resonance, a full re-analysis is normally required. To avoid this tedious task, many works often present only a simplified linear treatment of the problem. In order to surpass this circumstance in the future, we provide a fitting analysis of parametric resonance through all its relevant stages: initial linear growth, non-linear evolution, and relaxation towards equilibrium. Using lattice simulations in an expanding grid in 3+1 dimensions, we parametrize the dynamics' outcome, scanning over the relevant ingredients: the role of the oscillatory field, the particle coupling strength, the initial conditions, and the background expansion rate. We emphasize the inaccuracy of the linear calculation of the decay time of the oscillatory field, and propose a more appropriate definition of this scale based on the subsequent non-linear dynamics. We provide simple fits to the relevant time scales and particle energy fractions at each stage. Our fits can be applied to post-inflationary preheating scenarios, where the oscillatory field is the inflaton, or to spectator-field scenarios, where the oscillatory field can be e.g. a curvaton, or the Standard Model Higgs.
A Nanohelicoidal Nematic Liquid Crystal Formed by a Non-Linear Duplexed Hexamer.
Mandle, Richard J; Goodby, John W
2018-06-11
The twist-bend modulated nematic liquid-crystal phase exhibits formation of a nanometre-scale helical pitch in a fluid and spontaneous breaking of mirror symmetry, leading to a quasi-fluid state composed of chiral domains despite being composed of achiral materials. This phase had previously been observed only for materials with two or more mesogenic units, the manner of attachment between which was always linear. Non-linear oligomers with an H-shaped hexamesogen are now found to exhibit both nematic and twist-bend modulated nematic phases. This shatters the assumption that a linear sequence of mesogenic units is a prerequisite for this phase, and points to this state of matter being exhibited by a wider range of self-assembling structures than was previously envisaged. These results support the double-helix model of the TB phase as opposed to the simple heliconical model. This new class of materials could act as low-molecular-weight surrogates for cross-linked liquid-crystalline elastomers. © 2018 The Authors. Published by Wiley-VCH Verlag GmbH & Co. KGaA.
NASA Astrophysics Data System (ADS)
Montecinos, S.; Barrientos, P.
2006-03-01
A photochemical model of the atmosphere constitutes a non-linear, non-autonomous dynamical system, forced by the Earth's rotation. Some studies have shown that the region of the mesopause tends towards non-linear responses such as period-doubling cascades and chaos. In these studies, simple approximations for the diurnal variations of the photolysis rates are assumed. The goal of this article is to investigate what happens if more realistic, calculated photolysis rates are introduced. It is found that, if the usual approximations (sinusoidal and step functions) are assumed, the responses of the system are similar: it converges to a 2-day periodic solution. If the more realistic, calculated diurnal cycle is introduced, a new 4-day subharmonic appears.
Regression-based model of skin diffuse reflectance for skin color analysis
NASA Astrophysics Data System (ADS)
Tsumura, Norimichi; Kawazoe, Daisuke; Nakaguchi, Toshiya; Ojima, Nobutoshi; Miyake, Yoichi
2008-11-01
A simple regression-based model of skin diffuse reflectance is developed based on reflectance samples calculated by Monte Carlo simulation of light transport in a two-layered skin model. This reflectance model covers the values of spectral reflectance in the visible range for Japanese women. The modified Lambert-Beer law holds in the proposed model with a modified mean free path length in non-linear density space. The averaged RMS and maximum errors of the proposed model were 1.1% and 3.1%, respectively, over this range.
NASA Astrophysics Data System (ADS)
Bower, Dan J.; Sanan, Patrick; Wolf, Aaron S.
2018-01-01
The energy balance of a partially molten rocky planet can be expressed as a non-linear diffusion equation using mixing length theory to quantify heat transport by both convection and mixing of the melt and solid phases. Crucially, in this formulation the effective or eddy diffusivity depends on the entropy gradient, ∂S/∂r, as well as entropy itself. First we present a simplified model with semi-analytical solutions that highlights the large dynamic range of ∂S/∂r (around 12 orders of magnitude) for physically relevant parameters. It also elucidates the thermal structure of a magma ocean during the earliest stage of crystal formation. This motivates the development of a simple yet stable numerical scheme able to capture the large dynamic range of ∂S/∂r and hence provide a flexible and robust method for time-integrating the energy equation. Using insight gained from the simplified model, we consider a full model, which includes energy fluxes associated with convection, mixing, gravitational separation, and conduction that all depend on the thermophysical properties of the melt and solid phases. This model is discretised and evolved by applying the finite volume method (FVM), allowing for extended precision calculations and using ∂S/∂r as the solution variable. The FVM is well-suited to this problem since it is naturally energy conserving, flexible, and intuitive to incorporate arbitrary non-linear fluxes that rely on lookup data. Special attention is given to the numerically challenging scenario in which crystals first form in the centre of a magma ocean. The computational framework we devise is immediately applicable to modelling high melt fraction phenomena in Earth and planetary science research. Furthermore, it provides a template for solving similar non-linear diffusion equations that arise in other science and engineering disciplines, particularly for non-linear functional forms of the diffusion coefficient.
Cacao, Eliedonna; Hada, Megumi; Saganti, Premkumar B; George, Kerry A; Cucinotta, Francis A
2016-01-01
The biological effects of high charge and energy (HZE) particle exposures are of interest in space radiation protection of astronauts and cosmonauts, and in estimating secondary cancer risks for patients undergoing hadron therapy for primary cancers. The large number of particle types and energies that make up primary or secondary radiation in HZE particle exposures precludes tumor induction studies in animal models for all but a few particle types and energies, thus leading to the use of surrogate endpoints to investigate the details of the radiation quality dependence of relative biological effectiveness (RBE) factors. In this report we make detailed predictions of the charge number and energy dependence of RBEs using a parametric track structure model to represent experimental results for the low-dose response for chromosomal exchanges in normal human lymphocyte and fibroblast cells, with comparison to published data for neoplastic transformation and gene mutation. RBEs are evaluated against acute doses of γ-rays near 1 Gy. Models that assume linear or non-targeted effects at low dose are considered. Modest values of RBE (<10) are found for simple exchanges using a linear dose response model; however, in the non-targeted effects model for fibroblast cells, large RBE values (>10) are predicted at low doses (<0.1 Gy). The radiation quality dependence of RBEs against the effects of acute doses of γ-rays found for neoplastic transformation and gene mutation studies is similar to that found for simple exchanges if a linear response is assumed at low HZE particle doses. Comparisons of the resulting model parameters to those used in the NASA radiation quality factor function are discussed.
Postglacial rebound with a non-Newtonian upper mantle and a Newtonian lower mantle rheology
NASA Technical Reports Server (NTRS)
Gasperini, Paolo; Yuen, David A.; Sabadini, Roberto
1992-01-01
A composite rheology is employed consisting of both linear and non-linear creep mechanisms, connected by a 'transition' stress. Background stress due to geodynamical processes is included. For models with a non-Newtonian upper mantle overlying a Newtonian lower mantle, the temporal responses of the displacements can reproduce those of Newtonian models. The average effective viscosity profile under the ice load at the end of deglaciation turns out to be the crucial factor governing mantle relaxation. This can explain why simple Newtonian rheology has been successful in fitting uplift data over formerly glaciated regions.
Non-linear stochastic growth rates and redshift space distortions
Jennings, Elise; Jennings, David
2015-04-09
The linear growth rate is commonly defined through a simple deterministic relation between the velocity divergence and the matter overdensity in the linear regime. We introduce a formalism that extends this to a non-linear, stochastic relation between θ = ∇·v(x,t)/aH and δ. This provides a new phenomenological approach that examines the conditional mean ⟨θ|δ⟩, together with the fluctuations of θ around this mean. We also measure these stochastic components using N-body simulations and find they are non-negative and increase with decreasing scale, from ~10 per cent at k < 0.2 h Mpc^-1 to 25 per cent at k ~ 0.45 h Mpc^-1 at z = 0. Both the stochastic relation and non-linearity are more pronounced for haloes, M ≤ 5 × 10^12 M⊙ h^-1, compared to the dark matter at z = 0 and 1. Non-linear growth effects manifest themselves as a rotation of the mean away from the linear theory prediction -f_LT δ, where f_LT is the linear growth rate. This rotation increases with wavenumber, k, and we show that it can be well described by second-order Lagrangian perturbation theory (2LPT) for k < 0.1 h Mpc^-1. The stochasticity in the θ-δ relation is not so simply described by 2LPT, and we discuss its impact on measurements of f_LT from two-point statistics in redshift space. Given that the relationship between δ and θ is stochastic and non-linear, this will have implications for the interpretation and precision of f_LT extracted using models which assume a linear, deterministic expression.
NASA Astrophysics Data System (ADS)
Papagiannopoulou, Christina; Decubber, Stijn; Miralles, Diego; Demuzere, Matthias; Dorigo, Wouter; Verhoest, Niko; Waegeman, Willem
2017-04-01
Satellite data provide an abundance of information about crucial climatic and environmental variables. These data - consisting of global records, spanning up to 35 years and having the form of multivariate time series with different spatial and temporal resolutions - enable the study of key climate-vegetation interactions. Although methods based on correlations and linear models are typically used for this purpose, their assumption of linear climate-vegetation relationships is too simplistic. Therefore, we adopt a recently proposed non-linear Granger causality analysis [1], in which we incorporate spatial information, concatenating data from neighboring pixels and training a joint model on the combined data. Experimental results based on global data sets show that considering non-linear relationships leads to a higher explained variance of past vegetation dynamics, compared to simple linear models. Our approach consists of several steps. First, we compile an extensive database [1], which includes multiple data sets for land surface temperature, near-surface air temperature, surface radiation, precipitation, snow water equivalents and surface soil moisture. Based on this database, high-level features are constructed and considered as predictors in our machine-learning framework. These high-level features include (de-trended) seasonal anomalies, lagged variables, past cumulative variables, and extreme indices, all calculated from the raw climatic data. Second, we apply a spatiotemporal non-linear Granger causality framework - in which the linear predictive model is replaced by a non-linear machine-learning algorithm - in order to assess which of these predictor variables Granger-cause vegetation dynamics at each 1° pixel. We use the de-trended anomalies of the Normalized Difference Vegetation Index (NDVI) to characterize vegetation, which serves as the target variable of our framework.
Experimental results indicate that climate strongly (Granger-)causes vegetation dynamics in most regions globally. More specifically, water availability is the dominant vegetation driver, controlling vegetation dynamics in 54% of the vegetated surface. Furthermore, our results show that precipitation and soil moisture have prolonged impacts on vegetation in semiarid regions, with up to 10% of additional explained variance in the vegetation dynamics occurring three months later. Finally, hydro-climatic extremes seem to have a remarkable impact on vegetation, since they also explain up to 10% of additional variance of vegetation in certain regions despite their infrequent occurrence. References [1] Papagiannopoulou, C., Miralles, D. G., Verhoest, N. E. C., Dorigo, W. A., and Waegeman, W.: A non-linear Granger causality framework to investigate climate-vegetation dynamics, Geosci. Model Dev. Discuss., doi:10.5194/gmd-2016-266, in review, 2016.
Valid statistical approaches for analyzing sholl data: Mixed effects versus simple linear models.
Wilson, Machelle D; Sethi, Sunjay; Lein, Pamela J; Keil, Kimberly P
2017-03-01
The Sholl technique is widely used to quantify dendritic morphology. Data from such studies, which typically sample multiple neurons per animal, are often analyzed using simple linear models. However, simple linear models fail to account for intra-class correlation that occurs with clustered data, which can lead to faulty inferences. Mixed effects models account for intra-class correlation that occurs with clustered data; thus, these models more accurately estimate the standard deviation of the parameter estimate, which produces more accurate p-values. While mixed models are not new, their use in neuroscience has lagged behind their use in other disciplines. A review of the published literature illustrates common mistakes in analyses of Sholl data. Analysis of Sholl data collected from Golgi-stained pyramidal neurons in the hippocampus of male and female mice using both simple linear and mixed effects models demonstrates that the p-values and standard deviations obtained using the simple linear models are biased downwards and lead to erroneous rejection of the null hypothesis in some analyses. The mixed effects approach more accurately models the true variability in the data set, which leads to correct inference. Mixed effects models avoid faulty inference in Sholl analysis of data sampled from multiple neurons per animal by accounting for intra-class correlation. Given the widespread practice in neuroscience of obtaining multiple measurements per subject, there is a critical need to apply mixed effects models more widely. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Technical Reports Server (NTRS)
Ponomarev, A. L.; Brenner, D.; Hlatky, L. R.; Sachs, R. K.
2000-01-01
DNA double-strand breaks (DSBs) produced by densely ionizing radiation are not located randomly in the genome: recent data indicate DSB clustering along chromosomes. Stochastic DSB clustering at large scales, from > 100 Mbp down to < 0.01 Mbp, is modeled using computer simulations and analytic equations. A random-walk, coarse-grained polymer model for chromatin is combined with a simple track structure model in Monte Carlo software called DNAbreak and is applied to data on alpha-particle irradiation of V-79 cells. The chromatin model neglects molecular details but systematically incorporates an increase in average spatial separation between two DNA loci as the number of base-pairs between the loci increases. Fragment-size distributions obtained using DNAbreak match data on large fragments about as well as distributions previously obtained with a less mechanistic approach. Dose-response relations, linear at small doses of high linear energy transfer (LET) radiation, are obtained. They are found to be non-linear when the dose becomes so large that there is a significant probability of overlapping or close juxtaposition, along one chromosome, for different DSB clusters from different tracks. The non-linearity is more evident for large fragments than for small. The DNAbreak results furnish an example of the RLC (randomly located clusters) analytic formalism, which generalizes the broken-stick fragment-size distribution of the random-breakage model that is often applied to low-LET data.
From 6D superconformal field theories to dynamic gauged linear sigma models
NASA Astrophysics Data System (ADS)
Apruzzi, Fabio; Hassler, Falk; Heckman, Jonathan J.; Melnikov, Ilarion V.
2017-09-01
Compactifications of six-dimensional (6D) superconformal field theories (SCFTs) on four-manifolds generate a large class of novel two-dimensional (2D) quantum field theories. We consider in detail the case of the rank-one simple non-Higgsable cluster 6D SCFTs. On the tensor branch of these theories, the gauge group is simple and there are no matter fields. For compactifications on suitably chosen Kähler surfaces, we present evidence that this provides a method to realize 2D SCFTs with N = (0,2) supersymmetry. In particular, we find that reduction on the tensor branch of the 6D SCFT yields a description of the same 2D fixed point that is described in the UV by a gauged linear sigma model (GLSM) in which the parameters are promoted to dynamical fields, that is, a "dynamic GLSM" (DGLSM). Consistency of the model requires the DGLSM to be coupled to additional non-Lagrangian sectors obtained from reduction of the antichiral two-form of the 6D theory. These extra sectors include both chiral and antichiral currents, as well as spacetime-filling noncritical strings of the 6D theory. For each candidate 2D SCFT, we also extract the left- and right-moving central charges in terms of data of the 6D SCFT and the compactification manifold.
NASA Astrophysics Data System (ADS)
Alfi, V.; Cristelli, M.; Pietronero, L.; Zaccaria, A.
2009-02-01
We present a detailed study of the statistical properties of the Agent Based Model introduced in paper I [Eur. Phys. J. B, DOI: 10.1140/epjb/e2009-00028-4] and of its generalization to multiplicative dynamics. The aim of the model is to identify the minimal elements needed to understand the origin of the stylized facts and their self-organization. The key elements are fundamentalist agents, chartist agents, herding dynamics and price behavior. The first two elements correspond to the competition between stability and instability tendencies in the market. The herding behavior governs the possibility of the agents to change strategy and is a crucial element of this class of models. We consider a linear approximation for the price dynamics which permits a simple interpretation of the model dynamics and, for many properties, makes it possible to derive analytical results. The generalized non-linear dynamics turns out to be far more sensitive to the parameter space and much more difficult to analyze and control. The main results for the nature and self-organization of the stylized facts are, however, very similar in the two cases. The main peculiarity of the non-linear dynamics is an enhancement of the fluctuations and more marked evidence of the stylized facts. We also discuss some modifications of the model to introduce more realistic elements with respect to real markets.
NASA Astrophysics Data System (ADS)
Douguet, N.; Fonseca dos Santos, S.; Kokoouline, V.; Orel, A. E.
2015-01-01
We present results of a theoretical study on dissociative recombination of the HCNH+, HCO+ and N2H+ linear polyatomic ions at low energies using a simple theoretical model. In the present study, the indirect mechanism for recombination proceeds through the capture of the incoming electron in excited vibrational Rydberg states attached to the degenerate transverse modes of the linear ions. The strength of the non-adiabatic coupling responsible for dissociative recombination is determined directly from the near-threshold scattering matrix obtained numerically using the complex Kohn variational method. The final cross sections for the process are compared with available experimental data. It is demonstrated that at low collision energies, the major contribution to the dissociative recombination cross section is due to the indirect mechanism.
NASA Technical Reports Server (NTRS)
Holdaway, Daniel; Kent, James
2015-01-01
The linearity of a selection of common advection schemes is tested and examined with a view to their use in the tangent linear and adjoint versions of an atmospheric general circulation model. The schemes are tested within a simple offline one-dimensional periodic domain as well as using a simplified and complete configuration of the linearised version of NASA's Goddard Earth Observing System version 5 (GEOS-5). All schemes which prevent the development of negative values and preserve the shape of the solution are confirmed to have nonlinear behaviour. The piecewise parabolic method (PPM) with certain flux limiters, including that used by default in GEOS-5, is found to support linear growth near the shocks. This property can cause the rapid development of unrealistically large perturbations within the tangent linear and adjoint models. It is shown that these schemes with flux limiters should not be used within the linearised version of a transport scheme. The results from tests using GEOS-5 show that the current default scheme (a version of PPM) is not suitable for the tangent linear and adjoint model, and that using a linear third-order scheme for the linearised model produces better behaviour. Using the third-order scheme for the linearised model improves the correlations between the linear and non-linear perturbation trajectories for cloud liquid water and cloud liquid ice in GEOS-5.
A Thermodynamic Theory Of Solid Viscoelasticity. Part 1: Linear Viscoelasticity.
NASA Technical Reports Server (NTRS)
Freed, Alan D.; Leonov, Arkady I.
2002-01-01
The present series of three consecutive papers develops a general theory for linear and finite solid viscoelasticity. Because the most important objects for nonlinear studies are rubber-like materials, the general approach is specified in a form convenient for solving problems important for the many industries that involve rubber-like materials. General linear and nonlinear theories for non-isothermal deformations of viscoelastic solids are developed based on the quasi-linear approach of non-equilibrium thermodynamics. In this, the first paper of the series, we analyze non-isothermal linear viscoelasticity, which is applicable in a range of small strains not only to all synthetic polymers and bio-polymers but also to some non-polymeric materials. Although the linear case seems to be well developed, there are still reasons to implement a thermodynamic derivation of constitutive equations for solid-like, non-isothermal, linear viscoelasticity. The most important is the thermodynamic modeling of thermo-rheological complexity, i.e. different temperature dependences of relaxation parameters in various parts of the relaxation spectrum. A special structure of interaction matrices is established for the different physical mechanisms contributing to the normal relaxation modes. This structure seems to be in accord with observations, and creates a simple mathematical framework for both continuum and molecular theories of thermo-rheologically complex relaxation phenomena. Finally, a unified approach is briefly discussed that, in principle, allows combining both the long-time (discrete) and short-time (continuous) descriptions of relaxation behavior for polymers in the rubbery and glassy regions.
Lateral interactions and non-equilibrium in surface kinetics
NASA Astrophysics Data System (ADS)
Menzel, Dietrich
2016-08-01
Studies modelling reactions between surface species frequently use Langmuir kinetics, assuming that the layer is in internal equilibrium and that the chemical potential of adsorbates corresponds to that of an ideal gas. Coverage dependences of reacting species and of site blocking are usually treated with simple power-law forms (linear in the simplest case), neglecting the fact that lateral interactions are strong in adsorbate and co-adsorbate layers and may influence kinetics considerably. My research group has in the past investigated many co-adsorbate systems and simple reactions in them. We have collected a number of examples where strong deviations from simple coverage dependences exist, in blocking, promoting, and selecting reactions. Interactions can range from those between nearest neighbors to larger distances, and can be quite complex. In addition, internal equilibrium in the layer as well as equilibrium distributions over product degrees of freedom can be violated. The latter effect leads to non-equipartition of energy over molecular degrees of freedom (for products) or non-equal response to those of reactants. While such behavior can usually be described by dynamic or kinetic models, the deeper reasons require detailed theoretical analysis. Here, a selection of such cases is reviewed to exemplify these points.
Finite Element Based Structural Damage Detection Using Artificial Boundary Conditions
2007-09-01
C. (2005). Elementary Linear Algebra. New York: John Wiley and Sons. Avitable, Peter (2001, January). Experimental Modal Analysis, A Simple Non...variables under consideration. Frequency sensitivities are the basis for a linear approximation to compute the change in the natural frequencies of a...THEORY The general problem statement for a non-linear constrained optimization problem is: minimize f(x) (the objective function) subject to...
Rodriguez-Sabate, Clara; Morales, Ingrid; Sanchez, Alberto; Rodriguez, Manuel
2017-01-01
The complexity of basal ganglia (BG) interactions is often condensed into simple models, mainly based on animal data, that present the BG in closed-loop cortico-subcortical circuits of excitatory/inhibitory pathways which analyze the incoming cortical data and return the processed information to the cortex. This study was aimed at identifying functional relationships in the BG motor-loop of 24 healthy subjects who provided written, informed consent and whose BOLD-activity was recorded by MRI methods. The analysis of the functional interaction between these centers by correlation techniques and multiple linear regression showed non-linear relationships which cannot be suitably addressed with these methods. Multiple correspondence analysis (MCA), an unsupervised multivariable procedure which can identify non-linear interactions, was used to study the functional connectivity of the BG when subjects were at rest. Linear methods showed the different functional interactions expected according to current BG models. MCA showed additional functional interactions which were not evident when using linear methods. Seven functional configurations of the BG were identified with MCA: two involving the primary motor and somatosensory cortex, one involving the deepest BG (external-internal globus pallidus, subthalamic nucleus and substantia nigra), one with the input-output BG centers (putamen and motor thalamus), two linking the input-output centers with other BG (external pallidum and subthalamic nucleus), and one linking the external pallidum and the substantia nigra. The results provide evidence that non-linear MCA and linear methods are complementary and are best used in conjunction to more fully understand the nature of functional connectivity of brain centers.
Landau-Zener transitions and Dykhne formula in a simple continuum model
NASA Astrophysics Data System (ADS)
Dunham, Yujin; Garmon, Savannah
The Landau-Zener model describing the interaction between two linearly driven discrete levels is useful in describing many simple dynamical systems; however, no system is completely isolated from the surrounding environment. Here we examine a generalization of the original Landau-Zener model to study simple environmental influences. We consider a model in which one of the discrete levels is replaced with an energy continuum, and find that the survival probability for the initially occupied diabatic level is unaffected by the presence of the continuum. This result can be predicted by assuming that each step in the evolution of the diabatic state evolves independently according to the Landau-Zener formula, even in the continuum limit. We also show that, at least for the simplest model, this result can be predicted with the natural generalization of the Dykhne formula for open systems. Finally, we observe dissipation, as the non-escape probability from the discrete levels is no longer equal to one.
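The stepwise argument above rests on the standard Landau-Zener expression for the diabatic survival probability, P = exp(-2*pi*V^2/(hbar*|v|)), where V is the coupling and v the sweep rate of the level splitting. A minimal evaluation in natural units (the closed-form formula only, not the paper's continuum model) might look like:

```python
import math

def lz_survival_probability(coupling, sweep_rate, hbar=1.0):
    """Landau-Zener probability of remaining in the initial diabatic
    level after one linear sweep:
        P = exp(-2*pi*coupling**2 / (hbar * |sweep_rate|))
    where sweep_rate is d(E1 - E2)/dt. Natural units (hbar = 1).
    """
    return math.exp(-2.0 * math.pi * coupling ** 2 / (hbar * abs(sweep_rate)))

# Fast sweep / weak coupling: nearly diabatic, survival close to 1.
p_fast = lz_survival_probability(coupling=0.1, sweep_rate=10.0)
# Slow sweep / strong coupling: nearly adiabatic, survival close to 0.
p_slow = lz_survival_probability(coupling=1.0, sweep_rate=0.1)
```
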
NASA Astrophysics Data System (ADS)
BOERTJENS, G. J.; VAN HORSSEN, W. T.
2000-08-01
In this paper an initial-boundary value problem for the vertical displacement of a weakly non-linear elastic beam with an harmonic excitation in the horizontal direction at the ends of the beam is studied. The initial-boundary value problem can be regarded as a simple model describing oscillations of flexible structures like suspension bridges or iced overhead transmission lines. Using a two-time-scales perturbation method an approximation of the solution of the initial-boundary value problem is constructed. Interactions between different oscillation modes of the beam are studied. It is shown that for certain external excitations, depending on the phase of an oscillation mode, the amplitude of specific oscillation modes changes.
Machining Chatter Analysis for High Speed Milling Operations
NASA Astrophysics Data System (ADS)
Sekar, M.; Kantharaj, I.; Amit Siddhappa, Savale
2017-10-01
Chatter in high speed milling is characterized by time delay differential equations (DDE). Since closed form solutions exist only for simple cases, the governing non-linear DDEs of chatter problems are solved by various numerical methods. Custom codes to solve DDEs are tedious to build and implement, and are rarely error-free or robust. On the other hand, software packages provide solutions to DDEs, but they are not straightforward to implement. In this paper an easy way to solve the DDE of chatter in milling is proposed and implemented with MATLAB. A time domain solution permits the study and modelling of non-linear effects of chatter vibration with ease. Time domain results are presented for various stable and unstable conditions of cut and compared with stability lobe diagrams.
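The method-of-steps idea behind such time-domain DDE solvers can be sketched briefly. The delayed oscillator below (semi-implicit Euler stepping with a history buffer, all parameters hypothetical, Python standing in for the paper's MATLAB implementation) is a generic stand-in for the milling chatter equations:

```python
import math

def simulate_dde(zeta, omega, k, tau, dt, n_steps, x0=1e-3):
    """Semi-implicit Euler solution of the delayed oscillator
        x''(t) + 2*zeta*omega*x'(t) + omega**2 * x(t) = k * x(t - tau)
    with constant history x(t) = x0 for t <= 0. The delayed state is
    read from a growing list acting as the history buffer.
    """
    delay_steps = max(1, round(tau / dt))
    xs = [x0] * (delay_steps + 1)  # history; xs[-1] is the current x
    v = 0.0
    for _ in range(n_steps):
        x = xs[-1]
        x_delayed = xs[-1 - delay_steps]
        a = k * x_delayed - 2.0 * zeta * omega * v - omega ** 2 * x
        v += a * dt               # update velocity first ...
        xs.append(x + v * dt)     # ... then position (semi-implicit)
    return xs[delay_steps:]

# Light damping and weak delayed feedback: the vibration decays,
# i.e. a stable cutting condition (hypothetical parameters).
trace = simulate_dde(zeta=0.05, omega=2.0 * math.pi * 100.0, k=1.0e3,
                     tau=1.0e-3, dt=1.0e-5, n_steps=20000)
```

Raising the feedback gain k (proportional to the depth of cut in chatter models) eventually destabilizes the response, which is how time-domain runs like this one are compared against stability lobe diagrams.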
Kinjo, Ken; Uchibe, Eiji; Doya, Kenji
2013-01-01
Linearly solvable Markov Decision Process (LMDP) is a class of optimal control problem in which Bellman's equation can be converted into a linear equation by an exponential transformation of the state value function (Todorov, 2009b). In an LMDP, the optimal value function and the corresponding control policy are obtained by solving an eigenvalue problem in a discrete state space or an eigenfunction problem in a continuous state space, using knowledge of the system dynamics and the action, state, and terminal cost functions. In this study, we evaluate the effectiveness of the LMDP framework in real robot control, in which the dynamics of the body and the environment have to be learned from experience. We first perform a simulation study of a pole swing-up task to evaluate the effect of the accuracy of the learned dynamics model on the derived action policy. The result shows that a crude linear approximation of the non-linear dynamics can still allow solution of the task, albeit with a higher total cost. We then perform real robot experiments of a battery-catching task using our Spring Dog mobile robot platform. The state is given by the position and the size of a battery in its camera view and two neck joint angles. The action is the velocities of two wheels, while the neck joints were controlled by a visual servo controller. We test linear and bilinear dynamic models in tasks with quadratic and Gaussian state cost functions. In the quadratic cost task, the LMDP controller derived from a learned linear dynamics model performed comparably to the optimal linear quadratic regulator (LQR). In the non-quadratic task, the LMDP controller with a linear dynamics model showed the best performance. The results demonstrate the usefulness of the LMDP framework in real robot control even when simple linear models are used for dynamics learning.
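The linearization the abstract refers to can be sketched on a toy problem: with desirability z(s) = exp(-v(s)), the first-exit Bellman equation becomes z(s) = exp(-q(s)) * sum_s' P(s'|s) z(s'), which is linear in z (Todorov, 2009). The three-state chain below is a made-up example, not the robot task from the paper:

```python
import math

def lmdp_desirability(P, q, terminal, n_iter=2000):
    """First-exit LMDP: iterate z(s) = exp(-q(s)) * sum_s' P[s][s']*z(s')
    to its fixed point (the Bellman equation made linear by the
    transformation z = exp(-v)). 'terminal' maps each terminal state
    to exp(-terminal cost).
    """
    n = len(q)
    z = [terminal.get(s, 1.0) for s in range(n)]
    for _ in range(n_iter):
        z = [z[s] if s in terminal else
             math.exp(-q[s]) * sum(P[s][t] * z[t] for t in range(n))
             for s in range(n)]
    return z

# Toy 3-state chain: state 2 is an absorbing goal with zero cost;
# states 0 and 1 cost 0.1 per step under the passive dynamics P.
P = [[0.50, 0.50, 0.00],
     [0.25, 0.50, 0.25],
     [0.00, 0.00, 1.00]]
q = [0.1, 0.1, 0.0]
z = lmdp_desirability(P, q, terminal={2: 1.0})
v = [-math.log(zi) for zi in z]  # optimal value function
```

The optimal controlled dynamics then follow by reweighting the passive transitions in proportion to z(s'), which is what makes the framework attractive once P has been learned from experience.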
Designing with non-linear viscoelastic fluids
NASA Astrophysics Data System (ADS)
Schuh, Jonathon; Lee, Yong Hoon; Allison, James; Ewoldt, Randy
2017-11-01
Material design is typically limited to hard materials or simple fluids; however, design with more complex materials can provide ways to enhance performance. Using the Criminale-Ericksen-Filbey (CEF) constitutive model in the thin film lubrication limit, we derive a modified Reynolds Equation (based on asymptotic analysis) that includes shear thinning, first normal stress, and terminal regime viscoelastic effects. This allows for designing non-linear viscoelastic fluids in thin-film creeping flow scenarios, i.e. optimizing the shape of rheological material properties to achieve different design objectives. We solve the modified Reynolds equation using the pseudo-spectral method, and describe a case study in full-film lubricated sliding where optimal fluid properties are identified. These material-agnostic property targets can then guide formulation of complex fluids which may use polymeric, colloidal, or other creative approaches to achieve the desired non-Newtonian properties.
Finite difference modelling of the temperature rise in non-linear medical ultrasound fields.
Divall, S A; Humphrey, V F
2000-03-01
Non-linear propagation of ultrasound can lead to increased heat generation in medical diagnostic imaging due to the preferential absorption of harmonics of the original frequency. A numerical model has been developed and tested that is capable of predicting the temperature rise due to a high amplitude ultrasound field. The acoustic field is modelled using a numerical solution to the Khokhlov-Zabolotskaya-Kuznetsov (KZK) equation, known as the Bergen Code, which is implemented in cylindrically symmetric form. A finite difference representation of the thermal equations is used to calculate the resulting temperature rises. The model allows for the inclusion of a number of layers of tissue with different acoustic and thermal properties and accounts for the effects of non-linear propagation, direct heating by the transducer, thermal diffusion and perfusion in different tissues. The effect of temperature-dependent skin perfusion and variation in background temperature between the skin and deeper layers of the body are included. The model has been tested against analytic solutions for simple configurations and then used to estimate temperature rises in realistic obstetric situations. A pulsed 3 MHz transducer operating with an average acoustic power of 200 mW leads to a maximum steady state temperature rise inside the foetus of 1.25 degrees C compared with a 0.6 degree C rise for the same transmitted power under linear propagation conditions. The largest temperature rise occurs at the skin surface, with the temperature rise at the foetus limited to less than 2 degrees C for the range of conditions considered.
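A finite difference treatment of the thermal part of such a model can be illustrated in one dimension with the Pennes bioheat equation, which combines conduction, perfusion and an acoustic heat source. The tissue parameters and focal power density below are round hypothetical values, not those of the paper:

```python
def bioheat_step(T, q, dx, dt, k=0.5, rho_c=3.8e6, w=2000.0, T_art=37.0):
    """One explicit finite-difference step of the 1-D Pennes bioheat
    equation  rho*c*dT/dt = k*d2T/dx2 - w*(T - T_art) + q,
    with T in deg C, q the absorbed acoustic power density in W/m^3,
    and w a lumped perfusion coefficient in W/(m^3 K). Boundary nodes
    are held at the arterial temperature.
    """
    T_new = list(T)
    for i in range(1, len(T) - 1):
        diffusion = k * (T[i + 1] - 2.0 * T[i] + T[i - 1]) / dx ** 2
        T_new[i] = T[i] + dt / rho_c * (diffusion - w * (T[i] - T_art) + q[i])
    return T_new

# Uniform tissue at 37 C with a heated focal node in the middle:
n = 51
T = [37.0] * n
q = [0.0] * n
q[n // 2] = 5.0e5  # hypothetical focal absorbed power density
for _ in range(2000):
    T = bioheat_step(T, q, dx=1e-3, dt=0.05)
```

The explicit scheme is stable here because dt is far below the diffusive limit dx^2/(2*k/rho_c); a production code would also layer different properties, as the paper describes.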
Correlation and simple linear regression.
Eberly, Lynn E
2007-01-01
This chapter highlights important steps in using correlation and simple linear regression to address scientific questions about the association of two continuous variables with each other. These steps include estimation and inference, assessing model fit, the connection between regression and ANOVA, and study design. Examples in microbiology are used throughout. This chapter provides a framework that is helpful in understanding more complex statistical techniques, such as multiple linear regression, linear mixed effects models, logistic regression, and proportional hazards regression.
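The estimation step the chapter describes reduces to a few sums; the sketch below (made-up example data) also encodes the identity b = r * sd_y / sd_x linking the regression slope to the correlation coefficient:

```python
import math

def simple_linear_regression(x, y):
    """Least-squares fit y ~ a + b*x and Pearson correlation r.
    Returns (a, b, r); note b = r * sd_y / sd_x, the standard link
    between the regression slope and the correlation coefficient.
    """
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    syy = sum((yi - my) ** 2 for yi in y)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    a = my - b * mx
    r = sxy / math.sqrt(sxx * syy)
    return a, b, r

# Points lying exactly on y = 2x + 1 give r = 1:
a, b, r = simple_linear_regression([0, 1, 2, 3], [1, 3, 5, 7])
```
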
NASA Astrophysics Data System (ADS)
Rosenberg, D. E.; Alafifi, A.
2016-12-01
Water resources systems analysis often focuses on finding optimal solutions. Yet an optimal solution is optimal only for the modelled issues, and managers often seek near-optimal alternatives that address un-modelled objectives, preferences, limits, uncertainties, and other issues. Early on, Modelling to Generate Alternatives (MGA) formalized the near-optimal region as the original problem constraints plus a new constraint that allowed performance within a specified tolerance of the optimal objective function value. MGA identified a few maximally-different alternatives from the near-optimal region. Subsequent work applied Markov Chain Monte Carlo (MCMC) sampling to generate a larger number of alternatives that span the near-optimal region of linear problems, or select portions of it for non-linear problems. We extend the MCMC Hit-And-Run method to generate alternatives that span the full extent of the near-optimal region for non-linear, non-convex problems. First, start at a feasible hit point within the near-optimal region, then run a random distance in a random direction to a new hit point. Next, repeat until generating the desired number of alternatives. The key step at each iteration is to run a random distance along the line in the specified direction to a new hit point. If linear equality constraints exist, we construct an orthogonal basis and use a null space transformation to confine hits and runs to a lower-dimensional space. Linear inequality constraints define the convex bounds on the line that runs through the current hit point in the specified direction. We then use slice sampling to identify a new hit point along the line within bounds defined by the non-linear inequality constraints.
This technique is computationally efficient compared to prior near-optimal alternative generation techniques such as MGA, MCMC Metropolis-Hastings, evolutionary, or firefly algorithms because search at each iteration is confined to the hit line, the algorithm can move in one step to any point in the near-optimal region, and each iteration generates a new, feasible alternative. We use the method to generate alternatives that span the near-optimal regions of simple and more complicated water management problems and may be preferred to optimal solutions. We also discuss extensions to handle non-linear equality constraints.
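A stripped-down version of the hit-and-run idea can be sketched on a toy two-variable problem. The rejection step below stands in for the paper's slice sampling, and the disk-shaped near-optimal region is a made-up example:

```python
import math
import random

def hit_and_run_near_optimal(in_region, x0, n_samples, step=1.0, seed=0):
    """Sample alternatives from a 2-D near-optimal region.

    in_region(x) must return True when x satisfies the constraints AND
    performs within tolerance of the optimum. Each iteration draws a
    random direction, runs a random distance along it, and keeps the
    new hit point only if it stays inside the region (a simple
    rejection step standing in for slice sampling).
    """
    rng = random.Random(seed)
    x = list(x0)
    samples = []
    while len(samples) < n_samples:
        theta = rng.uniform(0.0, 2.0 * math.pi)
        t = rng.uniform(-step, step)
        cand = [x[0] + t * math.cos(theta), x[1] + t * math.sin(theta)]
        if in_region(cand):
            x = cand
            samples.append(tuple(x))
    return samples

# Toy problem: maximize f(x) = -(x1^2 + x2^2), optimum f* = 0 at the
# origin; with an additive tolerance of 1.0 the near-optimal region is
# the unit disk.
alts = hit_and_run_near_optimal(lambda x: x[0] ** 2 + x[1] ** 2 <= 1.0,
                                [0.0, 0.0], n_samples=500)
```
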
Action Centered Contextual Bandits.
Greenewald, Kristjan; Tewari, Ambuj; Klasnja, Predrag; Murphy, Susan
2017-12-01
Contextual bandits have become popular as they offer a middle ground between very simple approaches based on multi-armed bandits and very complex approaches using the full power of reinforcement learning. They have demonstrated success in web applications and have a rich body of associated theoretical guarantees. Linear models are well understood theoretically and preferred by practitioners because they are not only easily interpretable but also simple to implement and debug. Furthermore, if the linear model is true, we get very strong performance guarantees. Unfortunately, in emerging applications in mobile health, the time-invariant linear model assumption is untenable. We provide an extension of the linear model for contextual bandits that has two parts: baseline reward and treatment effect. We allow the former to be complex but keep the latter simple. We argue that this model is plausible for mobile health applications. At the same time, it leads to algorithms with strong performance guarantees as in the linear model setting, while still allowing for complex nonlinear baseline modeling. Our theory is supported by experiments on data gathered in a recently concluded mobile health study.
Visual Detection Under Uncertainty Operates Via an Early Static, Not Late Dynamic, Non-Linearity
Neri, Peter
2010-01-01
Signals in the environment are rarely specified exactly: our visual system may know what to look for (e.g., a specific face), but not its exact configuration (e.g., where in the room, or in what orientation). Uncertainty, and the ability to deal with it, is a fundamental aspect of visual processing. The MAX model is the current gold standard for describing how human vision handles uncertainty: of all possible configurations for the signal, the observer chooses the one corresponding to the template associated with the largest response. We propose an alternative model in which the MAX operation, which is a dynamic non-linearity (depends on multiple inputs from several stimulus locations) and happens after the input stimulus has been matched to the possible templates, is replaced by an early static non-linearity (depends only on one input corresponding to one stimulus location) which is applied before template matching. By exploiting an integrated set of analytical and experimental tools, we show that this model is able to account for a number of empirical observations otherwise unaccounted for by the MAX model, and is more robust with respect to the realistic limitations imposed by the available neural hardware. We then discuss how these results, currently restricted to a simple visual detection task, may extend to a wider range of problems in sensory processing. PMID:21212835
Brown, A M
2001-06-01
The objective of the present study was to introduce a simple, easily understood method for carrying out non-linear regression analysis based on user input functions. While it is relatively straightforward to fit data with simple functions such as linear or logarithmic functions, fitting data with more complicated non-linear functions is more difficult. Commercial specialist programmes are available that will carry out this analysis, but these programmes are expensive and are not intuitive to learn. An alternative method described here is to use the SOLVER function of the ubiquitous spreadsheet programme Microsoft Excel, which employs an iterative least squares fitting routine to produce the optimal goodness of fit between data and function. The intent of this paper is to lead the reader through an easily understood step-by-step guide to implementing this method, which can be applied to any function in the form y=f(x), and is well suited to fast, reliable analysis of data in all fields of biology.
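The same iterative least-squares idea can be reproduced outside a spreadsheet. The sketch below fits a hypothetical saturating exponential y = A*(1 - exp(-k*x)) by minimizing the sum of squared errors, exploiting the fact that A enters linearly so only k needs the iterative search:

```python
import math

def sse_and_best_A(k, xs, ys):
    """Sum of squared errors for y = A*(1 - exp(-k*x)); because A
    enters linearly, the best A for a given k has a closed form."""
    g = [1.0 - math.exp(-k * x) for x in xs]
    A = sum(gi * yi for gi, yi in zip(g, ys)) / sum(gi * gi for gi in g)
    return sum((yi - A * gi) ** 2 for gi, yi in zip(g, ys)), A

def fit_saturating_exp(xs, ys, k_lo=1e-3, k_hi=10.0, n_iter=100):
    """Golden-section search on k, shrinking a bracket around the SSE
    minimum -- the same iterative improve-the-fit idea SOLVER uses."""
    phi = (math.sqrt(5.0) - 1.0) / 2.0
    lo, hi = k_lo, k_hi
    for _ in range(n_iter):
        c = hi - phi * (hi - lo)
        d = lo + phi * (hi - lo)
        if sse_and_best_A(c, xs, ys)[0] < sse_and_best_A(d, xs, ys)[0]:
            hi = d
        else:
            lo = c
    k = 0.5 * (lo + hi)
    return sse_and_best_A(k, xs, ys)[1], k

# Noise-free synthetic data generated with A = 2.0, k = 0.5:
xs = [0.5, 1.0, 2.0, 4.0, 8.0]
ys = [2.0 * (1.0 - math.exp(-0.5 * x)) for x in xs]
A_fit, k_fit = fit_saturating_exp(xs, ys)
```

SOLVER itself uses a more general gradient-based routine; the point here is only the shared principle of iteratively adjusting parameters to shrink the sum of squared residuals.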
Semilinear programming: applications and implementation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mohan, S.
Semilinear programming is a method of solving optimization problems with linear constraints where the non-negativity restrictions on the variables are dropped and the objective function coefficients can take on different values depending on whether the variable is positive or negative. The simplex method for linear programming is modified in this thesis to solve general semilinear and piecewise linear programs efficiently without having to transform them into equivalent standard linear programs. Several models in widely different areas of optimization such as production smoothing, facility location, goal programming and L/sub 1/ estimation are presented first to demonstrate the compact formulation that arises when such problems are formulated as semilinear programs. A code SLP is constructed using the semilinear programming techniques. Problems in aggregate planning and L/sub 1/ estimation are solved using SLP and equivalent linear programs using a linear programming simplex code. Comparisons of CPU times and numbers of iterations indicate SLP to be far superior. The semilinear programming techniques are extended to piecewise linear programming in the implementation of the code PLP. Piecewise linear models in aggregate planning are solved using PLP and equivalent standard linear programs using a simple upper bounded linear programming code SUBLP.
Correlators in tensor models from character calculus
NASA Astrophysics Data System (ADS)
Mironov, A.; Morozov, A.
2017-11-01
We explain how the calculations of [20], which provided the first evidence for non-trivial structures of Gaussian correlators in tensor models, are efficiently performed with the help of the (Hurwitz) character calculus. This emphasizes a close similarity between technical methods in matrix and tensor models and supports a hope to understand the emerging structures in very similar terms. We claim that the 2m-fold Gaussian correlators of rank r tensors are given by r-linear combinations of dimensions with the Young diagrams of size m. The coefficients are made from the characters of the symmetric group Sm and their exact form depends on the choice of the correlator and on the symmetries of the model. As the simplest application of this new knowledge, we provide simple expressions for correlators in the Aristotelian tensor model as tri-linear combinations of dimensions.
NASA Astrophysics Data System (ADS)
Sirenko, M. A.; Tarasenko, P. F.; Pushkarev, M. I.
2017-01-01
One of the most noticeable features of sign-based statistical procedures is the opportunity to build an exact test for simple hypothesis testing of parameters in a regression model. In this article, we extend the sign-based approach to the nonlinear case with dependent noise. The examined model is a multi-quantile regression, which makes it possible to test hypotheses not only about regression parameters, but about noise parameters as well.
[Radiotherapy and chaos theory: the tit bird and the butterfly...].
Denis, F; Letellier, C
2012-09-01
Although the same simple laws govern cancer outcome (cell division repeated again and again), each tumour has a different outcome before as well as after irradiation therapy. The linear-quadratic radiosensitivity model allows an assessment of tumour sensitivity to radiotherapy. This model presents some limitations in clinical practice because it does not take into account the interactions between tumour cells and non-tumoral bystander cells (such as endothelial cells, fibroblasts, immune cells...) that modulate radiosensitivity and tumour growth dynamics. These interactions can lead to non-linear, complex tumour growth that appears random but is not, since spontaneous tumour regression is rare. In this paper we propose to develop a deterministic approach to tumour growth dynamics using chaos theory. Various characteristics of cancer dynamics and tumour radiosensitivity can be explained using mathematical models of competing cell species. Copyright © 2012 Société française de radiothérapie oncologique (SFRO). Published by Elsevier SAS. All rights reserved.
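The linear-quadratic model mentioned above has a one-line form: the fraction of cells surviving a single dose D is S = exp(-(alpha*D + beta*D^2)). A minimal evaluation with hypothetical alpha and beta values:

```python
import math

def lq_survival_fraction(dose_gy, alpha, beta):
    """Linear-quadratic model: fraction of cells surviving a single
    dose D (in Gy) is S = exp(-(alpha*D + beta*D**2))."""
    return math.exp(-(alpha * dose_gy + beta * dose_gy ** 2))

# Hypothetical parameters with alpha/beta = 10 Gy, a typical order of
# magnitude quoted for tumour tissue:
s_2gy = lq_survival_fraction(2.0, alpha=0.3, beta=0.03)
```

The paper's point is precisely that this cell-autonomous formula ignores bystander interactions, which is what motivates the chaos-theoretic treatment.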
Duignan, Timothy T.; Baer, Marcel D.; Schenter, Gregory K.; ...
2017-07-26
Determining the solvation free energies of single ions in water is one of the most fundamental problems in physical chemistry and yet many unresolved questions remain. In particular, the ability to decompose the solvation free energy into simple and intuitive contributions will have important implications for models of electrolyte solution. In this paper, we provide definitions of the various types of single ion solvation free energies based on different simulation protocols. We calculate solvation free energies of charged hard spheres using density functional theory interaction potentials with molecular dynamics simulation and isolate the effects of charge and cavitation, comparing to the Born (linear response) model. We show that using uncorrected Ewald summation leads to unphysical values for the single ion solvation free energy and that charging free energies for cations are approximately linear as a function of charge but that there is a small non-linearity for small anions. The charge hydration asymmetry for hard spheres, determined with quantum mechanics, is much larger than for the analogous real ions. Finally, this suggests that real ions, particularly anions, are significantly more complex than simple charged hard spheres, a commonly employed representation.
Salonen, K; Leisola, M; Eerikäinen, T
2009-01-01
Determination of metabolites from an anaerobic digester with an acid base titration is considered a superior method for many reasons. This paper describes a practical at-line-compatible multipoint titration method. The titration procedure was improved in speed and data quality. A simple and novel control algorithm for estimating a variable titrant dose was derived for this purpose. This non-linear, PI-controller-like algorithm does not require any preliminary information from the sample. Performance of this controller is superior to traditional linear PI-controllers. In addition, a simplification for presenting polyprotic acids as a sum of multiple monoprotic acids is introduced along with a mathematical error examination. A method for inclusion of the ionic strength effect with stepwise iteration is shown. The titration model is presented with matrix notations enabling simple computation of all concentration estimates. All methods and algorithms are illustrated in the experimental part. A linear correlation better than 0.999 was obtained for both acetate and phosphate used as model compounds, with slopes of 0.98 and 1.00 and average standard deviations of 0.6% and 0.8%, respectively. Furthermore, insensitivity of the presented method to overlapping buffer capacity curves was shown.
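The abstract does not give the controller's exact form; the sketch below is one hypothetical way a variable-dose rule can work without prior sample information, scaling each dose by the previously observed pH response:

```python
def next_titrant_dose(last_dose, last_dph, target_dph=0.1,
                      min_dose=0.001, max_dose=0.5):
    """Hypothetical variable-dose rule for an automated titration:
    scale the previous dose (in mL) by how far the observed pH step
    missed the target step, so the local buffer capacity is learned on
    the fly and no prior information about the sample is needed. This
    is an illustrative stand-in, not the paper's actual controller.
    """
    if last_dph <= 0.0:
        return max_dose  # no measurable response yet: probe boldly
    dose = last_dose * target_dph / last_dph
    return max(min_dose, min(max_dose, dose))
```

In a strongly buffered region (small pH response) the rule automatically enlarges the dose, and near an equivalence point (large response) it shrinks it, which is the qualitative behavior a variable-dose titration needs.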
NASA Astrophysics Data System (ADS)
Duignan, Timothy T.; Baer, Marcel D.; Schenter, Gregory K.; Mundy, Christopher J.
2017-10-01
Determining the solvation free energies of single ions in water is one of the most fundamental problems in physical chemistry and yet many unresolved questions remain. In particular, the ability to decompose the solvation free energy into simple and intuitive contributions will have important implications for models of electrolyte solution. Here, we provide definitions of the various types of single ion solvation free energies based on different simulation protocols. We calculate solvation free energies of charged hard spheres using density functional theory interaction potentials with molecular dynamics simulation and isolate the effects of charge and cavitation, comparing to the Born (linear response) model. We show that using uncorrected Ewald summation leads to unphysical values for the single ion solvation free energy and that charging free energies for cations are approximately linear as a function of charge but that there is a small non-linearity for small anions. The charge hydration asymmetry for hard spheres, determined with quantum mechanics, is much larger than for the analogous real ions. This suggests that real ions, particularly anions, are significantly more complex than simple charged hard spheres, a commonly employed representation.
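The Born (linear response) model used as the comparison point has a closed form, dG = -(q^2/(8*pi*eps0*a))*(1 - 1/eps_r); note that it depends only on q^2, so it is blind to the sign of the charge and cannot capture the charge hydration asymmetry discussed above. A minimal evaluation (the 0.14 nm ion radius is a hypothetical choice):

```python
import math

def born_solvation_energy(charge_e, radius_nm, eps_r=78.4):
    """Born estimate of the single-ion solvation free energy in kJ/mol:
        dG = -(q^2 / (8*pi*eps0*a)) * (1 - 1/eps_r).
    The result depends only on q**2, i.e. the model is symmetric in
    the sign of the charge.
    """
    E = 1.602176634e-19       # elementary charge, C
    EPS0 = 8.8541878128e-12   # vacuum permittivity, F/m
    N_A = 6.02214076e23       # Avogadro constant, 1/mol
    q = charge_e * E
    a = radius_nm * 1e-9
    dg = -(q ** 2) / (8.0 * math.pi * EPS0 * a) * (1.0 - 1.0 / eps_r)
    return dg * N_A / 1000.0

# Hypothetical monovalent ion with a 0.14 nm cavity radius:
dg = born_solvation_energy(1.0, 0.14)
```
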
Using complexity metrics with R-R intervals and BPM heart rate measures.
Wallot, Sebastian; Fusaroli, Riccardo; Tylén, Kristian; Jegindø, Else-Marie
2013-01-01
Lately, growing attention in the health sciences has been paid to the dynamics of heart rate as an indicator of impending failures and for prognoses. Likewise, in social and cognitive sciences, heart rate is increasingly employed as a measure of arousal, emotional engagement and as a marker of interpersonal coordination. However, there is no consensus about which measurements and analytical tools are most appropriate in mapping the temporal dynamics of heart rate, and quite different metrics are reported in the literature. As complexity metrics of heart rate variability depend critically on variability of the data, different choices regarding the kind of measures can have a substantial impact on the results. In this article we compare linear and non-linear statistics on two prominent types of heart beat data, beat-to-beat intervals (R-R interval) and beats-per-min (BPM). As a proof-of-concept, we employ a simple rest-exercise-rest task and show that non-linear statistics, fractal (DFA) and recurrence (RQA) analyses, reveal information about heart beat activity above and beyond the simple level of heart rate. Non-linear statistics unveil sustained post-exercise effects on heart rate dynamics, but their power to do so critically depends on the type of data that is employed: while R-R intervals are very amenable to non-linear analyses, the success of non-linear methods for BPM data critically depends on their construction. Generally, "oversampled" BPM time-series can be recommended as they retain most of the information about non-linear aspects of heart beat dynamics.
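Detrended fluctuation analysis, one of the non-linear statistics compared above, can be sketched compactly: integrate the mean-centered series, detrend it in windows of size n, and read the scaling exponent alpha from the slope of log F(n) versus log n. This is a minimal DFA(1) illustration, not the authors' analysis pipeline:

```python
import math

def dfa_alpha(series, scales=(4, 8, 16, 32)):
    """DFA(1) scaling exponent of a series: the slope of log F(n)
    versus log n, where F(n) is the RMS deviation of the integrated
    series from per-window least-squares lines (about 0.5 for white
    noise, near 1 for long-range correlated data).
    """
    mean = sum(series) / len(series)
    profile, s = [], 0.0
    for x in series:
        s += x - mean
        profile.append(s)
    log_n, log_f = [], []
    for n in scales:
        sq, count = 0.0, 0
        for start in range(0, len(profile) - n + 1, n):
            seg = profile[start:start + n]
            xs = range(n)
            mx = (n - 1) / 2.0
            my = sum(seg) / n
            sxx = sum((x - mx) ** 2 for x in xs)
            sxy = sum((x - mx) * (y - my) for x, y in zip(xs, seg))
            b = sxy / sxx
            a = my - b * mx
            sq += sum((y - (a + b * x)) ** 2 for x, y in zip(xs, seg))
            count += n
        log_n.append(math.log(n))
        log_f.append(0.5 * math.log(sq / count))
    mlx = sum(log_n) / len(log_n)
    mly = sum(log_f) / len(log_f)
    num = sum((x - mlx) * (y - mly) for x, y in zip(log_n, log_f))
    den = sum((x - mlx) ** 2 for x in log_n)
    return num / den
```

Applied to an R-R interval series, alpha near 0.5 indicates uncorrelated beat-to-beat variability while alpha near 1 indicates long-range correlations; the scale range here is a simple illustrative choice.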
Are V1 Simple Cells Optimized for Visual Occlusions? A Comparative Study
Bornschein, Jörg; Henniges, Marc; Lücke, Jörg
2013-01-01
Simple cells in primary visual cortex were famously found to respond to low-level image components such as edges. Sparse coding and independent component analysis (ICA) emerged as the standard computational models for simple cell coding because they linked their receptive fields to the statistics of visual stimuli. However, a salient feature of image statistics, occlusions of image components, is not considered by these models. Here we ask if occlusions have an effect on the predicted shapes of simple cell receptive fields. We use a comparative approach to answer this question and investigate two models for simple cells: a standard linear model and an occlusive model. For both models we simultaneously estimate optimal receptive fields, sparsity and stimulus noise. The two models are identical except for their component superposition assumption. We find the image encoding and receptive fields predicted by the models to differ significantly. While both models predict many Gabor-like fields, the occlusive model predicts a much sparser encoding and high percentages of 'globular' receptive fields. This relatively new center-surround type of simple cell response has been observed since reverse correlation came into use in experimental studies. While high percentages of 'globular' fields can be obtained using specific choices of sparsity and overcompleteness in linear sparse coding, no or only low proportions are reported in the vast majority of studies on linear models (including all ICA models). Likewise, for the linear model investigated here at optimal sparsity, only low proportions of 'globular' fields are observed. In comparison, the occlusive model robustly infers high proportions and can match the experimentally observed high proportions of 'globular' fields well. Our computational study, therefore, suggests that 'globular' fields may be evidence for an optimal encoding of visual occlusions in primary visual cortex. PMID:23754938
Alonso, Rodrigo; Jenkins, Elizabeth E.; Manohar, Aneesh V.
2016-08-17
The S-matrix of a quantum field theory is unchanged by field redefinitions, and so it only depends on geometric quantities such as the curvature of field space. Whether the Higgs multiplet transforms linearly or non-linearly under electroweak symmetry is a subtle question since one can make a coordinate change to convert a field that transforms linearly into one that transforms non-linearly. Renormalizability of the Standard Model (SM) does not depend on the choice of scalar fields or whether the scalar fields transform linearly or non-linearly under the gauge group, but only on the geometric requirement that the scalar field manifold M is flat. Standard Model Effective Field Theory (SMEFT) and Higgs Effective Field Theory (HEFT) have curved M, since they parametrize deviations from the flat SM case. We show that the HEFT Lagrangian can be written in SMEFT form if and only if M has an SU(2)_L × U(1)_Y invariant fixed point. Experimental observables in HEFT depend on local geometric invariants of M such as sectional curvatures, which are of order 1/Λ², where Λ is the EFT scale. We give explicit expressions for these quantities in terms of the structure constants for a general G → H symmetry breaking pattern. The one-loop radiative correction in HEFT is determined using a covariant expansion which preserves manifest invariance of M under coordinate redefinitions. The formula for the radiative correction is simple when written in terms of the curvature of M and the gauge curvature field strengths. We also extend the CCWZ formalism to non-compact groups, and generalize the HEFT curvature computation to the case of multiple singlet scalar fields.
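The geometric statement above can be made concrete with a schematic scalar-sector Lagrangian in which the fields φ^a are coordinates on the scalar manifold M with metric g_ab (generic notation for this class of theories, not copied from the paper); the SM corresponds to a flat g_ab:

```latex
% Scalar fields \phi^a as coordinates on the manifold M with metric g_{ab};
% the Standard Model is recovered when g_{ab} can be made flat.
\mathcal{L}_{\text{scalar}}
  = \tfrac{1}{2}\, g_{ab}(\phi)\, (D_\mu \phi)^a (D^\mu \phi)^b - V(\phi)
```

Field redefinitions act as coordinate changes on M, which is why only coordinate-invariant quantities such as curvatures can appear in observables.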
Tan, Ziwen; Qin, Guoyou; Zhou, Haibo
2016-01-01
Outcome-dependent sampling (ODS) designs have been well recognized as a cost-effective way to enhance study efficiency in both the statistical literature and in biomedical and epidemiologic studies. A partially linear additive model (PLAM) is widely applied in real problems because it allows for a flexible specification of the dependence of the response on some covariates in a linear fashion and on other covariates in a non-linear, non-parametric fashion. Motivated by an epidemiological study investigating the effect of prenatal polychlorinated biphenyls exposure on children's intelligence quotient (IQ) at age 7 years, we propose a PLAM in this article to investigate a more flexible non-parametric inference on the relationships among the response and covariates under the ODS scheme. We propose the estimation method and establish the asymptotic properties of the proposed estimator. Simulation studies are conducted to show the improved efficiency of the proposed ODS estimator for PLAM compared with that from a traditional simple random sampling design with the same sample size. Data from the above-mentioned study are analyzed to illustrate the proposed method. PMID:27006375
Estimation of suspended-sediment rating curves and mean suspended-sediment loads
Crawford, Charles G.
1991-01-01
A simulation study was done to evaluate: (1) the accuracy and precision of parameter estimates for the bias-corrected, transformed-linear and non-linear models obtained by the method of least squares; (2) the accuracy of mean suspended-sediment loads calculated by the flow-duration, rating-curve method using model parameters obtained by the alternative methods. Parameter estimates obtained by least squares for the bias-corrected, transformed-linear model were considerably more precise than those obtained for the non-linear or weighted non-linear model. The accuracy of parameter estimates obtained for the bias-corrected, transformed-linear and weighted non-linear model was similar and was much greater than the accuracy obtained by non-linear least squares. The improved parameter estimates obtained by the bias-corrected, transformed-linear or weighted non-linear model yield estimates of mean suspended-sediment load calculated by the flow-duration, rating-curve method that are more accurate and precise than those obtained for the non-linear model.
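The bias-corrected, transformed-linear rating curve described above can be sketched as follows; the power-law coefficients, the scatter level, and the parametric exp(s²/2) retransformation correction are illustrative assumptions, not values from the study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic discharge (Q) and suspended-sediment concentration (C) following
# a power-law rating curve C = a * Q**b with log-normal scatter (all values
# hypothetical).
a_true, b_true, sigma = 0.5, 1.5, 0.4
Q = rng.uniform(1.0, 100.0, 500)
C = a_true * Q**b_true * np.exp(rng.normal(0.0, sigma, Q.size))

# Transformed-linear model: fit log C = b0 + b1 * log Q by least squares.
X = np.column_stack([np.ones_like(Q), np.log(Q)])
coef, res, *_ = np.linalg.lstsq(X, np.log(C), rcond=None)
s2 = res[0] / (Q.size - 2)  # residual variance in log space

# Naive retransformation exp(fit) underestimates the mean concentration;
# a parametric bias-correction factor exp(s2 / 2) compensates.
C_naive = np.exp(X @ coef)
C_corrected = C_naive * np.exp(s2 / 2)
```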
Population response to climate change: linear vs. non-linear modeling approaches.
Ellis, Alicia M; Post, Eric
2004-03-31
Research on the ecological consequences of global climate change has elicited a growing interest in the use of time series analysis to investigate population dynamics in a changing climate. Here, we compare linear and non-linear models describing the contribution of climate to the density fluctuations of the population of wolves on Isle Royale, Michigan from 1959 to 1999. The non-linear self-exciting threshold autoregressive (SETAR) model revealed that, due to differences in the strength and nature of density dependence, relatively small and large populations may be differentially affected by future changes in climate. Both linear and non-linear models predict a decrease in the population of wolves with predicted changes in climate. Because specific predictions differed between linear and non-linear models, our study highlights the importance of using non-linear methods that allow the detection of non-linearity in the strength and nature of density dependence. Failure to adopt a non-linear approach to modelling population response to climate change, either exclusively or in addition to linear approaches, may compromise efforts to quantify ecological consequences of future warming.
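The SETAR idea, density dependence that switches when the population crosses a threshold, can be sketched as follows (all coefficients are hypothetical, not the wolf-population estimates):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate a SETAR(2; 1, 1) process: the AR(1) dynamics switch when the
# previous value crosses a threshold.
threshold, n = 0.0, 300
x = np.zeros(n)
for t in range(1, n):
    if x[t - 1] <= threshold:  # "low" regime: weak density dependence
        x[t] = 0.5 + 0.3 * x[t - 1] + rng.normal(0.0, 0.5)
    else:                      # "high" regime: strong density dependence
        x[t] = -0.2 + 0.8 * x[t - 1] + rng.normal(0.0, 0.5)

# Recover the regime-specific coefficients by least squares within regimes.
def fit_ar1(prev, nxt):
    X = np.column_stack([np.ones_like(prev), prev])
    return np.linalg.lstsq(X, nxt, rcond=None)[0]

low = x[:-1] <= threshold
coef_low = fit_ar1(x[:-1][low], x[1:][low])    # [intercept, slope] below threshold
coef_high = fit_ar1(x[:-1][~low], x[1:][~low])  # [intercept, slope] above threshold
```

A single linear AR model fitted to the same series would average the two regimes and miss exactly the regime-dependent climate sensitivity the abstract emphasizes.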
Takagi-Sugeno-Kang fuzzy models of the rainfall-runoff transformation
NASA Astrophysics Data System (ADS)
Jacquin, A. P.; Shamseldin, A. Y.
2009-04-01
Fuzzy inference systems, or fuzzy models, are non-linear models that describe the relation between the inputs and the output of a real system using a set of fuzzy IF-THEN rules. This study deals with the application of Takagi-Sugeno-Kang type fuzzy models to the development of rainfall-runoff models operating on a daily basis, using a system-based approach. The models proposed are classified into two types, each intended to account for different kinds of dominant non-linear effects in the rainfall-runoff relationship. Fuzzy models type 1 are intended to incorporate the effect of changes in the prevailing soil moisture content, while fuzzy models type 2 address the phenomenon of seasonality. Each model type consists of five fuzzy models of increasing complexity; the most complex fuzzy model of each model type includes all the model components found in the remaining fuzzy models of the respective type. The models developed are applied to data of six catchments from different geographical locations and sizes. Model performance is evaluated in terms of two measures of goodness of fit, namely the Nash-Sutcliffe criterion and the index of volumetric fit. The results of the fuzzy models are compared with those of the Simple Linear Model, the Linear Perturbation Model and the Nearest Neighbour Linear Perturbation Model, which use similar input information. Overall, the results of this study indicate that Takagi-Sugeno-Kang fuzzy models are a suitable alternative for modelling the rainfall-runoff relationship. However, it is also observed that increasing the complexity of the model structure does not necessarily produce an improvement in the performance of the fuzzy models. The relative importance of the different model components in determining the model performance is evaluated through sensitivity analysis of the model parameters in the accompanying study presented in this meeting. Acknowledgements: We would like to express our gratitude to Prof. Kieran M. O'Connor from the National University of Ireland, Galway, for providing the data used in this study.
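The Nash-Sutcliffe criterion used to score the models above is simple to compute (the observed flow values below are made up for illustration):

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe efficiency: 1.0 is a perfect fit; 0.0 means the model
    is no better than predicting the mean of the observations."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

obs_flow = np.array([1.0, 3.0, 2.0, 5.0, 4.0])
perfect = nash_sutcliffe(obs_flow, obs_flow)                       # exact match
mean_only = nash_sutcliffe(obs_flow, np.full(5, obs_flow.mean()))  # mean predictor
```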
Remontet, Laurent; Uhry, Zoé; Bossard, Nadine; Iwaz, Jean; Belot, Aurélien; Danieli, Coraline; Charvat, Hadrien; Roche, Laurent
2018-01-01
Cancer survival trend analyses are essential to describe accurately the way medical practices impact patients' survival according to the year of diagnosis. To this end, survival models should be able to account simultaneously for non-linear and non-proportional effects and for complex interactions between continuous variables. However, in the statistical literature, there is no consensus yet on how to build such models that should be flexible but still provide smooth estimates of survival. In this article, we tackle this challenge by smoothing the complex hypersurface (time since diagnosis, age at diagnosis, year of diagnosis, and mortality hazard) using a multidimensional penalized spline built from the tensor product of the marginal bases of time, age, and year. Considering this penalized survival model as a Poisson model, we assess the performance of this approach in estimating the net survival with a comprehensive simulation study that reflects simple and complex realistic survival trends. The bias was generally small and the root mean squared error was good and often similar to that of the true model that generated the data. This parametric approach offers many advantages and interesting prospects (such as forecasting) that make it an attractive and efficient tool for survival trend analyses.
Magarey, Roger; Newton, Leslie; Hong, Seung C.; Takeuchi, Yu; Christie, Dave; Jarnevich, Catherine S.; Kohl, Lisa; Damus, Martin; Higgins, Steven I.; Miller, Leah; Castro, Karen; West, Amanda; Hastings, John; Cook, Gericke; Kartesz, John; Koop, Anthony
2018-01-01
This study compares four models for predicting the potential distribution of non-indigenous weed species in the conterminous U.S. The comparison focused on evaluating modeling tools and protocols as currently used for weed risk assessment or for predicting the potential distribution of invasive weeds. We used six weed species (three highly invasive and three less invasive non-indigenous species) that have been established in the U.S. for more than 75 years. The experiment involved providing non-U.S. location data to users familiar with one of the four evaluated techniques, who then developed predictive models that were applied to the United States without knowing the identity of the species or its U.S. distribution. We compared a simple GIS climate matching technique known as Proto3, a simple climate matching tool CLIMEX Match Climates, the correlative model MaxEnt, and a process model known as the Thornley Transport Resistance (TTR) model. Two experienced users ran each modeling tool except TTR, which had one user. Models were trained with global species distribution data excluding any U.S. data, and then were evaluated using the current known U.S. distribution. The influence of weed species identity and modeling tool on prevalence and sensitivity effects was compared using a generalized linear mixed model. Each modeling tool itself had a low statistical significance, while weed species alone accounted for 69.1 and 48.5% of the variance for prevalence and sensitivity, respectively. These results suggest that simple modeling tools might perform as well as complex ones in the case of predicting potential distribution for a weed not yet present in the United States. Considerations of model accuracy should also be balanced with those of reproducibility and ease of use. More important than the choice of modeling tool is the construction of robust protocols and testing both new and experienced users under blind test conditions that approximate operational conditions.
A simple nonlinear model for the return to isotropy in turbulence
NASA Technical Reports Server (NTRS)
Sarkar, Sutanu; Speziale, Charles G.
1989-01-01
A quadratic nonlinear generalization of the linear Rotta model for the slow pressure-strain correlation of turbulence is developed. The model is shown to satisfy realizability and to give rise to no stable non-trivial equilibrium solutions for the anisotropy tensor in the case of vanishing mean velocity gradients. The absence of stable non-trivial equilibrium solutions is a necessary condition to ensure that the model predicts a return to isotropy for all relaxational turbulent flows. Both the phase space dynamics and the temporal behavior of the model are examined and compared against experimental data for the return to isotropy problem. It is demonstrated that the quadratic model successfully captures the experimental trends which clearly exhibit nonlinear behavior. Direct comparisons are also made with the predictions of the Rotta model and the Lumley model.
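Schematically, the linear Rotta model and its quadratic generalization for the slow pressure-strain correlation can be written in terms of the anisotropy tensor (a generic form consistent with the abstract, not the paper's calibrated coefficients):

```latex
% b_{ij} = \overline{u_i u_j}/(2k) - \delta_{ij}/3 is the anisotropy tensor;
% the linear Rotta model keeps only the C_1 term, the quadratic model adds C_2.
\Pi_{ij}^{(\mathrm{slow})}
  = -\,C_1\,\varepsilon\, b_{ij}
    + C_2\,\varepsilon \left( b_{ik} b_{kj} - \tfrac{1}{3}\, b_{kl} b_{kl}\, \delta_{ij} \right)
```

The quadratic term vanishes for isotropic turbulence (b_ij = 0), so both models share the isotropic fixed point; they differ in the trajectory by which anisotropy relaxes.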
Howley, Donna; Howley, Peter; Oxenham, Marc F
2018-06-01
Stature and a further 8 anthropometric dimensions were recorded from the arms and hands of a sample of 96 staff and students from the Australian National University and The University of Newcastle, Australia. These dimensions were used to create simple and multiple logistic regression models for sex estimation and simple and multiple linear regression equations for stature estimation of a contemporary Australian population. Overall sex classification accuracies using the models created were comparable to similar studies. The stature estimation models achieved standard errors of estimates (SEE) which were comparable to and in many cases lower than those achieved in similar research. Generic, non sex-specific models achieved similar SEEs and R 2 values to the sex-specific models indicating stature may be accurately estimated when sex is unknown. Copyright © 2018 Elsevier B.V. All rights reserved.
A simple smoothness indicator for the WENO scheme with adaptive order
NASA Astrophysics Data System (ADS)
Huang, Cong; Chen, Li Li
2018-01-01
The fifth-order WENO scheme with adaptive order is well suited to solving hyperbolic conservation laws; its reconstruction is a convex combination of a fifth-order linear reconstruction and three third-order linear reconstructions. On a uniform mesh, however, the computational cost of the smoothness indicator for the fifth-order linear reconstruction is comparable to the combined cost of the indicators for the three third-order linear reconstructions, which is too heavy; on a non-uniform mesh, an explicit form of the smoothness indicator for the fifth-order linear reconstruction is difficult to obtain, and its computational cost is much heavier than on a uniform mesh. To overcome these problems, a simple smoothness indicator for the fifth-order linear reconstruction is proposed in this paper.
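For context, the classic smoothness indicators for the three third-order substencils on a uniform mesh (the standard Jiang-Shu form, not the paper's new fifth-order indicator) can be computed as follows; the stencil indexing convention below is one common choice:

```python
import numpy as np

def jiang_shu_betas(f, i):
    """Jiang-Shu smoothness indicators for the three third-order substencils
    around cell i on a uniform mesh (b0: rightmost, b2: leftmost stencil)."""
    b0 = 13/12 * (f[i] - 2*f[i+1] + f[i+2])**2 + 1/4 * (3*f[i] - 4*f[i+1] + f[i+2])**2
    b1 = 13/12 * (f[i-1] - 2*f[i] + f[i+1])**2 + 1/4 * (f[i-1] - f[i+1])**2
    b2 = 13/12 * (f[i-2] - 2*f[i-1] + f[i])**2 + 1/4 * (f[i-2] - 4*f[i-1] + 3*f[i])**2
    return np.array([b0, b1, b2])

x = np.linspace(0.0, 1.0, 11)
betas_smooth = jiang_shu_betas(2.0 * x, 5)  # linear data: all stencils equally smooth
betas_step = jiang_shu_betas(np.where(x < 0.55, 0.0, 1.0), 5)  # jump right of cell 5
```

For smooth data the three indicators agree; near the discontinuity they separate sharply, which is what drives the non-linear weighting in WENO.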
NASA Astrophysics Data System (ADS)
Kim, Namkug; Seo, Joon Beom; Heo, Jeong Nam; Kang, Suk-Ho
2007-03-01
The study was conducted to develop a simple model for more robust lung registration of volumetric CT data, which is essential for various clinical lung analysis applications, including lung nodule matching in follow-up CT studies and semi-quantitative assessment of lung perfusion. The purpose of this study is to find the most effective reference point and geometric model based on lung motion analysis from CT data sets obtained at full inspiration (In.) and expiration (Ex.). Ten pairs of CT data sets from normal subjects obtained at full In. and Ex. were used in this study. Two radiologists were requested to draw 20 points representing the subpleural point of the central axis in each segment. The apex, hilar point, and center of inertia (COI) of each unilateral lung were proposed as reference points. To evaluate the optimal expansion point, non-linear optimization without constraints was employed. The objective function is the sum of distances from the candidate point x to the lines joining corresponding points between In. and Ex. Using this non-linear optimization, the optimal point was evaluated and compared across reference points. The average distance between the optimal point and each line segment revealed that the balloon model was more suitable for explaining lung expansion. This lung motion analysis, based on vector analysis and non-linear optimization, shows that a balloon model centered on the center of inertia of the lung is the most effective geometric model to explain lung expansion during breathing.
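The optimization described above, finding a point that minimizes the summed distance to the lines joining corresponding In./Ex. landmarks, can be sketched in 2-D (the coordinates and the choice of optimizer are assumptions; the study works in 3-D):

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical "expansion lines": each joins a landmark at expiration to its
# corresponding landmark at inspiration. Here all three lines pass through
# (1, 1), so the optimal expansion point is known exactly.
anchors = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
tips = np.array([[2.0, 2.0], [1.0, 2.0], [2.0, 1.0]])

def total_distance(p):
    d = 0.0
    for a, b in zip(anchors, tips):
        u = (b - a) / np.linalg.norm(b - a)  # unit direction of the line
        v = p - a
        d += abs(u[0] * v[1] - u[1] * v[0])  # 2-D point-to-line distance
    return d

# Unconstrained non-linear optimization, as in the abstract.
opt = minimize(total_distance, x0=np.array([0.2, 0.1]), method="Nelder-Mead")
```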
NASA Astrophysics Data System (ADS)
Czerwiński, Andrzej; Łuczko, Jan
2018-01-01
The paper summarises the experimental investigations and numerical simulations of non-planar parametric vibrations of a statically deformed pipe. Underpinning the theoretical analysis is a 3D dynamic model of a curved pipe. The pipe motion is governed by four non-linear partial differential equations with periodically varying coefficients. The Galerkin method was applied, with the shape functions being those governing the beam's natural vibrations. Experiments were conducted in the range of simple and combination parametric resonances, evidencing the possibility of in-plane and out-of-plane vibrations as well as fully non-planar vibrations in the combination resonance range. It is demonstrated that sub-harmonic and quasi-periodic vibrations are likely to be excited. The method suggested allows the spatial modes to be determined based on results recorded at selected points on the pipe. Results are summarised in the form of time histories, phase trajectory plots and spectral diagrams. Dedicated video materials give a better insight into the investigated phenomena.
Genomic prediction based on data from three layer lines using non-linear regression models.
Huang, Heyun; Windig, Jack J; Vereijken, Addie; Calus, Mario P L
2014-11-06
Most studies on genomic prediction with reference populations that include multiple lines or breeds have used linear models. Data heterogeneity due to using multiple populations may conflict with the model assumptions used in linear regression methods. In an attempt to alleviate potential discrepancies between the assumptions of linear models and multi-population data, two types of alternative models were used: (1) a multi-trait genomic best linear unbiased prediction (GBLUP) model that modelled trait by line combinations as separate but correlated traits and (2) non-linear models based on kernel learning. These models were compared to conventional linear models for genomic prediction for two lines of brown layer hens (B1 and B2) and one line of white hens (W1). The three lines each had 1004 to 1023 training and 238 to 240 validation animals. Prediction accuracy was evaluated by estimating the correlation between observed phenotypes and predicted breeding values. When the training dataset included only data from the evaluated line, non-linear models yielded at best a similar accuracy as linear models. In some cases, when adding a distantly related line, the linear models showed a slight decrease in performance, while non-linear models generally showed no change in accuracy. When only information from a closely related line was used for training, linear models and non-linear radial basis function (RBF) kernel models performed similarly. The multi-trait GBLUP model took advantage of the estimated genetic correlations between the lines. Combining linear and non-linear models improved the accuracy of multi-line genomic prediction. Linear models and non-linear RBF models performed very similarly for genomic prediction, despite the expectation that non-linear models could deal better with the heterogeneous multi-population data.
This heterogeneity of the data can be overcome by modelling trait by line combinations as separate but correlated traits, which avoids the occasional occurrence of large negative accuracies when the evaluated line was not included in the training dataset. Furthermore, when using a multi-line training dataset, non-linear models provided information on the genotype data that was complementary to the linear models, which indicates that the underlying data distributions of the three studied lines were indeed heterogeneous.
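A minimal sketch contrasting linear ridge regression with the RBF kernel approach on simulated marker data (the sizes, kernel bandwidth, regularisation, and simulated genetic architecture are all assumptions made for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy prediction problem: 200 "animals" x 5 "markers", with a phenotype that
# depends non-linearly on the markers (hypothetical architecture).
X = rng.normal(size=(200, 5))
y = np.sin(X[:, 0]) + X[:, 1] ** 2 + rng.normal(0.0, 0.1, 200)

X_tr, y_tr, X_te, y_te = X[:150], y[:150], X[150:], y[150:]
y_mean = y_tr.mean()
lam = 1.0  # illustrative regularisation parameter

def rbf_kernel(A, B, gamma=0.1):
    # Gaussian (RBF) kernel, the non-linear kernel named in the abstract.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# Kernel ridge regression: alpha = (K + lam*I)^{-1} (y - mean).
K = rbf_kernel(X_tr, X_tr)
alpha = np.linalg.solve(K + lam * np.eye(len(y_tr)), y_tr - y_mean)
pred_rbf = rbf_kernel(X_te, X_tr) @ alpha + y_mean

# Linear ridge regression on the same data.
w = np.linalg.solve(X_tr.T @ X_tr + lam * np.eye(5), X_tr.T @ (y_tr - y_mean))
pred_lin = X_te @ w + y_mean

# Accuracy as in the abstract: correlation of predictions with observations.
acc_rbf = np.corrcoef(pred_rbf, y_te)[0, 1]
acc_lin = np.corrcoef(pred_lin, y_te)[0, 1]
```

On this deliberately non-linear toy trait the kernel model wins; the abstract's finding is that on real, largely additive traits the two were close.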
Moore, Julia L; Remais, Justin V
2014-03-01
Developmental models that account for the metabolic effect of temperature variability on poikilotherms, such as degree-day models, have been widely used to study organism emergence, range and development, particularly in agricultural and vector-borne disease contexts. Though such models are simple and easy to use, structural and parametric issues can influence their outputs, often substantially. Because the underlying assumptions and limitations of these models have rarely been considered, this paper reviews the structural, parametric, and experimental issues that arise when using degree-day models, including the implications of particular structural or parametric choices, as well as assumptions that underlie commonly used models. Linear and non-linear developmental functions are compared, as are common methods used to incorporate temperature thresholds and calculate daily degree-days. Substantial differences in predicted emergence time arose when linear versus non-linear developmental functions were used to model emergence in a model organism. The optimal method for calculating degree-days depends upon where key temperature threshold parameters fall relative to the daily minimum and maximum temperatures, as well as the shape of the daily temperature curve. No method is shown to be universally superior, though one commonly used method, the daily average method, consistently provides accurate results. The sensitivity of model projections to these methodological issues highlights the need to make structural and parametric selections based on a careful consideration of the specific biological response of the organism under study, and the specific temperature conditions of the geographic regions of interest. When degree-day model limitations are considered and model assumptions met, the models can be a powerful tool for studying temperature-dependent development.
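The daily average method mentioned above can be stated in one line (the threshold and temperatures below are hypothetical):

```python
# Daily average method for degree-day accumulation: the day's mean
# temperature above the lower development threshold, floored at zero.
def degree_days_avg(t_min, t_max, t_base):
    """Degree-days accumulated in one day under the daily average method."""
    return max(0.0, (t_min + t_max) / 2.0 - t_base)

# Four days of (min, max) temperatures in deg C against a 10 deg C threshold.
season = [(8.0, 20.0), (10.0, 26.0), (5.0, 14.0), (12.0, 30.0)]
total = sum(degree_days_avg(lo, hi, 10.0) for lo, hi in season)
```

Note how the third day contributes nothing: its mean falls below the threshold, which is exactly the situation where the choice of calculation method starts to matter.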
Dynamic modeling and ascent flight control of Ares-I Crew Launch Vehicle
NASA Astrophysics Data System (ADS)
Du, Wei
This research focuses on dynamic modeling and ascent flight control of large flexible launch vehicles such as the Ares-I Crew Launch Vehicle (CLV). A complete set of six-degrees-of-freedom dynamic models of the Ares-I, incorporating its propulsion, aerodynamics, guidance and control, and structural flexibility, is developed. NASA's Ares-I reference model and the SAVANT Simulink-based program are utilized to develop a Matlab-based simulation and linearization tool for an independent validation of the performance and stability of the ascent flight control system of large flexible launch vehicles. A linearized state-space model as well as a non-minimum-phase transfer function model (which is typical for flexible vehicles with non-collocated actuators and sensors) are validated for ascent flight control design and analysis. This research also investigates fundamental principles of flight control analysis and design for launch vehicles, in particular the classical "drift-minimum" and "load-minimum" control principles. It is shown that an additional feedback of angle-of-attack can significantly improve overall performance and stability, especially in the presence of unexpected large wind disturbances. For a typical "non-collocated actuator and sensor" control problem for large flexible launch vehicles, non-minimum-phase filtering of "unstably interacting" bending modes is also shown to be effective. The uncertainty model of a flexible launch vehicle is derived. The robust stability of an ascent flight control system design, which directly controls the inertial attitude-error quaternion and also employs the non-minimum-phase filters, is verified by the framework of structured singular value (mu) analysis. Furthermore, nonlinear coupled dynamic simulation results are presented for a reference model of the Ares-I CLV as another validation of the feasibility of the ascent flight control system design. 
Another important issue for a single main engine launch vehicle is stability under malfunction of the roll control system. The roll motion of the Ares-I Crew Launch Vehicle under nominal flight conditions is actively stabilized by its roll control system employing thrusters. This dissertation describes the ascent flight control design problem of Ares-I in the event of disabled or failed roll control. A simple pitch/yaw control logic is developed for such a technically challenging problem by exploiting the inherent versatility of a quaternion-based attitude control system. The proposed scheme requires only the desired inertial attitude quaternion to be re-computed using the actual uncontrolled roll angle information to achieve an ascent flight trajectory identical to the nominal flight case with active roll control. Another approach that utilizes a simple adjustment of the proportional-derivative gains of the quaternion-based flight control system without active roll control is also presented. This approach does not require the re-computation of the desired inertial attitude quaternion. A linear stability criterion is developed for proper adjustments of attitude and rate gains. The linear stability analysis results are validated by nonlinear simulations of the ascent flight phase. However, the first approach, requiring a simple modification of the desired attitude quaternion, is recommended for the Ares-I as well as other launch vehicles in the event of no active roll control. Finally, the method derived to stabilize a large flexible launch vehicle in the event of uncontrolled roll drift is generalized as a modified attitude quaternion feedback law. It is used to stabilize an axisymmetric rigid body by two independent control torques.
Linear Models for Systematics and Nuisances
NASA Astrophysics Data System (ADS)
Luger, Rodrigo; Foreman-Mackey, Daniel; Hogg, David W.
2017-12-01
The target of many astronomical studies is the recovery of tiny astrophysical signals living in a sea of uninteresting (but usually dominant) noise. In many contexts (e.g., stellar time series, high-contrast imaging, or stellar spectroscopy), there are structured components in this noise caused by systematic effects in the astronomical source, the atmosphere, the telescope, or the detector. More often than not, evaluation of the true physical model for these nuisances is computationally intractable and dependent on too many (unknown) parameters to allow rigorous probabilistic inference. Sometimes, housekeeping data---and often the science data themselves---can be used as predictors of the systematic noise. Linear combinations of simple functions of these predictors are often used as computationally tractable models that can capture the nuisances. These models can be used to fit and subtract systematics prior to investigation of the signals of interest, or they can be used in a simultaneous fit of the systematics and the signals. In this Note, we show that if a Gaussian prior is placed on the weights of the linear components, the weights can be marginalized out with an operation in pure linear algebra, which can (often) be made fast. We illustrate this model by demonstrating the applicability of a linear model for the non-linear systematics in K2 time-series data, where the dominant noise source for many stars is spacecraft motion and variability.
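The marginalization described in the Note can be sketched in a few lines of linear algebra; the basis functions, prior scale `lam`, and noise level `s2` below are illustrative assumptions, not the K2 pipeline:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy light curve: a small dip (the "signal") on top of a smooth systematic
# trend; all functional forms and numbers are made up for illustration.
n = 200
t = np.linspace(0.0, 1.0, n)
systematics = 0.5 * t + 0.3 * np.sin(6.0 * t)
signal = -0.1 * ((t > 0.45) & (t < 0.55))
y = systematics + signal + rng.normal(0.0, 0.02, n)

# Linear nuisance model y ~ A w, with Gaussian prior w ~ N(0, lam*I) and
# noise ~ N(0, s2*I). Marginalizing over w gives y ~ N(0, C) with
# C = s2*I + lam * A A^T: the weights disappear via pure linear algebra.
A = np.column_stack([np.ones(n), t, t**2, np.sin(6.0 * t), np.cos(6.0 * t)])
s2, lam = 0.02**2, 10.0
C = s2 * np.eye(n) + lam * (A @ A.T)

w_post = lam * A.T @ np.linalg.solve(C, y)  # posterior mean of the weights
residual = y - A @ w_post                   # systematics-subtracted series
```

The dip survives the subtraction because the smooth basis cannot absorb it, which is the point of fitting systematics and signal in this framework.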
Feedback control of an electrorheological long-stroke vibration damper
NASA Astrophysics Data System (ADS)
Sims, Neil D.; Stanway, Roger; Johnson, Andrew R.; Peel, David J.; Bullough, William A.
1999-06-01
It is widely acknowledged that the inherent non-linearity of smart fluid dampers is inhibiting the development of effective control regimes and mass-produced devices. In an earlier publication, an innovative solution to this problem was presented -- using a simple feedback control strategy to linearize the response. The study used a quasi-steady model of a long-stroke electrorheological damper, and showed how proportional feedback control could linearize the simulated response. However, this initial research did not consider the dynamics of the damper's behavior, and so the development of a more advanced model has been necessary. In this article, the authors present an extension to this earlier study, using a model of the damper's response that is capable of accurately predicting the dynamic response of the damper. To introduce the topic, the electrorheological long-stroke damper test rig is described, and an overview of the earlier study is given. The advanced model is then derived, and its predictions are compared to experimental data from the test rig. This model is then incorporated into the feedback control simulations, and it is shown how the control strategy is still able to linearize the response in simulations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adcock, T. A. A.; Taylor, P. H.
2016-01-15
The non-linear Schrödinger equation and its higher order extensions are routinely used for analysis of extreme ocean waves. This paper compares the evolution of individual wave-packets modelled using non-linear Schrödinger type equations with packets modelled using fully non-linear potential flow models. The modified non-linear Schrödinger equation accurately models the relatively large scale non-linear changes to the shape of wave-groups, with a dramatic contraction of the group along the mean propagation direction and a corresponding extension of the width of the wave-crests. In addition, as the extreme wave forms, there is a local non-linear contraction of the wave-group around the crest which leads to a localised broadening of the wave spectrum which the bandwidth limited non-linear Schrödinger equations struggle to capture. This limitation occurs for waves of moderate steepness and a narrow underlying spectrum.
Feasibility of Decentralized Linear-Quadratic-Gaussian Control of Autonomous Distributed Spacecraft
NASA Technical Reports Server (NTRS)
Carpenter, J. Russell
1999-01-01
A distributed satellite formation, modeled as an arbitrary number of fully connected nodes in a network, could be controlled using a decentralized controller framework that distributes operations in parallel over the network. For such problems, a solution that minimizes data transmission requirements, in the context of linear-quadratic-Gaussian (LQG) control theory, was given by Speyer. This approach is advantageous because it is non-hierarchical, detected failures gracefully degrade system performance, fewer local computations are required than for a centralized controller, and it is optimal with respect to the standard LQG cost function. Disadvantages of the approach are the need for a fully connected communications network, the total operations performed over all the nodes are greater than for a centralized controller, and the approach is formulated for linear time-invariant systems. To investigate the feasibility of the decentralized approach to satellite formation flying, a simple centralized LQG design for a spacecraft orbit control problem is adapted to the decentralized framework. The simple design uses a fixed reference trajectory (an equatorial, Keplerian, circular orbit), and by appropriate choice of coordinates and measurements is formulated as a linear time-invariant system.
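As a rough sketch of the centralized LQG/LQR building block that the decentralized framework distributes, here is an LQR design for a double integrator (the plant and weights are illustrative stand-ins, not the paper's orbit-control model):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Double integrator: position/velocity along one axis, direct force input.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.eye(2)           # state weighting (illustrative)
R = np.array([[1.0]])   # control weighting (illustrative)

P = solve_continuous_are(A, B, Q, R)  # algebraic Riccati equation solution
K = np.linalg.solve(R, B.T @ P)       # optimal state feedback, u = -K x

eigs = np.linalg.eigvals(A - B @ K)   # closed-loop poles
```

For these weights the analytic gain is K = [1, √3], and the closed-loop poles lie strictly in the left half-plane.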
Pérez-Rodríguez, Paulino; Gianola, Daniel; González-Camacho, Juan Manuel; Crossa, José; Manès, Yann; Dreisigacker, Susanne
2012-01-01
In genome-enabled prediction, parametric, semi-parametric, and non-parametric regression models have been used. This study assessed the predictive ability of linear and non-linear models using dense molecular markers. The linear models were linear on marker effects and included the Bayesian LASSO, Bayesian ridge regression, Bayes A, and Bayes B. The non-linear models (this refers to non-linearity on markers) were reproducing kernel Hilbert space (RKHS) regression, Bayesian regularized neural networks (BRNN), and radial basis function neural networks (RBFNN). These statistical models were compared using 306 elite wheat lines from CIMMYT genotyped with 1717 diversity array technology (DArT) markers and two traits, days to heading (DTH) and grain yield (GY), measured in each of 12 environments. It was found that the three non-linear models had better overall prediction accuracy than the linear regression specification. Results showed a consistent superiority of RKHS and RBFNN over the Bayesian LASSO, Bayesian ridge regression, Bayes A, and Bayes B models. PMID:23275882
Rosenblatt, Marcus; Timmer, Jens; Kaschek, Daniel
2016-01-01
Ordinary differential equation models have become a widespread approach to analyzing dynamical systems and understanding underlying mechanisms. Model parameters are often unknown and have to be estimated from experimental data, e.g., by maximum-likelihood estimation. In particular, models of biological systems contain a large number of parameters. To reduce the dimensionality of the parameter space, steady-state information is incorporated in the parameter estimation process. For non-linear models, analytical steady-state calculation typically leads to higher-order polynomial equations for which no closed-form solutions can be obtained. This can be circumvented by solving the steady-state equations for kinetic parameters instead, which results in a linear equation system with comparatively simple solutions. At the same time, multiplicity of steady-state solutions is avoided, which otherwise is problematic for optimization. When solved for kinetic parameters, however, steady-state constraints tend to become negative for particular model specifications, thus generating new types of optimization problems. Here, we present an algorithm based on graph theory that derives non-negative, analytical steady-state expressions by stepwise removal of cyclic dependencies between dynamical variables. The algorithm avoids multiple steady-state solutions by construction. We show that our method is applicable to most common classes of biochemical reaction networks containing inhibition terms, mass-action and Hill-type kinetic equations. Comparing the performance of parameter estimation for different analytical and numerical methods of incorporating steady-state information, we show that our approach is especially well-tailored to guarantee a high success rate of optimization. PMID:27243005
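A minimal sketch of the core trick, for a one-compartment toy model that is an assumption of this note (not from the paper): the steady-state equation may be non-linear in the state, but it is linear in a kinetic parameter, so solving it for the parameter gives a closed form.

```python
# Toy model (assumed for illustration): dA/dt = k_in - k_out * A.
# Given a measured steady state A_ss, the constraint k_in - k_out*A_ss = 0
# is linear in k_out, so the parameter has the closed-form solution below.

k_in, A_ss = 2.0, 4.0
k_out = k_in / A_ss          # steady-state constraint solved for a kinetic parameter

# Forward Euler check: the trajectory relaxes to the imposed steady state.
A, dt = 0.0, 0.01
for _ in range(5000):
    A += dt * (k_in - k_out * A)
# After long integration, A sits at the imposed steady state A_ss.
```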
Linear Legendrian curves in T^3
NASA Astrophysics Data System (ADS)
Ghiggini, Paolo
2006-05-01
Using convex surfaces and Kanda's classification theorem, we classify Legendrian isotopy classes of linear Legendrian curves in all tight contact structures on T^3. Some of the knot types considered in this paper provide new examples of knot types that are not transversally simple.
NASA Technical Reports Server (NTRS)
Jewell, Jeffrey B.; Raymond, C.; Smrekar, S.; Millbury, C.
2004-01-01
This viewgraph presentation reviews a Bayesian approach to the inversion of gravity and magnetic data with specific application to the Ismenius Area of Mars. Many inverse problems encountered in geophysics and planetary science are well known to be non-unique (e.g., inverting gravity data for the density structure of a body). In hopes of reducing the non-uniqueness of solutions, there has been interest in the joint analysis of data. An example is the joint inversion of gravity and magnetic data, with the assumption that the same physical anomalies generate both the observed magnetic and gravitational anomalies. In this talk, we formulate the joint analysis of different types of data in a Bayesian framework and apply the formalism to the inference of the density and remanent magnetization structure for a local region in the Ismenius area of Mars. The Bayesian approach allows prior information or constraints on the solutions to be incorporated in the inversion, with the "best" solutions those whose forward predictions most closely match the data while remaining consistent with assumed constraints. The application of this framework to the inversion of gravity and magnetic data on Mars reveals two typical challenges: the forward predictions of the data have a linear dependence on some of the quantities of interest and a non-linear dependence on others (termed the "linear" and "non-linear" variables, respectively). For observations with Gaussian noise, a Bayesian approach to inversion for "linear" variables reduces to a linear filtering problem, with an explicitly computable "error" matrix. However, for models whose forward predictions have non-linear dependencies, inference is no longer given by such a simple linear problem, and moreover, the uncertainty in the solution is no longer completely specified by a computable "error matrix".
It is therefore important to develop methods for sampling from the full Bayesian posterior to provide a complete and statistically consistent picture of model uncertainty, and what has been learned from observations. We will discuss advanced numerical techniques, including Monte Carlo Markov
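The "linear variables reduce to a linear filtering problem" point can be made concrete with a one-parameter Gaussian sketch (all numbers are invented, not the Mars data): for d = G·m + noise with Gaussian noise and a Gaussian prior, the posterior mean and error variance are explicit.

```python
# One-parameter Bayesian linear-Gaussian inversion (illustrative numbers only).
g = [1.0, 2.0, 0.5]          # forward-model coefficients, one per observation
d = [1.1, 2.3, 0.4]          # observed data
sigma_n2 = 0.1 ** 2          # noise variance
m_prior, sigma_p2 = 0.0, 10.0 ** 2   # weak Gaussian prior

# Posterior precision = prior precision + data precision
prec = 1.0 / sigma_p2 + sum(gi * gi for gi in g) / sigma_n2
post_var = 1.0 / prec        # the explicitly computable "error matrix" (1x1 here)
post_mean = post_var * (m_prior / sigma_p2 +
                        sum(gi * di for gi, di in zip(g, d)) / sigma_n2)
# With a weak prior, post_mean approaches the weighted least-squares solution.
```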
NASA Astrophysics Data System (ADS)
Tchitchekova, Deyana S.; Morthomas, Julien; Ribeiro, Fabienne; Ducher, Roland; Perez, Michel
2014-07-01
A novel method for accurate and efficient evaluation of the change in energy barriers for carbon diffusion in ferrite under heterogeneous stress is introduced. This method, called Linear Combination of Stress States, is based on the knowledge of the effects of simple stresses (uniaxial or shear) on these diffusion barriers. It is then assumed that the change in energy barriers under a complex stress can be expressed as a linear combination of these already known simple stress effects. The modifications of energy barriers by either uniaxial traction/compression or shear stress are determined by means of atomistic simulations with the Climbing Image-Nudged Elastic Band method and are stored as a set of functions. The results of this method are compared to the predictions of anisotropic elasticity theory. It is shown that linear anisotropic elasticity fails to predict the correct energy barrier variation with stress (especially with shear stress), whereas the proposed method provides correct energy barrier variation for stresses up to ˜3 GPa. This study provides a basis for the development of multiscale models of diffusion under non-uniform stress.
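A schematic of the Linear Combination of Stress States bookkeeping: the per-component response functions below are invented placeholders standing in for the tabulated atomistic (CI-NEB) results, and serve only to show how the combination is assembled.

```python
# Hypothetical per-stress-component barrier responses dE_i(s_i)
# (placeholder functions, NOT the paper's tabulated CI-NEB data).
responses = {
    "xx": lambda s: -0.010 * s,          # uniaxial along x
    "yy": lambda s: 0.004 * s,           # uniaxial along y
    "xy": lambda s: 0.002 * s * abs(s),  # shear, non-linear in magnitude
}

def barrier_change(stress):
    """stress: dict component -> value; LCSS sums the simple-stress effects."""
    return sum(responses[c](s) for c, s in stress.items())

dE = barrier_change({"xx": 1.5, "yy": -0.5, "xy": 0.8})
# dE = -0.015 - 0.002 + 0.00128 = -0.01572 for these placeholder responses
```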
Integral method for transient He II heat transfer in a semi-infinite domain
NASA Astrophysics Data System (ADS)
Baudouy, B.
2002-05-01
Integral methods are suited to solving a non-linear system of differential equations where the non-linearity can be found either in the differential equations or in the boundary conditions. Though they are approximate methods, they have proven to give simple solutions with acceptable accuracy for transient heat transfer in He II. Taking into account the temperature dependence of thermal properties, direct solutions are found without the need to adjust a parameter. Previously, we presented a solution for the clamped-heat-flux problem, and in the present study the method is used to accommodate the clamped-temperature problem. In the case of constant thermal properties, this method yields results that are within a few percent of the exact solution for the heat flux at the axis origin. We applied this solution to analyze recovery from burnout and find agreement within 10% at low heat flux, whereas at high heat flux the model deviates from the experimental data, suggesting the need for a more refined thermal model.
A Family of Ellipse Methods for Solving Non-Linear Equations
ERIC Educational Resources Information Center
Gupta, K. C.; Kanwar, V.; Kumar, Sanjeev
2009-01-01
This note presents a method for the numerical approximation of simple zeros of a non-linear equation in one variable. In order to do so, the method uses an ellipse rather than a tangent approach. The main advantage of our method is that it does not fail even if the derivative of the function is either zero or very small in the vicinity of the…
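The abstract is truncated before the method itself, so the ellipse iteration is not reproduced here; the sketch below only illustrates the stated motivation, namely that a tangent-based (Newton) step can fail outright where a bracketing method does not.

```python
# f(x) = x^3 - 2x + 2 has a single real root near -1.7693, yet Newton's
# method started at x0 = 0 cycles forever between 0 and 1 (a classic example);
# a simple bisection on a sign-changing bracket still converges.

def f(x):
    return x ** 3 - 2 * x + 2

def newton(x, steps=50):
    for _ in range(steps):
        x = x - f(x) / (3 * x ** 2 - 2)   # tangent step
    return x

def bisect(lo, hi, steps=60):
    for _ in range(steps):
        mid = (lo + hi) / 2
        lo, hi = (lo, mid) if f(mid) * f(hi) > 0 else (mid, hi)
    return (lo + hi) / 2

x_newton = newton(0.0)          # trapped in the 0 <-> 1 cycle, never a root
x_bisect = bisect(-2.0, 0.0)    # converges to the real root near -1.7693
```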
Dynamic Characteristics of Simple Cylindrical Hydraulic Engine Mount Utilizing Air Compressibility
NASA Astrophysics Data System (ADS)
Nakahara, Kazunari; Nakagawa, Noritoshi; Ohta, Katsutoshi
A cylindrical hydraulic engine mount with simple construction has been developed. This engine mount has a sub-chamber formed by utilizing air compressibility, without a diaphragm. A mathematical model of the mount is presented to predict non-linear dynamic characteristics, taking into consideration the effect of the excitation amplitude on the storage stiffness and loss factor. The mathematical model predicts the experimental results well for the frequency responses of the storage stiffness and loss factor over the frequency range of 5 Hz to 60 Hz. The effect of air volume and internal pressure on the dynamic characteristics is clarified by the analysis and by dynamic characterization testing. The effectiveness of the cylindrical hydraulic engine mount in reducing engine shake, and hence improving riding comfort, is demonstrated through on-vehicle testing with a chassis dynamometer.
Local classification: Locally weighted-partial least squares-discriminant analysis (LW-PLS-DA).
Bevilacqua, Marta; Marini, Federico
2014-08-01
The possibility of devising a simple, flexible and accurate non-linear classification method, by extending the locally weighted partial least squares (LW-PLS) approach to the cases where the algorithm is used in a discriminant way (partial least squares discriminant analysis, PLS-DA), is presented. In particular, to assess which category an unknown sample belongs to, the proposed algorithm operates by identifying which training objects are most similar to the one to be predicted and building a PLS-DA model using these calibration samples only. Moreover, the influence of the selected training samples on the local model can be further modulated by adopting a non-uniform distance-based weighting scheme which allows the farthest calibration objects to have less impact than the closest ones. The performances of the proposed locally weighted-partial least squares-discriminant analysis (LW-PLS-DA) algorithm have been tested on three simulated data sets characterized by a varying degree of non-linearity: in all cases, a classification accuracy higher than 99% on external validation samples was achieved. Moreover, when also applied to a real data set (classification of rice varieties) characterized by a high extent of non-linearity, the proposed method provided an average correct classification rate of about 93% on the test set. These preliminary results show that the performance of the proposed LW-PLS-DA approach is comparable to, and in some cases better than, that obtained by other non-linear methods (k nearest neighbors, kernel-PLS-DA and, in the case of rice, counterpropagation neural networks).
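A stripped-down sketch of the local idea: rank the training objects by distance to the query, keep the nearest k, and let closer samples weigh more. An inverse-distance class vote stands in for the local PLS-DA model, and the two-class data are synthetic (a ring versus its center, which no single global linear boundary separates).

```python
import math

# Class 0: points on a unit ring; class 1: points near the origin (synthetic).
angles = [i * 0.3 for i in range(21)]
X = [(math.cos(t), math.sin(t)) for t in angles] + \
    [(0.1 * math.cos(t), 0.1 * math.sin(t)) for t in angles]
y = [0] * 21 + [1] * 21

def classify(q, k=7):
    """Keep the k training objects nearest q; inverse-distance weighted vote."""
    order = sorted(range(len(X)), key=lambda i: math.dist(X[i], q))
    votes = {0: 0.0, 1: 0.0}
    for i in order[:k]:
        votes[y[i]] += 1.0 / (math.dist(X[i], q) + 1e-9)  # closer = heavier
    return max(votes, key=votes.get)

pred_ring = classify((0.9, 0.1))     # query near the ring
pred_center = classify((0.05, 0.0))  # query near the origin
```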
The role of pulvinar in the transmission of information in the visual hierarchy.
Cortes, Nelson; van Vreeswijk, Carl
2012-01-01
Visual receptive field (RF) attributes in the visual cortex of primates have been explained mainly from cortical connections: visual RFs progress from simple to complex through cortico-cortical pathways from lower to higher levels in the visual hierarchy. This feedforward flow of information is paired with top-down processes through the feedback pathway. Although the hierarchical organization explains the spatial properties of RFs, it is unclear how a non-linear transmission of activity through the visual hierarchy can yield smooth contrast response functions at all levels of the hierarchy. Depending on the gain, non-linear transfer functions create either a bimodal response to contrast, or no contrast dependence of the response, in the highest level of the hierarchy. One possible mechanism to regulate this transmission of visual contrast information from low to high levels involves an external component that shortcuts the flow of information through the hierarchy. A candidate for this shortcut is the Pulvinar nucleus of the thalamus. To investigate the representation of stimulus contrast, a hierarchical model network of ten cortical areas is examined. In each level of the network, the activity from the previous layer is integrated and then non-linearly transmitted to the next level. The arrangement of interactions creates a gradient from simple to complex RFs of increasing size as one moves from lower to higher cortical levels. The visual input is modeled as a Gaussian random input, whose width codes for the contrast. This input is applied to the first area. The output activity ratio among different contrast values is analyzed for the last level to observe sensitivity to contrast and contrast-invariant tuning. For a purely cortical system, the output of the last area can be approximately contrast invariant, but the sensitivity to contrast is poor.
To account for an alternative visual processing pathway, a parallel pulvinar-like structure of nine areas is coupled to the system through non-reciprocal connections. Compared to the pure feedforward model, the cortico-pulvino-cortical output shows much greater sensitivity to contrast while maintaining a similar level of contrast invariance of the tuning.
Rasmussen, Patrick P.; Gray, John R.; Glysson, G. Douglas; Ziegler, Andrew C.
2009-01-01
In-stream continuous turbidity and streamflow data, calibrated with measured suspended-sediment concentration data, can be used to compute a time series of suspended-sediment concentration and load at a stream site. Development of a simple linear (ordinary least squares) regression model for computing suspended-sediment concentrations from instantaneous turbidity data is the first step in the computation process. If the model standard percentage error (MSPE) of the simple linear regression model meets a minimum criterion, this model should be used to compute a time series of suspended-sediment concentrations. Otherwise, a multiple linear regression model using paired instantaneous turbidity and streamflow data is developed and compared to the simple regression model. If the inclusion of the streamflow variable proves to be statistically significant and the uncertainty associated with the multiple regression model results in an improvement over that for the simple linear model, the turbidity-streamflow multiple linear regression model should be used to compute a suspended-sediment concentration time series. The computed concentration time series is subsequently used with its paired streamflow time series to compute suspended-sediment loads by standard U.S. Geological Survey techniques. Once an acceptable regression model is developed, it can be used to compute suspended-sediment concentration beyond the period of record used in model development with proper ongoing collection and analysis of calibration samples. Regression models to compute suspended-sediment concentrations are generally site specific and should never be considered static, but they represent a set period in a continually dynamic system in which additional data will help verify any change in sediment load, type, and source.
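The first step of the computation described above can be sketched as follows. The turbidity/SSC pairs and the streamflow values are invented for illustration; 0.0027 is the usual conversion constant from mg/L times ft³/s to tons/day.

```python
# Ordinary least squares: SSC (mg/L) as a linear function of turbidity (FNU).
turb = [10.0, 22.0, 35.0, 51.0, 80.0]     # instantaneous turbidity (illustrative)
ssc = [28.0, 55.0, 88.0, 126.0, 195.0]    # paired measured SSC (illustrative)

n = len(turb)
mt, ms = sum(turb) / n, sum(ssc) / n
b1 = (sum((t - mt) * (s - ms) for t, s in zip(turb, ssc))
      / sum((t - mt) ** 2 for t in turb))
b0 = ms - b1 * mt

def ssc_hat(t):
    """Computed suspended-sediment concentration for a turbidity reading."""
    return b0 + b1 * t

# Load time series: concentration x streamflow x 0.0027 (mg/L * ft3/s -> tons/day)
turb_series = [12.0, 30.0, 70.0]
flow_series = [120.0, 150.0, 300.0]
loads = [ssc_hat(t) * q * 0.0027 for t, q in zip(turb_series, flow_series)]
```

In practice the model would only be accepted if its standard percentage error met the criterion described above; otherwise the turbidity-streamflow multiple regression would be compared against it.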
On the Impact of a Quadratic Acceleration Term in the Analysis of Position Time Series
NASA Astrophysics Data System (ADS)
Bogusz, Janusz; Klos, Anna; Bos, Machiel Simon; Hunegnaw, Addisu; Teferle, Felix Norman
2016-04-01
The analysis of Global Navigation Satellite System (GNSS) position time series generally assumes that each of the coordinate component series is described by the sum of a linear rate (velocity) and various periodic terms. The residuals, the deviations between the fitted model and the observations, are then a measure of the epoch-to-epoch scatter and have been used for the analysis of the stochastic character (noise) of the time series. Often the parameters of interest in GNSS position time series are the velocities and their associated uncertainties, which have to be determined with the highest reliability. It is clear that not all GNSS position time series follow this simple linear behaviour. Therefore, we have added an acceleration term in the form of a quadratic polynomial function to the model in order to better describe the non-linear motion in the position time series. This non-linear motion could be a response to purely geophysical processes, for example, elastic rebound of the Earth's crust due to ice mass loss in Greenland; artefacts due to deficiencies in bias mitigation models, for example, of the GNSS satellite and receiver antenna phase centres; or any combination thereof. In this study we have simulated 20 time series, 23 years long, with different stochastic characteristics such as white, flicker or random walk noise. The noise amplitude was assumed to be 1 mm/yr^(-κ/4), where κ is the spectral index. Then, we added the deterministic part, consisting of a linear trend of 20 mm/yr (representing the averaged horizontal velocity) and accelerations ranging from -0.6 to +0.6 mm/yr². For all these data we estimated the noise parameters with Maximum Likelihood Estimation (MLE) using the Hector software package without taking the non-linear term into account. In this way we set the benchmark against which to investigate how the noise properties and velocity uncertainty may be affected by any un-modelled, non-linear term.
The velocities and their uncertainties versus the accelerations for different types of noise are determined. Furthermore, we have selected 40 globally distributed stations that have a clear non-linear behaviour from two different International GNSS Service (IGS) analysis centers: JPL (Jet Propulsion Laboratory) and BLT (British Isles continuous GNSS Facility and University of Luxembourg Tide Gauge Benchmark Monitoring (TIGA) Analysis Center). We obtained maximum accelerations of -1.8±1.2 mm/yr² and -4.5±3.3 mm/yr² for the horizontal and vertical components, respectively. The noise analysis tests have shown that the addition of the non-linear term significantly whitened the power spectra of the position time series, i.e. shifted the spectral index from flicker towards white noise.
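The extended deterministic model x(t) = x0 + v·t + ½·a·t² can be sketched with a plain least-squares fit (synthetic noise-free series with assumed numbers; the study itself uses MLE with the Hector package, which this does not reproduce):

```python
def solve3(A, b):
    """Gauss-Jordan elimination with partial pivoting for a 3x3 system."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(3):
            if r != i:
                f = M[r][i] / M[i][i]
                M[r] = [mr - f * mi for mr, mi in zip(M[r], M[i])]
    return [M[i][3] / M[i][i] for i in range(3)]

# Synthetic 23-year weekly position series (mm): v = 20 mm/yr, a = -0.4 mm/yr^2
t = [i / 52.0 for i in range(52 * 23)]
x = [3.0 + 20.0 * ti + 0.5 * (-0.4) * ti ** 2 for ti in t]

# Normal equations for the basis [1, t, t^2/2], so coefficients are (x0, v, a)
phi = [[1.0, ti, ti * ti / 2.0] for ti in t]
A = [[sum(p[i] * p[j] for p in phi) for j in range(3)] for i in range(3)]
b = [sum(p[i] * xi for p, xi in zip(phi, x)) for i in range(3)]
x0, v, a = solve3(A, b)   # recovers the injected offset, velocity, acceleration
```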
NASA Astrophysics Data System (ADS)
Sellers, Piers J.; Heiser, Mark D.; Hall, Forrest G.; Verma, Shashi B.; Desjardins, Raymond L.; Schuepp, Peter M.; Ian MacPherson, J.
1997-03-01
It is commonly assumed that biophysically based soil-vegetation-atmosphere transfer (SVAT) models are scale-invariant with respect to the initial boundary conditions of topography, vegetation condition and soil moisture. In practice, SVAT models that have been developed and tested at the local scale (a few meters or a few tens of meters) are applied almost unmodified within general circulation models (GCMs) of the atmosphere, which have grid areas of 50-500 km². This study, which draws much of its substantive material from the papers of Sellers et al. (1992c, J. Geophys. Res., 97(D17): 19033-19060) and Sellers et al. (1995, J. Geophys. Res., 100(D12): 25607-25629), explores the validity of doing this. The work makes use of the FIFE-89 data set, which was collected over a 2 km × 15 km grassland area in Kansas. The site was characterized by high variability in soil moisture and vegetation condition during the late growing season of 1989. The area also has moderate topography. The 2 km × 15 km 'testbed' area was divided into 68 × 501 pixels of 30 m × 30 m spatial resolution, each of which could be assigned topographic, vegetation condition and soil moisture parameters from satellite and in situ observations gathered in FIFE-89. One or more of these surface fields was area-averaged in a series of simulation runs to determine the impact of using large-area means of these initial or boundary conditions on the area-integrated (aggregated) surface fluxes. The results of the study can be summarized as follows: (1) Analyses and some of the simulations indicated that the relationships describing the effects of moderate topography on the surface radiation budget are near-linear and thus largely scale-invariant. The relationships linking the simple ratio vegetation index (SR), the canopy conductance parameter (∇F) and the canopy transpiration flux are also near-linear and similarly scale-invariant to first order.
Because of this, it appears that simple area-averaging operations can be applied to these fields with relatively little impact on the calculated surface heat flux. (2) The relationships linking surface and root-zone soil wetness to the soil surface and canopy transpiration rates are non-linear. However, simulation results and observations indicate that soil moisture variability decreases significantly as an area dries out, which partially cancels out the effects of these non-linear functions. In conclusion, it appears that simple averages of topographic slope and vegetation parameters can be used to calculate surface energy and heat fluxes over a wide range of spatial scales, from a few meters up to many kilometers, at least for grassland sites and areas with moderate topography. Although the relationships between soil moisture and evapotranspiration are non-linear for intermediate soil wetnesses, the dynamics of soil drying act to progressively reduce soil moisture variability and thus the impacts of these non-linearities on the area-averaged surface fluxes. These findings indicate that we may be able to use mean values of topography, vegetation condition and soil moisture to calculate the surface-atmosphere fluxes of energy, heat and moisture at larger length scales, to within an acceptable accuracy for climate modeling work. However, further tests over areas with different vegetation types, soils and more extreme topography are required to improve our confidence in this approach.
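Finding (2) is essentially a Jensen-gap effect, which a tiny sketch makes explicit: for a non-linear soil-moisture response f, the area-average of the flux differs from the flux at the average moisture, and the gap shrinks as spatial variability collapses during dry-down. The concave curve and the moisture fields below are illustrative assumptions, not FIFE data.

```python
def f(w):
    """Assumed concave flux response to soil wetness (illustration only)."""
    return w ** 0.5

wet_field = [0.2, 0.4, 0.6, 0.8]        # wet period: high spatial variability
dry_field = [0.28, 0.30, 0.32, 0.30]    # after dry-down: variability collapsed

def jensen_gap(field):
    """|f(mean moisture) - mean of f(moisture)|: the aggregation error."""
    mean_w = sum(field) / len(field)
    mean_flux = sum(f(w) for w in field) / len(field)
    return abs(f(mean_w) - mean_flux)

gap_wet, gap_dry = jensen_gap(wet_field), jensen_gap(dry_field)
# Averaging moisture first is much worse for the variable (wet) field.
```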
Surface and Atmospheric Parameter Retrieval From AVIRIS Data: The Importance of Non-Linear Effects
NASA Technical Reports Server (NTRS)
Green Robert O.; Moreno, Jose F.
1996-01-01
AVIRIS data represent a new and important approach for the retrieval of atmospheric and surface parameters from optical remote sensing data. Not only as a test for future space systems, but also as an operational airborne remote sensing system, the development of algorithms to retrieve information from AVIRIS data is an important step toward these new approaches and capabilities. Many things have been learned since AVIRIS became operational, and the successive technical improvements in the hardware and the more sophisticated calibration techniques employed have increased the quality of the data to the point of almost meeting optimum user requirements. However, the potential capabilities of imaging spectrometry over the standard multispectral techniques have still not been fully demonstrated. Reasons for this are the technical difficulties in handling the data, the critical aspect of calibration for advanced retrieval methods, and the lack of proper models with which to invert the measured AVIRIS radiances in all the spectral channels. To achieve the potential of imaging spectrometry, these issues must be addressed. In this paper, an algorithm to retrieve information about both atmospheric and surface parameters from AVIRIS data, by using model inversion techniques, is described. Emphasis is placed on the derivation of the model itself as well as on proper inversion techniques, robust both to noise in the data and to the model's limited ability to describe natural variability in the data. The problem of non-linear effects is addressed, as it has been demonstrated to be a major source of error in the numerical values retrieved by simpler, linear-based approaches. Non-linear effects are especially critical for the retrieval of surface parameters where both scattering and absorption effects are coupled, as well as in the cases of significant multiple-scattering contributions.
However, sophisticated modeling approaches can handle such non-linear effects, which are especially important over vegetated surfaces. All the data used in this study were acquired during the 1991 Multisensor Airborne Campaign (MAC-Europe), as part of the European Field Experiment on a Desertification-threatened Area (EFEDA), carried out in Spain in June-July 1991.
Tree cover bimodality in savannas and forests emerging from the switching between two fire dynamics.
De Michele, Carlo; Accatino, Francesco
2014-01-01
Moist savannas and tropical forests share the same climatic conditions and occur side by side. Experimental evidence shows that the tree cover of these ecosystems exhibits a bimodal frequency distribution. This is considered a proof of savanna-forest bistability, predicted by dynamic vegetation models based on non-linear differential equations. Here, we propose a change of perspective on the bimodality of the tree cover distribution. We show, using a simple matrix model of tree dynamics, how the bimodality of tree cover can emerge from the switching between two linear dynamics of trees, one in the presence and one in the absence of fire, with a feedback between fire and trees. As a consequence, we find that the transitions between moist savannas and tropical forests, if sharp, are not necessarily catastrophic.
Kumar, K Vasanth
2007-04-02
Kinetic experiments were carried out for the sorption of safranin onto activated carbon particles. The kinetic data were fitted to the pseudo-second-order models of Ho, of Sobkowsk and Czerwinski, of Blanchard et al., and of Ritchie by linear and non-linear regression methods. The non-linear method was found to be a better way of obtaining the parameters involved in the second-order rate kinetic expressions. Both linear and non-linear regression showed that the Sobkowsk and Czerwinski and Ritchie pseudo-second-order models were the same. Non-linear regression analysis showed that Blanchard et al. and Ho had similar ideas on the pseudo-second-order model, but with different assumptions. The best fit of the experimental data by Ho's pseudo-second-order expression under both linear and non-linear regression showed that it was a better kinetic expression than the other pseudo-second-order kinetic expressions.
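The linearized fitting route compared in the abstract can be sketched as follows, using exact synthetic pseudo-second-order data q(t) = qe²·k·t / (1 + qe·k·t) with invented parameters. On noise-free data the linearized form t/q = 1/(k·qe²) + t/qe recovers qe and k exactly; the linear and non-linear routes only diverge once real measurement noise enters.

```python
# Synthetic pseudo-second-order uptake data (parameters are assumptions).
qe_true, k_true = 50.0, 0.002
t = [5.0, 10.0, 20.0, 40.0, 80.0, 160.0]
q = [qe_true ** 2 * k_true * ti / (1 + qe_true * k_true * ti) for ti in t]

# OLS on the linearized form: t/q vs t, slope = 1/qe, intercept = 1/(k*qe^2)
ty = [ti / qi for ti, qi in zip(t, q)]
n = len(t)
mt, my = sum(t) / n, sum(ty) / n
slope = sum((a - mt) * (b - my) for a, b in zip(t, ty)) / \
        sum((a - mt) ** 2 for a in t)
intercept = my - slope * mt

qe_fit = 1.0 / slope                     # equilibrium uptake
k_fit = 1.0 / (intercept * qe_fit ** 2)  # pseudo-second-order rate constant
```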
SOCR Analyses - an Instructional Java Web-based Statistical Analysis Toolkit.
Chu, Annie; Cui, Jenny; Dinov, Ivo D
2009-03-01
The Statistical Online Computational Resource (SOCR) designs web-based tools for educational use in a variety of undergraduate courses (Dinov 2006). Several studies have demonstrated that these resources significantly improve students' motivation and learning experiences (Dinov et al. 2008). SOCR Analyses is a new component that concentrates on data modeling and analysis using parametric and non-parametric techniques supported with graphical model diagnostics. Currently implemented analyses include commonly used models in undergraduate statistics courses, such as linear models (Simple Linear Regression, Multiple Linear Regression, One-Way and Two-Way ANOVA). In addition, we implemented tests for sample comparisons, such as the t-test in the parametric category, and the Wilcoxon rank sum test, Kruskal-Wallis test and Friedman's test in the non-parametric category. SOCR Analyses also includes several hypothesis test models, such as contingency tables, Friedman's test and Fisher's exact test. The code itself is open source (http://socr.googlecode.com/), in the hope of contributing to the efforts of the statistical computing community. The code includes functionality for each specific analysis model, and it has general utilities that can be applied in various statistical computing tasks. For example, concrete methods with an API (Application Programming Interface) have been implemented for statistical summaries, least-squares solutions of general linear models, rank calculations, etc. HTML interfaces, tutorials, source code, activities, and data are freely available via the web (www.SOCR.ucla.edu). Code examples for developers and demos for educators are provided on the SOCR Wiki website. In this article, the pedagogical utilization of the SOCR Analyses is discussed, as well as the underlying design framework. As the SOCR project is ongoing and more functions and tools are being added to it, these resources are constantly improved. 
The reader is strongly encouraged to check the SOCR site for most updated information and newly added models.
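As a flavour of what one of the non-parametric analyses computes, here is a minimal sketch of the Wilcoxon rank-sum statistic (not SOCR's Java implementation; the two small samples are invented): pool both samples, assign ranks with average ranks for ties, and sum the ranks of the first sample.

```python
def rank_sum(x, y):
    """Wilcoxon rank-sum statistic of sample x vs y (average ranks for ties)."""
    combined = sorted((v, i) for i, v in enumerate(x + y))
    ranks = [0.0] * len(combined)
    i = 0
    while i < len(combined):
        j = i
        while j + 1 < len(combined) and combined[j + 1][0] == combined[i][0]:
            j += 1                      # extend over a run of tied values
        avg = (i + j) / 2 + 1           # average 1-based rank of the tie run
        for m in range(i, j + 1):
            ranks[combined[m][1]] = avg
        i = j + 1
    return sum(ranks[:len(x)])          # rank sum of the first sample

print(rank_sum([1.1, 2.3, 3.1], [0.9, 1.8, 2.5, 4.0]))
```

The statistic is then referred to its null distribution (or a normal approximation) to obtain a p-value, which is the part the toolkit automates.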
Estimating monotonic rates from biological data using local linear regression.
Olito, Colin; White, Craig R; Marshall, Dustin J; Barneche, Diego R
2017-03-01
Accessing many fundamental questions in biology begins with empirical estimation of simple monotonic rates of underlying biological processes. Across a variety of disciplines, ranging from physiology to biogeochemistry, these rates are routinely estimated from non-linear and noisy time series data using linear regression and ad hoc manual truncation of non-linearities. Here, we introduce the R package LoLinR, a flexible toolkit to implement local linear regression techniques to objectively and reproducibly estimate monotonic biological rates from non-linear time series data, and demonstrate possible applications using metabolic rate data. LoLinR provides methods to easily and reliably estimate monotonic rates from time series data in a way that is statistically robust, facilitates reproducible research and is applicable to a wide variety of research disciplines in the biological sciences. © 2017. Published by The Company of Biologists Ltd.
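The idea of objectively locating a locally linear region can be sketched as follows (a simplified stand-in, not the LoLinR algorithm or its weighting metrics; the data and the R² threshold are invented): fit ordinary least squares to every contiguous window of the series and keep the widest window that is still effectively linear, replacing ad hoc manual truncation with an explicit criterion.

```python
def ols(xs, ys):
    """Slope and R^2 of an ordinary least squares line fit."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x*x for x in xs) - sx*sx/n
    sxy = sum(x*y for x, y in zip(xs, ys)) - sx*sy/n
    syy = sum(y*y for y in ys) - sy*sy/n
    slope = sxy / sxx
    r2 = sxy*sxy / (sxx*syy) if syy > 0 else 0.0
    return slope, r2

def best_local_slope(ts, ys, min_pts=3, r2_min=0.999):
    """Widest contiguous window that is still 'locally linear' (R^2 >= r2_min)."""
    best = None
    for i in range(len(ts)):
        for j in range(i + min_pts, len(ts) + 1):
            slope, r2 = ols(ts[i:j], ys[i:j])
            if r2 >= r2_min and (best is None or j - i > best[0]):
                best = (j - i, slope)
    return best

# Hypothetical trace: a linear rise that bends over into a slow drift
ts = list(range(12))
ys = [2*t if t <= 6 else 12 + 0.3*(t - 6) for t in ts]
width, slope = best_local_slope(ts, ys)
print(width, slope)
```

The search recovers the initial 7-point linear rise and its rate, ignoring the bent tail; LoLinR's actual criteria are more sophisticated (weighted fits and multiple diagnostic metrics), but the window-scanning principle is the same.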
Mixing and evaporation processes in an inverse estuary inferred from δ2H and δ18O
NASA Astrophysics Data System (ADS)
Corlis, Nicholas J.; Herbert Veeh, H.; Dighton, John C.; Herczeg, Andrew L.
2003-05-01
We have measured δ2H and δ18O in Spencer Gulf, South Australia, an inverse estuary with a salinity gradient from 36‰ near its entrance to about 45‰ at its head. We show that a simple evaporation model of seawater under ambient conditions, aided by its long residence time in Spencer Gulf, can account for the major features of the non-linear distribution pattern of δ2H with respect to salinity, at least in the restricted part of the gulf. In the more exposed part of the gulf, the δ/S pattern appears to be governed primarily by mixing processes between inflowing shelf water and outflowing high salinity gulf water. These data provide direct support for the oceanographic model of Spencer Gulf previously proposed by other workers. Although the observed δ/S relationship here is non-linear and hence in notable contrast to the linear δ/S relationship in the Red Sea, the slopes of δ2H vs. δ18O are comparable, indicating that the isotopic enrichments in both marginal seas are governed by similar climatic conditions with evaporation exceeding precipitation.
Receptors as a master key for synchronization of rhythms
NASA Astrophysics Data System (ADS)
Nagano, Seido
2004-03-01
A simple but general scheme for achieving synchronization of rhythms is derived. The scheme was generalized inductively from a modelling study of the cellular slime mold. Biological receptors are shown to act as apparatuses that convert an external stimulus into a nonlinear interaction within individual oscillators; that is, the model receptor works as a nonlinear coupling apparatus between nonlinear oscillators. Synchronization is thus achieved through competition between two kinds of non-linearity, and even a small external stimulus applied via the model receptors can change the characteristics of individual oscillators significantly. The derived scheme is mathematically very simple, yet, as demonstrated numerically, very powerful. Because limit-cycle oscillators and receptors are ubiquitous in biological systems, this receptor scheme should significantly aid the understanding of synchronization phenomena in biology. Reference: S. Nagano, Phys. Rev. E 67, 056215 (2003).
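A minimal stand-in for this mechanism (not Nagano's receptor model; the frequencies and coupling strength are invented) is a pair of phase oscillators whose only interaction is a sine non-linearity, playing the role of the receptor that converts the partner's state into a nonlinear coupling term. With sufficient coupling the phase difference locks; without it, the difference drifts.

```python
import math

def phase_difference(K, w1=1.0, w2=1.2, dt=0.01, steps=5000):
    """Euler-integrate two phase oscillators coupled through a sine
    non-linearity (the 'receptor' term); return the final phase difference."""
    th1, th2 = 0.0, 0.0
    for _ in range(steps):
        d1 = w1 + K * math.sin(th2 - th1)
        d2 = w2 + K * math.sin(th1 - th2)
        th1 += d1 * dt
        th2 += d2 * dt
    return th2 - th1

locked = phase_difference(K=0.5)   # coupling strong enough to phase-lock
drift  = phase_difference(K=0.0)   # uncoupled: the difference grows linearly
print(locked, drift)
```

With detuning 0.2 and effective coupling 2K = 1.0, the locked difference settles at asin(0.2) ≈ 0.201 rad, while the uncoupled pair drifts apart at the detuning rate: a small nonlinear coupling qualitatively changes the oscillators' collective behaviour, as the abstract argues.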
Efficient Levenberg-Marquardt minimization of the maximum likelihood estimator for Poisson deviates
DOE Office of Scientific and Technical Information (OSTI.GOV)
Laurence, T; Chromy, B
2009-11-10
Histograms of counted events are Poisson distributed, but are typically fitted without justification using nonlinear least squares fitting. The more appropriate maximum likelihood estimator (MLE) for Poisson distributed data is seldom used. We extend the Levenberg-Marquardt algorithm, commonly used for nonlinear least squares minimization, for use with the MLE for Poisson distributed data. In so doing, we remove any excuse for not using this more appropriate MLE. We demonstrate the use of the algorithm and the superior performance of the MLE using simulations and experiments in the context of fluorescence lifetime imaging. Scientists commonly form histograms of counted events from their data, and extract parameters by fitting to a specified model. Assuming that the probability of occurrence for each bin is small, event counts in the histogram bins will be distributed according to the Poisson distribution. We develop here an efficient algorithm for fitting event counting histograms using the maximum likelihood estimator (MLE) for Poisson distributed data, rather than the non-linear least squares measure. This algorithm is a simple extension of the common Levenberg-Marquardt (L-M) algorithm; it is simple to implement, quick and robust. Fitting using a least squares measure is most common, but it is the maximum likelihood estimator only for Gaussian-distributed data. Non-linear least squares methods may be applied to event counting histograms in cases where the number of events is very large, so that the Poisson distribution is well approximated by a Gaussian. However, this criterion is not easy to satisfy in practice, since it requires a large number of events. It has been well known for years that least squares procedures lead to biased results when applied to Poisson-distributed data; a recent paper provides extensive characterization of these biases in exponential fitting. 
The more appropriate measure based on the maximum likelihood estimator (MLE) for the Poisson distribution is also well known, but has not become generally used. This is primarily because, in contrast to non-linear least squares fitting, there has been no quick, robust, and general fitting method. In the field of fluorescence lifetime spectroscopy and imaging, there have been some efforts to use this estimator through minimization routines such as Nelder-Mead optimization, exhaustive line searches, and Gauss-Newton minimization. Minimization based on specific one- or multi-exponential models has been used to obtain quick results, but this procedure does not allow the incorporation of the instrument response, and is not generally applicable to models found in other fields. Methods for using the MLE for Poisson-distributed data have been published by the wider spectroscopic community, including iterative minimization schemes based on Gauss-Newton minimization. The slow acceptance of these procedures for fitting event counting histograms may also be explained by the use of the ubiquitous, fast Levenberg-Marquardt (L-M) fitting procedure for fitting non-linear models using least squares fitting (simple searches return approximately 10,000 references, not counting those who use the procedure without knowing it). The benefits of L-M include a seamless transition between Gauss-Newton minimization and downward gradient minimization through the use of a regularization parameter. This transition is desirable because Gauss-Newton methods converge quickly, but only within a limited domain of convergence; the downward gradient methods, on the other hand, have a much wider domain of convergence, but converge extremely slowly near the minimum. L-M has the advantages of both procedures: relative insensitivity to initial parameters and rapid convergence. Scientists, when wanting an answer quickly, will fit data using L-M, get an answer, and move on. 
Only those who are aware of the bias issues will bother to fit using the more appropriate MLE for Poisson deviates. However, since there is a simple, analytical formula for the appropriate MLE measure for Poisson deviates, it is inexcusable that least squares estimators are used almost exclusively when fitting event counting histograms. Ways have been found to use successive non-linear least squares fits to obtain similarly unbiased results, but this procedure is justified only by simulation, must be re-tested when conditions change significantly, and requires two successive fits. There is a great need for a fitting routine for the MLE for Poisson deviates with convergence domains and rates comparable to those of non-linear least squares L-M fitting. We show in this report that a simple way to achieve that goal is to use the L-M fitting procedure to minimize not the least squares measure, but the MLE for Poisson deviates.
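The procedure described here, running the Levenberg-Marquardt machinery on the Poisson MLE measure instead of least squares, can be sketched as follows. This is an illustrative reimplementation under our own simplifications, not the authors' code: a single-exponential model, a Fisher-type approximation to the Hessian of the Poisson deviance, and noiseless synthetic "counts".

```python
import math

def model(p, ts):
    A, tau = p
    return [A * math.exp(-t / tau) for t in ts]

def deviance(d, m):
    """Poisson MLE measure: 2*sum[m_i - d_i + d_i*ln(d_i/m_i)]."""
    s = 0.0
    for di, mi in zip(d, m):
        if mi <= 0:
            return float("inf")          # invalid model value: reject trial
        s += mi - di + (di * math.log(di / mi) if di > 0 else 0.0)
    return 2.0 * s

def lm_poisson(ts, d, p0, iters=200):
    p, lam = list(p0), 1e-3
    best = deviance(d, model(p, ts))
    for _ in range(iters):
        A, tau = p
        m = model(p, ts)
        J = [(mi / A, mi * t / tau**2) for mi, t in zip(m, ts)]  # dm/dA, dm/dtau
        g = [2 * sum((1 - di / mi) * Ji[k] for di, mi, Ji in zip(d, m, J))
             for k in (0, 1)]                                    # deviance gradient
        H = [[2 * sum((di / mi**2) * Ji[a] * Ji[b]
                      for di, mi, Ji in zip(d, m, J)) for b in (0, 1)]
             for a in (0, 1)]             # Gauss-Newton/Fisher Hessian approximation
        Hd = [[H[a][b] + (lam * H[a][a] if a == b else 0.0) for b in (0, 1)]
              for a in (0, 1)]            # L-M damping via the regularization lam
        det = Hd[0][0] * Hd[1][1] - Hd[0][1] * Hd[1][0]
        step = [(-g[0] * Hd[1][1] + g[1] * Hd[0][1]) / det,
                (-g[1] * Hd[0][0] + g[0] * Hd[1][0]) / det]
        trial = [p[0] + step[0], p[1] + step[1]]
        f = deviance(d, model(trial, ts)) if min(trial) > 0 else float("inf")
        if f < best:
            p, best, lam = trial, f, lam / 10   # accept: toward Gauss-Newton
        else:
            lam *= 10                           # reject: toward gradient descent
    return p

ts = [0.5 * i for i in range(20)]
d = model((120.0, 2.5), ts)                     # noiseless synthetic "histogram"
A_fit, tau_fit = lm_poisson(ts, d, (60.0, 5.0))
print(A_fit, tau_fit)
```

The only changes relative to textbook least-squares L-M are the objective (the Poisson deviance) and the matching gradient/approximate Hessian, which is precisely the swap the abstract advocates.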
Mapping axonal density and average diameter using non-monotonic time-dependent gradient-echo MRI
NASA Astrophysics Data System (ADS)
Nunes, Daniel; Cruz, Tomás L.; Jespersen, Sune N.; Shemesh, Noam
2017-04-01
White Matter (WM) microstructures, such as axonal density and average diameter, are crucial to the normal function of the Central Nervous System (CNS) as they are closely related with axonal conduction velocities. Conversely, disruptions of these microstructural features may result in severe neurological deficits, suggesting that their noninvasive mapping could be an important step towards diagnosing and following pathophysiology. Whereas diffusion-based MRI methods have been proposed to map these features, they typically entail the application of powerful gradients, which are rarely available in the clinic, or extremely long acquisition schemes to extract information from parameter-intensive models. In this study, we suggest that simple and time-efficient multi-gradient-echo (MGE) MRI can be used to extract the axon density from susceptibility-driven non-monotonic decay in the time-dependent signal. We show, both theoretically and with simulations, that a non-monotonic signal decay will occur for multi-compartmental microstructures - such as axons and extra-axonal spaces, which were here used as a simple model for the microstructure - and that, for axons parallel to the main magnetic field, the axonal density can be extracted. We then experimentally demonstrate in ex-vivo rat spinal cords that their different tracts - characterized by different microstructures - can be clearly contrasted using the MGE-derived maps. When the quantitative results are compared against ground-truth histology, they reflect the axonal fraction (though with a bias, as evident from Bland-Altman analysis); the extra-axonal fraction can be estimated as well. The results suggest that our model is oversimplified, yet they also evidence the potential and usefulness of the approach for mapping underlying microstructures using a simple and time-efficient MRI sequence. 
We further show that a simple general-linear-model can predict the average axonal diameters from the four model parameters, and map these average axonal diameters in the spinal cords. While clearly further modelling and theoretical developments are necessary, we conclude that salient WM microstructural features can be extracted from simple, SNR-efficient multi-gradient echo MRI, and that this paves the way towards easier estimation of WM microstructure in vivo.
Mapping axonal density and average diameter using non-monotonic time-dependent gradient-echo MRI.
Nunes, Daniel; Cruz, Tomás L; Jespersen, Sune N; Shemesh, Noam
2017-04-01
White Matter (WM) microstructures, such as axonal density and average diameter, are crucial to the normal function of the Central Nervous System (CNS) as they are closely related with axonal conduction velocities. Conversely, disruptions of these microstructural features may result in severe neurological deficits, suggesting that their noninvasive mapping could be an important step towards diagnosing and following pathophysiology. Whereas diffusion-based MRI methods have been proposed to map these features, they typically entail the application of powerful gradients, which are rarely available in the clinic, or extremely long acquisition schemes to extract information from parameter-intensive models. In this study, we suggest that simple and time-efficient multi-gradient-echo (MGE) MRI can be used to extract the axon density from susceptibility-driven non-monotonic decay in the time-dependent signal. We show, both theoretically and with simulations, that a non-monotonic signal decay will occur for multi-compartmental microstructures - such as axons and extra-axonal spaces, which were here used as a simple model for the microstructure - and that, for axons parallel to the main magnetic field, the axonal density can be extracted. We then experimentally demonstrate in ex-vivo rat spinal cords that their different tracts - characterized by different microstructures - can be clearly contrasted using the MGE-derived maps. When the quantitative results are compared against ground-truth histology, they reflect the axonal fraction (though with a bias, as evident from Bland-Altman analysis); the extra-axonal fraction can be estimated as well. The results suggest that our model is oversimplified, yet they also evidence the potential and usefulness of the approach for mapping underlying microstructures using a simple and time-efficient MRI sequence. 
We further show that a simple general-linear-model can predict the average axonal diameters from the four model parameters, and map these average axonal diameters in the spinal cords. While clearly further modelling and theoretical developments are necessary, we conclude that salient WM microstructural features can be extracted from simple, SNR-efficient multi-gradient echo MRI, and that this paves the way towards easier estimation of WM microstructure in vivo. Copyright © 2017 Elsevier Inc. All rights reserved.
Toward a Model-Based Predictive Controller Design in Brain–Computer Interfaces
Kamrunnahar, M.; Dias, N. S.; Schiff, S. J.
2013-01-01
A first step in designing a robust and optimal model-based predictive controller (MPC) for brain–computer interface (BCI) applications is presented in this article. An MPC has the potential to achieve improved BCI performance compared to the performance achieved by current ad hoc, non-model-based filter applications. The parameters for designing the controller were extracted as model-based features from motor-imagery task-related human scalp electroencephalography. Although the parameters can be generated from any model, linear or non-linear, we here adopted a simple autoregressive model that has well-established applications in BCI task discrimination. It was shown that the parameters generated for the controller design can also be used for motor-imagery task discrimination, with performance (8–23% task discrimination errors) comparable to the discrimination performance of commonly used features such as frequency-specific band powers and the AR model parameters used directly. An optimal MPC has significant implications for high-performance BCI applications. PMID:21267657
Toward a model-based predictive controller design in brain-computer interfaces.
Kamrunnahar, M; Dias, N S; Schiff, S J
2011-05-01
A first step in designing a robust and optimal model-based predictive controller (MPC) for brain-computer interface (BCI) applications is presented in this article. An MPC has the potential to achieve improved BCI performance compared to the performance achieved by current ad hoc, non-model-based filter applications. The parameters for designing the controller were extracted as model-based features from motor-imagery task-related human scalp electroencephalography. Although the parameters can be generated from any model, linear or non-linear, we here adopted a simple autoregressive model that has well-established applications in BCI task discrimination. It was shown that the parameters generated for the controller design can also be used for motor-imagery task discrimination, with performance (8-23% task discrimination errors) comparable to the discrimination performance of commonly used features such as frequency-specific band powers and the AR model parameters used directly. An optimal MPC has significant implications for high-performance BCI applications.
Croft, Stephen; Burr, Thomas Lee; Favalli, Andrea; ...
2015-12-10
We report that the declared linear density of 238U and 235U in fresh low enriched uranium light water reactor fuel assemblies can be verified for nuclear safeguards purposes using a neutron coincidence counter collar in passive and active mode, respectively. The active mode calibration of the Uranium Neutron Collar – Light water reactor fuel (UNCL) instrument is normally performed using a non-linear fitting technique. The fitting technique relates the measured neutron coincidence rate (the predictor) to the linear density of 235U (the response) in order to estimate model parameters of the nonlinear Padé equation, which traditionally is used to model the calibration data. Alternatively, following a simple data transformation, the fitting can also be performed using standard linear fitting methods. This paper compares the performance of the nonlinear technique to that of the linear technique, using a range of possible error variance magnitudes in the measured neutron coincidence rate. We develop the required formalism and then apply the traditional (nonlinear) and alternative (linear) approaches to the same experimental and corresponding simulated representative datasets. Lastly, we find that, in this context, because of the magnitude of the errors in the predictor, it is preferable not to transform to a linear model, and it is preferable not to adjust for the errors in the predictor when inferring the model parameters.
Nonlinear resonances in linear segmented Paul trap of short central segment.
Kłosowski, Łukasz; Piwiński, Mariusz; Pleskacz, Katarzyna; Wójtewicz, Szymon; Lisak, Daniel
2018-03-23
A linear segmented Paul trap system has been prepared for ion mass spectroscopy experiments. A non-standard approach to the stability of trapped ions is applied to explain some effects observed with ensembles of calcium ions. The trap's stability diagram is extended to a 3-dimensional one using an additional ∆a stability parameter besides the standard q and a parameters. Nonlinear resonances in (q, ∆a) diagrams are observed and described with a proposed model. The resonance lines have been identified using simple simulations and by comparing the numerical and experimental results. The phenomenon can be applied in electron-impact ionization experiments for mass identification of the obtained ions or for purification of their ensembles. This article is protected by copyright. All rights reserved.
NASA Astrophysics Data System (ADS)
Good, Peter; Andrews, Timothy; Chadwick, Robin; Dufresne, Jean-Louis; Gregory, Jonathan M.; Lowe, Jason A.; Schaller, Nathalie; Shiogama, Hideo
2016-11-01
nonlinMIP provides experiments that account for state-dependent regional and global climate responses. The experiments have two main applications: (1) to focus understanding of responses to CO2 forcing on states relevant to specific policy or scientific questions (e.g. change under low-forcing scenarios, the benefits of mitigation, or from past cold climates to the present day), or (2) to understand the state dependence (non-linearity) of climate change - i.e. why doubling the forcing may not double the response. State dependence (non-linearity) of responses can be large at regional scales, with important implications for understanding mechanisms and for general circulation model (GCM) emulation techniques (e.g. energy balance models and pattern-scaling methods). However, these processes are hard to explore using traditional experiments, which explains why they have had so little attention in previous studies. Some single model studies have established novel analysis principles and some physical mechanisms. There is now a need to explore robustness and uncertainty in such mechanisms across a range of models (point 2 above), and, more broadly, to focus work on understanding the response to CO2 on climate states relevant to specific policy/science questions (point 1). nonlinMIP addresses this using a simple, small set of CO2-forced experiments that are able to separate linear and non-linear mechanisms cleanly, with a good signal-to-noise ratio - while being demonstrably traceable to realistic transient scenarios. The design builds on the CMIP5 (Coupled Model Intercomparison Project Phase 5) and CMIP6 DECK (Diagnostic, Evaluation and Characterization of Klima) protocols, and is centred around a suite of instantaneous atmospheric CO2 change experiments, with a ramp-up-ramp-down experiment to test traceability to gradual forcing scenarios. In all cases the models are intended to be used with CO2 concentrations rather than CO2 emissions as the input. 
The understanding gained will help interpret the spread in policy-relevant scenario projections. Here we outline the basic physical principles behind nonlinMIP, and the method of establishing traceability from abruptCO2 to gradual forcing experiments, before detailing the experimental design, and finally some analysis principles. The test of traceability from abruptCO2 to transient experiments is recommended as a standard analysis within the CMIP5 and CMIP6 DECK protocols.
Light and short arc rubs in rotating machines: Experimental tests and modelling
NASA Astrophysics Data System (ADS)
Pennacchi, P.; Bachschmid, N.; Tanzi, E.
2009-10-01
Rotor-to-stator rub is a non-linear phenomenon which has been analyzed many times in the rotordynamics literature, but very often these studies are devoted simply to highlighting non-linearities, using very simple rotors, rather than to presenting reliable models. However, rotor-to-stator rub is actually one of the most common faults during the operation of rotating machinery. The frequency of its occurrence is increasing due to the trend of reducing the radial clearance between the seal and the rotor in modern turbine units, pumps and compressors in order to increase efficiency. Often the rub occurs between rotor and seals, and the analysis of the phenomenon cannot disregard the different relative stiffnesses involved. This paper presents some experimental results obtained by means of a test rig in which the rub conditions of real machines are reproduced. In particular, short arc rubs are considered, and the shaft is stiffer than the obstacle. A model suitable for real rotating machinery is then presented, and its simulations are compared with the experimental results. The model is able to reproduce the behaviour of the test rig.
Turbulent shear layers in confining channels
NASA Astrophysics Data System (ADS)
Benham, Graham P.; Castrejon-Pita, Alfonso A.; Hewitt, Ian J.; Please, Colin P.; Style, Rob W.; Bird, Paul A. D.
2018-06-01
We present a simple model for the development of shear layers between parallel flows in confining channels. Such flows are important across a wide range of topics from diffusers, nozzles and ducts to urban air flow and geophysical fluid dynamics. The model approximates the flow in the shear layer as a linear profile separating uniform-velocity streams. Both the channel geometry and wall drag affect the development of the flow. The model shows good agreement with both particle image velocimetry experiments and computational turbulence modelling. The simplicity and low computational cost of the model allows it to be used for benchmark predictions and design purposes, which we demonstrate by investigating optimal pressure recovery in diffusers with non-uniform inflow.
Chaos theory for clinical manifestations in multiple sclerosis.
Akaishi, Tetsuya; Takahashi, Toshiyuki; Nakashima, Ichiro
2018-06-01
Multiple sclerosis (MS) is a demyelinating disease which characteristically shows repeated relapses and remissions occurring irregularly in the central nervous system. At present, the pathological mechanism of MS is unknown, and we do not have any theories or mathematical models to explain its disseminated patterns in time and space. In this paper, we present a new theoretical model from a complex-systems viewpoint, using a chaos model, to reproduce and explain the non-linear clinical and pathological manifestations of MS. First, we adopted a discrete logistic equation with non-linear dynamics to prepare a scalar quantity for the strength of the pathogenic factor at a specific location of the central nervous system at a specific time, reflecting the negative feedback in immunity. Then, we set distinct minimum thresholds in the above-mentioned scalar quantity for demyelination possibly causing clinical relapses and for cerebral atrophy. With this simple model, we could theoretically reproduce all the subtypes of relapsing-remitting MS, primary progressive MS and secondary progressive MS. With the chaos-theoretic sensitivity to initial conditions and to minute changes in parameters, we could also reproduce the spatial dissemination. Such chaotic behavior can be reproduced with other similar upward-convex functions given appropriate sets of initial conditions and parameters. In conclusion, by applying chaos theory to the three-dimensional scalar field of the central nervous system, we can reproduce the non-linear outcome of the clinical course and explain the unsolved disseminations in time and space of MS patients. Copyright © 2018 Elsevier Ltd. All rights reserved.
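The core behaviour invoked here, an upward-convex map with negative feedback producing irregular "relapses" and sensitivity to initial conditions, can be sketched with the discrete logistic map. This is an illustration of the generic mechanism only, not the authors' three-dimensional scalar-field model; the parameter r, the initial values and the relapse threshold are invented.

```python
def logistic_series(x0, r=4.0, n=200):
    """Iterate the discrete logistic map x -> r*x*(1-x)."""
    xs, x = [], x0
    for _ in range(n):
        x = r * x * (1 - x)        # upward-convex map with negative feedback
        xs.append(x)
    return xs

a = logistic_series(0.20)
b = logistic_series(0.20 + 1e-9)   # minutely different initial condition
divergence = max(abs(u - v) for u, v in zip(a, b))

relapses = [i for i, x in enumerate(a) if x > 0.9]   # "relapse" threshold
gaps = [j - i for i, j in zip(relapses, relapses[1:])]
print(divergence, gaps[:10])
```

A 1e-9 perturbation of the initial state grows to order one (sensitivity to initial conditions), and the intervals between threshold crossings are irregular, mirroring the irregular dissemination in time that the model aims to reproduce.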
Optimizing cost-efficiency in mean exposure assessment - cost functions reconsidered
2011-01-01
Background: Reliable exposure data is a vital concern in medical epidemiology and intervention studies. The present study addresses the need of the medical researcher to spend monetary resources devoted to exposure assessment with optimal cost-efficiency, i.e. to obtain the best possible statistical performance at a specified budget. A few previous studies have suggested mathematical optimization procedures based on very simple cost models; this study extends the methodology to cover even non-linear cost scenarios. Methods: Statistical performance, i.e. efficiency, was assessed in terms of the precision of an exposure mean value, as determined in a hierarchical, nested measurement model with three stages. Total costs were assessed using a corresponding three-stage cost model, allowing costs at each stage to vary non-linearly with the number of measurements according to a power function. Using these models, procedures for identifying the optimally cost-efficient allocation of measurements under a constrained budget were developed and applied to 225 scenarios combining different sizes of unit costs, cost function exponents, and exposure variance components. Results: Explicit mathematical rules for identifying the optimal allocation could be developed when cost functions were linear, while non-linear cost functions implied that parts of, or the entire, optimization procedure had to be carried out using numerical methods. For many of the 225 scenarios, the optimal strategy consisted in measuring on only one occasion from each of as many subjects as allowed by the budget. Significant deviations from this principle occurred if costs for recruiting subjects were large compared to costs for setting up measurement occasions, and, at the same time, the between-subjects to within-subject variance ratio was small. In these cases, non-linearities had a profound influence on the optimal allocation and on the eventual size of the exposure data set. 
Conclusions: The analysis procedures developed in the present study can be used for informed design of exposure assessment strategies, provided that data are available on exposure variability and the costs of collecting and processing data. The present shortage of empirical evidence on costs and appropriate cost functions, however, impedes general conclusions on optimal exposure measurement strategies in different epidemiologic scenarios. PMID:21600023
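The central optimisation can be sketched with a brute-force search over a two-stage version of the problem (a simplification of the paper's three-stage model with power-function costs; all unit costs, variances and the budget below are invented): minimise Var(mean) = var_between/n + var_within/(n·m) for n subjects measured m times each, subject to the budget constraint.

```python
def optimal_allocation(var_between, var_within, cost_subject, cost_meas,
                       budget, n_max=100, m_max=100):
    """Brute-force the (n subjects, m measurements each) allocation that
    minimises Var(mean) = var_between/n + var_within/(n*m) within budget."""
    best = None
    for n in range(1, n_max + 1):
        for m in range(1, m_max + 1):
            if n * (cost_subject + cost_meas * m) > budget:
                break                      # cost grows with m: no larger m fits
            v = var_between / n + var_within / (n * m)
            if best is None or v < best[0]:
                best = (v, n, m)
    return best

# Cheap subjects: sample as many subjects as possible, once each
v1, n1, m1 = optimal_allocation(1.0, 1.0, cost_subject=2, cost_meas=10, budget=200)
# Expensive subjects, small between/within variance ratio: repeat measurements
v2, n2, m2 = optimal_allocation(0.1, 1.0, cost_subject=50, cost_meas=1, budget=200)
print((n1, m1), (n2, m2))
```

The two scenarios reproduce the qualitative finding of the abstract: one measurement per subject is optimal when recruiting subjects is cheap, while expensive recruitment combined with a small between-subjects to within-subject variance ratio shifts the optimum toward repeated measurements on few subjects.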
Optimizing cost-efficiency in mean exposure assessment--cost functions reconsidered.
Mathiassen, Svend Erik; Bolin, Kristian
2011-05-21
Reliable exposure data is a vital concern in medical epidemiology and intervention studies. The present study addresses the needs of the medical researcher to spend monetary resources devoted to exposure assessment with an optimal cost-efficiency, i.e. obtain the best possible statistical performance at a specified budget. A few previous studies have suggested mathematical optimization procedures based on very simple cost models; this study extends the methodology to cover even non-linear cost scenarios. Statistical performance, i.e. efficiency, was assessed in terms of the precision of an exposure mean value, as determined in a hierarchical, nested measurement model with three stages. Total costs were assessed using a corresponding three-stage cost model, allowing costs at each stage to vary non-linearly with the number of measurements according to a power function. Using these models, procedures for identifying the optimally cost-efficient allocation of measurements under a constrained budget were developed, and applied to 225 scenarios combining different sizes of unit costs, cost function exponents, and exposure variance components. Explicit mathematical rules for identifying optimal allocation could be developed when cost functions were linear, while non-linear cost functions implied that parts of or the entire optimization procedure had to be carried out using numerical methods. For many of the 225 scenarios, the optimal strategy consisted in measuring on only one occasion from each of as many subjects as allowed by the budget. Significant deviations from this principle occurred if costs for recruiting subjects were large compared to costs for setting up measurement occasions, and, at the same time, the between-subjects to within-subject variance ratio was small. In these cases, non-linearities had a profound influence on the optimal allocation and on the eventual size of the exposure data set. 
The analysis procedures developed in the present study can be used for informed design of exposure assessment strategies, provided that data are available on exposure variability and on the costs of collecting and processing data. The present shortage of empirical evidence on costs and appropriate cost functions, however, impedes general conclusions on optimal exposure measurement strategies in different epidemiologic scenarios.
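The allocation search described above can be sketched numerically. The sketch below is a minimal brute-force version assuming a balanced three-stage design; the unit costs, exponents, and variance components are hypothetical illustrations, and the function name is ours, not the authors'.

```python
from itertools import product

def best_allocation(budget, unit_costs, exponents, variances, n_max=60):
    """Brute-force search for the allocation (subjects n1, occasions per
    subject n2, measurements per occasion n3) that minimizes the variance
    of the exposure mean under a fixed budget, with power-function costs."""
    c1, c2, c3 = unit_costs
    a1, a2, a3 = exponents
    s1, s2, s3 = variances   # between-subject, between-occasion, within-occasion
    best = None
    for n1, n2, n3 in product(range(1, n_max + 1), repeat=3):
        # each stage's cost varies as a power of the number of units at that stage
        cost = c1 * n1**a1 + c2 * (n1 * n2)**a2 + c3 * (n1 * n2 * n3)**a3
        if cost > budget:
            continue
        # variance of the grand mean in a three-stage nested design
        var_mean = s1 / n1 + s2 / (n1 * n2) + s3 / (n1 * n2 * n3)
        if best is None or var_mean < best[0]:
            best = (var_mean, (n1, n2, n3))
    return best

# Linear costs and a large between-subject variance: the optimum is one
# occasion and one measurement per subject, with as many subjects as fit.
var_mean, alloc = best_allocation(
    budget=100.0,
    unit_costs=(2.0, 1.0, 0.5),
    exponents=(1.0, 1.0, 1.0),
    variances=(4.0, 1.0, 1.0),
)
print(alloc)   # (28, 1, 1)
```

Raising the cost exponents or the recruitment cost relative to the occasion cost shifts the optimum toward repeated measurements on fewer subjects, mirroring the deviations described in the abstract.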
Non-Linear Cosmological Power Spectra in Real and Redshift Space
NASA Technical Reports Server (NTRS)
Taylor, A. N.; Hamilton, A. J. S.
1996-01-01
We present an expression for the non-linear evolution of the cosmological power spectrum based on Lagrangian trajectories. This is simplified using the Zel'dovich approximation to trace particle displacements, assuming Gaussian initial conditions. The model is found to exhibit the transfer of power from large to small scales expected in self-gravitating fields. Some exact solutions are found for power-law initial spectra. We have extended this analysis into redshift space and found a solution for the non-linear, anisotropic redshift-space power spectrum in the limit of plane-parallel redshift distortions. The quadrupole-to-monopole ratio is calculated for the case of power-law initial spectra. We find that the shape of this ratio depends on the shape of the initial spectrum, but when scaled to linear theory depends only weakly on the redshift-space distortion parameter, beta. The point of zero-crossing of the quadrupole, k(sub 0), is found to obey a simple scaling relation, and we calculate this scale in the Zel'dovich approximation. This model is found to be in good agreement with a series of N-body simulations on scales down to the zero-crossing of the quadrupole, although the wavenumber at zero-crossing is underestimated. These results are applied to the quadrupole-to-monopole ratio found in the merged QDOT plus 1.2-Jy IRAS redshift survey. Using a likelihood technique we have estimated that the distortion parameter is constrained to be beta greater than 0.5 at the 95 percent level. Our results are fairly insensitive to the local primordial spectral slope, but the likelihood analysis suggests n = -2 in the translinear regime. The zero-crossing scale of the quadrupole is k(sub 0) = 0.5 +/- 0.1 h Mpc(exp -1) and from this we infer that the amplitude of clustering is sigma(sub 8) = 0.7 +/- 0.05.
We suggest that the success of this model is due to non-linear redshift-space effects arising from infall on to caustics, and is not dominated by virialized cluster cores. The latter should start to dominate on scales below the zero-crossing of the quadrupole, where our model breaks down.
Magnetic suppression of turbulence and the star formation activity of molecular clouds
NASA Astrophysics Data System (ADS)
Zamora-Avilés, Manuel; Vázquez-Semadeni, Enrique; Körtgen, Bastian; Banerjee, Robi; Hartmann, Lee
2018-03-01
We present magnetohydrodynamic simulations aimed at studying the effect of the magnetic suppression of turbulence (generated through various instabilities during the formation of molecular clouds by converging flows) on the subsequent star formation (SF) activity. We study four magnetically supercritical models with magnetic field strengths B = 0, 1, 2, and 3 μG (corresponding to mass-to-flux ratios of ∞, 4.76, 2.38, and 1.59 times the critical value), with the magnetic field initially aligned with the flows. We find that, for increasing magnetic field strength, the clouds formed tend to be more massive, denser, less turbulent, and have higher SF activity. This causes the onset of SF activity in the non-magnetic or more weakly magnetized cases to be delayed by a few Myr in comparison to the more strongly magnetized cases. We attribute this behaviour to the suppression of the non-linear thin-shell instability (NTSI) by the magnetic field, previously found by Heitsch and coworkers. This result is contrary to the standard notion that the magnetic field provides support to the clouds, thus reducing their star formation rate. However, our result is a completely non-linear one, and could not be foreseen from simple linear considerations.
Non-rigid Motion Correction in 3D Using Autofocusing with Localized Linear Translations
Cheng, Joseph Y.; Alley, Marcus T.; Cunningham, Charles H.; Vasanawala, Shreyas S.; Pauly, John M.; Lustig, Michael
2012-01-01
MR scans are sensitive to motion effects due to the scan duration. To properly suppress artifacts from non-rigid body motion, complex models with elements such as translation, rotation, shear, and scaling have been incorporated into the reconstruction pipeline. However, these techniques are computationally intensive and difficult to implement for online reconstruction. On a sufficiently small spatial scale, the different types of motion can be well-approximated as simple linear translations. This formulation allows for a practical autofocusing algorithm that locally minimizes a given motion metric, more specifically the proposed localized gradient-entropy metric. To reduce the vast search space for an optimal solution, possible motion paths are limited to the motion measured from multi-channel navigator data. The novel navigation strategy is based on so-called "Butterfly" navigators, which are modifications to the spin-warp sequence that provide intrinsic translational motion information with negligible overhead. With a 32-channel abdominal coil, a sufficient number of motion measurements was found to approximate possible linear motion paths for every image voxel. The correction scheme was applied to free-breathing abdominal patient studies. In these scans, a reduction in artifacts from complex, non-rigid motion was observed. PMID:22307933
Ho, Yuh-Shan
2006-01-01
A comparison was made of the linear least-squares method and a trial-and-error non-linear method for the widely used pseudo-second-order kinetic model applied to the sorption of cadmium onto ground-up tree fern. Four pseudo-second-order kinetic linear equations are discussed. Kinetic parameters obtained from the four linear equations using the linear method differed, but they were identical when using the non-linear method. The type 1 pseudo-second-order linear kinetic model had the highest coefficient of determination. Results show that the non-linear method may be a better way to obtain the desired parameters.
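The contrast between the linearized and non-linear fits can be sketched in a few lines. The rate law and the type 1 linearization below follow the standard pseudo-second-order form q(t) = qe^2 k t / (1 + qe k t); the data are synthetic (noise-free, so both methods agree here), and the grid search is only a stand-in for the trial-and-error minimization.

```python
def pso_q(t, qe, k):
    # pseudo-second-order uptake: q(t) = qe^2 * k * t / (1 + qe * k * t)
    return qe * qe * k * t / (1.0 + qe * k * t)

def fit_type1_linear(ts, qs):
    # type 1 linearization: t/q = 1/(k*qe^2) + t/qe,
    # fitted by ordinary least squares of y = t/q against t
    ys = [t / q for t, q in zip(ts, qs)]
    n = len(ts)
    mx, my = sum(ts) / n, sum(ys) / n
    slope = (sum((t - mx) * (y - my) for t, y in zip(ts, ys))
             / sum((t - mx) ** 2 for t in ts))
    intercept = my - slope * mx
    qe = 1.0 / slope
    return qe, 1.0 / (intercept * qe * qe)

def fit_nonlinear(ts, qs, qe_grid, k_grid):
    # crude non-linear least squares by grid search (a stand-in for the
    # trial-and-error minimization of the sum of squared residuals)
    def sse(qe, k):
        return sum((q - pso_q(t, qe, k)) ** 2 for t, q in zip(ts, qs))
    return min(((qe, k) for qe in qe_grid for k in k_grid),
               key=lambda p: sse(*p))

# synthetic data from qe = 2.0, k = 0.5; with real, noisy data the
# linearization distorts the error structure and the two fits diverge
ts = [float(t) for t in range(1, 11)]
qs = [pso_q(t, 2.0, 0.5) for t in ts]
qe_lin, k_lin = fit_type1_linear(ts, qs)
qe_nl, k_nl = fit_nonlinear(ts, qs,
                            [1.8 + 0.01 * i for i in range(41)],
                            [0.4 + 0.01 * i for i in range(21)])
```

On noise-free data both routines recover qe = 2.0 and k = 0.5; the abstract's point is that the four linearizations weight measurement noise differently, whereas the direct non-linear fit does not.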
Cosmological Perturbation Theory and the Spherical Collapse model - I. Gaussian initial conditions
NASA Astrophysics Data System (ADS)
Fosalba, Pablo; Gaztanaga, Enrique
1998-12-01
We present a simple and intuitive approximation for solving the perturbation theory (PT) of small cosmic fluctuations. We consider only the spherically symmetric or monopole contribution to the PT integrals, which yields the exact result for tree-graphs (i.e. at leading order). We find that the non-linear evolution in Lagrangian space is then given by a simple local transformation over the initial conditions, although it is not local in Eulerian space. This transformation is found to be described by the spherical collapse (SC) dynamics, as it is the exact solution in the shearless (and therefore local) approximation in Lagrangian space. Taking advantage of this property, it is straightforward to derive the one-point cumulants, xi_J, for both the unsmoothed and smoothed density fields to arbitrary order in the perturbative regime. To leading order this reproduces, and provides a simple explanation for, the exact results obtained by Bernardeau. We then show that the SC model leads to accurate estimates for the next corrective terms when compared with the results derived in the exact perturbation theory making use of the loop calculations. The agreement is within a few per cent for the hierarchical ratios S_J = xi_J/xi_2^(J-1). We compare our analytic results with N-body simulations, which turn out to be in very good agreement up to scales where sigma ~ 1. A similar treatment is presented to estimate higher order corrections in the Zel'dovich approximation. These results represent a powerful and readily usable tool to produce analytical predictions that describe the gravitational clustering of large-scale structure in the weakly non-linear regime.
Genetic Algorithm Tuned Fuzzy Logic for Gliding Return Trajectories
NASA Technical Reports Server (NTRS)
Burchett, Bradley T.
2003-01-01
The problem of designing and flying a trajectory for successful recovery of a reusable launch vehicle is tackled using fuzzy logic control with genetic algorithm optimization. The plant is approximated by a simplified three-degree-of-freedom non-linear model. A baseline trajectory design and guidance algorithm consisting of several Mamdani-type fuzzy controllers is tuned using a simple genetic algorithm. Preliminary results show that the performance of the overall system improves with genetic algorithm tuning.
Modelling female fertility traits in beef cattle using linear and non-linear models.
Naya, H; Peñagaricano, F; Urioste, J I
2017-06-01
Female fertility traits are key components of the profitability of beef cattle production. However, these traits are difficult and expensive to measure, particularly under extensive pastoral conditions, and consequently, fertility records are in general scarce and somewhat incomplete. Moreover, fertility traits are usually dominated by the effects of herd-year environment, and it is generally assumed that relatively small margins are left for genetic improvement. New ways of modelling genetic variation in these traits are needed. Inspired by the methodological developments made by Prof. Daniel Gianola and co-workers, we applied linear (Gaussian), Poisson, probit (threshold), censored Poisson and censored Gaussian models to three different kinds of endpoints, namely calving success (CS), number of days from first calving (CD) and number of failed oestrus (FE). For models involving FE and CS, non-linear models outperformed their linear counterparts. For models derived from CD, the linear versions displayed better fit than the non-linear counterparts. Non-linear models showed consistently higher estimates of heritability and repeatability in all cases (h^2 < 0.08 and r < 0.13 for linear models; h^2 > 0.23 and r > 0.24 for non-linear models). While additive and permanent environment effects showed highly favourable correlations between all models (>0.789), consistency in selecting the 10% best sires showed important differences, mainly amongst the considered endpoints (FE, CS and CD). Consequently, the endpoints should be considered as modelling different underlying genetic effects, with linear models more appropriate to describe CD and non-linear models better for FE and CS. © 2017 Blackwell Verlag GmbH.
Critical Fluctuations in Cortical Models Near Instability
Aburn, Matthew J.; Holmes, C. A.; Roberts, James A.; Boonstra, Tjeerd W.; Breakspear, Michael
2012-01-01
Computational studies often proceed from the premise that cortical dynamics operate in a linearly stable domain, where fluctuations dissipate quickly and show only short memory. Studies of human electroencephalography (EEG), however, have shown significant autocorrelation at time lags on the scale of minutes, indicating the need to consider regimes where non-linearities influence the dynamics. Statistical properties such as increased autocorrelation length, increased variance, power law scaling, and bistable switching have been suggested as generic indicators of the approach to bifurcation in non-linear dynamical systems. We study temporal fluctuations in a widely-employed computational model (the Jansen–Rit model) of cortical activity, examining the statistical signatures that accompany bifurcations. Approaching supercritical Hopf bifurcations through tuning of the background excitatory input, we find a dramatic increase in the autocorrelation length that depends sensitively on the direction in phase space of the input fluctuations and hence on which neuronal subpopulation is stochastically perturbed. Similar dependence on the input direction is found in the distribution of fluctuation size and duration, which show power law scaling that extends over four orders of magnitude at the Hopf bifurcation. We conjecture that the alignment in phase space between the input noise vector and the center manifold of the Hopf bifurcation is directly linked to these changes. These results are consistent with the possibility of statistical indicators of linear instability being detectable in real EEG time series. However, even in a simple cortical model, we find that these indicators may not necessarily be visible even when bifurcations are present because their expression can depend sensitively on the neuronal pathway of incoming fluctuations. PMID:22952464
Morris, Melody K.; Saez-Rodriguez, Julio; Lauffenburger, Douglas A.; Alexopoulos, Leonidas G.
2012-01-01
Modeling of signal transduction pathways plays a major role in understanding cell function and predicting cellular responses. Mathematical formalisms based on logic are relatively simple but can describe how signals propagate from one protein to the next, and have led to the construction of models that simulate the cell's response to environmental or other perturbations. Constrained fuzzy logic was recently introduced to train models against cell-specific data, resulting in quantitative pathway models of specific cellular behavior. There are two major issues in this pathway optimization: (i) excessive CPU time requirements and (ii) a loosely constrained optimization problem due to lack of data with respect to large signaling pathways. Herein, we address both issues: the former by reformulating the pathway optimization as a regular non-linear optimization problem, and the latter with enhanced algorithms that pre/post-process the signaling network to remove parts that cannot be identified given the experimental conditions. As a case study, we tackle the construction of cell-type-specific pathways in normal and transformed hepatocytes using medium- and large-scale functional phosphoproteomic datasets. The proposed Non Linear Programming (NLP) formulation allows for fast optimization of signaling topologies by combining the versatile nature of logic modeling with state-of-the-art optimization algorithms. PMID:23226239
Effect of Critical Displacement Parameter on Slip Regime at Subduction Fault
NASA Astrophysics Data System (ADS)
Muldashev, Iskander; Sobolev, Stephan
2016-04-01
It is widely accepted that, for simple fault models, the value of the critical displacement parameter (Dc) in the Dieterich-Ruina rate-and-state friction law controls the transition from a stick-slip regime at low Dc to a non-seismic creep regime at large Dc. However, neither the value of the "transition" Dc nor the character of the transition is known for realistic subduction zone settings. Here we investigate the effect of Dc on the slip regime at subduction faults for two setups: a generic model similar to a simple-shear elastic slider under quasistatic loading, and a full subduction model with appropriate geometry and with stress and temperature distributions similar to the setting at the site of the Great Chile Earthquake of 1960. In our modeling we use a finite element numerical technique that employs non-linear elasto-visco-plastic rheology in the entire model domain, with rate-and-state plasticity within the fault zone. The model generates spontaneous earthquake sequences. An adaptive time-step integration procedure varies the time step from 40 seconds during instability (earthquake) and gradually increases it to 5 years during postseismic relaxation. The technique allows us to observe the effect of Dc on the period and magnitude of earthquakes through the cycles. We demonstrate that our modeling results for the generic model are consistent with previous theoretical and numerical modeling results. For the full subduction model we obtain a transition from non-seismic creep to stick-slip regime at Dc of about 20 cm. We will demonstrate and discuss the features of the transition regimes in both the generic and realistic subduction models.
Mining Distance Based Outliers in Near Linear Time with Randomization and a Simple Pruning Rule
NASA Technical Reports Server (NTRS)
Bay, Stephen D.; Schwabacher, Mark
2003-01-01
Defining outliers by their distance to neighboring examples is a popular approach to finding unusual examples in a data set. Recently, much work has been conducted with the goal of finding fast algorithms for this task. We show that a simple nested loop algorithm that in the worst case is quadratic can give near linear time performance when the data is in random order and a simple pruning rule is used. We test our algorithm on real high-dimensional data sets with millions of examples and show that the near linear scaling holds over several orders of magnitude. Our average case analysis suggests that much of the efficiency is because the time to process non-outliers, which are the majority of examples, does not depend on the size of the data set.
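The nested loop with randomization and a pruning rule can be sketched as follows. This is a simplified 1-D version under our own naming, not the authors' implementation: the outlier score is the distance to the k-th nearest neighbour, and a candidate is pruned as soon as that running distance falls below the score of the weakest outlier found so far.

```python
import random

def top_outliers(data, k=3, n_out=3, seed=0):
    """Top n_out points by distance to their k-th nearest neighbour,
    found with a nested loop plus randomization and a pruning rule."""
    pts = list(data)
    random.Random(seed).shuffle(pts)  # random order is what makes pruning effective
    cutoff = 0.0                      # score of the weakest current outlier
    top = []                          # (score, point) pairs, best first
    for i, p in enumerate(pts):
        neigh = []                    # k smallest distances seen so far
        pruned = False
        for j, q in enumerate(pts):
            if i == j:
                continue
            d = abs(p - q)            # 1-D distance; any metric would do
            if len(neigh) < k:
                neigh.append(d)
                neigh.sort()
            elif d < neigh[-1]:
                neigh[-1] = d
                neigh.sort()
            # p's running score can only shrink, so once it drops below
            # the cutoff, p can never enter the top-n list: prune it
            if len(neigh) == k and neigh[-1] < cutoff:
                pruned = True
                break
        if not pruned:
            top.append((neigh[-1], p))
            top.sort(reverse=True)
            top = top[:n_out]
            if len(top) == n_out:
                cutoff = top[-1][0]
    return [p for _, p in top]

# a tight cluster plus three isolated points: the isolated points win
print(sorted(top_outliers([0.0, 0.1, 0.2, 0.3, 0.4, 0.5, -40.0, 50.0, 100.0])))
# prints [-40.0, 50.0, 100.0]
```

The pruning never changes the result, only the cost: non-outliers hit the cutoff after a few comparisons, which is why the expected time per non-outlier does not grow with the data set.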
Rotstein, Horacio G
2014-01-01
We investigate the dynamic mechanisms of generation of subthreshold and phase resonance in two-dimensional linear and linearized biophysical (conductance-based) models, and we extend our analysis to account for the effect of simple, but not necessarily weak, types of nonlinearities. Subthreshold resonance refers to the ability of neurons to exhibit a peak in their voltage amplitude response to oscillatory input currents at a preferred non-zero (resonant) frequency. Phase resonance refers to the ability of neurons to exhibit a zero-phase (or zero-phase-shift) response to oscillatory input currents at a non-zero (phase-resonant) frequency. We adapt the classical phase-plane analysis approach to account for the dynamic effects of oscillatory inputs and develop a tool, the envelope-plane diagram, that captures the role that conductances and time scales play in amplifying the voltage response at the resonant frequency band as compared to smaller and larger frequencies. Using these diagrams, we explain why the resonance phenomena do not necessarily arise from the presence of imaginary eigenvalues at rest, but rather emerge from the interplay of the intrinsic and input time scales. We further explain why an increase in the time-scale separation causes an amplification of the voltage response in addition to shifting the resonant and phase-resonant frequencies. This is of fundamental importance for neural models since neurons typically exhibit a strong separation of time scales. We extend this approach to explain the effects of nonlinearities on both resonance and phase resonance. We demonstrate that nonlinearities in the voltage equation cause amplifications of the voltage response and shifts in the resonant and phase-resonant frequencies that are not predicted by the corresponding linearized model.
The differences between the nonlinear response and the linear prediction increase with increasing levels of the time scale separation between the voltage and the gating variable, and they almost disappear when both equations evolve at comparable rates. In contrast, voltage responses are almost insensitive to nonlinearities located in the gating variable equation. The method we develop provides a framework for the investigation of the preferred frequency responses in three-dimensional and nonlinear neuronal models as well as simple models of coupled neurons.
Stabilizing skateboard speed-wobble with reflex delay.
Varszegi, Balazs; Takacs, Denes; Stepan, Gabor; Hogan, S John
2016-08-01
A simple mechanical model of the skateboard-skater system is analysed, in which the effect of human control is considered by means of a linear proportional-derivative (PD) controller with delay. The equations of motion of this non-holonomic system are neutral delay-differential equations. A linear stability analysis of the rectilinear motion is carried out analytically. It is shown how to vary the control gains with respect to the speed of the skateboard to stabilize the uniform motion. The critical reflex delay of the skater is determined as the function of the speed. Based on this analysis, we present an explanation for the linear instability of the skateboard-skater system at high speed. Moreover, the advantages of standing ahead of the centre of the board are demonstrated from the viewpoint of reflex delay and control gain sensitivity. © 2016 The Author(s).
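The role of reflex delay can be illustrated on a toy unstable plant, not the skateboard-skater model itself (whose governing equations are neutral delay-differential equations): a PD controller acting through a pure delay on the inverted-pendulum-like system theta'' = theta. The gains, delay, and integration scheme below are our own illustrative choices.

```python
def simulate_delayed_pd(tau, p_gain=2.0, d_gain=2.0, dt=0.001, t_end=20.0):
    """Euler simulation of theta'' = theta + u(t - tau) with the delayed
    PD control u = -P*theta - D*theta'. Returns the final angle."""
    n_delay = int(round(tau / dt))
    buffer = [0.0] * max(n_delay, 1)   # ring buffer of past control values
    theta, omega = 1.0, 0.0            # initial tilt, zero angular rate
    for i in range(int(t_end / dt)):
        u_now = -p_gain * theta - d_gain * omega
        # the slot read here was written exactly n_delay steps (tau seconds) ago
        u_applied = buffer[i % n_delay] if n_delay > 0 else u_now
        if n_delay > 0:
            buffer[i % n_delay] = u_now
        omega += dt * (theta + u_applied)
        theta += dt * omega
        if abs(theta) > 1e6:           # clearly diverged: unstable
            return theta
    return theta

# with a short reflex delay the upright motion is stabilized
print(abs(simulate_delayed_pd(tau=0.05)) < 0.1)   # True
```

Increasing tau (or the speed-dependent terms in the full non-holonomic model) eventually destabilizes the loop, which is the mechanism behind the critical reflex delay computed in the paper.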
A higher order panel method for linearized supersonic flow
NASA Technical Reports Server (NTRS)
Ehlers, F. E.; Epton, M. A.; Johnson, F. T.; Magnus, A. E.; Rubbert, P. E.
1979-01-01
The basic integral equations of linearized supersonic theory for an advanced supersonic panel method are derived. Methods using only linearly varying source strength over each panel or only quadratic doublet strength over each panel gave good agreement with analytic solutions over cones and zero-thickness cambered wings. For three-dimensional bodies and wings of general shape, combined source and doublet panels with interior boundary conditions to eliminate the internal perturbations lead to a stable method providing good agreement with experiment. A panel system with all edges contiguous resulted from dividing the basic four-point non-planar panel into eight triangular subpanels, and the doublet strength was made continuous at all edges by a quadratic distribution over each subpanel. Superinclined panels were developed and tested on a simple nacelle and on an airplane model having engine inlets, with excellent results.
ERIC Educational Resources Information Center
Preacher, Kristopher J.; Curran, Patrick J.; Bauer, Daniel J.
2006-01-01
Simple slopes, regions of significance, and confidence bands are commonly used to evaluate interactions in multiple linear regression (MLR) models, and the use of these techniques has recently been extended to multilevel or hierarchical linear modeling (HLM) and latent curve analysis (LCA). However, conducting these tests and plotting the…
Age estimation standards for a Western Australian population using the coronal pulp cavity index.
Karkhanis, Shalmira; Mack, Peter; Franklin, Daniel
2013-09-10
Age estimation is a vital aspect in creating a biological profile and aids investigators by narrowing down potentially matching identities from the available pool. In addition to routine casework, in the present global political scenario, age estimation in living individuals is required in cases of refugees, asylum seekers and human trafficking, and to ascertain the age of criminal responsibility. Thus robust methods that are simple, non-invasive and ethically viable are required. The aim of the present study is, therefore, to test the reliability and applicability of the coronal pulp cavity index method, for the purpose of developing age estimation standards for an adult Western Australian population. A total of 450 orthopantomograms (220 females and 230 males) of Australian individuals were analyzed. Crown and coronal pulp chamber heights were measured in the mandibular left and right premolars, and the first and second molars. These measurements were then used to calculate the tooth coronal index. Data were analyzed using paired-sample t-tests to assess bilateral asymmetry, followed by simple linear and multiple regressions to develop age estimation models. The most accurate age estimation based on a simple linear regression model was with the mandibular right first molar (SEE ±8.271 years). Multiple regression models improved age prediction accuracy considerably, and the most accurate model was with bilateral first and second molars (SEE ±6.692 years). This study represents the first investigation of this method in a Western Australian population and our results indicate that the method is suitable for forensic application. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
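The regression step behind such standards can be sketched as follows. The (TCI, age) pairs below are fabricated for illustration only and do not reproduce the study's data or coefficients; the tooth coronal index falls with age as secondary dentine narrows the pulp chamber, so the fitted slope should be negative.

```python
import math

def ols(xs, ys):
    # ordinary least squares for age = b0 + b1 * TCI
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b1 = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
          / sum((x - mx) ** 2 for x in xs))
    return my - b1 * mx, b1

def see(xs, ys, b0, b1):
    # standard error of the estimate, sqrt(SSE / (n - 2)): the
    # "+/- years" figure quoted for each regression model
    sse = sum((y - (b0 + b1 * x)) ** 2 for x, y in zip(xs, ys))
    return math.sqrt(sse / (len(xs) - 2))

# fabricated (TCI, age) pairs: pulp chamber height shrinks with age
tci = [42.0, 38.0, 35.0, 31.0, 27.0, 24.0, 20.0]
age = [22.0, 29.0, 34.0, 43.0, 51.0, 58.0, 66.0]
b0, b1 = ols(tci, age)
print(b1 < 0, see(tci, age, b0, b1) < 3.0)   # True True
```

Adding further teeth as predictors turns this into the multiple-regression case, which is how the study reduced the SEE from ±8.271 to ±6.692 years.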
Employment of CB models for non-linear dynamic analysis
NASA Technical Reports Server (NTRS)
Klein, M. R. M.; Deloo, P.; Fournier-Sicre, A.
1990-01-01
The non-linear dynamic analysis of large structures is always very time-, effort- and CPU-consuming. Whenever possible, reducing the size of the mathematical model involved is of main importance to speed up the computational procedures. Such reduction can be performed for the parts of the structure which behave linearly. Most of the time, the classical Guyan reduction process is used. For non-linear dynamic processes where the non-linearity is present at interfaces between different structures, Craig-Bampton models can provide very rich information and allow easy selection of the relevant modes with respect to the phenomenon driving the non-linearity. The paper presents the employment of Craig-Bampton models combined with Newmark direct integration for solving non-linear friction problems appearing at the interface between the Hubble Space Telescope and its solar arrays during in-orbit maneuvers. Theory, implementation in the FEM code ASKA, and practical results are shown.
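Craig-Bampton models augment static (Guyan-type) condensation with fixed-interface normal modes; the condensation step itself can be sketched on a toy spring chain. The matrices and the 3-DOF example below are our own illustration, not from the paper.

```python
def condense_one(K, s):
    """Guyan condensation of a single slave DOF s out of stiffness matrix K
    (plain nested lists): K_red[i][j] = K[i][j] - K[i][s]*K[s][j]/K[s][s]
    for the retained (master) DOFs. Exact for static response."""
    keep = [i for i in range(len(K)) if i != s]
    return [[K[i][j] - K[i][s] * K[s][j] / K[s][s] for j in keep]
            for i in keep]

# three equal unit-stiffness springs in a chain fixed at both ends
K = [[2.0, -1.0, 0.0],
     [-1.0, 2.0, -1.0],
     [0.0, -1.0, 2.0]]
Kr = condense_one(K, 1)   # keep DOFs 0 and 2, condense the middle one
print(Kr)                 # [[1.5, -0.5], [-0.5, 1.5]]

# exactness check for statics: a unit force on DOF 0; the reduced 2x2
# system reproduces the full solution u = (0.75, 0.5, 0.25) at kept DOFs
det = Kr[0][0] * Kr[1][1] - Kr[0][1] * Kr[1][0]
u0 = Kr[1][1] * 1.0 / det
u2 = -Kr[1][0] * 1.0 / det
print(u0, u2)             # 0.75 0.25
```

Condensation is exact only for statics; for dynamics the retained interface DOFs must be supplemented with fixed-interface modes, which is precisely the extra information a Craig-Bampton model carries over a Guyan reduction.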
Reynolds stress closure in jet flows using wave models
NASA Technical Reports Server (NTRS)
Morris, Philip J.
1990-01-01
A collection of papers is presented. The outline of this report is as follows. Chapter three contains a description of a weakly non-linear turbulence model that was developed. An essential part of the application of such a closure scheme to general geometry jets is the solution of the local hydrodynamic stability equation for a given jet cross-section. Chapter four describes the conformal mapping schemes used to map such geometries onto a simple computational domain. Chapter five describes the solution of the stability problem for circular, elliptic, and rectangular geometries. In chapter six, linear models for the shock cell structure in non-circular jets are given. The appendices contain reprints of papers also published during this study, including the following topics: (1) instability of elliptic jets; (2) a technique for predicting the shock cell structure in non-circular jets using a vortex sheet model; and (3) the resonant interaction between twin supersonic jets.
Cheng, Ryan R; Hawk, Alexander T; Makarov, Dmitrii E
2013-02-21
Recent experiments showed that the reconfiguration dynamics of unfolded proteins are often adequately described by simple polymer models. In particular, the Rouse model with internal friction (RIF) captures internal friction effects as observed in single-molecule fluorescence correlation spectroscopy (FCS) studies of a number of proteins. Here we use RIF, and its non-free draining analog, Zimm model with internal friction, to explore the effect of internal friction on the rate with which intramolecular contacts can be formed within the unfolded chain. Unlike the reconfiguration times inferred from FCS experiments, which depend linearly on the solvent viscosity, the first passage times to form intramolecular contacts are shown to display a more complex viscosity dependence. We further describe scaling relationships obeyed by contact formation times in the limits of high and low internal friction. Our findings provide experimentally testable predictions that can serve as a framework for the analysis of future studies of contact formation in proteins.
Johnston, K M; Gustafson, P; Levy, A R; Grootendorst, P
2008-04-30
A major, often unstated, concern of researchers carrying out epidemiological studies of medical therapy is the potential impact on validity if estimates of treatment effects are biased due to unmeasured confounders. One technique for obtaining consistent estimates of treatment effects in the presence of unmeasured confounders is instrumental variables analysis (IVA). This technique has been well developed in the econometrics literature and is being increasingly used in epidemiological studies. However, the approach to IVA that is most commonly used in such studies is based on linear models, while many epidemiological applications make use of non-linear models, specifically generalized linear models (GLMs) such as logistic or Poisson regression. Here we present a simple method for applying IVA within the class of GLMs using the generalized method of moments approach. We explore some of the theoretical properties of the method and illustrate its use within both a simulation example and an epidemiological study where unmeasured confounding is suspected to be present. We estimate the effects of beta-blocker therapy on one-year all-cause mortality after an incident hospitalization for heart failure, in the absence of data describing disease severity, which is believed to be a confounder. © 2008 John Wiley & Sons, Ltd.
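In the just-identified linear case, the generalized method of moments estimator reduces to the classic instrumental-variable ratio cov(z, y)/cov(z, x), which is enough to sketch why the instrument removes confounding bias. The data-generating numbers below are our own illustration, not figures from the heart-failure study.

```python
import random

def cov(a, b):
    # sample covariance (population normalization; cancels in the ratios)
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    return sum((x - ma) * (y - mb) for x, y in zip(a, b)) / n

rng = random.Random(1)
n = 20000
z = [rng.gauss(0, 1) for _ in range(n)]       # instrument: affects x only
u = [rng.gauss(0, 1) for _ in range(n)]       # unmeasured confounder
x = [zi + ui for zi, ui in zip(z, u)]         # treatment depends on both
y = [2.0 * xi + ui for xi, ui in zip(x, u)]   # true treatment effect = 2

b_ols = cov(x, y) / cov(x, x)  # biased: absorbs the confounder (plim 2.5 here)
b_iv = cov(z, y) / cov(z, x)   # solves the moment condition E[z(y - b*x)] = 0
print(round(b_ols, 2), round(b_iv, 2))
```

Because z is correlated with x but not with u, the moment condition isolates the causal slope; the paper's contribution is extending this moment-based reasoning from the linear model to GLMs such as logistic and Poisson regression.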
Non-linear aeroelastic prediction for aircraft applications
NASA Astrophysics Data System (ADS)
de C. Henshaw, M. J.; Badcock, K. J.; Vio, G. A.; Allen, C. B.; Chamberlain, J.; Kaynes, I.; Dimitriadis, G.; Cooper, J. E.; Woodgate, M. A.; Rampurawala, A. M.; Jones, D.; Fenwick, C.; Gaitonde, A. L.; Taylor, N. V.; Amor, D. S.; Eccles, T. A.; Denley, C. J.
2007-05-01
Current industrial practice for the prediction and analysis of flutter relies heavily on linear methods, and this has led to overly conservative design and envelope restrictions for aircraft. Although the methods have served the industry well, it is clear that for a number of reasons the inclusion of non-linearity in the mathematical and computational aeroelastic prediction tools is highly desirable. The increase in available and affordable computational resources, together with major advances in algorithms, means that non-linear aeroelastic tools are now viable within the aircraft design and qualification environment. The Partnership for Unsteady Methods in Aerodynamics (PUMA) Defence and Aerospace Research Partnership (DARP) was sponsored in 2002 to conduct research into non-linear aeroelastic prediction methods, and an academic, industry, and government consortium collaborated to address the following objectives: to develop usable methodologies to model and predict the non-linear aeroelastic behaviour of complete aircraft; to evaluate the methodologies on real aircraft problems; and to investigate the effect of non-linearities on aeroelastic behaviour and determine which have the greatest effect on the flutter qualification process. These aims have been effectively met during the course of the programme, and the research outputs include: new methods available to industry for use in the flutter prediction process, together with the appropriate coaching of industry engineers; interesting results in both linear and non-linear aeroelastics, with comprehensive comparison of methods and approaches for challenging problems; and additional embryonic techniques that, with further research, will further improve aeroelastics capability. This paper describes the methods that have been developed and how they are deployable within the industrial environment.
We present a thorough review of the PUMA aeroelastics programme together with a comprehensive review of the relevant research in this domain. This is set within the context of a generic industrial process and the requirements of UK and US aeroelastic qualification. A range of test cases, from simple low-DOF cases to full aircraft, have been used to evaluate and validate the non-linear methods developed and to make comparison with the linear methods in everyday use. These have focused mainly on aerodynamic non-linearity, although some results for structural non-linearity are also presented. The challenges associated with time domain (coupled computational fluid dynamics-computational structural model (CFD-CSM)) methods have been addressed through the development of grid movement, fluid-structure coupling, and control surface movement technologies. Conclusions regarding the accuracy and computational cost of these are presented. The computational cost of time-domain methods, despite substantial improvements in efficiency, remains high. However, significant advances have been made in reduced order methods that allow non-linear behaviour to be modelled at a cost comparable with that of the regular linear methods. Of particular note is a method based on Hopf bifurcation that has reached an appropriate maturity for deployment on real aircraft configurations, though only limited results are presented herein. Results are also presented for dynamically linearised CFD approaches that hold out the possibility of non-linear results at a fraction of the cost of time-coupled CFD-CSM methods.
Local linearisation approaches (higher order harmonic balance and continuation methods) are also presented; these have the advantage that no prior assumption of the nature of the aeroelastic instability is required, but currently these methods are limited to low-DOF problems and it is thought that they will not reach a level of maturity appropriate to real aircraft problems for some years to come. Nevertheless, guidance on the most likely approaches has been derived and this forms the basis for ongoing research. It is important to recognise that aeroelastic design and qualification require a variety of methods applicable at different stages of the process. The methods reported herein are mapped to the process, so that their applicability and complementarity may be understood. Overall, the programme has provided a suite of methods that allow realistic consideration of non-linearity in the aeroelastic design and qualification of aircraft. Deployment of these methods is underway in the industrial environment, but full realisation of their benefit will require appropriate engagement with the standards community so that safety standards may take proper account of the inclusion of non-linearity.
Simple linear and multivariate regression models.
Rodríguez del Águila, M M; Benítez-Parejo, N
2011-01-01
In biomedical research it is common to encounter problems in which we wish to relate a response variable to one or more variables capable of describing its behaviour by means of mathematical models. Regression techniques are used to this effect, in which an equation is determined relating the two variables. While such equations can have different forms, linear equations are the most widely used and the easiest to interpret. The present article describes simple and multiple linear regression models, how they are calculated, and how their applicability assumptions are checked. Illustrative examples are provided, based on the use of the freely accessible R program. Copyright © 2011 SEICAP. Published by Elsevier España. All rights reserved.
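The article's worked examples use R; purely as an illustrative sketch, the same simple linear regression can be reproduced with ordinary least squares in Python. The data values below are hypothetical:

```python
import numpy as np

# Hypothetical data: response y vs. predictor x
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1])

# Simple linear regression y = b0 + b1*x via ordinary least squares
X = np.column_stack([np.ones_like(x), x])      # design matrix [1, x]
b0, b1 = np.linalg.lstsq(X, y, rcond=None)[0]  # OLS estimates

# Coefficient of determination R^2 as a basic goodness-of-fit check
residuals = y - (b0 + b1 * x)
r2 = 1 - residuals @ residuals / ((y - y.mean()) @ (y - y.mean()))
print(b0, b1, r2)
```

In practice one would also check the applicability assumptions the article discusses (linearity, homoscedasticity, normality of residuals) before trusting the estimates.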
NASA Astrophysics Data System (ADS)
Foufoula-Georgiou, E.; Ganti, V. K.; Dietrich, W. E.
2009-12-01
Sediment transport on hillslopes can be thought of as a hopping process in which sediment moves in a series of jumps. A wide range of hillslope processes can move sediment large distances downslope, resulting in a broad tail in the probability density function (PDF) of hopping lengths. Here, we argue that such a broad-tailed distribution calls for a non-local computation of sediment flux, in which the flux is not only a function of local topographic quantities but is an integral quantity that takes into account the upslope topographic “memory” of the point of interest. We encapsulate this non-local behavior in a simple fractional diffusive model that involves fractional (non-integer) derivatives. We present theoretical predictions from this non-local model and demonstrate a non-linear dependence of sediment flux on local gradient, consistent with observations. Further, we demonstrate that the non-local model naturally eliminates the scale-dependence exhibited by any local (linear or non-linear) sediment transport model. An extension to a 2-D framework, in which the fractional derivative can be cast as a mixture of directional derivatives, is discussed together with the implications of introducing non-locality into existing landscape evolution models.
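One standard way to discretise a fractional (non-integer) derivative of the kind invoked here is via Grünwald-Letnikov weights, which make the non-locality explicit: the value at each point is a weighted sum over all upslope points. A minimal sketch (the order `alpha`, grid, and profile are assumed, not from the paper):

```python
import numpy as np

def gl_fractional_derivative(f, alpha, h):
    """Left-sided Grünwald-Letnikov fractional derivative of order alpha.
    Each point sums over ALL upslope values -- a non-local operator."""
    n = len(f)
    w = np.zeros(n)
    w[0] = 1.0
    for k in range(1, n):                    # GL weights, recursive form
        w[k] = w[k - 1] * (k - 1 - alpha) / k
    d = np.array([np.dot(w[:i + 1], f[i::-1]) for i in range(n)])
    return d / h ** alpha

h = 0.1
x = np.arange(0.0, 5.0, h)
f = 2.0 * x                                  # linear profile, slope 2

# Sanity check: alpha = 1 recovers the ordinary (local) first derivative
d1 = gl_fractional_derivative(f, 1.0, h)
print(d1[10])
```

For non-integer `alpha` the weights decay slowly, so distant upslope topography still contributes to the flux, which is exactly the "memory" effect the abstract describes.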
NASA Astrophysics Data System (ADS)
Pipkins, Daniel Scott
Two diverse topics of relevance in modern computational mechanics are treated. The first involves the modeling of linear and non-linear wave propagation in flexible lattice structures. The technique used combines the Laplace Transform with the Finite Element Method (FEM). The procedure is to transform the governing differential equations and boundary conditions into the transform domain, where the FEM formulation is carried out. For linear problems, the transformed differential equations can be solved exactly, hence the method is exact; as a result, each member of the lattice structure is modeled using only one element. For the non-linear problem, the method is no longer exact. The approximation introduced is a spatial discretization of the transformed non-linear terms, which are represented in the transform domain by making use of the complex convolution theorem. A weak formulation of the resulting transformed non-linear equations yields a set of element-level matrix equations. The trial and test functions used in the weak formulation correspond to the exact solution of the linear part of the transformed governing differential equation. Numerical results are presented for both linear and non-linear systems. The linear systems modeled are longitudinal and torsional rods and Bernoulli-Euler and Timoshenko beams. For non-linear systems, a viscoelastic rod and a Von Karman type beam are modeled. The second topic is the analysis of plates and shallow shells undergoing finite deflections by the Field/Boundary Element Method. Numerical results are presented for two plate problems. The first is the bifurcation problem associated with a square plate having free boundaries which is loaded by four self-equilibrating corner forces. The results are compared to two existing numerical solutions of the problem, which differ substantially.
Clark, M.P.; Rupp, D.E.; Woods, R.A.; Tromp-van Meerveld, H.J.; Peters, N.E.; Freer, J.E.
2009-01-01
The purpose of this paper is to identify simple connections between observations of hydrological processes at the hillslope scale and observations of the response of watersheds following rainfall, with a view to building a parsimonious model of catchment processes. The focus is on the well-studied Panola Mountain Research Watershed (PMRW), Georgia, USA. Recession analysis of discharge Q shows that while the relationship between dQ/dt and Q is approximately consistent with a linear reservoir for the hillslope, there is a deviation from linearity that becomes progressively larger with increasing spatial scale. To account for these scale differences conceptual models of streamflow recession are defined at both the hillslope scale and the watershed scale, and an assessment made as to whether models at the hillslope scale can be aggregated to be consistent with models at the watershed scale. Results from this study show that a model with parallel linear reservoirs provides the most plausible explanation (of those tested) for both the linear hillslope response to rainfall and non-linear recession behaviour observed at the watershed outlet. In this model each linear reservoir is associated with a landscape type. The parallel reservoir model is consistent with both geochemical analyses of hydrological flow paths and water balance estimates of bedrock recharge. Overall, this study demonstrates that standard approaches of using recession analysis to identify the functional form of storage-discharge relationships identify model structures that are inconsistent with field evidence, and that recession analysis at multiple spatial scales can provide useful insights into catchment behaviour. Copyright © 2008 John Wiley & Sons, Ltd.
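The parallel-linear-reservoir idea can be illustrated with a two-store sketch: each store alone satisfies the linear recession dQ/dt = -kQ, yet their sum shows the non-linear recession behaviour seen at the watershed outlet. The rate constants and initial discharges below are hypothetical:

```python
import numpy as np

# Two parallel linear reservoirs (hypothetical rate constants, per day)
k1, k2 = 0.05, 0.50          # slow and fast stores (two landscape types)
Q1_0, Q2_0 = 1.0, 5.0        # initial discharge contributions

t = np.linspace(0.0, 60.0, 601)
Q = Q1_0 * np.exp(-k1 * t) + Q2_0 * np.exp(-k2 * t)        # total outflow
dQdt = -(k1 * Q1_0 * np.exp(-k1 * t) + k2 * Q2_0 * np.exp(-k2 * t))

# For a single linear reservoir, -dQ/dt / Q would equal the constant k;
# for the parallel system it drifts from near k2 early to near k1 late,
# so the aggregate dQ/dt-Q relationship is non-linear.
ratio = -dQdt / Q
print(ratio[0], ratio[-1])
```

Plotting `dQdt` against `Q` on log-log axes reproduces the kind of curvature that recession analysis detects at increasing spatial scale.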
''Math in a Can'': Teaching Mathematics and Engineering Design
ERIC Educational Resources Information Center
Narode, Ronald B.
2011-01-01
Using an apparently simple problem, ''Design a cylindrical can that will hold a liter of milk,'' this paper demonstrates how engineering design may facilitate the teaching of the following ideas to secondary students: linear and non-linear relationships; basic geometry of circles, rectangles, and cylinders; unit measures of area and volume;…
Angular-Rate Estimation Using Delayed Quaternion Measurements
NASA Technical Reports Server (NTRS)
Azor, R.; Bar-Itzhack, I. Y.; Harman, R. R.
1999-01-01
This paper presents algorithms for estimating the angular-rate vector of satellites using quaternion measurements. Two approaches are compared: in one, differentiated quaternion measurements yield coarse rate measurements, which are then fed into two different estimators; in the other, the raw quaternion measurements themselves are fed directly into the two estimators. The two estimators rely on the ability to decompose the non-linear part of the rotational dynamics equation of a body into a product of an angular-rate-dependent matrix and the angular-rate vector itself. This non-unique decomposition enables the treatment of the non-linear spacecraft (SC) dynamics model as a linear one and, thus, the application of a PseudoLinear Kalman Filter (PSELIKA). It also enables the application of a special Kalman filter based on the solution of the State Dependent Algebraic Riccati Equation (SDARE) to compute the gain matrix, which eliminates the need to compute the filter covariance matrix recursively. The replacement of the rotational dynamics by a simple Markov model is also examined. In this paper special consideration is given to the problem of delayed quaternion measurements; two solutions to this problem are suggested and tested. Real Rossi X-Ray Timing Explorer (RXTE) data are used to test these algorithms, and results are presented.
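The decomposition underlying the pseudo-linear treatment can be verified directly for torque-free rigid-body dynamics, where the non-linear term -J⁻¹(ω × Jω) factors (non-uniquely) as A(ω)ω. A sketch with an assumed inertia matrix and rate vector, not taken from the paper:

```python
import numpy as np

def skew(v):
    """Cross-product (skew-symmetric) matrix: skew(v) @ u == np.cross(v, u)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

# Hypothetical inertia matrix (kg m^2) and angular-rate vector (rad/s)
J = np.diag([10.0, 12.0, 15.0])
w = np.array([0.02, -0.01, 0.03])

# Non-linear part of Euler's equations:  w_dot = -J^{-1} (w x J w)
nonlinear = -np.linalg.inv(J) @ np.cross(w, J @ w)

# One (non-unique) pseudo-linear factorisation  w_dot = A(w) w
A = -np.linalg.inv(J) @ skew(w) @ J
pseudo_linear = A @ w
print(np.allclose(nonlinear, pseudo_linear))
```

Because A(ω) is evaluated at the current rate estimate, standard Kalman filter machinery can be applied to what is really a non-linear model, which is the essence of the PSELIKA and SDARE approaches described above.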
Kumar, P; Kumar, Dinesh; Rai, K N
2016-08-01
In this article, a non-linear dual-phase-lag (DPL) bio-heat transfer model based on a temperature-dependent metabolic heat generation rate is derived to analyze heat transfer phenomena in living tissues during thermal ablation treatment. The numerical solution of the present non-linear problem is obtained by a finite element Runge-Kutta (4,5) method, which combines the Runge-Kutta (4,5) method with a finite difference scheme. Our study demonstrates that, at the thermal ablation position, the temperatures predicted by the non-linear and linear DPL models differ significantly. A comparison has been made among the non-linear DPL, thermal wave and Pennes models, and it has been found that the non-linear DPL and thermal wave bio-heat models behave almost identically, whereas the non-linear Pennes model shows a significantly different temperature profile at the initial stage of thermal ablation treatment. The effect of the Fourier number and the Vernotte number (relaxation Fourier number) on the temperature profile in the presence and absence of an externally applied heat source has been studied in detail, and it has been observed that the presence of an externally applied heat source term strongly affects the efficiency of the thermal treatment method. Copyright © 2016 Elsevier Ltd. All rights reserved.
Detection and recognition of simple spatial forms
NASA Technical Reports Server (NTRS)
Watson, A. B.
1983-01-01
A model of human visual sensitivity to spatial patterns is constructed. The model predicts the visibility and discriminability of arbitrary two-dimensional monochrome images. The image is analyzed by a large array of linear feature sensors, which differ in spatial frequency, phase, orientation, and position in the visual field. All sensors have one octave frequency bandwidths, and increase in size linearly with eccentricity. Sensor responses are processed by an ideal Bayesian classifier, subject to uncertainty. The performance of the model is compared to that of the human observer in detecting and discriminating some simple images.
A biological phantom for evaluation of CT image reconstruction algorithms
NASA Astrophysics Data System (ADS)
Cammin, J.; Fung, G. S. K.; Fishman, E. K.; Siewerdsen, J. H.; Stayman, J. W.; Taguchi, K.
2014-03-01
In recent years, iterative algorithms have become popular in diagnostic CT imaging to reduce noise or radiation dose to the patient. The non-linear nature of these algorithms leads to non-linearities in the imaging chain. However, the methods to assess the performance of CT imaging systems were developed assuming the linear process of filtered backprojection (FBP). Those methods may not be suitable any longer when applied to non-linear systems. In order to evaluate the imaging performance, a phantom is typically scanned and the image quality is measured using various indices. For reasons of practicality, cost, and durability, those phantoms often consist of simple water containers with uniform cylinder inserts. However, these phantoms do not represent the rich structure and patterns of real tissue accurately. As a result, the measured image quality or detectability performance for lesions may not reflect the performance on clinical images. The discrepancy between estimated and real performance may be even larger for iterative methods which sometimes produce "plastic-like", patchy images with homogeneous patterns. Consequently, more realistic phantoms should be used to assess the performance of iterative algorithms. We designed and constructed a biological phantom consisting of porcine organs and tissue that models a human abdomen, including liver lesions. We scanned the phantom on a clinical CT scanner and compared basic image quality indices between filtered backprojection and an iterative reconstruction algorithm.
Non-linear scaling of a musculoskeletal model of the lower limb using statistical shape models.
Nolte, Daniel; Tsang, Chui Kit; Zhang, Kai Yu; Ding, Ziyun; Kedgley, Angela E; Bull, Anthony M J
2016-10-03
Accurate muscle geometry for musculoskeletal models is important to enable accurate subject-specific simulations. Commonly, linear scaling is used to obtain individualised muscle geometry. More advanced methods include non-linear scaling using segmented bone surfaces and manual or semi-automatic digitisation of muscle paths from medical images. In this study, a new scaling method combining non-linear scaling with reconstructions of bone surfaces using statistical shape modelling is presented. Statistical Shape Models (SSMs) of the femur and tibia/fibula were used to reconstruct bone surfaces of nine subjects. Reference models were created by morphing manually digitised muscle paths to the mean shapes of the SSMs using non-linear transformations, and inter-subject variability was calculated. Subject-specific models of muscle attachment and via points were created from three reference models. The accuracy was evaluated by calculating the differences between the scaled and manually digitised models. The points defining the muscle paths showed large inter-subject variability at the thigh and shank - up to 26 mm; this was found to limit the accuracy of all studied scaling methods. Errors for the subject-specific muscle point reconstructions of the thigh could be decreased by 9% to 20% by using the non-linear scaling compared to a typical linear scaling method. We conclude that the proposed non-linear scaling method is more accurate than linear scaling methods. Thus, when combined with the ability to reconstruct bone surfaces from incomplete or scattered geometry data using statistical shape models, our proposed method is an alternative to linear scaling methods. Copyright © 2016 The Author. Published by Elsevier Ltd. All rights reserved.
A Practical Model for Forecasting New Freshman Enrollment during the Application Period.
ERIC Educational Resources Information Center
Paulsen, Michael B.
1989-01-01
A simple and effective model for forecasting freshman enrollment during the application period is presented step by step. The model requires minimal and readily available information, uses a simple linear regression analysis on a personal computer, and provides updated monthly forecasts. (MSE)
ERIC Educational Resources Information Center
Esteley, Cristina B.; Villarreal, Monica E.; Alagia, Humberto R.
2010-01-01
Over the past several years, we have been exploring and researching a phenomenon that occurs among undergraduate students that we called extension of linear models to non-linear contexts or overgeneralization of linear models. This phenomenon appears when some students use linear representations in situations that are non-linear. In a first phase,…
Rajeswaran, Jeevanantham; Blackstone, Eugene H
2017-02-01
In medical sciences, we often encounter longitudinal temporal relationships that are non-linear in nature. The influence of risk factors may also change across longitudinal follow-up. A system of multiphase non-linear mixed effects model is presented to model temporal patterns of longitudinal continuous measurements, with temporal decomposition to identify the phases and risk factors within each phase. Application of this model is illustrated using spirometry data after lung transplantation using readily available statistical software. This application illustrates the usefulness of our flexible model when dealing with complex non-linear patterns and time-varying coefficients.
Transportable Maps Software. Volume I.
1982-07-01
being collected at the beginning or end of the routine. This allows the interaction to be followed sequentially through its steps by anyone reading the...flow is either simple sequential , simple conditional (the equivalent of ’if-then-else’), simple iteration (’DO-loop’), or the non-linear recursion...input raster images to be in the form of sequential binary files with a SEGMENTED record type. The advantage of this form is that large logical records
Nanoscale swimmers: hydrodynamic interactions and propulsion of molecular machines
NASA Astrophysics Data System (ADS)
Sakaue, T.; Kapral, R.; Mikhailov, A. S.
2010-06-01
Molecular machines execute nearly regular cyclic conformational changes as a result of ligand binding and product release. This cyclic conformational dynamics is generally non-reciprocal, so that under time reversal a different sequence of machine conformations is visited. Since such changes occur in a solvent, coupling to solvent hydrodynamic modes will generally result in self-propulsion of the molecular machine. These effects are investigated for a class of coarse-grained models of protein machines consisting of a set of beads interacting through pair-wise additive potentials. Hydrodynamic effects are incorporated through a configuration-dependent mobility tensor, and expressions for the linear and angular propulsion velocities, as well as the stall force, are obtained. In the limit where conformational changes are small, so that linear response theory is applicable, it is shown that propulsion is exponentially small; thus, propulsion is a non-linear phenomenon. The results are illustrated by computations on a simple model molecular machine.
NASA Astrophysics Data System (ADS)
Larese, D.; Iachello, F.
2011-06-01
A simple algebraic Hamiltonian has been used to explore the vibrational and rotational spectra of the skeletal bending modes of HCNO, BrCNO, NCNCS, and other “floppy” (quasi-linear or quasi-bent) molecules. These molecules have large-amplitude, low-energy bending modes and champagne-bottle potential surfaces, making them good candidates for observing quantum phase transitions (QPT). We describe the geometric phase transitions from bent to linear in these and other non-rigid molecules, quantitatively analysing the spectroscopic signatures of ground state QPT, excited state QPT, and quantum monodromy. The algebraic framework is ideal for this work because of its small calculational effort yet robust results. Although these methods have historically found success with tri- and four-atomic molecules, we now address five-atomic and simple branched molecules such as CH_3NCO and GeH_3NCO. Extraction of potential functions is completed for several molecules, resulting in predictions of barriers to linearity and equilibrium bond angles.
Numerical solution of non-linear dual-phase-lag bioheat transfer equation within skin tissues.
Kumar, Dinesh; Kumar, P; Rai, K N
2017-11-01
This paper deals with numerical modeling and simulation of heat transfer in skin tissues using a non-linear dual-phase-lag (DPL) bioheat transfer model under a periodic heat flux boundary condition. The blood perfusion is assumed temperature-dependent, which results in a non-linear DPL bioheat transfer model in order to predict more accurate results. A numerical method of lines, based on finite difference and Runge-Kutta (4,5) schemes, is used to solve the present non-linear problem. For a specific case, the exact solution has been obtained and compared with the present numerical scheme, and the two are in good agreement. A comparison based on a model selection criterion (AIC) has been made among the non-linear DPL models when the variation of blood perfusion rate with temperature is of constant, linear and exponential type, using the experimental data, and it has been found that the non-linear DPL model with exponential variation of blood perfusion rate is closest to the experimental data. In addition, it is found that, due to the absence of phase-lag phenomena, the Pennes bioheat transfer model achieves steady state more quickly and always predicts higher temperatures than the thermal wave and non-linear DPL models. The effect of the coefficient of blood perfusion rate, dimensionless heating frequency and Kirchhoff number on the dimensionless temperature distribution has also been analyzed. The whole analysis is presented in dimensionless form. Copyright © 2017 Elsevier Inc. All rights reserved.
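The method-of-lines strategy used here (spatial finite differences feeding a Runge-Kutta (4,5) time integrator) can be sketched for a plain 1-D diffusion problem. This is a simplified stand-in rather than the paper's DPL model, and the grid, tissue diffusivity, and boundary temperatures below are all assumed:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Method-of-lines sketch: discretise in space with central finite
# differences, then integrate the resulting ODE system with RK45.
L, n = 0.01, 51                  # 1 cm of tissue, 51 grid points (assumed)
x = np.linspace(0.0, L, n)
dx = x[1] - x[0]
alpha = 1.4e-7                   # thermal diffusivity, m^2/s (typical tissue)

def rhs(t, u):
    du = np.zeros_like(u)
    du[1:-1] = alpha * (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
    # Dirichlet boundaries: endpoints held fixed (du = 0 there)
    return du

u0 = np.full(n, 37.0)            # tissue initially at core temperature
u0[-1] = 45.0                    # heated surface node (assumed)
sol = solve_ivp(rhs, (0.0, 600.0), u0, method="RK45", rtol=1e-6)
print(sol.y[-2, -1])             # node next to the surface has warmed
```

The paper's scheme additionally carries the phase-lag terms and the temperature-dependent perfusion source, but the space-then-time discretisation pattern is the same.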
A phenomenological biological dose model for proton therapy based on linear energy transfer spectra.
Rørvik, Eivind; Thörnqvist, Sara; Stokkevåg, Camilla H; Dahle, Tordis J; Fjaera, Lars Fredrik; Ytre-Hauge, Kristian S
2017-06-01
The relative biological effectiveness (RBE) of protons varies with the radiation quality, quantified by the linear energy transfer (LET). Most phenomenological models employ a linear dependency on the dose-averaged LET (LET_d) to calculate the biological dose. However, several experiments have indicated a possible non-linear trend. Our aim was to investigate whether biological dose models including non-linear LET dependencies should be considered, by introducing an LET-spectrum-based dose model. The RBE-LET relationship was investigated by fitting polynomials from 1st to 5th degree to a database of 85 data points from aerobic in vitro experiments. We included both unweighted and weighted regression, the latter taking into account experimental uncertainties. Statistical testing was performed to decide whether higher-degree polynomials provided better fits to the data than lower degrees. The newly developed models were compared to three published LET_d-based models for a simulated spread-out Bragg peak (SOBP) scenario. The statistical analysis of the weighted regression favored a non-linear RBE-LET relationship, with the quartic polynomial found to best represent the experimental data (P = 0.010). The results of the unweighted regression analysis were on the borderline of statistical significance for non-linear functions (P = 0.053), and with the current database a linear dependency could not be rejected. For the SOBP scenario, the weighted non-linear model estimated a similar mean RBE value (1.14) compared to the three established models (1.13-1.17). The unweighted model calculated a considerably higher RBE value (1.22). The analysis indicated that non-linear models could give a better representation of the RBE-LET relationship. However, this is not decisive, as inclusion of the experimental uncertainties in the regression analysis had a significant impact on the determination and ranking of the models.
As differences between the models were observed for the SOBP scenario, both non-linear LET-spectrum-based and linear LET_d-based models should be further evaluated in clinically realistic scenarios. © 2017 American Association of Physicists in Medicine.
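The weighted polynomial regression described above can be sketched with NumPy, where `np.polyfit`'s weights multiply the residuals, so setting w = 1/sigma down-weights the more uncertain experiments. The (LET, RBE, sigma) values below are synthetic stand-ins for the 85-point database:

```python
import numpy as np

# Synthetic stand-in data: a non-linear RBE-LET trend plus noise
rng = np.random.default_rng(0)
let = np.linspace(1.0, 20.0, 40)
true_rbe = 1.0 + 0.02 * let + 0.001 * let**2   # assumed non-linear trend
sigma = np.full_like(let, 0.05)                # experimental uncertainties
rbe = true_rbe + rng.normal(0.0, 0.05, let.size)

# Weighted regression: np.polyfit's weights multiply the residuals,
# so w = 1/sigma implements inverse-uncertainty weighting.
chi2 = {}
for deg in (1, 2):
    coef = np.polyfit(let, rbe, deg, w=1.0 / sigma)
    resid = rbe - np.polyval(coef, let)
    chi2[deg] = float(np.sum((resid / sigma) ** 2))
print(chi2)
```

Deciding whether the drop in weighted chi-square justifies the extra degree is then a standard nested-model test, which is the kind of statistical comparison the study performed up to 5th degree.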
Kim, Jongrae; Bates, Declan G; Postlethwaite, Ian; Heslop-Harrison, Pat; Cho, Kwang-Hyun
2008-05-15
Inherent non-linearities in biomolecular interactions make the identification of network interactions difficult. One of the principal problems is that all methods based on the use of linear time-invariant models will have fundamental limitations in their capability to infer certain non-linear network interactions. Another difficulty is the multiplicity of possible solutions, since, for a given dataset, there may be many different possible networks which generate the same time-series expression profiles. A novel algorithm for the inference of biomolecular interaction networks from temporal expression data is presented. Linear time-varying models, which can represent a much wider class of time-series data than linear time-invariant models, are employed in the algorithm. From time-series expression profiles, the model parameters are identified by solving a non-linear optimization problem. In order to systematically reduce the set of possible solutions for the optimization problem, a filtering process is performed using a phase-portrait analysis with random numerical perturbations. The proposed approach has the advantages of not requiring the system to be in a stable steady state, of using time-series profiles which have been generated by a single experiment, and of allowing non-linear network interactions to be identified. The ability of the proposed algorithm to correctly infer network interactions is illustrated by its application to three examples: a non-linear model for cAMP oscillations in Dictyostelium discoideum, the cell-cycle data for Saccharomyces cerevisiae and a large-scale non-linear model of a group of synchronized Dictyostelium cells. The software used in this article is available from http://sbie.kaist.ac.kr/software
ERIC Educational Resources Information Center
Armey, Michael F.; Crowther, Janis H.
2008-01-01
Research has identified a significant increase in both the incidence and prevalence of non-suicidal self-injury (NSSI). The present study sought to test both linear and non-linear cusp catastrophe models by using aversive self-awareness, which was operationalized as a composite of aversive self-relevant affect and cognitions, and dissociation as…
A Simple and Accurate Rate-Driven Infiltration Model
NASA Astrophysics Data System (ADS)
Cui, G.; Zhu, J.
2017-12-01
In this study, we develop a novel Rate-Driven Infiltration Model (RDIMOD) for simulating infiltration into soils. Unlike traditional methods, RDIMOD avoids numerically solving the highly non-linear Richards equation or simply modeling with empirical parameters. RDIMOD employs the infiltration rate as model input to simulate the one-dimensional infiltration process by solving an ordinary differential equation. The model can simulate the evolution of the wetting front, infiltration rate, and cumulative infiltration on any surface slope, including the vertical and horizontal directions. Compared to the results from the Richards equation for both vertical and horizontal infiltration, RDIMOD simply and accurately predicts infiltration processes for any type of soil and soil hydraulic model without numerical difficulty. Given its accuracy, capability, and computational effectiveness and stability, RDIMOD can be used in large-scale hydrologic and land-atmosphere modeling.
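The abstract does not reproduce RDIMOD's equations, so purely as a hedged illustration of the rate-driven idea, a sharp-wetting-front caricature advances the front by dz/dt = i(t)/(θs - θi): the measured infiltration rate drives a simple ODE rather than the Richards equation. Every value and the functional form below are assumed:

```python
import numpy as np

# Sketch (NOT the published RDIMOD): a sharp-wetting-front caricature in
# which the infiltration rate i(t) is the model input and the front obeys
#   dz/dt = i(t) / (theta_s - theta_i),
# i.e. infiltrated water fills the moisture deficit behind the front.
theta_s, theta_i = 0.45, 0.15          # assumed saturated/initial water content
deficit = theta_s - theta_i

def infiltration_rate(t):              # hypothetical decaying rate, m/s
    return 1e-5 * (1.0 + 10.0 / (1.0 + t / 60.0))

# Left-Riemann (forward-Euler) accumulation of the rate input
dt, T = 1.0, 3600.0
t = np.arange(0.0, T, dt)
i = infiltration_rate(t)
cumulative = np.cumsum(i) * dt         # cumulative infiltration I(t)
z = cumulative / deficit               # wetting-front depth
print(cumulative[-1], z[-1])
```

The appeal of a rate-driven formulation is visible even in this toy: no non-linear PDE solve is needed, only the integration of a measured rate, which is why such models remain stable for any soil input.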
Effects of the observed J2 variations on the Earth's precession and nutation
NASA Astrophysics Data System (ADS)
Ferrándiz, José M.; Baenas, Tomás; Belda, Santiago
2016-04-01
The Earth's oblateness parameter J2 is closely related to the dynamical ellipticity H, which factorizes the main components of the precession and the different nutation terms. In most theoretical approaches to the Earth's rotation, the IAU2000 nutation theory among them, H is assumed to be constant. The precession model IAU2006 supposes H to have a conventional linear variation, based on the J2 time series derived mainly from satellite laser ranging (SLR) data for decades, which gives rise to an additional quadratic term of the precession in longitude and some corrections of the nutation terms. The time evolution of J2 is, however, too complex to be well approximated by a simple linear model. The effect of more general models including periodic terms and closer to the observed time series, although still unable to reproduce a significant part of the signal, has seldom been investigated. In this work we address the problem of deriving the effect of the observed J2 variations without resorting to such simplified models. The Hamiltonian approach to the Earth's rotation is extended to allow the MacCullagh term of the potential to depend on a time-varying oblateness. An analytical solution is derived by means of a suitable perturbation method in the case of the time series provided by the Center for Space Research (CSR) of the University of Texas, which results in non-negligible contributions to the precession-nutation angles. The presentation focuses on the main effects on the longitude of the equator; a noticeable non-linear trend is superimposed on the main linear precession term, along with some periodic and decadal variations.
NASA Astrophysics Data System (ADS)
See, J. J.; Jamaian, S. S.; Salleh, R. M.; Nor, M. E.; Aman, F.
2018-04-01
This research aims to estimate the parameters of the Monod model of growth of the microalga Botryococcus braunii sp. by the Least-Squares method. The Monod equation is a non-linear equation which can be transformed into linear form and solved by implementing the Least-Squares linear regression method. Meanwhile, the Gauss-Newton method is an alternative method for solving the non-linear Least-Squares problem, with the aim of obtaining the parameter values of the Monod model by minimizing the sum of squared errors (SSE). As a result, the parameters of the Monod model for the microalga Botryococcus braunii sp. can be estimated by the Least-Squares method. However, the parameter values obtained by the non-linear Least-Squares method are more accurate than those from the linear Least-Squares method, since the SSE of the non-linear Least-Squares method is smaller.
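The two estimation routes compared here can be sketched side by side: a linear least-squares fit of the transformed (reciprocal, Lineweaver-Burk form) Monod equation, and a Gauss-Newton iteration on the untransformed residuals. The "true" parameters, substrate values, and noise level below are synthetic assumptions:

```python
import numpy as np

def monod(S, mu_max, Ks):
    """Monod growth rate: mu = mu_max * S / (Ks + S)."""
    return mu_max * S / (Ks + S)

# Synthetic growth-rate data with assumed "true" parameters and small noise
rng = np.random.default_rng(1)
S = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0])
mu = monod(S, 1.2, 3.0) + rng.normal(0.0, 0.01, S.size)

# (a) Linear least squares on the transformed (reciprocal) form:
#     1/mu = (Ks/mu_max)*(1/S) + 1/mu_max
slope, intercept = np.polyfit(1.0 / S, 1.0 / mu, 1)
mu_max_lin, Ks_lin = 1.0 / intercept, slope / intercept

# (b) Gauss-Newton on the untransformed residuals r = mu - monod(S, p)
p = np.array([mu_max_lin, Ks_lin])         # warm start from the linear fit
for _ in range(20):
    r = mu - monod(S, *p)
    Jac = np.column_stack([S / (p[1] + S),                 # d mu / d mu_max
                           -p[0] * S / (p[1] + S) ** 2])   # d mu / d Ks
    p = p + np.linalg.solve(Jac.T @ Jac, Jac.T @ r)

sse = np.sum((mu - monod(S, *p)) ** 2)
print(p, sse)
```

The transformation in (a) distorts the error structure (noise at low substrate is amplified by the reciprocal), which is why the non-linear fit in (b) attains the smaller SSE, consistent with the abstract's conclusion.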
SOCR Analyses – an Instructional Java Web-based Statistical Analysis Toolkit
Chu, Annie; Cui, Jenny; Dinov, Ivo D.
2011-01-01
The Statistical Online Computational Resource (SOCR) designs web-based tools for educational use in a variety of undergraduate courses (Dinov 2006). Several studies have demonstrated that these resources significantly improve students' motivation and learning experiences (Dinov et al. 2008). SOCR Analyses is a new component that concentrates on data modeling and analysis using parametric and non-parametric techniques supported with graphical model diagnostics. Currently implemented analyses include commonly used models in undergraduate statistics courses like linear models (Simple Linear Regression, Multiple Linear Regression, One-Way and Two-Way ANOVA). In addition, we implemented tests for sample comparisons, such as t-test in the parametric category; and Wilcoxon rank sum test, Kruskal-Wallis test, Friedman's test, in the non-parametric category. SOCR Analyses also include several hypothesis test models, such as Contingency tables, Friedman's test and Fisher's exact test. The code itself is open source (http://socr.googlecode.com/), hoping to contribute to the efforts of the statistical computing community. The code includes functionality for each specific analysis model and it has general utilities that can be applied in various statistical computing tasks. For example, concrete methods with API (Application Programming Interface) have been implemented in statistical summary, least square solutions of general linear models, rank calculations, etc. HTML interfaces, tutorials, source code, activities, and data are freely available via the web (www.SOCR.ucla.edu). Code examples for developers and demos for educators are provided on the SOCR Wiki website. In this article, the pedagogical utilization of the SOCR Analyses is discussed, as well as the underlying design framework. As the SOCR project is on-going and more functions and tools are being added to it, these resources are constantly improved. 
The reader is strongly encouraged to check the SOCR site for the most up-to-date information and newly added models. PMID:21546994
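The abstract mentions API utilities such as least-squares solutions of general linear models. As an illustration only (SOCR itself is a Java toolkit; the function below is a hypothetical Python analogue, not part of the SOCR API), such a utility can be sketched as:

```python
import numpy as np

def ols_fit(x, y):
    """Least-squares solution of a general linear model y = X @ beta + e.

    Uses the numerically stable lstsq solver rather than forming the
    raw normal equations (X'X)^{-1} X'y explicitly.
    """
    X = np.column_stack([np.ones(len(x)), x])  # prepend intercept column
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

# Simple linear regression example: data lying exactly on y = 2 + 3x
x = np.array([0.0, 1.0, 2.0, 3.0])
y = 2.0 + 3.0 * x
beta = ols_fit(x, y)  # beta[0] ~ intercept, beta[1] ~ slope
```

The same routine covers multiple linear regression by passing a matrix of predictor columns instead of a single vector.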
Inverse Modelling Problems in Linear Algebra Undergraduate Courses
ERIC Educational Resources Information Center
Martinez-Luaces, Victor E.
2013-01-01
This paper will offer an analysis from a theoretical point of view of mathematical modelling, applications and inverse problems of both causation and specification types. Inverse modelling problems give the opportunity to establish connections between theory and practice and to show this fact, a simple linear algebra example in two different…
A network model of successive partitioning-limited solute diffusion through the stratum corneum.
Schumm, Phillip; Scoglio, Caterina M; van der Merwe, Deon
2010-02-07
As the most exposed point of contact with the external environment, the skin is an important barrier to many chemical exposures, including medications, potentially toxic chemicals and cosmetics. Traditional dermal absorption models treat the stratum corneum lipids as a homogeneous medium through which solutes diffuse according to Fick's first law of diffusion. This approach does not explain the non-linear absorption and irregular distribution patterns within the stratum corneum lipids observed in experimental data. A network model based on successive partitioning-limited solute diffusion through the stratum corneum, in which the lipid structure is represented by a large, sparse, regular network whose nodes have variable characteristics, offers an alternative, efficient, and flexible approach to dermal absorption modeling that simulates non-linear absorption data patterns. Four model versions are presented: two linear models, which have unlimited node capacities, and two non-linear models, which have limited node capacities. The non-linear model outputs produce absorption-to-dose relationships that are best characterized quantitatively by power equations, similar to the equations used to describe non-linear experimental data.
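The contrast between unlimited and limited node capacities can be illustrated with a deliberately simplified toy chain model (all parameter values below are hypothetical, not taken from the paper):

```python
def absorbed(dose, n_nodes=10, cap=float("inf"), k=0.2, steps=150):
    """Toy chain of partitioning-limited hops.

    A surface reservoir feeds a chain of lipid "nodes"; each step a
    fraction k of a node's content hops to the next node, but a node
    never accepts more than its remaining capacity `cap`. The amount
    leaving the deepest node counts as absorbed.
    """
    reservoir, out = dose, 0.0
    nodes = [0.0] * n_nodes
    for _ in range(steps):
        out += k * nodes[-1]               # deepest node drains out
        nodes[-1] -= k * nodes[-1]
        for i in range(n_nodes - 2, -1, -1):
            flux = min(k * nodes[i], cap - nodes[i + 1])
            nodes[i] -= flux
            nodes[i + 1] += flux
        flux = min(k * reservoir, cap - nodes[0])
        reservoir -= flux
        nodes[0] += flux
    return out

# unlimited capacity -> absorption exactly linear in dose;
# limited capacity -> saturating, sub-linear absorption-to-dose curve
lin_ratio = absorbed(2.0) / absorbed(1.0)
cap_ratio = absorbed(2.0, cap=0.05) / absorbed(1.0, cap=0.05)
```

Doubling the dose exactly doubles absorption in the unlimited-capacity (linear) case, while the capacity-limited chain saturates, producing the sub-linear dose response that the power equations describe.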
Kaiyala, Karl J
2014-01-01
Mathematical models for the dependence of energy expenditure (EE) on body mass and composition are essential tools in metabolic phenotyping. EE scales over broad ranges of body mass as a non-linear allometric function. When considered within restricted ranges of body mass, however, allometric EE curves exhibit 'local linearity.' Indeed, modern EE analysis makes extensive use of linear models. Such models typically involve one or two body mass compartments (e.g., fat free mass and fat mass). Importantly, linear EE models typically involve a non-zero (usually positive) y-intercept term of uncertain origin, a recurring theme in discussions of EE analysis and a source of confounding in traditional ratio-based EE normalization. Emerging linear model approaches quantify whole-body resting EE (REE) in terms of individual organ masses (e.g., liver, kidneys, heart, brain). Proponents of individual organ REE modeling hypothesize that multi-organ linear models may eliminate non-zero y-intercepts. This could have advantages in adjusting REE for body mass and composition. Studies reveal that individual organ REE is an allometric function of total body mass. I exploit first-order Taylor linearization of individual organ REEs to model the manner in which individual organs contribute to whole-body REE and to the non-zero y-intercept in linear REE models. The model predicts that REE analysis at the individual organ-tissue level will not eliminate intercept terms. I demonstrate that the parameters of a linear EE equation can be transformed into the parameters of the underlying 'latent' allometric equation. This permits estimates of the allometric scaling of EE in a diverse variety of physiological states that are not represented in the allometric EE literature but are well represented by published linear EE analyses.
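The core algebraic step, first-order Taylor linearization of an allometric curve and recovery of the latent allometric parameters from a fitted linear model, can be sketched as follows (the numeric values are hypothetical, chosen only to resemble a Kleiber-type curve):

```python
def linearize(a, b, M0):
    """First-order Taylor expansion of REE = a * M**b around mass M0.
    Returns (intercept, slope) of the local linear model REE ~ c + s*M."""
    slope = a * b * M0 ** (b - 1)
    intercept = a * M0 ** b * (1 - b)
    return intercept, slope

def to_allometric(intercept, slope, M0):
    """Recover the latent allometric parameters (a, b) from a linear
    EE model valid near mass M0, inverting the expansion above."""
    b = slope * M0 / (slope * M0 + intercept)
    a = (intercept + slope * M0) / M0 ** b
    return a, b

# hypothetical Kleiber-like curve: a = 70, b = 0.75, expanded around 70 kg
c, s = linearize(70.0, 0.75, 70.0)
a_hat, b_hat = to_allometric(c, s, 70.0)
```

Because the intercept a*M0**b*(1 - b) is positive whenever 0 < b < 1, the sketch also reproduces the paper's point that linear EE models inherit a non-zero y-intercept from the latent allometric form.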
Controlling modal interactions in lasers for frequency selection and power enhancement
NASA Astrophysics Data System (ADS)
Ge, Li
2015-03-01
The laser is an out-of-equilibrium non-linear wave system where the interplay of the cavity geometry and non-linear wave interactions determines the self-organized oscillation frequencies and the associated spatial field patterns. Using the correspondence between nonlinear and linear systems, we propose a simple and systematic method to achieve selective excitation of lasing modes that would have been dwarfed by more dominant ones. The key idea is incorporating the control of modal interaction into the spatial pump profile. Our proposal is most valuable in the regime of spatially and spectrally overlapping modes, which can lead to a significant enhancement of laser power as well.
A Multiphase Non-Linear Mixed Effects Model: An Application to Spirometry after Lung Transplantation
Rajeswaran, Jeevanantham; Blackstone, Eugene H.
2014-01-01
In medical sciences, we often encounter longitudinal temporal relationships that are non-linear in nature. The influence of risk factors may also change across longitudinal follow-up. A system of multiphase non-linear mixed effects model is presented to model temporal patterns of longitudinal continuous measurements, with temporal decomposition to identify the phases and risk factors within each phase. Application of this model is illustrated using spirometry data after lung transplantation using readily available statistical software. This application illustrates the usefulness of our flexible model when dealing with complex non-linear patterns and time varying coefficients. PMID:24919830
ERIC Educational Resources Information Center
Esteley, Cristina; Villarreal, Monica; Alagia, Humberto
2004-01-01
This research report presents a study of the work of agronomy majors in which an extension of linear models to non-linear contexts can be observed. By linear models we mean the model y=a.x+b, some particular representations of direct proportionality and the diagram for the rule of three. Its presence and persistence in different types of problems…
Kepler Observations of Rapid Optical Variability in the BL Lac Object W2R1926+42
NASA Technical Reports Server (NTRS)
Edelson, R.; Mushotzky, R.; Vaughn, S.; Scargle, J.; Gandhi, P.; Malkan, M.; Baumgartner, W.
2013-01-01
We present the first Kepler monitoring of a strongly variable BL Lac, W2R1926+42. The light curve covers 181 days with approx. 0.2% errors, 30 minute sampling and >90% duty cycle, showing numerous ΔI/I > 25% flares over timescales as short as a day. The flux distribution is highly skewed and non-Gaussian. The variability shows a strong rms-flux correlation with the clearest evidence to date for non-linearity in this relation. We introduce a method to measure periodograms from the discrete autocorrelation function, an approach that may be well-suited to a wide range of Kepler data. The periodogram is not consistent with a simple power law, but shows a flattening at frequencies below 7×10^-5 Hz. Simple models of the power spectrum, such as a broken power law, do not produce acceptable fits, indicating that the Kepler blazar light curve requires more sophisticated mathematical and physical descriptions than currently in use.
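The idea of deriving a periodogram from the autocorrelation function can be illustrated, for the evenly sampled case, with the Wiener-Khinchin relation (a simplified stand-in, not the authors' discrete-autocorrelation method for gapped Kepler data):

```python
import numpy as np

def acf_periodogram(x, dt=1.0):
    """Power-spectrum estimate as the Fourier transform of the
    (biased) sample autocovariance -- the Wiener-Khinchin route."""
    x = np.asarray(x, float) - np.mean(x)
    n = len(x)
    acov = np.correlate(x, x, mode="full") / n   # lags -(n-1)..(n-1)
    # move zero lag to index 0 before the FFT
    psd = np.abs(np.fft.rfft(np.fft.ifftshift(acov)))
    freqs = np.fft.rfftfreq(len(acov), d=dt)
    return freqs, psd

# synthetic light curve: a 0.1 cycles/sample oscillation plus noise
rng = np.random.default_rng(0)
t = np.arange(1024)
x = np.sin(2 * np.pi * 0.1 * t) + 0.1 * rng.standard_normal(1024)
f, p = acf_periodogram(x)
```

The spectral peak lands at the injected frequency; the biased autocovariance corresponds to a triangular lag window, which slightly smooths the spectrum.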
Non-linear Growth Models in Mplus and SAS
Grimm, Kevin J.; Ram, Nilam
2013-01-01
Non-linear growth curves, i.e. growth curves that follow a specified non-linear function in time, enable researchers to model complex developmental patterns with parameters that are easily interpretable. In this paper we describe how a variety of sigmoid curves can be fit using the Mplus structural modeling program and the non-linear mixed-effects modeling procedure NLMIXED in SAS. Using longitudinal achievement data collected as part of a study examining the effects of preschool instruction on academic gain we illustrate the procedures for fitting growth models of logistic, Gompertz, and Richards functions. Brief notes regarding the practical benefits, limitations, and choices faced in the fitting and estimation of such models are included. PMID:23882134
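As a minimal illustration of fitting one of the named sigmoid families outside Mplus or SAS (the data below are simulated, not the preschool achievement data), a Gompertz curve can be fit by non-linear least squares:

```python
import numpy as np
from scipy.optimize import curve_fit

def gompertz(t, asym, b, c):
    """Gompertz growth curve: asym * exp(-b * exp(-c * t))."""
    return asym * np.exp(-b * np.exp(-c * t))

# simulate noisy achievement-like trajectories (hypothetical values)
rng = np.random.default_rng(1)
t = np.linspace(0, 10, 40)
y = gompertz(t, 100.0, 3.0, 0.5) + rng.normal(0, 1.0, t.size)

# starting values matter for non-linear fits; these are rough guesses
params, _ = curve_fit(gompertz, t, y, p0=[90.0, 2.0, 0.3])
```

The same pattern, a curve function plus starting values, extends to the logistic and Richards functions discussed in the paper.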
Rosenblum, Michael; van der Laan, Mark J.
2010-01-01
Models such as logistic regression and Poisson regression are often used to estimate treatment effects in randomized trials. These models leverage information in variables collected before randomization, in order to obtain more precise estimates of treatment effects. However, there is the danger that model misspecification will lead to bias. We show that certain easy-to-compute, model-based estimators are asymptotically unbiased even when the working model used is arbitrarily misspecified. Furthermore, these estimators are locally efficient. As a special case of our main result, we consider a simple Poisson working model containing only main terms; in this case, we prove the maximum likelihood estimate of the coefficient corresponding to the treatment variable is an asymptotically unbiased estimator of the marginal log rate ratio, even when the working model is arbitrarily misspecified. This is the log-linear analog of ANCOVA for linear models. Our results demonstrate one application of targeted maximum likelihood estimation. PMID:20628636
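The special case described, a main-terms Poisson working model recovering the marginal log rate ratio despite misspecification, can be checked in a small simulation (the data-generating model below is hypothetical, and the fitting routine is a generic Newton/IRLS solver, not the authors' software):

```python
import numpy as np

def poisson_mle(X, y, iters=50):
    """Poisson regression (log link) by Newton/IRLS iterations."""
    beta = np.zeros(X.shape[1])
    beta[0] = np.log(y.mean())            # sensible starting intercept
    for _ in range(iters):
        mu = np.exp(X @ beta)
        grad = X.T @ (y - mu)
        hess = X.T @ (X * mu[:, None])
        beta += np.linalg.solve(hess, grad)
    return beta

rng = np.random.default_rng(2)
n = 20000
W = rng.standard_normal(n)                # baseline covariate
A = rng.integers(0, 2, n).astype(float)   # randomized treatment
# true model has a W**2 term that the working model omits -> misspecified
rate = np.exp(0.5 + 0.3 * W + 0.2 * W**2 + 0.7 * A)
y = rng.poisson(rate).astype(float)

X = np.column_stack([np.ones(n), A, W])   # main-terms working model
beta = poisson_mle(X, y)
log_rr_hat = beta[1]                      # estimates the marginal log rate ratio
```

Although the working model omits the W**2 term, the fitted treatment coefficient tracks the marginal log rate ratio (0.7 in this simulation), as the main result predicts for randomized treatment assignment.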
Gravimetric control of active volcanic processes
NASA Astrophysics Data System (ADS)
Saltogianni, Vasso; Stiros, Stathis
2017-04-01
Volcanic activity includes phases of magma chamber inflation and deflation, produced by movement of magma and/or hydrothermal processes. Such effects usually leave their imprint as deformation of the ground surface, which can be recorded by GNSS and other methods, and they can be modeled as elastic deformation processes, with deformation produced by volcanic masses of finite dimensions such as spheres, ellipsoids and parallelograms. Such volumes are modeled on the basis of inversion (non-linear, numerical solution) of systems of equations relating the unknown dimensions and location of magma sources with observations, currently mostly GNSS and InSAR data. Inversion techniques depend on the misfit between model predictions and observations, but because the systems of equations are highly non-linear, and because the adopted models for the geometry of magma sources are simple, non-unique solutions can be derived, constrained by local extrema. Assessment of derived magma models can be provided by independent observations and models, such as micro-seismicity distribution and changes in geophysical parameters. In the simplest case magmatic intrusions can be modeled as spheres with diameters of at least a few tens of meters at a depth of a few kilometers; hence they are expected to have a gravimetric signature in permanent recording stations on the ground surface, while larger intrusions may also have an imprint in sensors in orbit around the Earth or along precisely defined air paths. Identification of such gravimetric signals and separation of the "true" signal from the measurement and ambient noise requires fine forward modeling of the wider area based on realistic simulation of the ambient gravimetric field, and then modeling of its possible distortion because of magmatic anomalies. Such results are useful for removing ambiguities in inverse modeling of ground deformation, and also for detecting magmatic anomalies offshore.
Stable clustering and the resolution of dissipationless cosmological N-body simulations
NASA Astrophysics Data System (ADS)
Benhaiem, David; Joyce, Michael; Sylos Labini, Francesco
2017-10-01
The determination of the resolution of cosmological N-body simulations, i.e. the range of scales in which quantities measured in them accurately represent the continuum limit, is an important open question. We address it here using scale-free models, for which self-similarity provides a powerful tool to control resolution. Such models also provide a robust testing ground for the so-called stable clustering approximation, which gives simple predictions for them. Studying large N-body simulations of such models with different force smoothing, we find that these two issues are in fact very closely related: our conclusion is that the accuracy of two-point statistics in the non-linear regime starts to degrade strongly around the scale at which their behaviour deviates from that predicted by the stable clustering hypothesis. Physically the association of the two scales is in fact simple to understand: stable clustering fails to be a good approximation when there are strong interactions of structures (in particular merging), and it is precisely such non-linear processes which are sensitive to fluctuations at the smaller scales affected by discretization. Resolution may be further degraded if the short distance gravitational smoothing scale is larger than the scale to which stable clustering can propagate. We examine in detail the very different conclusions of studies by Smith et al. and Widrow et al. and find that the strong deviations from stable clustering reported by these works are the results of over-optimistic assumptions about scales resolved accurately by the measured power spectra, and the reliance on Fourier space analysis. We emphasize the much poorer resolution obtained with the power spectrum compared to the two-point correlation function.
NASA Astrophysics Data System (ADS)
Unger, Johannes; Hametner, Christoph; Jakubek, Stefan; Quasthoff, Marcus
2014-12-01
An accurate state of charge (SoC) estimation of a traction battery in hybrid electric non-road vehicles, which possess higher dynamics and power densities than on-road vehicles, requires a precise battery cell terminal voltage model. This paper presents a novel methodology for non-linear system identification of battery cells to obtain precise battery models. The methodology comprises the architecture of local model networks (LMN) and optimal model-based design of experiments (DoE). Three main novelties are proposed: 1) optimal model-based DoE, which aims to excite the battery cells highly dynamically within the load ranges frequently used in operation; 2) the integration of corresponding inputs in the LMN to capture the non-linearities of SoC, relaxation, hysteresis and temperature effects; 3) enhancements to the local linear model tree (LOLIMOT) construction algorithm to achieve a physically appropriate interpretation of the LMN. The framework is applicable to different battery cell chemistries and different temperatures, and is real-time capable, which is shown on an industrial PC. The accuracy of the obtained non-linear battery model is demonstrated on cells with different chemistries and temperatures. The results show significant improvement due to optimal experiment design and the integration of the battery non-linearities within the LMN structure.
Foldnes, Njål; Olsson, Ulf Henning
2016-01-01
We present and investigate a simple way to generate nonnormal data using linear combinations of independent generator (IG) variables. The simulated data have prespecified univariate skewness and kurtosis and a given covariance matrix. In contrast to the widely used Vale-Maurelli (VM) transform, the obtained data are shown to have a non-Gaussian copula. We analytically obtain asymptotic robustness conditions for the IG distribution. We show empirically that popular test statistics in covariance analysis tend to reject true models more often under the IG transform than under the VM transform. This implies that overly optimistic evaluations of estimators and fit statistics in covariance structure analysis may be tempered by including the IG transform for nonnormal data generation. We provide an implementation of the IG transform in the R environment.
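The basic mechanics of the IG approach, independent non-normal generators mixed through a decomposition of the target covariance, can be sketched as follows (matching prespecified skewness and kurtosis requires choosing the generator moments appropriately, which is omitted here; the exponential generators are an arbitrary illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200000
sigma = np.array([[1.0, 0.5],
                  [0.5, 2.0]])           # target covariance matrix
A = np.linalg.cholesky(sigma)            # any A with A @ A.T = sigma works

# independent non-normal generators, standardized to mean 0, variance 1
# (shifted exponentials; the IG method would instead pick generator
# distributions whose moments yield the target skew/kurtosis)
Z = rng.exponential(1.0, size=(2, n)) - 1.0
X = (A @ Z).T                            # rows are observations; cov(X) ~ sigma
```

Because the data are linear combinations of independent non-normal variables rather than transformed Gaussians, the resulting copula is non-Gaussian, which is the paper's key contrast with the Vale-Maurelli transform. Here the first coordinate is exactly a standardized exponential, so its skewness is 2.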
NASA Astrophysics Data System (ADS)
Fukuda, Jun'ichi; Johnson, Kaj M.
2010-06-01
We present a unified theoretical framework and solution method for probabilistic, Bayesian inversions of crustal deformation data. The inversions involve multiple data sets with unknown relative weights, model parameters that are related linearly or non-linearly through theoretical models to observations, prior information on model parameters and regularization priors to stabilize underdetermined problems. To efficiently handle non-linear inversions in which some of the model parameters are linearly related to the observations, this method combines both analytical least-squares solutions and a Monte Carlo sampling technique. In this method, model parameters that are linearly and non-linearly related to observations, relative weights of multiple data sets and relative weights of prior information and regularization priors are determined in a unified Bayesian framework. In this paper, we define the mixed linear-non-linear inverse problem, outline the theoretical basis for the method, provide a step-by-step algorithm for the inversion, validate the inversion method using synthetic data and apply the method to two real data sets. We apply the method to inversions of multiple geodetic data sets with unknown relative data weights for interseismic fault slip and locking depth. We also apply the method to the problem of estimating the spatial distribution of coseismic slip on faults with unknown fault geometry, relative data weights and smoothing regularization weight.
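The key computational trick, solving analytically for the linearly entering parameters at each candidate value of the non-linear ones, can be shown with a toy one-parameter forward model (the example uses a simple grid search in place of the paper's Monte Carlo sampling, and the exponential model is hypothetical):

```python
import numpy as np

rng = np.random.default_rng(4)
x = np.linspace(0, 10, 60)
# synthetic data: linear amplitude m = 2, non-linear decay scale theta = 3
d = 2.0 * np.exp(-x / 3.0) + 0.01 * rng.standard_normal(x.size)

best = (np.inf, None, None)
for theta in np.linspace(0.5, 8.0, 151):   # grid over the non-linear parameter
    g = np.exp(-x / theta)                 # design column given theta
    m = (g @ d) / (g @ g)                  # analytic least squares for m
    misfit = np.sum((d - m * g) ** 2)
    if misfit < best[0]:
        best = (misfit, theta, m)
_, theta_hat, m_hat = best
```

Only the non-linear parameter needs to be searched; the linear one is eliminated exactly at each step, which is what makes the mixed linear-non-linear formulation efficient.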
Limits of detection and decision. Part 3
NASA Astrophysics Data System (ADS)
Voigtman, E.
2008-02-01
It has been shown that the MARLAP (Multi-Agency Radiological Laboratory Analytical Protocols) method for estimating the Currie detection limit, which is based on 'critical values of the non-centrality parameter of the non-central t distribution', is intrinsically biased, even if no calibration curve or regression is used. This completed the refutation of the method, begun in Part 2. With the field cleared of obstructions, the true theory underlying Currie's limits of decision, detection and quantification, as they apply in a simple linear chemical measurement system (CMS) having heteroscedastic, Gaussian measurement noise and using weighted least squares (WLS) processing, was then derived. Extensive Monte Carlo simulations were performed, on 900 million independent calibration curves, for linear, "hockey stick" and quadratic noise precision models (NPMs). With errorless NPM parameters, all the simulation results were found to be in excellent agreement with the derived theoretical expressions. Even with as much as 30% noise on all of the relevant NPM parameters, the worst absolute errors in the rates of false positives and false negatives were only 0.3%.
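For orientation, the homoscedastic special case of Currie's decision and detection limits around a weighted least squares calibration fit looks as follows (a schematic sketch with hypothetical numbers, not Voigtman's full heteroscedastic theory):

```python
import numpy as np

def wls_line(x, y, w):
    """Weighted least squares fit of y = b0 + b1*x with weights w."""
    W, Sx, Sy = np.sum(w), np.sum(w * x), np.sum(w * y)
    Sxx, Sxy = np.sum(w * x * x), np.sum(w * x * y)
    denom = W * Sxx - Sx ** 2
    b1 = (W * Sxy - Sx * Sy) / denom
    b0 = (Sy * Sxx - Sx * Sxy) / denom
    return b0, b1

# calibration standards; noise sd grows linearly with x ("hockey stick"-like NPM)
conc = np.array([0.0, 1.0, 2.0, 4.0, 8.0, 16.0])
sd = 0.05 + 0.02 * conc
resp = 0.1 + 0.5 * conc              # errorless responses, for illustration
b0, b1 = wls_line(conc, resp, 1.0 / sd ** 2)

z = 1.645                            # one-sided 5% false-positive rate
Lc = z * sd[0]                       # Currie decision level, net response domain
xc = Lc / b1                         # decision level in concentration units
xd = 2 * z * sd[0] / b1              # detection limit, homoscedastic approximation
```

The full theory in the paper replaces these textbook expressions with exact results that account for the heteroscedastic NPM and the estimated calibration parameters.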
A Vernacular for Linear Latent Growth Models
ERIC Educational Resources Information Center
Hancock, Gregory R.; Choi, Jaehwa
2006-01-01
In its most basic form, latent growth modeling (latent curve analysis) allows an assessment of individuals' change in a measured variable X over time. For simple linear models, as with other growth models, parameter estimates associated with the a construct (amount of X at a chosen temporal reference point) and b construct (growth in X per unit…
Birth and death of protein domains: A simple model of evolution explains power law behavior
Karev, Georgy P; Wolf, Yuri I; Rzhetsky, Andrey Y; Berezovskaya, Faina S; Koonin, Eugene V
2002-01-01
Background Power distributions appear in numerous biological, physical and other contexts, which appear to be fundamentally different. In biology, power laws have been claimed to describe the distributions of the connections of enzymes and metabolites in metabolic networks, the number of interactions partners of a given protein, the number of members in paralogous families, and other quantities. In network analysis, power laws imply evolution of the network with preferential attachment, i.e. a greater likelihood of nodes being added to pre-existing hubs. Exploration of different types of evolutionary models in an attempt to determine which of them lead to power law distributions has the potential of revealing non-trivial aspects of genome evolution. Results A simple model of evolution of the domain composition of proteomes was developed, with the following elementary processes: i) domain birth (duplication with divergence), ii) death (inactivation and/or deletion), and iii) innovation (emergence from non-coding or non-globular sequences or acquisition via horizontal gene transfer). This formalism can be described as a birth, death and innovation model (BDIM). The formulas for equilibrium frequencies of domain families of different size and the total number of families at equilibrium are derived for a general BDIM. All asymptotics of equilibrium frequencies of domain families possible for the given type of models are found and their appearance depending on model parameters is investigated. It is proved that the power law asymptotics appears if, and only if, the model is balanced, i.e. domain duplication and deletion rates are asymptotically equal up to the second order. It is further proved that any power asymptotic with the degree not equal to -1 can appear only if the hypothesis of independence of the duplication/deletion rates on the size of a domain family is rejected. 
Specific cases of BDIMs, namely simple, linear, polynomial and rational models, are considered in detail and the distributions of the equilibrium frequencies of domain families of different size are determined for each case. We apply the BDIM formalism to the analysis of the domain family size distributions in prokaryotic and eukaryotic proteomes and show an excellent fit between these empirical data and a particular form of the model, the second-order balanced linear BDIM. Calculation of the parameters of these models suggests surprisingly high innovation rates, comparable to the total domain birth (duplication) and elimination rates, particularly for prokaryotic genomes. Conclusions We show that a straightforward model of genome evolution, which does not explicitly include selection, is sufficient to explain the observed distributions of domain family sizes, in which power laws appear as asymptotic. However, for the model to be compatible with the data, there has to be a precise balance between domain birth, death and innovation rates, and this is likely to be maintained by selection. The developed approach is oriented at a mathematical description of evolution of domain composition of proteomes, but a simple reformulation could be applied to models of other evolving networks with preferential attachment. PMID:12379152
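The balanced-model result can be made concrete for the linear BDIM (a sketch of this special case only, not the general BDIM machinery): with duplication rate lam*i and deletion rate delta*i for a family of size i, detailed balance gives equilibrium frequencies f_i proportional to theta**i / i with theta = lam/delta, so the balanced case theta = 1 yields the power law f_i ~ 1/i.

```python
def linear_bdim_equilibrium(theta, imax):
    """Equilibrium family-size frequencies of a linear BDIM, truncated
    at imax. Detailed balance: f[i]*delta*i = f[i-1]*lam*(i-1) gives
    f_i proportional to theta**i / i, here normalized to sum to 1."""
    freq = [theta ** i / i for i in range(1, imax + 1)]
    total = sum(freq)
    return [f / total for f in freq]

f_bal = linear_bdim_equilibrium(1.0, 1000)   # balanced case: f_i ~ 1/i
```

In the balanced case the frequency of size-10 families is exactly ten times that of size-100 families, the signature of a power law with exponent -1; an unbalanced theta < 1 instead produces an exponentially decaying tail.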
Waveform Design for Wireless Power Transfer
NASA Astrophysics Data System (ADS)
Clerckx, Bruno; Bayguzina, Ekaterina
2016-12-01
Far-field Wireless Power Transfer (WPT) has attracted significant attention in recent years. Despite the rapid progress, the emphasis of the research community in the last decade has remained largely concentrated on improving the design of the energy harvester (the so-called rectenna) and has left aside the effect of transmitter design. In this paper, we study the design of the transmit waveform so as to enhance the DC power at the output of the rectenna. We derive a tractable model of the non-linearity of the rectenna and compare it with a linear model conventionally used in the literature. We then use those models to design novel multisine waveforms that are adaptive to the channel state information (CSI). Interestingly, while the linear model favours narrowband transmission with all the power allocated to a single frequency, the non-linear model favours a power allocation over multiple frequencies. Through realistic simulations, waveforms designed based on the non-linear model are shown to provide significant gains (in terms of harvested DC power) over those designed based on the linear model and over non-adaptive waveforms. We also compute analytically the theoretical scaling laws of the harvested energy for various waveforms as a function of the number of sinewaves and transmit antennas. Those scaling laws highlight the benefits of CSI knowledge at the transmitter in WPT and of a WPT design based on a non-linear rectenna model over a linear model. Results also motivate the study of a promising architecture relying on large-scale multisine multi-antenna waveforms for WPT. As a final note, results stress the importance of modeling and accounting for the non-linearity of the rectenna in any system design involving wireless power.
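The qualitative difference between the two rectenna models can be reproduced with a truncated Taylor (second- plus fourth-order) non-linearity, a common simplification of diode behaviour (the coefficients and tone layout below are hypothetical, not the paper's parameters):

```python
import numpy as np

def dc_proxy(n_tones, total_power=1.0, k2=1.0, k4=1.0):
    """Time-average of k2*y^2 + k4*y^4 for an in-phase multisine with
    n_tones equal-amplitude tones at fixed total power -- a truncated
    Taylor stand-in for the rectenna (diode) non-linearity."""
    t = np.linspace(0.0, 1.0, 200000, endpoint=False)   # integer periods
    amp = np.sqrt(2.0 * total_power / n_tones)
    y = sum(amp * np.cos(2 * np.pi * (f + 1) * t) for f in range(n_tones))
    return k2 * np.mean(y ** 2) + k4 * np.mean(y ** 4)

p1 = dc_proxy(1)   # single-tone waveform
p8 = dc_proxy(8)   # power spread over 8 in-phase tones
```

Under a purely linear (second-order-only) model the proxy is identical for any number of tones at fixed total power, whereas the fourth-order term rewards spreading power over many in-phase tones, mirroring the paper's contrast between the two models.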
Least squares reconstruction of non-linear RF phase encoded MR data.
Salajeghe, Somaie; Babyn, Paul; Sharp, Jonathan C; Sarty, Gordon E
2016-09-01
The numerical feasibility of reconstructing MRI signals generated by RF coils that produce B1 fields with a non-linearly varying spatial phase is explored. A global linear spatial phase variation of B1 is difficult to produce from current confined to RF coils. Here we use regularized least squares inversion, in place of the usual Fourier transform, to reconstruct signals generated in B1 fields with non-linear phase variation. RF encoded signals were simulated for three RF coil configurations: ideal linear, parallel conductors, and circular coil pairs. The simulated signals were reconstructed by Fourier transform and by regularized least squares. The Fourier reconstruction of simulated RF encoded signals from the parallel conductor coil set showed minor distortions over the reconstruction of signals from the ideal linear coil set, but the Fourier reconstruction of signals from the circular coil set produced severe geometric distortion. Least squares inversion in all cases produced reconstruction errors comparable to the Fourier reconstruction of the simulated signal from the ideal linear coil set. MRI signals encoded in B1 fields with non-linearly varying spatial phase may therefore be accurately reconstructed using regularized least squares, pointing the way to the use of simple RF coil designs for RF encoded MRI. Crown Copyright © 2016. Published by Elsevier Inc. All rights reserved.
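The reconstruction step, regularized least squares in place of the Fourier transform, reduces to a Tikhonov-regularized normal-equations solve (the encoding matrix below is a hypothetical 1-D toy with a non-linear spatial phase, not a simulated coil field):

```python
import numpy as np

def regularized_lstsq(A, b, lam=1e-2):
    """Tikhonov-regularized least squares:
    x = argmin ||A x - b||^2 + lam ||x||^2 = (A^H A + lam I)^{-1} A^H b."""
    n = A.shape[1]
    return np.linalg.solve(A.conj().T @ A + lam * np.eye(n), A.conj().T @ b)

# toy 1-D "RF encoding": 128 encodings of 32 pixels with a
# non-linearly varying spatial phase (hypothetical coil behaviour)
rng = np.random.default_rng(5)
npix, nenc = 32, 128
pos = np.linspace(-1.0, 1.0, npix)
phase = np.array([0.7 * k * (pos + 0.3 * pos ** 3) for k in range(nenc)])
A = np.exp(1j * np.pi * phase)
x_true = np.zeros(npix)
x_true[10:20] = 1.0                      # a simple box "object"
b = A @ x_true + 0.01 * (rng.standard_normal(nenc)
                         + 1j * rng.standard_normal(nenc))

x_hat = regularized_lstsq(A, b).real
```

A Fourier transform would implicitly assume linear phase ramps across the object; the least-squares solve uses the actual (non-linear-phase) encoding matrix and so avoids the geometric distortion described in the abstract.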
Lessons from Jurassic Park: patients as complex adaptive systems.
Katerndahl, David A
2009-08-01
With realization that non-linearity is generally the rule rather than the exception in nature, viewing patients and families as complex adaptive systems may lead to a better understanding of health and illness. Doctors who successfully practise the 'art' of medicine may recognize non-linear principles at work without having the jargon needed to label them. Complex adaptive systems are systems composed of multiple components that display complexity and adaptation to input. These systems consist of self-organized components, which display complex dynamics, ranging from simple periodicity to chaotic and random patterns showing trends over time. Understanding the non-linear dynamics of phenomena both internal and external to our patients can (1) improve our definition of 'health'; (2) improve our understanding of patients, disease and the systems in which they converge; (3) be applied to future monitoring systems; and (4) be used to possibly engineer change. Such a non-linear view of the world is quite congruent with the generalist perspective.
Regression dilution bias: tools for correction methods and sample size calculation.
Berglund, Lars
2012-08-01
Random errors in measurement of a risk factor will introduce downward bias of an estimated association to a disease or a disease marker. This phenomenon is called regression dilution bias. A bias correction may be made with data from a validity study or a reliability study. In this article we give a non-technical description of designs of reliability studies with emphasis on selection of individuals for a repeated measurement, assumptions of measurement error models, and correction methods for the slope in a simple linear regression model where the dependent variable is a continuous variable. Also, we describe situations where correction for regression dilution bias is not appropriate. The methods are illustrated with the association between insulin sensitivity measured with the euglycaemic insulin clamp technique and fasting insulin, where measurement of the latter variable carries noticeable random error. We provide software tools for estimation of a corrected slope in a simple linear regression model assuming data for a continuous dependent variable and a continuous risk factor from a main study and an additional measurement of the risk factor in a reliability study. Also, we supply programs for estimation of the number of individuals needed in the reliability study and for choice of its design. Our conclusion is that correction for regression dilution bias is seldom applied in epidemiological studies. This may cause important effects of risk factors with large measurement errors to be neglected.
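The slope correction from a reliability study can be sketched as follows: the naive slope is attenuated by the reliability ratio lambda = var(true)/var(observed), which a single repeat measurement lets us estimate (simulated data with hypothetical parameters; the authors' software additionally covers design and sample-size choices):

```python
import numpy as np

rng = np.random.default_rng(6)
n = 5000
x_true = rng.normal(0, 1, n)                  # true risk factor
noise_sd = 0.8                                # substantial measurement error
x1 = x_true + rng.normal(0, noise_sd, n)      # main-study measurement
x2 = x_true + rng.normal(0, noise_sd, n)      # repeat from reliability study
y = 0.5 * x_true + rng.normal(0, 0.3, n)      # continuous outcome, true slope 0.5

naive = np.cov(x1, y)[0, 1] / np.var(x1)      # attenuated (diluted) slope
# reliability ratio estimated from the repeated measurement
lam = np.cov(x1, x2)[0, 1] / (0.5 * (np.var(x1) + np.var(x2)))
corrected = naive / lam                        # regression dilution correction
```

With error variance 0.64 the naive slope is biased down by the factor 1/1.64, and dividing by the estimated reliability ratio recovers the true slope of 0.5.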
NASA Astrophysics Data System (ADS)
Manzoor, Ali; Rafique, Sajid; Usman Iftikhar, Muhammad; Mahmood Ul Hassan, Khalid; Nasir, Ali
2017-08-01
A piezoelectric vibration energy harvester (PVEH) consists of a cantilever bimorph with piezoelectric layers bonded to its top and bottom, which can harvest power from vibrations and feed low-power wireless sensor nodes through a power-conditioning circuit. In this paper, a non-linear conditioning circuit, consisting of a full-bridge rectifier followed by a buck-boost converter, is employed to investigate the electrical side of the energy harvesting system. An integrated mathematical model of the complete electromechanical system has been developed. Previously, researchers have studied PVEHs with sophisticated piezo-beam models but simplistic linear electrical loads, such as a resistor. In contrast, other researchers have worked on more complex non-linear circuits but with over-simplified piezo-beam models. Such models neglect aspects of the system that result from the complex interaction of its electrical and mechanical subsystems. In this work, the authors integrate the distributed-parameter piezo-beam model from the literature with a realistic non-linear electrical load, and then employ the integrated model to analyse the stability of the complete energy harvesting system. This yields a more realistic and useful electromechanical model than the simplistic linear circuit elements employed by many researchers.
Formal methods for modeling and analysis of hybrid systems
NASA Technical Reports Server (NTRS)
Tiwari, Ashish (Inventor); Lincoln, Patrick D. (Inventor)
2009-01-01
A technique based on the use of a quantifier elimination decision procedure for real closed fields and simple theorem proving to construct a series of successively finer qualitative abstractions of hybrid automata is taught. The resulting abstractions are always discrete transition systems which can then be used by any traditional analysis tool. The constructed abstractions are conservative and can be used to establish safety properties of the original system. The technique works on linear and non-linear polynomial hybrid systems: the guards on discrete transitions and the continuous flows in all modes can be specified using arbitrary polynomial expressions over the continuous variables. An exemplar tool in the SAL environment built over the theorem prover PVS is detailed. The technique scales well to large and complex hybrid systems.
NASA Astrophysics Data System (ADS)
Mohan, Nisha
Compliant foams are usually characterized by a wide range of desirable mechanical properties, including viscoelasticity at different temperatures, energy absorption, recoverability under cyclic loading, impact resistance, and thermal, electrical, acoustic and radiation resistance. Some foams contain nano-sized features and are used in small-scale devices. The characteristic dimensions of foams therefore span multiple length scales, making their mechanical properties difficult to model. Continuum-mechanics-based models capture some salient experimental features, such as the linear elastic regime followed by a non-linear plateau-stress regime, but they lack mesostructural physical detail. This makes them incapable of accurately predicting local peaks in stress and strain distributions, which significantly affect the deformation paths. Atomistic methods can capture the physical origins of deformation at smaller scales, but are computationally impractical for foams. Capturing deformation at the so-called meso-scale, which describes the phenomenon at a continuum level while retaining some physical insight, requires new theoretical approaches. A fundamental question that motivates the modeling of foams is: how can the intrinsic material response be extracted from simple mechanical test data, such as the stress vs. strain response? A 3D model was developed to simulate the mechanical response of foam-type materials. Its novel features include a hardening-softening-hardening material response, strain-rate dependence, and plastically compressible solids with plastic non-normality. Suggestive links from atomistic simulations of foams were borrowed to formulate a physically informed hardening material input function.
Motivated by a model that qualitatively captured the response of foam-type vertically aligned carbon nanotube (VACNT) pillars under uniaxial compression [2011, "Analysis of Uniaxial Compression of Vertically Aligned Carbon Nanotubes," J. Mech. Phys. Solids, 59, pp. 2227-2237; Erratum 60, 1753-1756 (2012)], the property-space exploration was extended to three types of simple mechanical tests: 1) uniaxial compression, 2) uniaxial tension, and 3) nanoindentation with a conical and a flat-punch tip. The simulations attempt to explain salient features in the experimental data, such as 1) the initial linear elastic response and 2) one or more non-linear instabilities, yielding, and hardening. The model-inherent relationships between the material properties and the overall stress-strain behavior were validated against the available experimental data. The material properties include the gradient in stiffness along the height, plastic and elastic compressibility, and hardening. Each of these tests was evaluated in terms of its efficiency in extracting material properties. The uniaxial simulation results proved to be a combination of structural and material influences. Of all the deformation paths, flat-punch indentation proved superior, since it is the most sensitive in capturing the material properties.
Rigatos, Gerasimos G
2016-06-01
It is proven that the model of the p53-mdm2 protein synthesis loop is differentially flat, and using a diffeomorphism (change of state variables) proposed by differential flatness theory it is shown that the protein synthesis model can be transformed into the canonical (Brunovsky) form. This enables the design of a feedback control law that maintains the concentration of the p53 protein at the desired levels. To estimate the non-measurable elements of the state vector describing the p53-mdm2 system dynamics, the derivative-free non-linear Kalman filter is used. Moreover, to compensate for modelling uncertainties and external disturbances that affect the p53-mdm2 system, the derivative-free non-linear Kalman filter is redesigned as a disturbance observer. The filter consists of the Kalman filter recursion applied to the linearised equivalent of the protein synthesis model, together with an inverse transformation based on differential flatness theory that retrieves estimates of the state variables of the initial non-linear model. The proposed non-linear feedback control and perturbation compensation method for the p53-mdm2 system can result in more efficient chemotherapy schemes in which the infusion of medication is better administered.
Ladstätter, Felix; Garrosa, Eva; Moreno-Jiménez, Bernardo; Ponsoda, Vicente; Reales Aviles, José Manuel; Dai, Junming
2016-01-01
Artificial neural networks are sophisticated modelling and prediction tools capable of extracting complex, non-linear relationships between predictor (input) and predicted (output) variables. This study explores this capacity by modelling non-linearities in the hardiness-modulated burnout process with a neural network. Specifically, two multi-layer feed-forward artificial neural networks are concatenated in an attempt to model the composite non-linear burnout process. Sensitivity analysis, a Monte Carlo-based global simulation technique, is then utilised to examine the first-order effects of the predictor variables on the burnout sub-dimensions and consequences. Results show that (1) this concatenated artificial neural network approach is feasible for modelling the burnout process, (2) sensitivity analysis is a fruitful method for studying the relative importance of predictor variables and (3) the relationships among variables involved in the development of burnout and its consequences are non-linear to varying degrees. Many relationships among variables (e.g., stressors and strains) are not linear, yet researchers use linear methods such as Pearson correlation or linear regression to analyse them. Artificial neural network analysis is an innovative method for analysing non-linear relationships and, in combination with sensitivity analysis, is superior to linear methods.
Mafart, P; Leguérinel, I; Couvert, O; Coroller, L
2010-08-01
The assessment and optimization of food heating processes require knowledge of the thermal resistance of target spores. Although the concept of spore resistance may seem simple, the establishment of a reliable quantification system for characterizing the heat resistance of spores has proven far more complex than imagined by early researchers. This paper points out the main difficulties encountered by reviewing the historical works on the subject. During an early period, the concept of individual spore resistance had not yet been considered and the resistance of a strain of spore-forming bacterium was related to a global population regarded as alive or dead. A second period was opened by the introduction of the well-known D parameter (decimal reduction time) associated with the previously introduced z-concept. The present period has introduced three new sources of complexity: consideration of non log-linear survival curves, consideration of environmental factors other than temperature, and awareness of the variability of resistance parameters. The occurrence of non log-linear survival curves makes spore resistance dependent on heating time. Consequently, spore resistance characterisation requires at least two parameters. While early resistance models took only heating temperature into account, new models consider other environmental factors such as pH and water activity ("horizontal extension"). Similarly the new generation of models also considers certain environmental factors of the recovery medium for quantifying "apparent heat resistance" ("vertical extension"). Because the conventional F-value is no longer additive in cases of non log-linear survival curves, the decimal reduction ratio should be preferred for assessing the efficiency of a heating process. Copyright 2010 Elsevier Ltd. All rights reserved.
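The D-value and z-value concepts reviewed above can be illustrated with two one-line formulas: under log-linear survival, the log count falls by one decade per D minutes of heating, and D itself varies log-linearly with temperature with slope -1/z (the Bigelow model). A minimal sketch with illustrative function names and reference values:

```python
def survivors(n0_log, t, D):
    """Log-linear survival: log10 N(t) = log10 N0 - t/D.

    n0_log : initial log10 spore count
    t      : heating time (min)
    D      : decimal reduction time at the heating temperature (min)
    """
    return n0_log - t / D

def d_at_temperature(D_ref, T_ref, T, z):
    """Bigelow z-model: D(T) = D_ref * 10**((T_ref - T) / z).

    z is the temperature rise needed to divide D by ten.
    """
    return D_ref * 10 ** ((T_ref - T) / z)
```

For example, with D = 1 min at a reference temperature of 121.1 deg C and z = 10 deg C, heating at 111.1 deg C gives D = 10 min; as the abstract notes, this two-parameter description breaks down once survival curves are no longer log-linear.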
Kumar, K Vasanth; Sivanesan, S
2006-08-25
Pseudo-second-order kinetic expressions of Ho, Sobkowski and Czerwinski, Blanchard et al. and Ritchie were fitted to the experimental kinetic data of malachite green adsorption onto activated carbon by non-linear and linear methods. The non-linear method was found to be a better way of obtaining the parameters involved in the second-order rate expressions. Both linear and non-linear regression showed that the Sobkowski and Czerwinski and the Ritchie pseudo-second-order models are equivalent. Non-linear regression analysis showed that Blanchard et al. and Ho share a similar idea of the pseudo-second-order model but with different assumptions. The best fit of the experimental data by Ho's pseudo-second-order expression, by both linear and non-linear regression, showed that it is a better kinetic expression than the other pseudo-second-order expressions. The amount of dye adsorbed at equilibrium, q(e), was predicted from Ho's pseudo-second-order expression and fitted to the Langmuir, Freundlich and Redlich-Peterson expressions by both linear and non-linear methods to obtain the pseudo-isotherms. The best-fitting pseudo-isotherms were found to be the Langmuir and Redlich-Peterson isotherms; Redlich-Peterson is a special case of Langmuir when the constant g equals unity.
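Ho's pseudo-second-order model and its common linearisation can be sketched as follows. The abstract compares linear and non-linear fitting; this minimal example uses only the linearised form t/qt = 1/(k*qe^2) + t/qe, with illustrative function names.

```python
def ho_pseudo_second_order(t, qt):
    """Fit Ho's pseudo-second-order kinetics via its linearised form.

    Model:   qt = (k * qe**2 * t) / (1 + k * qe * t)
    Linear:  t/qt = 1/(k * qe**2) + t/qe  -> regress (t/qt) on t.
    Returns (qe, k).
    """
    y = [ti / qi for ti, qi in zip(t, qt)]
    n = len(t)
    mt = sum(t) / n
    my = sum(y) / n
    slope = (sum((ti - mt) * (yi - my) for ti, yi in zip(t, y))
             / sum((ti - mt) ** 2 for ti in t))
    intercept = my - slope * mt
    qe = 1.0 / slope                  # slope = 1/qe
    k = slope ** 2 / intercept        # intercept = 1/(k * qe**2)
    return qe, k
```

On noiseless data the linearisation is exact; the abstract's point is that with real experimental error the linear transform distorts the error structure, so direct non-linear least squares on the model itself recovers better parameter estimates.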
Effect of non-linearity in predicting Doppler waveforms through a novel model
Gayasen, Aman; Dua, Sunil Kumar; Sengupta, Amit; Nagchoudhuri, D
2003-01-01
Background In pregnancy, the uteroplacental vascular system develops de novo locally in utero, accompanied by systemic haemodynamic and bio-rheological alterations. Any abnormality in this non-linear vascular system is believed to trigger the onset of serious morbid conditions such as pre-eclampsia and/or intrauterine growth restriction (IUGR); the exact aetiopathogenesis is unknown. Advances in non-invasive Doppler image analysis and in simulation incorporating non-linearities may unfold the complexities associated with the inaccessible uteroplacental vessels. Earlier modeling approaches approximated the system as linear. Method We propose a novel electrical model for the uteroplacental system that uses MOSFETs as non-linear elements in place of the traditional linear transmission-line (TL) model. The model to simulate Doppler flow velocity waveforms (FVWs) was designed using inputs from our non-linear mathematical model. By using the MOSFETs as voltage-controlled switches, a fair degree of controlled non-linearity is introduced into the model. Comparative analysis was done between the simulated data and actual Doppler FVWs. Results & Discussion Normal pregnancy has been successfully modeled, and the Doppler output waveforms are simulated for different gestational ages using the model. It is observed that the dicrotic notch disappears and the S/D ratio decreases as the pregnancy matures; both results are established clinical facts. The effects of blood density, viscosity and arterial-wall elasticity on the blood flow velocity profile were also studied. Spectral analysis of the model output (blood flow velocity) indicated that the total harmonic distortion (THD) falls during mid-gestation. Conclusion THD is found to be informative in determining feto-maternal health. The effects of blood density, viscosity and elasticity changes on the blood FVW are simulated.
Future work is expected to concentrate mainly on improving the load with respect to the varying non-linear parameters in the model. Heart rate variability, which accounts for vascular tone, should also be included. We also expect the model to initiate extensive clinical and experimental studies in the near future. PMID:14561227
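The total harmonic distortion statistic used in the spectral analysis above can be computed directly from one period of a sampled waveform: THD is the ratio of the root-sum-square of the higher-harmonic amplitudes to the fundamental amplitude. A minimal sketch with a pure-Python DFT (function names illustrative):

```python
import cmath
import math

def harmonic_amplitude(x, k):
    """Amplitude of the k-th harmonic of one full period of samples x (DFT bin k)."""
    N = len(x)
    c = sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
    return 2 * abs(c) / N

def thd(x, n_harmonics=5):
    """THD = sqrt(sum of squared harmonic amplitudes) / fundamental amplitude."""
    a1 = harmonic_amplitude(x, 1)
    higher = [harmonic_amplitude(x, k) for k in range(2, 2 + n_harmonics)]
    return math.sqrt(sum(a * a for a in higher)) / a1
```

A pure sinusoid gives THD near zero, while adding a 10% second harmonic gives THD of 0.1; the abstract's observation is that this ratio, computed from the simulated flow velocity waveform, dips during mid-gestation.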
Partner symmetries and non-invariant solutions of four-dimensional heavenly equations
NASA Astrophysics Data System (ADS)
Malykh, A. A.; Nutku, Y.; Sheftel, M. B.
2004-07-01
We extend our method of partner symmetries to the hyperbolic complex Monge-Ampère equation and the second heavenly equation of Plebański. We show the existence of partner symmetries and derive the relations between them. For certain simple choices of partner symmetries the resulting differential constraints together with the original heavenly equations are transformed to systems of linear equations by an appropriate Legendre transformation. The solutions of these linear equations are generically non-invariant. As a consequence we obtain explicitly new classes of heavenly metrics without Killing vectors.
NASA Technical Reports Server (NTRS)
Hada, Megumi; George, Kerry A.; Cucinotta, F. A.
2011-01-01
The relationship between biological effects and low doses of absorbed radiation is still uncertain, especially for high-LET radiation exposure. Risks from low doses and low dose rates are often extrapolated from Japanese atomic-bomb survivor data using either linear or linear-quadratic fits. In this study, chromosome aberrations were measured in human peripheral blood lymphocytes and normal skin fibroblast cells after exposure to very low doses (0.01-0.2 Gy) of 170 MeV/u Si-28 ions or 600 MeV/u Fe-56 ions. Chromosomes were analyzed using the whole-chromosome fluorescence in situ hybridization (FISH) technique during the first cell division after irradiation, and chromosome aberrations were identified as either simple exchanges (translocations and dicentrics) or complex exchanges (involving >2 breaks in 2 or more chromosomes). The dose-response curves were linear for doses above 0.1 Gy, where more than one ion traverses a cell. However, for doses below 0.1 Gy, Si-28 ions showed no dose response, suggesting a non-targeted effect when less than one ion traversal occurs. Additional findings for Fe-56 will be discussed.
A review on prognostic techniques for non-stationary and non-linear rotating systems
NASA Astrophysics Data System (ADS)
Kan, Man Shan; Tan, Andy C. C.; Mathew, Joseph
2015-10-01
The field of prognostics has attracted significant interest from the research community in recent times. Prognostics enables the prediction of failures in machines resulting in benefits to plant operators such as shorter downtimes, higher operation reliability, reduced operations and maintenance cost, and more effective maintenance and logistics planning. Prognostic systems have been successfully deployed for the monitoring of relatively simple rotating machines. However, machines and associated systems today are increasingly complex. As such, there is an urgent need to develop prognostic techniques for such complex systems operating in the real world. This review paper focuses on prognostic techniques that can be applied to rotating machinery operating under non-linear and non-stationary conditions. The general concept of these techniques, the pros and cons of applying these methods, as well as their applications in the research field are discussed. Finally, the opportunities and challenges in implementing prognostic systems and developing effective techniques for monitoring machines operating under non-stationary and non-linear conditions are also discussed.
Sahin, Rubina; Tapadia, Kavita
2015-01-01
The three widely used isotherms, Langmuir, Freundlich and Temkin, were examined in an experiment on fluoride (F⁻) ion adsorption onto a geo-material (limonite) at four different temperatures, using linear and non-linear models. A comparison of linear and non-linear regression models is given for selecting the optimum isotherm for the experimental results. The coefficient of determination, r², was used to select the best theoretical isotherm. The four Langmuir linear equations (1, 2, 3 and 4) are discussed. Langmuir isotherm parameters obtained from the four linear equations differed, but they were identical under the non-linear model. Of the linear forms, Langmuir-2 had the highest coefficient of determination (r² = 0.99) compared with the other Langmuir linear equations (1, 3 and 4), whereas under non-linear regression Langmuir-4 fitted best among all the isotherms, with the highest coefficient of determination (r² = 0.99). The results showed that the non-linear model may be a better way to obtain the parameters. In the present work, the thermodynamic parameters show that the adsorption of fluoride onto limonite is both spontaneous (ΔG < 0) and endothermic (ΔH > 0). Scanning electron microscopy and X-ray diffraction images also confirm the adsorption of F⁻ ions onto limonite. The isotherm and kinetic study reveals that limonite can be used as an adsorbent for fluoride removal. In the future, a cost-effective, eco-friendly large-scale fluoride-removal technology could be developed using limonite, which is easily available in the study area.
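Two of the Langmuir linearisations discussed above can be sketched as follows. On noiseless data they recover identical parameters; with experimental error they weight residuals differently, which is why the linear forms disagree while the non-linear fit does not. Function names are illustrative.

```python
def linfit(x, y):
    """Ordinary least squares: returns (slope, intercept)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return b, my - b * mx

def langmuir_1(Ce, qe):
    """Langmuir-1: Ce/qe = Ce/qm + 1/(KL*qm); slope = 1/qm, intercept = 1/(KL*qm)."""
    b, a = linfit(Ce, [c / q for c, q in zip(Ce, qe)])
    return 1 / b, b / a               # (qm, KL)

def langmuir_2(Ce, qe):
    """Langmuir-2: 1/qe = 1/qm + 1/(KL*qm*Ce); regress 1/qe on 1/Ce."""
    b, a = linfit([1 / c for c in Ce], [1 / q for q in qe])
    return 1 / a, a / b               # (qm, KL)
```

Both forms invert the same isotherm qe = qm*KL*Ce/(1 + KL*Ce), so any disagreement between their fitted qm and KL on real data reflects the error-weighting of the transform, not the model.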
Armey, Michael F; Crowther, Janis H
2008-02-01
Research has identified a significant increase in both the incidence and prevalence of non-suicidal self-injury (NSSI). The present study sought to test both linear and non-linear cusp catastrophe models by using aversive self-awareness, which was operationalized as a composite of aversive self-relevant affect and cognitions, and dissociation as predictors of NSSI. The cusp catastrophe model evidenced a better fit to the data, accounting for 6 times the variance (66%) of a linear model (9%-10%). These results support models of NSSI implicating emotion regulation deficits and experiential avoidance in the occurrence of NSSI and provide preliminary support for the use of cusp catastrophe models to study certain types of low base rate psychopathology such as NSSI. These findings suggest novel approaches to prevention and treatment of NSSI as well.
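The cusp catastrophe model referred to above has equilibria at the real roots of z³ - b·z - a = 0, with the two control variables (here, aversive self-awareness and dissociation) playing the roles of a and b; bimodal behaviour corresponds to the region with three equilibria. A minimal sketch of that bimodality condition via the cubic discriminant (variable names are illustrative):

```python
def cusp_equilibrium_count(a, b):
    """Number of equilibria of the cusp model dz/dt = -(z**3 - b*z - a).

    a : 'asymmetry' control variable (e.g. aversive self-awareness)
    b : 'bifurcation' control variable (e.g. dissociation)
    Three equilibria (two stable states, bimodal behaviour) occur
    exactly when the discriminant of the cubic is positive.
    """
    disc = 4 * b**3 - 27 * a**2
    if disc > 0:
        return 3          # bimodal region of the control plane
    if disc == 0:
        return 2          # boundary (fold) case
    return 1              # single stable state
```

Inside the three-equilibrium wedge of the (a, b) plane, small changes in the controls can produce sudden jumps between branches, which is what lets the cusp model capture low-base-rate, discontinuous outcomes such as NSSI better than a linear model.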
Mathematical Modeling of the Dynamics of Salmonella Cerro Infection in a US Dairy Herd
NASA Astrophysics Data System (ADS)
Chapagain, Prem; van Kessel, Jo Ann; Karns, Jeffrey; Wolfgang, David; Schukken, Ynte; Grohn, Yrjo
2006-03-01
Salmonellosis has been one of the major causes of human foodborne illness in the US. The high prevalence of infections makes transmission dynamics of Salmonella in a farm environment of interest both from animal and human health perspectives. Mathematical modeling approaches are increasingly being applied to understand the dynamics of various infectious diseases in dairy herds. Here, we describe the transmission dynamics of Salmonella infection in a dairy herd with a set of non-linear differential equations. Although the infection dynamics of different serotypes of Salmonella in cattle are likely to be different, we find that a relatively simple SIR-type model can describe the observed dynamics of the Salmonella enterica serotype Cerro infection in the herd.
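A minimal forward-Euler sketch of the kind of SIR-type system referred to above is given below. The parameter values and function names are illustrative, not the fitted herd model.

```python
def sir_step(S, I, R, beta, gamma, N, dt):
    """One forward-Euler step of a simple SIR model:
       dS/dt = -beta*S*I/N,  dI/dt = beta*S*I/N - gamma*I,  dR/dt = gamma*I
    """
    new_inf = beta * S * I / N * dt   # new infections this step
    rec = gamma * I * dt              # recoveries this step
    return S - new_inf, I + new_inf - rec, R + rec

def simulate(S0, I0, R0, beta, gamma, days, dt=0.1):
    """Integrate the SIR system for `days` time units; returns final (S, I, R)."""
    S, I, R = float(S0), float(I0), float(R0)
    N = S + I + R
    for _ in range(int(days / dt)):
        S, I, R = sir_step(S, I, R, beta, gamma, N, dt)
    return S, I, R
```

With beta > gamma the infection spreads through the herd and then burns out; fitting beta and gamma (and serotype-specific extensions) to longitudinal prevalence data is the kind of exercise the abstract describes for Salmonella Cerro.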
Numerical tests of local scale invariance in ageing q-state Potts models
NASA Astrophysics Data System (ADS)
Lorenz, E.; Janke, W.
2007-01-01
Much effort has been spent over recent years to achieve a coherent theoretical description of ageing as a non-linear dynamical process. Long supposed to be a consequence of the slow dynamics of glassy systems only, ageing phenomena have also been identified in the phase-ordering kinetics of simple ferromagnets. As a phenomenological approach, Henkel et al. developed a group of local scale transformations under which two-time autocorrelation and response functions should transform covariantly. This work extends previous numerical tests of the predicted scaling functions for the Ising model by Monte Carlo simulations of two-dimensional q-state Potts models with q = 3 and 8, which, in equilibrium, undergo temperature-driven phase transitions of second and first order, respectively.
Frequency-domain full-waveform inversion with non-linear descent directions
NASA Astrophysics Data System (ADS)
Geng, Yu; Pan, Wenyong; Innanen, Kristopher A.
2018-05-01
Full-waveform inversion (FWI) is a highly non-linear inverse problem, normally solved iteratively, with each iteration involving an update constructed through linear operations on the residuals. Incorporating a flexible degree of non-linearity within each update may have important consequences for convergence rates, determination of low model wavenumbers and discrimination of parameters. We examine one approach for doing so, wherein higher order scattering terms are included within the sensitivity kernel during the construction of the descent direction, adjusting it away from that of the standard Gauss-Newton approach. These scattering terms are naturally admitted when we construct the sensitivity kernel by varying not the current but the to-be-updated model at each iteration. Linear and/or non-linear inverse scattering methodologies allow these additional sensitivity contributions to be computed from the current data residuals within any given update. We show that in the presence of pre-critical reflection data, the error in a second-order non-linear update to a background of s0 is, in our scheme, proportional to at most (Δs/s0)^3 in the actual parameter jump Δs causing the reflection. In contrast, the error in a standard Gauss-Newton FWI update is proportional to (Δs/s0)^2. For numerical implementation of more complex cases, we introduce a non-linear frequency-domain scheme, with an inner and an outer loop. A perturbation is determined from the data residuals within the inner loop, and a descent direction based on the resulting non-linear sensitivity kernel is computed in the outer loop. We examine the response of this non-linear FWI using acoustic single-parameter synthetics derived from the Marmousi model.
The inverted results vary depending on data frequency ranges and initial models, but we conclude that the non-linear FWI has the capability to generate high-resolution model estimates in both shallow and deep regions, and to converge rapidly, relative to a benchmark FWI approach involving the standard gradient.
The entrainment matrix of a superfluid nucleon mixture at finite temperatures
NASA Astrophysics Data System (ADS)
Leinson, Lev B.
2018-06-01
We consider a closed system of non-linear equations for the entrainment matrix of a non-relativistic mixture of superfluid nucleons at arbitrary temperatures below the onset of neutron superfluidity, taking into account the essential dependence of the superfluid energy gap in the nucleon spectra on the velocities of the superfluid flows. It is assumed that the protons condense into the isotropic 1S0 state, while the neutrons pair in the spin-triplet 3P2 state. An analytic solution to the non-linear equations for the entrainment matrix is derived for temperatures just below the critical value for the onset of neutron superfluidity. In the general case of an arbitrary temperature of the superfluid mixture, the non-linear equations are solved numerically and fitted by simple formulas convenient for practical use with an arbitrary set of Landau parameters.
An Open-Source Galaxy Redshift Survey Simulator for next-generation Large Scale Structure Surveys
NASA Astrophysics Data System (ADS)
Seljak, Uros
Galaxy redshift surveys produce three-dimensional maps of the galaxy distribution. On large scales these maps trace the underlying matter fluctuations in a relatively simple manner, so that the properties of the primordial fluctuations along with the overall expansion history and growth of perturbations can be extracted. The BAO standard ruler method to measure the expansion history of the universe using galaxy redshift surveys is thought to be robust to observational artifacts and understood theoretically with high precision. These same surveys can offer a host of additional information, including a measurement of the growth rate of large scale structure through redshift space distortions, the possibility of measuring the sum of neutrino masses, tighter constraints on the expansion history through the Alcock-Paczynski effect, and constraints on the scale-dependence and non-Gaussianity of the primordial fluctuations. Extracting this broadband clustering information hinges on both our ability to minimize and subtract observational systematics to the observed galaxy power spectrum, and our ability to model the broadband behavior of the observed galaxy power spectrum with exquisite precision. Rapid development on both fronts is required to capitalize on WFIRST's data set. We propose to develop an open-source computational toolbox that will propel development in both areas by connecting large scale structure modeling and instrument and survey modeling with the statistical inference process. We will use the proposed simulator to both tailor perturbation theory and fully non-linear models of the broadband clustering of WFIRST galaxies and discover novel observables in the non-linear regime that are robust to observational systematics and able to distinguish between a wide range of spatial and dynamic biasing models for the WFIRST galaxy redshift survey sources. 
We have demonstrated the utility of this approach in a pilot study of the SDSS-III BOSS galaxies, in which we improved the redshift space distortion growth rate measurement precision by a factor of 2.5 using customized clustering statistics in the non-linear regime that were immunized against observational systematics. We look forward to addressing the unique challenges of modeling and empirically characterizing the WFIRST galaxies and observational systematics.
Kaiyala, Karl J.
2014-01-01
Mathematical models for the dependence of energy expenditure (EE) on body mass and composition are essential tools in metabolic phenotyping. EE scales over broad ranges of body mass as a non-linear allometric function. When considered within restricted ranges of body mass, however, allometric EE curves exhibit ‘local linearity.’ Indeed, modern EE analysis makes extensive use of linear models. Such models typically involve one or two body mass compartments (e.g., fat free mass and fat mass). Importantly, linear EE models typically involve a non-zero (usually positive) y-intercept term of uncertain origin, a recurring theme in discussions of EE analysis and a source of confounding in traditional ratio-based EE normalization. Emerging linear model approaches quantify whole-body resting EE (REE) in terms of individual organ masses (e.g., liver, kidneys, heart, brain). Proponents of individual organ REE modeling hypothesize that multi-organ linear models may eliminate non-zero y-intercepts. This could have advantages in adjusting REE for body mass and composition. Studies reveal that individual organ REE is an allometric function of total body mass. I exploit first-order Taylor linearization of individual organ REEs to model the manner in which individual organs contribute to whole-body REE and to the non-zero y-intercept in linear REE models. The model predicts that REE analysis at the individual organ-tissue level will not eliminate intercept terms. I demonstrate that the parameters of a linear EE equation can be transformed into the parameters of the underlying ‘latent’ allometric equation. This permits estimates of the allometric scaling of EE in a diverse variety of physiological states that are not represented in the allometric EE literature but are well represented by published linear EE analyses. PMID:25068692
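The origin of the positive intercept discussed above can be seen from a first-order Taylor expansion of the allometric law REE = aM^b about a reference mass M0 (a standard linearization, sketched here rather than the author's exact derivation):

```latex
\mathrm{REE}(M) \;\approx\; aM_0^{b} + abM_0^{b-1}\,(M - M_0)
              \;=\; \underbrace{(1-b)\,aM_0^{b}}_{\text{intercept}}
              \;+\; \underbrace{abM_0^{b-1}}_{\text{slope}}\, M
```

For the typical metabolic exponent 0 < b < 1, the intercept term (1-b)aM0^b is strictly positive, so a linear model fitted over a restricted mass range inherits a positive y-intercept even though the underlying allometric curve passes through the origin; summing such linearizations over individual organs leaves the total intercept non-zero, consistent with the model's prediction.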
Imprints of non-standard dark energy and dark matter models on the 21cm intensity map power spectrum
NASA Astrophysics Data System (ADS)
Carucci, Isabella P.; Corasaniti, Pier-Stefano; Viel, Matteo
2017-12-01
We study the imprint of non-standard dark energy (DE) and dark matter (DM) models on the 21cm intensity map power spectra from high-redshift neutral hydrogen (HI) gas. To this purpose we use halo catalogs from N-body simulations of dynamical DE models and DM scenarios which are as successful as the standard Cold Dark Matter model with Cosmological Constant (ΛCDM) at interpreting available cosmological observations. We limit our analysis to halo catalogs at redshift z=1 and 2.3 which are common to all simulations. For each catalog we model the HI distribution by using a simple prescription to associate the HI gas mass to N-body halos. We find that the DE models leave a distinct signature on the HI spectra across a wide range of scales, which correlates with differences in the halo mass function and the onset of the non-linear regime of clustering. In the case of the non-standard DM model significant differences of the HI spectra with respect to the ΛCDM model only arise from the suppressed abundance of low mass halos. These cosmological model dependent features also appear in the 21cm spectra. In particular, we find that future SKA measurements can distinguish the imprints of DE and DM models at high statistical significance.
Sub-cellular mRNA localization modulates the regulation of gene expression by small RNAs in bacteria
NASA Astrophysics Data System (ADS)
Teimouri, Hamid; Korkmazhan, Elgin; Stavans, Joel; Levine, Erel
2017-10-01
Small non-coding RNAs can exert significant regulatory activity on gene expression in bacteria. In recent years, substantial progress has been made in understanding bacterial gene expression by sRNAs. However, recent findings that demonstrate that families of mRNAs show non-trivial sub-cellular distributions raise the question of how localization may affect the regulatory activity of sRNAs. Here we address this question within a simple mathematical model. We show that the non-uniform spatial distributions of mRNA can alter the threshold-linear response that characterizes sRNAs that act stoichiometrically, and modulate the hierarchy among targets co-regulated by the same sRNA. We also identify conditions where the sub-cellular organization of cofactors in the sRNA pathway can induce spatial heterogeneity on sRNA targets. Our results suggest that under certain conditions, interpretation and modeling of natural and synthetic gene regulatory circuits need to take into account the spatial organization of the transcripts of participating genes.
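The threshold-linear response mentioned above emerges from stoichiometric co-degradation of mRNA and sRNA. Below is a crude well-mixed sketch (no spatial structure, so it omits the localization effects that are the paper's subject) that integrates the two-species kinetics to steady state; all rate names and values are illustrative.

```python
def srna_target_steady_state(alpha_m, alpha_s, beta_m=1.0, beta_s=1.0, k=50.0):
    """Steady-state target mRNA level under stoichiometric sRNA co-degradation.

    dm/dt = alpha_m - beta_m*m - k*m*s   (target mRNA)
    ds/dt = alpha_s - beta_s*s - k*m*s   (small RNA)
    Integrated to steady state by forward Euler (a crude sketch).
    """
    m = s = 0.0
    dt = 1e-3
    for _ in range(200_000):                 # ~200 time units, ample to relax
        dm = alpha_m - beta_m * m - k * m * s
        ds = alpha_s - beta_s * s - k * m * s
        m += dm * dt
        s += ds * dt
    return m
```

In the strong-coupling limit the steady-state mRNA level behaves like max(0, alpha_m - alpha_s)/beta_m: the target is nearly silenced below the threshold alpha_m = alpha_s and responds linearly above it, which is the baseline behaviour the paper shows is modulated by sub-cellular localization.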
NASA Astrophysics Data System (ADS)
Most, Sebastian; Nowak, Wolfgang; Bijeljic, Branko
2015-04-01
Fickian transport in groundwater flow is the exception rather than the rule. Transport in porous media is frequently simulated via particle methods, i.e. particle tracking random walk (PTRW) or continuous time random walk (CTRW). These methods formulate transport as a stochastic process of particle position increments. At the pore scale, geometry and micro-heterogeneities prohibit the commonly made assumption of independent and normally distributed increments to represent dispersion. Many recent particle methods seek to loosen this assumption. Hence, it is important to gain a better understanding of the processes at the pore scale. For our analysis we track the positions of 10,000 particles migrating through the pore space over time. The data we use come from micro-CT scans of a homogeneous sandstone and encompass about 10 grain sizes. Based on those images we discretize the pore structure and simulate flow at the pore scale based on the Navier-Stokes equations. This flow field realistically describes flow inside the pore space, so we do not need to add artificial dispersion during the transport simulation. Next, we use particle tracking random walk to simulate pore-scale transport. Finally, we use the obtained particle trajectories to perform a multivariate statistical analysis of particle motion at the pore scale. Our analysis is based on copulas. By Sklar's theorem, every multivariate joint distribution is a combination of its univariate marginal distributions and a copula; the copula represents the dependence structure of those univariate marginals and is therefore useful for observing correlation and non-Gaussian interactions (i.e. non-Fickian transport). The first goal of this analysis is to better understand the validity regions of commonly made assumptions. We investigate three different transport distances: 1) The distance beyond which the statistical dependence between particle increments can be modelled as an order-one Markov process.
This would be the Markovian distance for the process, where the validity of yet-unexplored non-Gaussian-but-Markovian random walks starts. 2) The distance where bivariate statistical dependence simplifies to multi-Gaussian dependence based on simple linear correlation (validity of correlated PTRW/CTRW). 3) The distance of complete statistical independence (validity of classical PTRW/CTRW). The second objective is to reveal the characteristic dependencies that influence transport the most. Those dependencies can be very complex. Copulas are highly capable of representing linear as well as non-linear dependence. With that tool we are able to detect persistent characteristics dominating transport even across different scales. The results derived from our experimental data set suggest that there are many more non-Fickian aspects of pore-scale transport than the univariate statistics of longitudinal displacements reveal. Non-Fickianity can also be found in transverse displacements, and in the relations between increments at different time steps. Also, the observed dependence is non-linear (i.e. beyond simple correlation) and persists over long distances. Thus, our results strongly support the further refinement of techniques like correlated PTRW or correlated CTRW towards non-linear statistical relations.
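The copula-based diagnostic can be sketched in a few lines: rank-transforming two increment series to uniform margins gives the empirical-copula coordinates, on which a Spearman-type correlation and a joint-tail statistic can be compared. The heavy-tailed synthetic series below is an illustrative stand-in for the pore-scale trajectory increments described above:

```python
import numpy as np

rng = np.random.default_rng(0)

def to_uniform_margins(x):
    """Rank-transform a sample to (0,1): the empirical-copula coordinates."""
    ranks = np.argsort(np.argsort(x))
    return (ranks + 0.5) / len(x)

# synthetic stand-in for successive displacement increments:
# heavy-tailed and correlated (real data would come from particle tracking)
z = rng.standard_t(df=3, size=(2, 5000))
dx1 = z[0]
dx2 = 0.6 * z[0] + 0.8 * z[1]

u, v = to_uniform_margins(dx1), to_uniform_margins(dx2)
spearman = np.corrcoef(u, v)[0, 1]                 # rank (copula) correlation
tail = np.mean((u > 0.95) & (v > 0.95)) / 0.05**2  # upper-tail clustering
print(round(spearman, 2), round(tail, 2))
```

A tail statistic near 1 indicates independence of joint extremes; values well above the Gaussian-copula prediction flag the kind of non-Gaussian dependence (non-Fickian behavior) discussed in the abstract.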
NASA Astrophysics Data System (ADS)
Ke, Haohao; Ondov, John M.; Rogge, Wolfgang F.
2013-12-01
Composite chemical profiles of motor vehicle emissions were extracted from ambient measurements at a near-road site in Baltimore during a windless traffic episode in November 2002, using four independent approaches: simple peak analysis, windless-model-based linear regression, PMF, and UNMIX. Although the profiles are in general agreement, the windless-model-based profile treatment more effectively removes interference from non-traffic sources and is deemed to be more accurate for many species. In addition to abundances of routine pollutants (e.g., NOx, CO, PM2.5, EC, OC, sulfate, and nitrate), 11 particle-bound metals and 51 individual traffic-related organic compounds (including n-alkanes, PAHs, oxy-PAHs, hopanes, alkylcyclohexanes, and others) were included in the modeling.
Bread dough rheology: Computing with a damage function model
NASA Astrophysics Data System (ADS)
Tanner, Roger I.; Qi, Fuzhong; Dai, Shaocong
2015-01-01
We describe an improved damage function model for bread dough rheology. The model has relatively few parameters, all of which can easily be found from simple experiments. Small deformations in the linear region are described by a gel-like power-law memory function. A set of large non-reversing deformations (stress relaxation after a step of shear, steady shearing and elongation beginning from rest, and biaxial stretching) is used to test the model. With the introduction of a revised strain measure that includes a Mooney-Rivlin term, all of these motions can be well described by the damage function described in previous papers. For reversing step strains, larger-amplitude oscillatory shearing and recoil, reasonable predictions have been found. The numerical methods used are discussed and we give some examples.
Assessment of the Uniqueness of Wind Tunnel Strain-Gage Balance Load Predictions
NASA Technical Reports Server (NTRS)
Ulbrich, N.
2016-01-01
A new test was developed to assess the uniqueness of wind tunnel strain-gage balance load predictions that are obtained from regression models of calibration data. The test helps balance users gain confidence in load predictions of non-traditional balance designs. It also makes it possible to better evaluate load predictions of traditional balances that are not used as originally intended. The test works for both the Iterative and Non-Iterative Methods that are used in the aerospace testing community for the prediction of balance loads. It is based on the hypothesis that the total number of independently applied balance load components must always match the total number of independently measured bridge outputs or bridge output combinations. This hypothesis is supported by a control volume analysis of the inputs and outputs of a strain-gage balance. It is concluded from the control volume analysis that the loads and bridge outputs of a balance calibration data set must separately be tested for linear independence, because it cannot always be guaranteed that a linearly independent load component set will result in linearly independent bridge output measurements. Simple linear math models for the loads and bridge outputs, in combination with the variance inflation factor, are used to test for linear independence. A unique and reversible mapping between the applied load component set and the measured bridge output set is guaranteed to exist if the maximum variance inflation factor of both sets is less than the literature-recommended threshold of five. Data from the calibration of a six-component force balance is used to illustrate the application of the new test to real-world data.
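The variance-inflation-factor check described above is straightforward to compute: regress each column of the load (or bridge-output) matrix on the remaining columns and take VIF = 1/(1 - R²). A sketch on synthetic data, using the threshold of five quoted in the abstract:

```python
import numpy as np

def max_vif(x):
    """Maximum variance inflation factor over the columns of x.
    VIF_j = 1 / (1 - R_j^2), where R_j^2 comes from regressing
    column j on the remaining columns (with an intercept)."""
    x = np.asarray(x, float)
    n, p = x.shape
    vifs = []
    for j in range(p):
        a = np.column_stack([np.delete(x, j, axis=1), np.ones(n)])
        coef, *_ = np.linalg.lstsq(a, x[:, j], rcond=None)
        resid = x[:, j] - a @ coef
        r2 = 1.0 - resid.var() / x[:, j].var()
        vifs.append(1.0 / (1.0 - r2))
    return max(vifs)

rng = np.random.default_rng(1)
loads = rng.normal(size=(200, 6))         # near-independent load components
print(max_vif(loads) < 5.0)               # passes the uniqueness test
loads[:, 5] = loads[:, 0] + 0.05 * rng.normal(size=200)  # near-dependence
print(max_vif(loads) < 5.0)               # now fails
```

The same function would be applied separately to the load set and to the bridge-output set, as the abstract requires.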
The cerebellum for jocks and nerds alike.
Popa, Laurentiu S; Hewitt, Angela L; Ebner, Timothy J
2014-01-01
Historically the cerebellum has been implicated in the control of movement. However, the cerebellum's role in non-motor functions, including cognitive and emotional processes, has also received increasing attention. Starting from the premise that the uniform architecture of the cerebellum underlies a common mode of information processing, this review examines recent electrophysiological findings on the motor signals encoded in the cerebellar cortex and then relates these signals to observations in the non-motor domain. Simple spike firing of individual Purkinje cells encodes performance errors, both predicting upcoming errors as well as providing feedback about those errors. Further, this dual temporal encoding of prediction and feedback involves a change in the sign of the simple spike modulation. Therefore, Purkinje cell simple spike firing both predicts and responds to feedback about a specific parameter, consistent with computing sensory prediction errors in which the predictions about the consequences of a motor command are compared with the feedback resulting from the motor command execution. These new findings are in contrast with the historical view that complex spikes encode errors. Evaluation of the kinematic coding in the simple spike discharge shows the same dual temporal encoding, suggesting this is a common mode of signal processing in the cerebellar cortex. Decoding analyses show the considerable accuracy of the predictions provided by Purkinje cells across a range of times. Further, individual Purkinje cells encode linearly and independently a multitude of signals, both kinematic and performance errors. Therefore, the cerebellar cortex's capacity to make associations across different sensory, motor and non-motor signals is large. 
The results from studying how Purkinje cells encode movement signals suggest that the cerebellar cortex circuitry can support associative learning, sequencing, working memory, and forward internal models in non-motor domains.
2011-01-01
Background: Design of newly engineered microbial strains for biotechnological purposes would greatly benefit from the development of realistic mathematical models for the processes to be optimized. Such models can then be analyzed and, with the development and application of appropriate optimization techniques, one could identify the modifications that need to be made to the organism in order to achieve the desired biotechnological goal. As appropriate models to perform such an analysis are necessarily non-linear and typically non-convex, finding their global optimum is a challenging task. Canonical modeling techniques, such as Generalized Mass Action (GMA) models based on the power-law formalism, offer a possible solution to this problem because they have a mathematical structure that enables the development of specific algorithms for global optimization. Results: Based on the GMA canonical representation, we have developed in previous works a highly efficient optimization algorithm and a set of related strategies for understanding the evolution of adaptive responses in cellular metabolism. Here, we explore the possibility of recasting kinetic non-linear models into an equivalent GMA model, so that global optimization can be performed on the recast GMA model. With this technique, optimization is greatly facilitated and the results are transposable to the original non-linear problem. This procedure is straightforward for a particular class of non-linear models known as Saturable and Cooperative (SC) models that extend the power-law formalism to deal with saturation and cooperativity. Conclusions: Our results show that recasting non-linear kinetic models into GMA models is indeed an appropriate strategy that helps overcome some of the numerical difficulties that arise during the global optimization task. PMID:21867520
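Recasting can be illustrated on a toy saturable rate law: introducing an auxiliary variable for the denominator turns the rational rate into a product of power laws (GMA form) whose trajectory is identical to the original. The specific rate law and parameters below are illustrative, not taken from the paper:

```python
import numpy as np

# original saturable kinetics: dS/dt = a - V*S/(K + S)
a, V, K = 2.0, 4.0, 1.0
dt, steps = 1e-3, 5000

s = 0.5
for _ in range(steps):
    s += dt * (a - V * s / (K + s))

# recast GMA form: introduce the auxiliary variable z = K + S, so that the
# rate becomes the power-law product V * S^1 * z^-1, with dz/dt = dS/dt
s2, z = 0.5, K + 0.5
for _ in range(steps):
    ds = a - V * s2 * z**-1.0
    s2 += dt * ds
    z += dt * ds

print(abs(s - s2) < 1e-9)   # identical trajectories
```

The recast system contains only products of power laws, which is exactly the structure the GMA global-optimization machinery exploits.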
Anderson, Carl A; McRae, Allan F; Visscher, Peter M
2006-07-01
Standard quantitative trait loci (QTL) mapping techniques commonly assume that the trait is both fully observed and normally distributed. When considering survival or age-at-onset traits these assumptions are often incorrect. Methods have been developed to map QTL for survival traits; however, they are both computationally intensive and not available in standard genome analysis software packages. We propose a grouped linear regression method for the analysis of continuous survival data. Using simulation we compare this method to both the Cox and Weibull proportional hazards models and a standard linear regression method that ignores censoring. The grouped linear regression method is of equivalent power to both the Cox and Weibull proportional hazards methods and is significantly better than the standard linear regression method when censored observations are present. The method is also robust to the proportion of censored individuals and the underlying distribution of the trait. On the basis of linear regression methodology, the grouped linear regression model is computationally simple and fast and can be implemented readily in freely available statistical software.
Non-linear models for the detection of impaired cerebral blood flow autoregulation.
Chacón, Max; Jara, José Luis; Miranda, Rodrigo; Katsogridakis, Emmanuel; Panerai, Ronney B
2018-01-01
The ability to discriminate between normal and impaired dynamic cerebral autoregulation (CA), based on measurements of spontaneous fluctuations in arterial blood pressure (BP) and cerebral blood flow (CBF), has considerable clinical relevance. We studied 45 normal subjects at rest and under hypercapnia induced by breathing a mixture of carbon dioxide and air. Non-linear models with BP as input and CBF velocity (CBFV) as output were implemented with support vector machines (SVM), using separate recordings for learning and validation. Dynamic SVM implementations used either moving-average or autoregressive structures. The efficiency of dynamic CA was estimated from the model's derived CBFV response to a step change in BP as an autoregulation index, for both linear and non-linear models. Non-linear models with recurrences (autoregressive) showed the best results, with CA indexes of 5.9 ± 1.5 in normocapnia and 2.5 ± 1.2 in hypercapnia, with an area under the receiver operating characteristic curve of 0.955. The high performance achieved by non-linear SVM models in detecting deterioration of dynamic CA should encourage further assessment of their applicability to clinical conditions where CA might be impaired.
Hamid, Ka; Yusoff, An; Rahman, Mza; Mohamad, M; Hamid, Aia
2012-04-01
This fMRI study is about modelling the effective connectivity between Heschl's gyrus (HG) and the superior temporal gyrus (STG) in human primary auditory cortices. MATERIALS & METHODS: Ten healthy male participants were required to listen to white noise stimuli during functional magnetic resonance imaging (fMRI) scans. Statistical parametric mapping (SPM) was used to generate individual and group brain activation maps. For input region determination, two intrinsic connectivity models comprising bilateral HG and STG were constructed using dynamic causal modelling (DCM). The models were estimated and inferred using DCM, while Bayesian Model Selection (BMS) for group studies was used for model comparison and selection. Based on the winning model, six linear and six non-linear causal models were derived and were again estimated, inferred, and compared to obtain a model that best represents the effective connectivity between HG and STG, balancing accuracy and complexity. Group results indicated significant asymmetrical activation (p(uncorr) < 0.001) in bilateral HG and STG. Model comparison results showed strong evidence of STG as the input centre. The winning model was preferred by 6 out of 10 participants. The results were supported by BMS results for group studies with an expected posterior probability r = 0.7830 and exceedance probability ϕ = 0.9823. One-sample t-tests performed on connection values obtained from the winning model indicated that the valid connections for the winning model are the unidirectional parallel connections from STG to bilateral HG (p < 0.05). Subsequent model comparison between linear and non-linear models using BMS preferred the non-linear connection (r = 0.9160, ϕ = 1.000), in which the connectivity between STG and the ipsi- and contralateral HG is gated by the activity in STG itself.
We are able to demonstrate that the effective connectivity between HG and STG while listening to white noise for the respective participants can be explained by a non-linear dynamic causal model with the activity in STG influencing the STG-HG connectivity non-linearly.
Stress modeling in colloidal dispersions undergoing non-viscometric flows
NASA Astrophysics Data System (ADS)
Dolata, Benjamin; Zia, Roseanna
2017-11-01
We present a theoretical study of the stress tensor for a colloidal dispersion undergoing non-viscometric flow. In such flows, the non-homogeneous suspension stress depends not only on the local average total stresslet (the sum of the symmetric first moments of both the hydrodynamic traction and the interparticle force) but also on the average quadrupole, octupole, and higher-order moments. To compute the average moments, we formulate a six-dimensional Smoluchowski equation governing the microstructural evolution of a suspension in an arbitrary fluid velocity field. Under the conditions of rheologically slow flow, where the Brownian relaxation of the particles is much faster than the spatiotemporal evolution of the flow, the Smoluchowski equation permits asymptotic solution, revealing a suspension stress that follows a second-order fluid constitutive model. We obtain a reciprocal theorem and utilize it to show that all constitutive parameters of the second-order fluid model may be obtained from two simpler linear-response problems: a suspension undergoing simple shear and a suspension undergoing isotropic expansion. The consequences of relaxing the assumption of rheologically slow flow, including the appearance of memory and microcontinuum behaviors, are discussed.
Roberts, Steven; Martin, Michael A
2007-06-01
The majority of studies that have investigated the relationship between particulate matter (PM) air pollution and mortality have assumed a linear dose-response relationship and have used either a single day's PM or a 2- or 3-day moving average of PM as the measure of PM exposure. Both of these modeling choices have come under scrutiny in the literature: the linear assumption because it does not allow for non-linearities in the dose-response relationship, and the use of the single- or multi-day moving-average PM measure because it does not allow for differential PM-mortality effects spread over time. These two problems have been dealt with on a piecemeal basis, with non-linear dose-response models used in some studies and distributed lag models (DLMs) used in others. In this paper, we propose a method for investigating the shape of the PM-mortality dose-response relationship that combines a non-linear dose-response model with a DLM. This combined model is shown to produce satisfactory estimates of the PM-mortality dose-response relationship in situations where non-linear dose-response models and DLMs alone do not; that is, the combined model did not systematically underestimate or overestimate the effect of PM on mortality. The combined model is applied to ten cities in the US and a pooled dose-response model is formed. When fitted with a change-point value of 60 µg/m³, the pooled model provides evidence for a positive association between PM and mortality. The combined model produced larger estimates for the effect of PM on mortality than when using a non-linear dose-response model or a DLM in isolation. For the combined model, the estimated percentage increases in mortality at PM concentrations of 25 and 75 µg/m³ were 3.3% and 5.4%, respectively. In contrast, the corresponding values from a DLM used in isolation were 1.2% and 3.5%, respectively.
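The combination of a change-point dose-response term with a distributed-lag structure can be sketched on synthetic data. Here a hinge (change-point) transform of PM is entered at several lags and the lag weights are recovered by ordinary least squares; this stands in for the time-series regression models actually used in such studies, and all values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
T, lags, cp = 2000, 4, 60.0            # days, lag span, change-point (ug/m3)
pm = rng.gamma(4.0, 15.0, size=T)      # synthetic PM series
excess = np.maximum(pm - cp, 0.0)      # change-point (hinge) dose term

# distributed-lag design matrix: hinge(PM) at lags 0..3
# (drop the first `lags` rows, which np.roll wraps around)
X = np.column_stack([np.roll(excess, l) for l in range(lags)])[lags:]
true_w = np.array([0.04, 0.03, 0.02, 0.01])
deaths = 50.0 + X @ true_w + rng.normal(0.0, 1.0, size=T - lags)

A = np.column_stack([np.ones(T - lags), X])
coef, *_ = np.linalg.lstsq(A, deaths, rcond=None)
print(np.round(coef[1:], 3))           # recovered lag weights
```

A model with the hinge term alone (no lags) or the lags alone (linear dose) would misattribute part of the effect, which is the point the abstract makes about using either component in isolation.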
A moist Boussinesq shallow water equations set for testing atmospheric models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zerroukat, M., E-mail: mohamed.zerroukat@metoffice.gov.uk; Allen, T.
The shallow water equations have long been used as an initial test for numerical methods applied to atmospheric models, with the test suite of Williamson et al. being used extensively for validating new schemes and assessing their accuracy. However, the lack of physics forcing within this simplified framework often requires numerical techniques to be reworked when applied to fully three-dimensional models. In this paper a novel two-dimensional shallow water equations system that retains moist processes is derived. This system is derived from a three-dimensional Boussinesq approximation of the hydrostatic Euler equations where, unlike the classical shallow water set, we allow the density to vary slightly with temperature. This results in extra (or buoyancy) terms in the momentum equations, through which a two-way moist-physics dynamics feedback is achieved. The temperature and moisture variables are advected as separate tracers with sources that interact with the mean flow through a simplified yet realistic bulk moist-thermodynamic phase-change model. This moist shallow water system provides a unique tool to assess the usually complex and highly non-linear dynamics–physics interactions in atmospheric models in a simple yet realistic way. The full non-linear shallow water equations are solved numerically on several case studies and the results suggest quite realistic interaction between the dynamics and physics, and in particular the generation of cloud and rain. - Highlights: • Novel shallow water equations which retain moist processes are derived from the three-dimensional hydrostatic Boussinesq equations. • The new shallow water set can be seen as a more general one, where the classical equations are a special case of these equations. • This moist shallow water system naturally allows a feedback mechanism from the moist physics increments to the momentum via buoyancy.
• Like full models, temperature and moisture are advected as tracers that interact through a simplified yet realistic phase-change model. • This model is a unique tool to test numerical methods for atmospheric models, and physics–dynamics coupling, in a very realistic and simple way.
Modification of 2-D Time-Domain Shallow Water Wave Equation using Asymptotic Expansion Method
NASA Astrophysics Data System (ADS)
Khairuman, Teuku; Nasruddin, MN; Tulus; Ramli, Marwan
2018-01-01
Generally, research on tsunami wave propagation models can be conducted using a linear model of shallow water theory, where the non-linear terms at high order are ignored. In line with research on the investigation of tsunami waves, the Boussinesq equation model underwent a change aimed at obtaining an improved quality of the dispersion relation and non-linearity by increasing the order. To handle the non-linear terms at high order, an asymptotic expansion method is used; this method can be applied to non-linear partial differential equations. In the present work, we found that this method requires considerable computational time and memory as the number of elements increases.
Stochastic Mixing Model with Power Law Decay of Variance
NASA Technical Reports Server (NTRS)
Fedotov, S.; Ihme, M.; Pitsch, H.
2003-01-01
Here we present a simple stochastic mixing model based on the law of large numbers (LLN). The reason why the LLN is involved in our formulation of the mixing problem is that the random conserved scalar c = c(t, x(t)) appears to behave as a sample mean. It converges to the mean value μ, while the variance σ_c^2(t) decays approximately as t^(-1). Since the variance of the scalar decays faster than that of a sample mean (the decay exponent is typically greater than unity), we introduce some non-linear modifications into the corresponding pdf equation. The main idea is to develop a robust model that is independent of restrictive assumptions about the shape of the pdf. The remainder of this paper is organized as follows. In Section 2 we derive the integral equation from a stochastic difference equation describing the evolution of the pdf of a passive scalar in time. The stochastic difference equation introduces an exchange rate γ_n, which we model in a first step as a deterministic function. In a second step, we generalize γ_n as a stochastic variable, taking fluctuations in the inhomogeneous environment into account. In Section 3 we solve the non-linear integral equation numerically and analyze the influence of the different parameters on the decay rate. The paper finishes with a conclusion.
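The LLN picture (the scalar behaving as a running sample mean, so that its variance decays like t^(-1)) can be checked with a few lines of simulation; the ensemble below is a generic illustration, not the paper's mixing model:

```python
import numpy as np

rng = np.random.default_rng(3)

# N independent scalar "particles"; at each step every particle absorbs a
# fresh sample, so c_i(t) is a running sample mean and Var[c](t) ~ 1/t
N, T = 20000, 64
samples = rng.normal(loc=1.0, scale=2.0, size=(T, N))
running_mean = np.cumsum(samples, axis=0) / np.arange(1, T + 1)[:, None]
var = running_mean.var(axis=1)

# variance at t = 16 and t = 64 should differ by a factor of about 4
print(round(var[15] / var[63], 2))
```

A scalar whose variance decays faster than this 1/t benchmark is precisely the situation that motivates the non-linear modifications of the pdf equation described above.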
DOE Office of Scientific and Technical Information (OSTI.GOV)
Capela, Fabio; Ramazanov, Sabir, E-mail: fc403@cam.ac.uk, E-mail: Sabir.Ramazanov@ulb.ac.be
At large scales and for sufficiently early times, dark matter is described as a pressureless perfect fluid (dust) non-interacting with Standard Model fields. These features are captured by a simple model with two scalars: a Lagrange multiplier and another playing the role of the velocity potential. That model arises naturally in some gravitational frameworks, e.g., the mimetic dark matter scenario. We consider an extension of the model by means of higher derivative terms, such that the dust solutions are preserved at the background level, but there is a non-zero sound speed at the linear level. We associate this Modified Dust with dark matter, and study the linear evolution of cosmological perturbations in that picture. The most prominent effect is the suppression of their power spectrum for sufficiently large cosmological momenta. This can be relevant in view of the problems that cold dark matter faces at sub-galactic scales, e.g., the missing satellites problem. At even shorter scales, however, perturbations of Modified Dust are enhanced compared to the predictions of more common particle dark matter scenarios. This is a peculiarity of their evolution in a radiation-dominated background. We also briefly discuss clustering of Modified Dust. We write the system of equations in the Newtonian limit, and sketch a possible mechanism which could prevent the appearance of caustic singularities. The same mechanism may be relevant in light of the core-cusp problem.
Accelerated Edge-Preserving Image Restoration Without Boundary Artifacts
Matakos, Antonios; Ramani, Sathish; Fessler, Jeffrey A.
2013-01-01
To reduce blur in noisy images, regularized image restoration methods have been proposed that use non-quadratic regularizers (like l1 regularization or total-variation) that suppress noise while preserving edges in the image. Most of these methods assume a circulant blur (periodic convolution with a blurring kernel) that can lead to wraparound artifacts along the boundaries of the image due to the implied periodicity of the circulant model. Using a non-circulant model could prevent these artifacts at the cost of increased computational complexity. In this work we propose to use a circulant blur model combined with a masking operator that prevents wraparound artifacts. The resulting model is non-circulant, so we propose an efficient algorithm using variable splitting and augmented Lagrangian (AL) strategies. Our variable splitting scheme, when combined with the AL framework and alternating minimization, leads to simple linear systems that can be solved non-iteratively using FFTs, eliminating the need for more expensive CG-type solvers. The proposed method can also efficiently tackle a variety of convex regularizers including edge-preserving (e.g., total-variation) and sparsity promoting (e.g., l1 norm) regularizers. Simulation results show fast convergence of the proposed method, along with improved image quality at the boundaries where the circulant model is inaccurate. PMID:23372080
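The key computational point above, that the quadratic augmented-Lagrangian subproblem with a circulant blur diagonalizes under the FFT and is solvable non-iteratively, can be sketched in 1-D. The signal, kernel, and penalty weight below are illustrative, and the splitting variable is set to zero for simplicity:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 256
h = np.zeros(n); h[:9] = 1.0 / 9.0               # circulant (periodic) blur
x_true = np.repeat(rng.normal(size=8), n // 8)   # piecewise-constant signal
y = np.real(np.fft.ifft(np.fft.fft(h) * np.fft.fft(x_true)))
y += 0.01 * rng.normal(size=n)

# quadratic AL subproblem (H^T H + mu*I) x = H^T y + mu*v: with H circulant
# every term diagonalizes in the Fourier domain, so no CG-type solver is needed
mu, v = 0.1, np.zeros(n)       # v: the splitting variable (zero in this sketch)
H = np.fft.fft(h)
rhs = np.conj(H) * np.fft.fft(y) + mu * np.fft.fft(v)
x = np.real(np.fft.ifft(rhs / (np.abs(H) ** 2 + mu)))
print(round(float(np.mean((x - x_true) ** 2)), 3))   # reconstruction error
```

In the full method this FFT solve is one alternating-minimization step, with the masking operator and the regularizer handled by the other splitting variables.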
Virtual Design of a Controller for a Hydraulic Cam Phasing System
NASA Astrophysics Data System (ADS)
Schneider, Markus; Ulbrich, Heinz
2010-09-01
Hydraulic vane cam phasing systems are nowadays widely used for improving the performance of combustion engines. At stationary operation, these systems should maintain a constant phasing angle, which, however, is strongly disturbed by the alternating torque generated by the valve actuation. As the hydraulic system shows a non-linear characteristic over the full operating range and the inductance of the hydraulic pipes generates a significant time delay, a full model-based control design becomes very complex. Therefore a simple feed-forward controller is designed, bridging the time delay of the hydraulic system and improving the system behaviour significantly.
An Alternative Derivation of the Energy Levels of the "Particle on a Ring" System
NASA Astrophysics Data System (ADS)
Vincent, Alan
1996-10-01
All acceptable wave functions must be continuous mathematical functions. This criterion limits the acceptable functions for a particle in a linear 1-dimensional box to sine functions. If, however, the linear box is bent round into a ring, acceptable wave functions are those which are continuous at the 'join'. On this model, some wave functions that are acceptable for the linear box become unacceptable for the ring, and some cosine functions that were unacceptable become acceptable. This approach can be used to produce a straightforward derivation of the energy levels and wave functions of the particle on a ring. These simple wave-mechanical systems can be used as models of linear and cyclic delocalised systems such as conjugated hydrocarbons or the benzene ring. The promotion energy of an electron can then be used to calculate the wavelength of absorption of UV light. The simple model gives results of the correct order of magnitude and shows that, as the chain length increases, the UV maximum moves to longer wavelengths, as found experimentally.
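The resulting levels are E_n = n²ħ²/(2mr²) with n = 0, ±1, ±2, …. Applying them to benzene treated as a free-electron ring (the ring radius below is an assumed textbook-style value) gives a UV absorption wavelength of the right order of magnitude, as the abstract states:

```python
# particle-on-a-ring levels: E_n = n^2 * hbar^2 / (2 m r^2), n = 0, ±1, ±2, ...
hbar = 1.054571817e-34    # J s
m_e = 9.1093837015e-31    # kg
h, c = 6.62607015e-34, 2.99792458e8

r = 1.39e-10              # assumed benzene ring radius (~C-C bond length), m

def energy(n):
    return n**2 * hbar**2 / (2.0 * m_e * r**2)

# six pi electrons fill n = 0 and the doubly degenerate n = ±1 levels,
# so the lowest promotion is n = 1 -> n = 2
delta_e = energy(2) - energy(1)
lam = h * c / delta_e
print(round(lam * 1e9))   # → 210 (nm), the correct order of magnitude
```

Benzene's actual strong UV bands lie near 180-255 nm, so the simple ring model indeed lands in the right range without any adjustable parameters.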
Vaeth, Michael; Skovlund, Eva
2004-06-15
For a given regression problem it is possible to identify a suitably defined equivalent two-sample problem such that the power or sample size obtained for the two-sample problem also applies to the regression problem. For a standard linear regression model the equivalent two-sample problem is easily identified, but for generalized linear models and for Cox regression models the situation is more complicated. An approximately equivalent two-sample problem may, however, also be identified here. In particular, we show that for logistic regression and Cox regression models the equivalent two-sample problem is obtained by selecting two equally sized samples for which the parameters differ by a value equal to the slope times twice the standard deviation of the independent variable and further requiring that the overall expected number of events is unchanged. In a simulation study we examine the validity of this approach to power calculations in logistic regression and Cox regression models. Several different covariate distributions are considered for selected values of the overall response probability and a range of alternatives. For the Cox regression model we consider both constant and non-constant hazard rates. The results show that in general the approach is remarkably accurate even in relatively small samples. Some discrepancies are, however, found in small samples with few events and a highly skewed covariate distribution. Comparison with results based on alternative methods for logistic regression models with a single continuous covariate indicates that the proposed method is at least as good as its competitors. The method is easy to implement and therefore provides a simple way to extend the range of problems that can be covered by the usual formulas for power and sample size determination. Copyright 2004 John Wiley & Sons, Ltd.
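The recipe above can be sketched numerically: the equivalent two-group difference is delta = slope × 2 × SD(x), and power follows from a standard normal approximation. The standard error 2/sqrt(events) for the equal-groups log-hazard-ratio contrast is a textbook approximation assumed here, not the paper's exact derivation:

```python
import math

def normal_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def power_equiv_two_sample(slope, sd_x, n_events):
    """Approximate power (two-sided 5% test) for a Cox/logistic slope via
    the equivalent two-sample problem: two equal groups whose log-hazard
    (or log-odds) differ by delta = slope * 2 * sd_x, holding the total
    expected number of events fixed."""
    delta = slope * 2.0 * sd_x
    se = 2.0 / math.sqrt(n_events)     # approx. se of the group contrast
    z_crit = 1.959963984540054         # two-sided 5% normal quantile
    return normal_cdf(abs(delta) / se - z_crit)

# power grows with the slope, the covariate spread, and the event count
print(round(power_equiv_two_sample(0.3, 1.0, 200), 2))
```

This reproduces the qualitative behavior the abstract reports: accuracy degrades only when events are few or the covariate distribution is highly skewed, where the normal approximation is strained.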
Kriener, Birgit; Helias, Moritz; Rotter, Stefan; Diesmann, Markus; Einevoll, Gaute T
2013-01-01
Pattern formation, i.e., the generation of an inhomogeneous spatial activity distribution in a dynamical system with translation invariant structure, is a well-studied phenomenon in neuronal network dynamics, specifically in neural field models. These are population models to describe the spatio-temporal dynamics of large groups of neurons in terms of macroscopic variables such as population firing rates. Though neural field models are often deduced from and equipped with biophysically meaningful properties, a direct mapping to simulations of individual spiking neuron populations is rarely considered. Neurons have a distinct identity defined by their action on their postsynaptic targets. In its simplest form they act either excitatorily or inhibitorily. When the distribution of neuron identities is assumed to be periodic, pattern formation can be observed, given the coupling strength is supracritical, i.e., larger than a critical weight. We find that this critical weight is strongly dependent on the characteristics of the neuronal input, i.e., depends on whether neurons are mean- or fluctuation-driven, and different limits in linearizing the full non-linear system apply in order to assess stability. In particular, if neurons are mean-driven, the linearization has a very simple form and becomes independent of both the fixed point firing rate and the variance of the input current, while in the very strongly fluctuation-driven regime the fixed point rate, as well as the input mean and variance, are important parameters in the determination of the critical weight. We demonstrate that, interestingly, even in "intermediate" regimes, when the system is technically fluctuation-driven, the simple linearization neglecting the variance of the input can yield the better prediction of the critical coupling strength.
We moreover analyze the effects of structural randomness by rewiring individual synapses or redistributing weights, as well as coarse-graining on the formation of inhomogeneous activity patterns.
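The notion of a critical coupling weight can be illustrated with a linear rate model on a ring, a drastically simplified stand-in for the spiking network studied above; the Mexican-hat coupling profile, network size, and gain are illustrative assumptions.

```python
# Linear stability of a ring rate network dr/dt = -r + w * K r with a
# periodic center-surround (Mexican-hat-like) coupling profile K.
# The homogeneous state loses stability, and a spatial pattern forms,
# once w exceeds the critical weight w_c = 1 / max Re(eig(K)).
import numpy as np

N = 64
x = np.arange(N)
d = np.minimum(x, N - x).astype(float)   # distance on the ring
profile = np.exp(-(d / 3.0) ** 2) - 0.5 * np.exp(-(d / 9.0) ** 2)
K = np.stack([np.roll(profile, i) for i in range(N)])  # circulant matrix

lam_max = np.linalg.eigvals(K).real.max()
w_c = 1.0 / lam_max                      # critical coupling weight

def max_growth_rate(w):
    """Largest eigenvalue of the linearized dynamics -1 + w*K."""
    return (-1.0 + w * np.linalg.eigvals(K).real).max()

stable = max_growth_rate(0.9 * w_c)      # negative: homogeneous state stable
unstable = max_growth_rate(1.1 * w_c)    # positive: pattern-forming mode grows
```

In the mean-driven limit described in the abstract, the gain entering such a linearization is simply the slope of the rate transfer function, which is what makes the criterion independent of the input variance.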
Hemanth, M; Deoli, Shilpi; Raghuveer, H P; Rani, M S; Hegde, Chatura; Vedavathi, B
2015-09-01
Simulation of the periodontal ligament (PDL) using non-linear finite element method (FEM) analysis gives better insight into the biology of tooth movement. The stresses in the PDL were evaluated for intrusion and lingual root torque using non-linear properties. A three-dimensional (3D) FEM model of the maxillary incisors was generated using SolidWorks modeling software, and stresses in the PDL were evaluated for intrusive and lingual root torque movements by 3D FEM using ANSYS software. These stresses were compared between linear and non-linear analyses. With linear properties, the stress distribution over the PDL for both intrusive and lingual root torque movements was within the range of optimal stress values proposed by Lee, but exceeded the force system given by Proffit as optimal for orthodontic tooth movement. When the same force load was applied in the non-linear analysis, stresses were higher than in the linear analysis and exceeded the optimal stress range proposed by Lee for both intrusive and lingual root torque. To reach the same stresses as in the linear analysis, iterations were performed with non-linear properties and the force level was reduced. This shows that the force level required for non-linear analysis is lower than that for linear analysis.
Non-linear Frequency Shifts, Mode Couplings, and Decay Instability of Plasma Waves
NASA Astrophysics Data System (ADS)
Affolter, Mathew; Anderegg, F.; Driscoll, C. F.; Valentini, F.
2015-11-01
We present experiments and theory for non-linear plasma wave decay to longer wavelengths, in both the oscillatory coupling and exponential decay regimes. The experiments are conducted on non-neutral plasmas in cylindrical Penning-Malmberg traps, where θ-symmetric standing plasma waves have near-acoustic dispersion ω(kz) ~ kz - αkz², discretized by kz = mz(π/Lp). Large amplitude waves exhibit non-linear frequency shifts δf/f ~ A² and Fourier harmonic content, both of which increase as the plasma dispersion is reduced. Non-linear coupling rates are measured between large amplitude mz = 2 waves and small amplitude mz = 1 waves, which have a small detuning Δω = 2ω1 - ω2. At small excitation amplitudes, this detuning causes the mz = 1 mode amplitude to "bounce" at rate Δω, with amplitude excursions ΔA1 ~ δn2/n0 consistent with cold fluid theory and Vlasov simulations. At larger excitation amplitudes, where the non-linear coupling exceeds the dispersion, phase-locked exponential growth of the mz = 1 mode is observed, in qualitative agreement with simple 3-wave instability theory. However, significant variations are observed experimentally, and N-wave theory gives strikingly divergent predictions that depend sensitively on the dispersion-moderated harmonic content. Measurements on higher temperature Langmuir waves and the unusual "EAW" (KEEN) waves are being conducted to investigate the effects of wave-particle kinetics on the non-linear coupling rates. Department of Energy Grants DE-SC0002451 and DE-SC0008693.
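The transition between the "bounce" and exponential-growth regimes can be illustrated with a toy parametric-coupling model. This is a generic two-mode sketch with an arbitrary coupling g and detuning δ, not the paper's 3-wave or N-wave equations: a daughter mode driven by a fixed pump grows exponentially when coupling exceeds detuning and stays bounded otherwise.

```python
# Toy parametric model da/dt = -i*delta*a + g*conj(a) for a daughter
# wave driven by a fixed pump. When the coupling g exceeds the detuning
# delta the amplitude grows exponentially; otherwise it "bounces"
# (bounded oscillation). Illustrative stand-in, not the paper's theory.
import numpy as np

def evolve(g, delta, t_end=4.0, dt=1e-3):
    a = 0.01 + 0.0j                      # small initial daughter amplitude
    f = lambda z: -1j * delta * z + g * np.conj(z)
    for _ in range(int(t_end / dt)):
        # classical 4th-order Runge-Kutta step
        k1 = f(a); k2 = f(a + 0.5 * dt * k1)
        k3 = f(a + 0.5 * dt * k2); k4 = f(a + dt * k3)
        a += dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    return abs(a)

grown = evolve(g=2.0, delta=1.0)     # g > delta: exponential growth
bounced = evolve(g=0.5, delta=1.0)   # g < delta: bounded "bounce"
```

The linearized growth rate is sqrt(g² - δ²), so the threshold is exactly the point where non-linear coupling exceeds the dispersion-induced detuning, mirroring the regime change reported above.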
NASA Technical Reports Server (NTRS)
Hada, M.; George, Kerry; Cucinotta, Francis A.
2011-01-01
The relationship between biological effects and low doses of absorbed radiation is still uncertain, especially for high-LET radiation exposure. Estimates of risks from low doses and low dose-rates are often extrapolated using data from Japanese atomic bomb survivors with either linear or linear-quadratic models of fit. In this study, chromosome aberrations were measured in human peripheral blood lymphocytes and normal skin fibroblast cells after exposure to very low doses (1-20 cGy) of 170 MeV/u Si-28 ions or 600 MeV/u Fe-56 ions. Chromosomes were analyzed using the whole-chromosome fluorescence in situ hybridization (FISH) technique during the first cell division after irradiation, and chromosome aberrations were identified as either simple exchanges (translocations and dicentrics) or complex exchanges (involving greater than 2 breaks in 2 or more chromosomes). The curves for doses above 10 cGy were fitted with linear or linear-quadratic functions. For Si-28 ions, no dose response was observed in the 2-10 cGy dose range, suggesting a non-targeted effect in this range.
Circuit transients due to negative bias arcs-II. [on solar cell power systems in low earth orbit
NASA Technical Reports Server (NTRS)
Metz, R. N.
1986-01-01
Two new models of negative-bias arcing on a solar cell power system in Low Earth Orbit are presented. One is an extended, analytical model and the other is a non-linear, numerical model. The models are based on an earlier analytical model in which the interactions between solar cell interconnects and the space plasma, as well as the parameters of the power circuit, are approximated linearly. Transient voltages due to arcs struck at the negative terminal of the solar panel are calculated in the time domain. The new models treat, respectively, further linear effects within the solar panel load circuit and non-linear effects associated with the plasma interactions. Results of computer calculations with the models show common-mode voltage transients of the electrically floating solar panel struck by an arc comparable to the earlier model, but load transients that differ substantially from it. In particular, load transients of the non-linear model can be more than twice as great as those of the earlier model and more than twenty times as great as those of the extended, linear model.
Linear Reconstruction of Non-Stationary Image Ensembles Incorporating Blur and Noise Models
1998-03-01
[OCR-damaged front matter; recoverable content:] AFIT dissertation AFIT/DS/ENG/98-06, "Linear Reconstruction of Non-Stationary Image Ensembles Incorporating Blur and Noise Models," by Captain Stephen D. Ford. The surviving excerpt notes compensation for phase distortions due to noise, which leads to less deblurring as noise increases [41], and that, in contrast, the vector Wiener filter incorporates additional information (text truncated).
Longitudinal train dynamics model for a rail transit simulation system
Wang, Jinghui; Rakha, Hesham A.
2018-01-01
The paper develops a longitudinal train dynamics model in support of microscopic railway transportation simulation. The model can be calibrated without any mechanical data, making it ideal for implementation in transportation simulators. The calibration and validation work is based on data collected from the Portland light rail train fleet. The calibration procedure is mathematically formulated as a constrained non-linear optimization problem. The validity of the model is assessed by comparing instantaneous model predictions against field observations, and also evaluated in the domains of acceleration/deceleration versus speed and acceleration/deceleration versus distance. A test is conducted to investigate the adequacy of the model in simulation implementation. The results demonstrate that the proposed model can adequately capture instantaneous train dynamics, and provides good performance in the simulation test. Thus, the model provides a simple theoretical foundation for microscopic simulators and will significantly support the planning, management and control of railway transportation systems.
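Calibration of a train dynamics model as a constrained non-linear least-squares problem can be sketched as follows. The model form (power-limited traction minus quadratic resistance), parameter values, and synthetic data are illustrative assumptions, not the paper's calibrated Portland light-rail model.

```python
# Sketch of calibrating a simple train dynamics model
#   a(v) = min(P/(m*v), a_max) - (c0 + c2*v^2)/m
# from observed (speed, acceleration) pairs via constrained
# non-linear least squares. All numbers are illustrative.
import numpy as np
from scipy.optimize import least_squares

m, P, a_max = 100e3, 600e3, 1.2      # mass (kg), power (W), accel cap (m/s^2)
c0_true, c2_true = 2000.0, 8.0       # "unknown" resistance coefficients

def accel(v, c0, c2):
    traction = np.minimum(P / (m * np.maximum(v, 0.1)), a_max)
    return traction - (c0 + c2 * v**2) / m

v_obs = np.linspace(1.0, 25.0, 60)
a_obs = accel(v_obs, c0_true, c2_true)   # noise-free synthetic observations

# bounded (constrained) fit recovers the resistance coefficients
res = least_squares(lambda c: accel(v_obs, c[0], c[1]) - a_obs,
                    x0=[500.0, 1.0], bounds=([0.0, 0.0], [1e4, 50.0]))
c0_hat, c2_hat = res.x
```

With field data in place of the synthetic observations, the same bounded least-squares formulation yields a calibration that needs no mechanical drawings, which is the point the abstract makes.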
NASA Astrophysics Data System (ADS)
Sun, Xiao; Chai, Guobei; Liu, Wei; Bao, Wenzhuo; Zhao, Xiaoning; Ming, Delie
2018-02-01
Simple cells in primary visual cortex are believed to extract local edge information from a visual scene. In this paper, inspired by different receptive field properties and visual information flow paths of neurons, an improved Combination of Receptive Fields (CORF) model combined with non-classical receptive fields was proposed to simulate the responses of simple cell's receptive fields. Compared to the classical model, the proposed model is able to better imitate simple cell's physiologic structure with consideration of facilitation and suppression of non-classical receptive fields. And on this base, an edge detection algorithm as an application of the improved CORF model was proposed. Experimental results validate the robustness of the proposed algorithm to noise and background interference.
Reliability Analysis of the Gradual Degradation of Semiconductor Devices.
1983-07-20
[OCR-damaged report excerpt; recoverable content:] The relevant background appears under the heading of linear models or linear statistical models, which is not used in this report. Assuming catastrophic failure, the system loss formula is first modified and the actual analysis then proceeds. Failure times are tabulated per device and are easily analyzed by simple linear regression, under an assumed log-normal/Arrhenius activation model.
NASA Astrophysics Data System (ADS)
Uhlemann, C.; Codis, S.; Hahn, O.; Pichon, C.; Bernardeau, F.
2017-08-01
The analytical formalism to obtain the probability distribution functions (PDFs) of spherically averaged cosmic densities and velocity divergences in the mildly non-linear regime is presented. A large-deviation principle is applied to those cosmic fields, assuming their most likely dynamics in spheres is set by the spherical collapse model. We validate our analytical results using state-of-the-art dark matter simulations with a phase-space resolved velocity field, finding 2 per cent level agreement for a wide range of velocity divergences and densities in the mildly non-linear regime (~10 Mpc h-1 at redshift zero), usually inaccessible to perturbation theory. From the joint PDF of densities and velocity divergences measured in two concentric spheres, we extract with the same accuracy velocity profiles and conditional velocity PDFs subject to a given over/underdensity that are of interest to understand the non-linear evolution of velocity flows. Both PDFs are used to build a simple but accurate maximum likelihood estimator for the redshift evolution of the variance of both the density and velocity divergence fields, which has smaller relative errors than the sample variances when non-linearities appear. Given the dependence of the velocity divergence on the growth rate, there is a significant gain in using the full knowledge of both PDFs to derive constraints on the equation of state of dark energy. Thanks to the insensitivity of the velocity divergence to bias, its PDF can be used to obtain unbiased constraints on the growth of structures (σ8, f), or it can be combined with the galaxy density PDF to extract bias parameters.
NASA Astrophysics Data System (ADS)
Vitelli, Vincenzo
2012-02-01
Non-linear sound is an extreme phenomenon typically observed in solids after violent explosions. But granular media are different. Right when they unjam, these fragile and disordered solids exhibit vanishing elastic moduli and sound speed, so that even tiny mechanical perturbations form supersonic shocks. Here, we perform simulations in which two-dimensional jammed granular packings are continuously compressed, and demonstrate that the resulting excitations are strongly nonlinear shocks, rather than linear waves. We capture the full dependence of the shock speed on pressure and compression speed by a surprisingly simple analytical model. We also treat shear shocks within a simplified viscoelastic model of nearly-isostatic random networks comprised of harmonic springs. In this case, anharmonicity does not originate locally from nonlinear interactions between particles, as in granular media; instead, it emerges from the global architecture of the network. As a result, the diverging width of the shear shocks bears a nonlinear signature of the diverging isostatic length associated with the loss of rigidity in these floppy networks.
Hysteresis, regime shifts, and non-stationarity in aquifer recharge-storage-discharge systems
NASA Astrophysics Data System (ADS)
Klammler, Harald; Jawitz, James; Annable, Michael; Hatfield, Kirk; Rao, Suresh
2016-04-01
Based on physical principles and geological information we develop a parsimonious aquifer model for Silver Springs, one of the largest karst springs in Florida. The model structure is linear and time-invariant with recharge, aquifer head (storage) and spring discharge as dynamic variables at the springshed (landscape) scale. Aquifer recharge is the hydrological driver with trends over a range of time scales from seasonal to multi-decadal. The freshwater-saltwater interaction is considered as a dynamic storage mechanism. Model results and observed time series show that aquifer storage causes significant rate-dependent hysteretic behavior between aquifer recharge and discharge. This leads to variable discharge per unit recharge over time scales up to decades, which may be interpreted as a gradual and cyclic regime shift in the aquifer drainage behavior. Based on field observations, we further amend the aquifer model by assuming vegetation growth in the spring run to be inversely proportional to stream velocity and to hinder stream flow. This simple modification introduces non-linearity into the dynamic system, for which we investigate the occurrence of rate-independent hysteresis and of different possible steady states with respective regime shifts between them. Results may contribute towards explaining observed non-stationary behavior potentially due to hydrological regime shifts (e.g., triggered by gradual, long-term changes in recharge or single extreme events) or long-term hysteresis (e.g., caused by aquifer storage). This improved understanding of the springshed hydrologic response dynamics is fundamental for managing the ecological, economic and social aspects at the landscape scale.
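The rate-dependent hysteresis between recharge and discharge that the abstract attributes to aquifer storage can be reproduced with the simplest possible linear reservoir. The parameter values and the sinusoidal recharge below are illustrative, not the Silver Springs calibration.

```python
# Minimal linear reservoir: dS/dt = R(t) - k*S, discharge Q = k*S.
# Under seasonal recharge, storage delays discharge, producing a
# hysteresis loop in the recharge-discharge plane. Numbers are
# illustrative (arbitrary units), not the Silver Springs model.
import numpy as np

k, dt = 0.2, 0.01
t = np.arange(0.0, 40.0, dt)
R = 1.0 + 0.5 * np.sin(2 * np.pi * t / 10.0)   # seasonal recharge

S = np.empty_like(t)
S[0] = R[0] / k                                # start at mean steady state
for i in range(1, len(t)):
    S[i] = S[i-1] + dt * (R[i-1] - k * S[i-1]) # forward-Euler integration
Q = k * S                                      # spring discharge

# discharge lags recharge: peak Q follows peak R within a cycle
lag = (np.argmax(Q[-1000:]) - np.argmax(R[-1000:])) * dt
```

The analytic phase lag for this system is arctan(ω/k)/ω ≈ 2 time units for these parameters, so discharge per unit recharge varies through the cycle even though the system is linear and time-invariant, exactly the rate-dependent hysteresis described above.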
Advanced statistics: linear regression, part II: multiple linear regression.
Marill, Keith A
2004-01-01
The applications of simple linear regression in medical research are limited, because in most situations, there are multiple relevant predictor variables. Univariate statistical techniques such as simple linear regression use a single predictor variable, and they often may be mathematically correct but clinically misleading. Multiple linear regression is a mathematical technique used to model the relationship between multiple independent predictor variables and a single dependent outcome variable. It is used in medical research to model observational data, as well as in diagnostic and therapeutic studies in which the outcome is dependent on more than one factor. Although the technique generally is limited to data that can be expressed with a linear function, it benefits from a well-developed mathematical framework that yields unique solutions and exact confidence intervals for regression coefficients. Building on Part I of this series, this article acquaints the reader with some of the important concepts in multiple regression analysis. These include multicollinearity, interaction effects, and an expansion of the discussion of inference testing, leverage, and variable transformations to multivariate models. Examples from the first article in this series are expanded on using a primarily graphic, rather than mathematical, approach. The importance of the relationships among the predictor variables and the dependence of the multivariate model coefficients on the choice of these variables are stressed. Finally, concepts in regression model building are discussed.
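The multicollinearity issue discussed above can be demonstrated in a few lines. The synthetic data and coefficients are illustrative; the example shows how omitting one of two correlated predictors changes the estimated slope of the other.

```python
# Multiple linear regression via ordinary least squares, with two
# correlated predictors to illustrate multicollinearity: the estimated
# slope for x1 depends on whether the correlated x2 enters the model.
import numpy as np

rng = np.random.default_rng(0)
n = 200
x1 = rng.normal(size=n)
x2 = 0.9 * x1 + 0.1 * rng.normal(size=n)       # highly collinear with x1
y = 2.0 + 1.0 * x1 + 0.5 * x2 + 0.1 * rng.normal(size=n)

X_full = np.column_stack([np.ones(n), x1, x2])   # multivariate model
beta_full, *_ = np.linalg.lstsq(X_full, y, rcond=None)

X_simple = np.column_stack([np.ones(n), x1])     # x2 omitted
beta_simple, *_ = np.linalg.lstsq(X_simple, y, rcond=None)
# with x2 omitted, the x1 slope absorbs most of x2's effect
# (roughly 1.0 + 0.5 * 0.9 = 1.45 instead of 1.0)
```

This is the "mathematically correct but clinically misleading" univariate situation the article warns about: the simple regression slope is a biased estimate of the direct effect of x1.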
Non-Linear Concentration-Response Relationships between Ambient Ozone and Daily Mortality.
Bae, Sanghyuk; Lim, Youn-Hee; Kashima, Saori; Yorifuji, Takashi; Honda, Yasushi; Kim, Ho; Hong, Yun-Chul
2015-01-01
Ambient ozone (O3) concentration has been reported to be significantly associated with mortality. However, the linearity of the relationship and the presence of a threshold have been controversial. The aim of the present study was to examine the concentration-response relationship and threshold of the association between ambient O3 concentration and non-accidental mortality in 13 Japanese and Korean cities from 2000 to 2009. We selected Japanese and Korean cities with populations of over 1 million. We constructed Poisson regression models adjusting for daily mean temperature, daily mean PM10, humidity, time trend, season, year, day of the week, holidays and yearly population. The association between O3 concentration and mortality was examined using linear, spline and linear-threshold models. The thresholds were estimated for each city by constructing linear-threshold models. We also examined the city-combined association using a generalized additive mixed model. The mean O3 concentration did not differ greatly between Korea and Japan (26.2 ppb and 24.2 ppb, respectively). Seven out of 13 cities showed better fits for the spline model than for the linear model, supporting a non-linear relationship between O3 concentration and mortality. All 7 of these cities showed J- or U-shaped associations, suggesting the existence of thresholds. The range of city-specific thresholds was from 11 to 34 ppb. The city-combined analysis also showed a non-linear association with a threshold around 30-40 ppb. We observed a non-linear concentration-response relationship with thresholds between daily mean ambient O3 concentration and the daily number of non-accidental deaths in Japanese and Korean cities.
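The linear-threshold ("hockey-stick") model used for the city-specific estimates can be sketched with a grid search. This is an illustrative least-squares version on noise-free synthetic data, not the paper's Poisson regression; the threshold, slope, and concentration range are made up.

```python
# Linear-threshold fit: the (log) rate is flat below a threshold t and
# rises linearly above it. The threshold is recovered by grid search.
# Synthetic, noise-free data; numbers are illustrative.
import numpy as np

t_true, a, b = 30.0, 0.0, 0.01
o3 = np.linspace(0.0, 60.0, 121)                   # ppb
log_rate = a + b * np.maximum(o3 - t_true, 0.0)    # deterministic "data"

def sse(t):
    """Residual sum of squares of a linear fit above candidate threshold t."""
    x = np.maximum(o3 - t, 0.0)
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, log_rate, rcond=None)
    r = log_rate - X @ beta
    return r @ r

grid = np.arange(5.0, 55.0, 1.0)
t_hat = grid[np.argmin([sse(t) for t in grid])]    # profile over thresholds
```

In practice the same profiling idea is applied to the Poisson likelihood with the confounder adjustments listed above; only the hinge term max(O3 - t, 0) carries the threshold.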
Fish robotics and hydrodynamics
NASA Astrophysics Data System (ADS)
Lauder, George
2010-11-01
Studying the fluid dynamics of locomotion in freely-swimming fishes is challenging due to difficulties in controlling fish behavior. To provide better control over fish-like propulsive systems we have constructed a variety of fish-like robotic test platforms that range from highly biomimetic models of fins to simple physical models of body movements during aquatic locomotion. First, we have constructed a series of biorobotic models of fish pectoral fins with 5 fin rays that allow detailed study of fin motion, forces, and fluid dynamics associated with fin-based locomotion. We find that by tuning fin ray stiffness and the imposed motion program we can produce thrust on both the fin outstroke and instroke. Second, we are using a robotic flapping foil system to study the self-propulsion of flexible plastic foils of varying stiffness, length, and trailing edge shape as a means of investigating the fluid dynamic effect of simple changes in the properties of undulating bodies moving through water. We find unexpected non-linear stiffness-dependent effects of changing foil length on self-propelled speed, as well as significant effects of trailing edge shape on foil swimming speed.
Non-linear models for the detection of impaired cerebral blood flow autoregulation
Miranda, Rodrigo; Katsogridakis, Emmanuel
2018-01-01
The ability to discriminate between normal and impaired dynamic cerebral autoregulation (CA), based on measurements of spontaneous fluctuations in arterial blood pressure (BP) and cerebral blood flow (CBF), has considerable clinical relevance. We studied 45 normal subjects at rest and under hypercapnia induced by breathing a mixture of carbon dioxide and air. Non-linear models with BP as input and CBF velocity (CBFV) as output were implemented with support vector machines (SVM), using separate recordings for learning and validation. Dynamic SVM implementations used either moving average or autoregressive structures. The efficiency of dynamic CA was estimated from the model's derived CBFV response to a step change in BP as an autoregulation index, for both linear and non-linear models. Non-linear models with recurrences (autoregressive) showed the best results, with CA indexes of 5.9 ± 1.5 in normocapnia and 2.5 ± 1.2 in hypercapnia, with an area under the receiver-operator curve of 0.955. The high performance achieved by non-linear SVM models in detecting deterioration of dynamic CA should encourage further assessment of their applicability to clinical conditions where CA might be impaired. PMID:29381724
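The autoregressive (ARX) structure underlying these models can be sketched as follows. For brevity this uses ordinary least squares in place of SVM regression, and the input signal and coefficients are synthetic, not patient data; the point is the lagged-regressor construction shared by both estimators.

```python
# Sketch of an autoregressive model of the pressure -> flow-velocity
# relationship: v[n] = a*v[n-1] + b0*p[n] + b1*p[n-1].
# Ordinary least squares stands in for SVM regression; data are synthetic.
import numpy as np

rng = np.random.default_rng(1)
a_true, b0_true, b1_true = 0.8, 0.5, -0.3
p = rng.normal(size=500)                       # "blood pressure" input
v = np.zeros(500)
for n in range(1, 500):
    v[n] = a_true * v[n-1] + b0_true * p[n] + b1_true * p[n-1]

# lagged regressors for v[1:]: previous output, current and previous input
X = np.column_stack([v[:-1], p[1:], p[:-1]])
theta, *_ = np.linalg.lstsq(X, v[1:], rcond=None)
a_hat, b0_hat, b1_hat = theta
```

An SVM with the same lagged inputs, as used in the study, fits the identical regression problem but with a non-linear kernel and an ε-insensitive loss; the model's step response then yields the autoregulation index.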
Passive dendrites enable single neurons to compute linearly non-separable functions.
Cazé, Romain Daniel; Humphries, Mark; Gutkin, Boris
2013-01-01
Local supra-linear summation of excitatory inputs occurring in pyramidal cell dendrites, the so-called dendritic spikes, results in independent spiking dendritic sub-units, which turn pyramidal neurons into two-layer neural networks capable of computing linearly non-separable functions, such as the exclusive OR. Other neuron classes, such as interneurons, may possess only a few independent dendritic sub-units, or only passive dendrites where input summation is purely sub-linear, and where dendritic sub-units are only saturating. To determine if such neurons can also compute linearly non-separable functions, we enumerate, for a given parameter range, the Boolean functions implementable by a binary neuron model with a linear sub-unit and either a single spiking or a saturating dendritic sub-unit. We then analytically generalize these numerical results to an arbitrary number of non-linear sub-units. First, we show that a single non-linear dendritic sub-unit, in addition to the somatic non-linearity, is sufficient to compute linearly non-separable functions. Second, we analytically prove that, with a sufficient number of saturating dendritic sub-units, a neuron can compute all functions computable with purely excitatory inputs. Third, we show that these linearly non-separable functions can be implemented with at least two strategies: one where a dendritic sub-unit is sufficient to trigger a somatic spike; another where somatic spiking requires the cooperation of multiple dendritic sub-units. We formally prove that implementing the latter architecture is possible with both types of dendritic sub-units whereas the former is only possible with spiking dendrites. Finally, we show how linearly non-separable functions can be computed by a generic two-compartment biophysical model and a realistic neuron model of the cerebellar stellate cell interneuron. 
Taken together our results demonstrate that passive dendrites are sufficient to enable neurons to compute linearly non-separable functions.
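The headline result, that one non-linear dendritic sub-unit suffices for a linearly non-separable function, can be made concrete with a toy binary model. This is a sketch in the spirit of the paper, not its formal construction: the dendritic sub-unit is a simple coincidence-detecting threshold, and the negative somatic weight is an illustrative shortcut standing in for the paper's mechanisms.

```python
# Toy two-layer binary neuron computing XOR (linearly non-separable).
# A spiking (supra-linear threshold) dendritic sub-unit detects
# coincident input; its negative-weighted somatic contribution
# (an illustrative shortcut) cancels the direct drive when both
# inputs are active.

def dendritic_spike(x1, x2):
    return 1 if x1 + x2 >= 2 else 0    # fires only when both inputs active

def soma(x1, x2):
    drive = x1 + x2 - 2 * dendritic_spike(x1, x2)
    return 1 if drive >= 1 else 0      # somatic threshold

xor_table = {(x1, x2): soma(x1, x2) for x1 in (0, 1) for x2 in (0, 1)}
```

A single linear threshold unit cannot realize this truth table, which is why the extra dendritic non-linearity is the decisive ingredient.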
Seasonal Synchronization of a Simple Stochastic Dynamical Model Capturing El Niño Diversity
NASA Astrophysics Data System (ADS)
Thual, S.; Majda, A.; Chen, N.
2017-12-01
The El Niño-Southern Oscillation (ENSO) has significant impact on global climate and seasonal prediction. Recently, a simple ENSO model was developed that automatically captures the ENSO diversity and intermittency in nature, where state-dependent stochastic wind bursts and nonlinear advection of sea surface temperature (SST) are coupled to simple ocean-atmosphere processes that are otherwise deterministic, linear and stable. In the present article, it is further shown that the model can reproduce qualitatively the ENSO synchronization (or phase-locking) to the seasonal cycle in nature. This goal is achieved by incorporating a cloud radiative feedback that is derived naturally from the model's atmosphere dynamics with no ad-hoc assumptions and accounts in simple fashion for the marked seasonal variations of convective activity and cloud cover in the eastern Pacific. In particular, the weak convective response to SSTs in boreal fall favors the eastern Pacific warming that triggers El Niño events while the increased convective activity and cloud cover during the following spring contributes to the shutdown of those events by blocking incoming shortwave solar radiations. In addition to simulating the ENSO diversity with realistic non-Gaussian statistics in different Niño regions, both the eastern Pacific moderate and super El Niño, the central Pacific El Niño as well as La Niña show a realistic chronology with a tendency to peak in boreal winter as well as decreased predictability in spring consistent with the persistence barrier in nature. The incorporation of other possible seasonal feedbacks in the model is also documented for completeness.
Mode Identification of High-Amplitude Pressure Waves in Liquid Rocket Engines
NASA Astrophysics Data System (ADS)
Ebrahimi, R.; Mazaheri, K.; Ghafourian, A.
2000-01-01
Identification of existing instability modes from experimental pressure measurements of rocket engines is difficult, especially when steep waves are present. Actual pressure waves are often non-linear and include steep shocks followed by gradual expansions. It is generally believed that the interaction of these non-linear waves is difficult to analyze. A method of mode identification is introduced. After presumption of the constituent modes, they are superposed by using a standard finite difference scheme for the solution of the classical wave equation. Waves are numerically produced at each end of the combustion tube with different wavelengths, amplitudes, and phases with respect to each other. Pressure amplitude histories and phase diagrams along the tube are computed. To determine the validity of the presented method for steep non-linear waves, the Euler equations are numerically solved for non-linear waves, and negligible interactions between these waves are observed. To show the applicability of this method, others' experimental results in which modes were identified are used. Results indicate that this simple method can be used in analyzing complicated pressure signal measurements.
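The superposition step can be sketched numerically. The snippet below is an illustrative stand-in (the mode shape, amplitude, and unit sound speed are assumptions, not values from the paper): two counter-propagating sinusoidal solutions of the classical wave equation are superposed and the pressure amplitude envelope along the tube is extracted.

```python
import numpy as np

# Superpose two counter-propagating modes A*sin(k*x - w*t) and
# A*sin(k*x + w*t), both exact solutions of the classical wave equation,
# and extract the pressure amplitude envelope along the tube.
L = 1.0                          # tube length (arbitrary units, assumed)
x = np.linspace(0.0, L, 201)
k = np.pi / L                    # first longitudinal mode (assumed)
w = k                            # w = c*k with unit sound speed c = 1
t = np.linspace(0.0, 20.0, 2001)

A = 1.0
p = A * np.sin(k * x[None, :] - w * t[:, None]) \
  + A * np.sin(k * x[None, :] + w * t[:, None])
envelope = p.max(axis=0)         # peak pressure at each station along the tube

# Standing-wave pattern: envelope 2*A*|sin(k*x)| peaks mid-tube,
# vanishes at both ends.
assert abs(envelope[100] - 2.0 * A) < 1e-2
assert envelope[0] < 1e-6 and envelope[-1] < 1e-6
```

For real data, candidate modes would instead be generated at each tube end with the presumed wavelengths, amplitudes, and phases, and the computed envelope and phase diagrams compared against the transducer records.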
Inherent limitations of probabilistic models for protein-DNA binding specificity
Ruan, Shuxiang
2017-01-01
The specificities of transcription factors are most commonly represented with probabilistic models. These models provide a probability for each base occurring at each position within the binding site, and the positions are assumed to contribute independently. The model is simple and intuitive and is the basis for many motif discovery algorithms. However, the model also has inherent limitations that prevent it from accurately representing true binding probabilities, especially for the highest affinity sites under conditions of high protein concentration. The limitations are not due to the assumption of independence between positions but rather are caused by the non-linear relationship between binding affinity and binding probability and the fact that independent normalization at each position skews the site probabilities. Generally, probabilistic models are reasonably good approximations, but new high-throughput methods allow for biophysical models with increased accuracy that should be used whenever possible. PMID:28686588
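The non-linear affinity-to-probability relationship the abstract points to can be illustrated with a minimal sketch (all numbers hypothetical): occupancy follows the saturating function p = P/(P + Kd), so at high protein concentration the strongest sites have nearly identical binding probabilities even when their affinities differ substantially, which a probabilistic model, effectively linear in log-affinity, cannot track.

```python
import numpy as np

# Hypothetical site log-affinities; a stronger site has a smaller
# dissociation constant Kd.
log_rel_affinity = np.linspace(-6.0, 0.0, 7)
Kd = np.exp(-log_rel_affinity)

P = 100.0                 # high free-protein concentration (assumed)
p = P / (P + Kd)          # occupancy: non-linear, saturating in affinity

# The two strongest sites differ e-fold in affinity, yet their
# occupancies are nearly equal: the saturation the abstract describes.
assert p[-1] > 0.9 and p[-2] > 0.9
assert p[-1] - p[-2] < 0.05
```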
Phelps, G.A.
2008-01-01
This report describes some simple spatial statistical methods to explore the relationships of scattered points to geologic or other features, represented by points, lines, or areas. It also describes statistical methods to search for linear trends and clustered patterns within the scattered point data. Scattered points are often contained within irregularly shaped study areas, necessitating the use of methods largely unexplored in the point pattern literature. The methods take advantage of the power of modern GIS toolkits to numerically approximate the null hypothesis of randomly located data within an irregular study area. Observed distributions can then be compared with the null distribution of a set of randomly located points. The methods are non-parametric and are applicable to irregularly shaped study areas. Patterns within the point data are examined by comparing the distribution of the orientation of the set of vectors defined by each pair of points within the data with the equivalent distribution for a random set of points within the study area. A simple model is proposed to describe linear or clustered structure within scattered data. A scattered data set of damage to pavement and pipes, recorded after the 1989 Loma Prieta earthquake, is used as an example to demonstrate the analytical techniques. The damage is found to be preferentially located nearer a set of mapped lineaments than randomly scattered damage, suggesting range-front faulting along the base of the Santa Cruz Mountains is related to both the earthquake damage and the mapped lineaments. The damage also exhibits two non-random patterns: a single cluster of damage centered in the town of Los Gatos, California, and a linear alignment of damage along the range front of the Santa Cruz Mountains, California. The linear alignment of damage is strongest between 45° and 50° northwest. This agrees well with the mean trend of the mapped lineaments, measured as 49° northwest.
Boulder Dislodgement by Tsunamis and Storms: Version 2.0
NASA Astrophysics Data System (ADS)
Weiss, Robert
2016-04-01
In the past, boulder dislodgement by tsunami and storm waves has been approached with a simple threshold approach in which a boulder was moved if the sum of the forces acting on the boulder is larger than zero. Impulse theory taught us, however, that this criterion is not enough to explain particle dislodgement. We employ an adapted version of Newton's Second Law of Motion (NSLM) in order to capture the essence of impulse theory, which is that the sum of the forces has to exceed a certain threshold for a certain period of time. Furthermore, a classical assumption is to consider linear waves. However, when waves travel toward the shore, they are altered by non-linear processes. We employ the TRIADS model to quantify that change and how it impacts boulder dislodgement. We present our results of the coupled model (adapted NSLM and TRIADS model). The results project a more complex picture of boulder transport by storms and tsunamis. The following question arises: What information do we actually invert, and what does it tell us about the causative event?
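The difference between the two criteria can be sketched minimally, with made-up forcing and threshold values (the paper's adapted NSLM and the TRIADS coupling are not reproduced here): the simple criterion flags motion the instant the net force is positive, while the impulse criterion requires the force to act long enough for the imparted momentum to exceed a finite threshold.

```python
import numpy as np

dt = 0.01
t = np.arange(0.0, 2.0, dt)
# Illustrative wave forcing: brief positive pulses of net force (N)
net_force = np.maximum(0.0, np.sin(2.0 * np.pi * t)) * 500.0

# Simple threshold criterion: any positive net force means "moved".
simple_moved = bool(np.any(net_force > 0.0))

# Impulse criterion: momentum imparted by the first forcing pulse
# (t in [0, 0.5], where the force is continuously positive) must
# exceed an assumed threshold.
impulse_threshold = 200.0                                # kg*m/s, assumed
impulse = float(np.sum(net_force[t <= 0.5]) * dt)

impulse_moved = impulse > impulse_threshold
# Here the force is positive (simple criterion fires) yet the pulse is
# too brief to deliver the required impulse (impulse criterion does not).
```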
Development of statistical linear regression model for metals from transportation land uses.
Maniquiz, Marla C; Lee, Soyoung; Lee, Eunju; Kim, Lee-Hyung
2009-01-01
Transportation land uses possessing impervious surfaces, such as highways, parking lots, roads, and bridges, were recognized as highly polluted non-point sources (NPSs) in urban areas. Many pollutants from urban transportation accumulate on the paved surfaces during dry periods and are washed off during a storm. In Korea, the identification and monitoring of NPSs still represent a great challenge. Since 2004, the Ministry of Environment (MOE) has been engaged in several research and monitoring efforts to develop stormwater management policies and treatment systems for future implementation. Data from 131 storm events during May 2004 to September 2008 at eleven sites were analyzed to identify correlation relationships between particulates and metals, and to develop a simple linear regression (SLR) model to estimate event mean concentration (EMC). Results indicate that there was no significant relationship between metal and TSS EMCs. However, although the SLR estimation models did not provide useful predictions, they are valuable indicators of the high uncertainties that NPS pollution possesses. Therefore, long-term monitoring employing proper methods and precise statistical analysis of the data should be undertaken to eliminate these uncertainties.
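The SLR step can be sketched as follows, on synthetic data standing in for the Korean monitoring record (the slope, noise level, and sample size of 131 events are illustrative assumptions): fit EMC_metal = a + b·EMC_TSS by ordinary least squares and inspect R² to judge whether TSS is a useful predictor.

```python
import numpy as np

rng = np.random.default_rng(0)
n_events = 131
tss = rng.uniform(20.0, 400.0, n_events)             # TSS EMC, mg/L (synthetic)
# Weak signal buried in large scatter, mimicking the reported
# lack of a significant TSS-metal relationship.
metal = 0.002 * tss + rng.normal(0.0, 0.4, n_events)

b, a = np.polyfit(tss, metal, 1)                     # slope, intercept
pred = a + b * tss
ss_res = np.sum((metal - pred) ** 2)
ss_tot = np.sum((metal - metal.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
# A low R^2 signals exactly the high NPS uncertainty the paper reports:
# the fitted line exists, but it explains little of the variance.
```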
Exploring Measurement Error with Cookies: A Real and Virtual Approach via Interactive Excel
ERIC Educational Resources Information Center
Sinex, Scott A; Gage, Barbara A.; Beck, Peggy J.
2007-01-01
A simple, guided-inquiry investigation using stacked sandwich cookies is employed to develop a simple linear mathematical model and to explore measurement error by incorporating errors as part of the investigation. Both random and systematic errors are presented. The model and errors are then investigated further by engaging with an interactive…
Electro-Optic Beam Steering Using Non-Linear Organic Materials
1993-08-01
York (SUNY), Buffalo, for potential application to the Hughes electro-optic beam deflector device. Evaluations include electro-optic coefficient, response time, transmission, and resistivity. Electro-optic coefficient measurements were made at 633 nm using a simple reflection technique. The
Physics Requires a Simple Low Mach Number Flow to Be Compressible
Radial, laminar, plane, low-velocity flow represents the simplest non-linear fluid dynamics problem. Ostensibly, this trivial flow could be solved using the incompressible Navier-Stokes equations, universally believed to be adequate for such problems. Most researchers ...
Size effects in non-linear heat conduction with flux-limited behaviors
NASA Astrophysics Data System (ADS)
Li, Shu-Nan; Cao, Bing-Yang
2017-11-01
Size effects are discussed for several non-linear heat conduction models with flux-limited behaviors, including the phonon hydrodynamic, Lagrange multiplier, hierarchy moment, nonlinear phonon hydrodynamic, tempered diffusion, thermon gas and generalized nonlinear models. For the phonon hydrodynamic, Lagrange multiplier and tempered diffusion models, heat flux will not exist in problems with sufficiently small scale. The existence of heat flux needs the sizes of heat conduction larger than their corresponding critical sizes, which are determined by the physical properties and boundary temperatures. The critical sizes can be regarded as the theoretical limits of the applicable ranges for these non-linear heat conduction models with flux-limited behaviors. For sufficiently small scale heat conduction, the phonon hydrodynamic and Lagrange multiplier models can also predict the theoretical possibility of violating the second law and multiplicity. Comparisons are also made between these non-Fourier models and non-linear Fourier heat conduction in the type of fast diffusion, which can also predict flux-limited behaviors.
Gain optimization with non-linear controls
NASA Technical Reports Server (NTRS)
Slater, G. L.; Kandadai, R. D.
1984-01-01
An algorithm has been developed for the analysis and design of controls for non-linear systems. The technical approach is to use statistical linearization to model the non-linear dynamics of a system by a quasi-Gaussian model. A covariance analysis is performed to determine the behavior of the dynamical system and a quadratic cost function. Expressions for the cost function and its derivatives are determined so that numerical optimization techniques can be applied to determine optimal feedback laws. The primary application of this paper centers on the design of controls for nominally linear systems where the controls are saturated or limited by fixed constraints. The analysis is general, however, and numerical computation requires only that the specific non-linearity be considered in the analysis.
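For the saturation non-linearity the application centers on, the statistical-linearization (random-input describing function) gain for a zero-mean Gaussian input has a well-known closed form, which the sketch below states and checks by Monte Carlo (notation assumed here; this is a generic saturation element, not the paper's specific plant).

```python
import numpy as np
from math import erf, sqrt

def equivalent_gain(delta, sigma):
    # Gain N minimizing E[(sat(x) - N*x)^2] for x ~ N(0, sigma^2),
    # where sat(x) clips x to [-delta, delta]:
    #   N = E[x * sat(x)] / E[x^2] = erf(delta / (sqrt(2) * sigma))
    return erf(delta / (sqrt(2.0) * sigma))

# Monte Carlo check of the closed form
rng = np.random.default_rng(1)
sigma, delta = 2.0, 1.0
x = rng.normal(0.0, sigma, 1_000_000)
y = np.clip(x, -delta, delta)                 # saturation nonlinearity
N_mc = np.mean(x * y) / np.mean(x * x)        # sample equivalent gain
```

As a sanity check on the formula, the gain tends to 1 when the limits are far out (delta >> sigma, no clipping) and to delta·sqrt(2/pi)/sigma when clipping dominates.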
Perfect commuting-operator strategies for linear system games
NASA Astrophysics Data System (ADS)
Cleve, Richard; Liu, Li; Slofstra, William
2017-01-01
Linear system games are a generalization of Mermin's magic square game introduced by Cleve and Mittal. They show that perfect strategies for linear system games in the tensor-product model of entanglement correspond to finite-dimensional operator solutions of a certain set of non-commutative equations. We investigate linear system games in the commuting-operator model of entanglement, where Alice and Bob's measurement operators act on a joint Hilbert space, and Alice's operators must commute with Bob's operators. We show that perfect strategies in this model correspond to possibly infinite-dimensional operator solutions of the non-commutative equations. The proof is based around a finitely presented group associated with the linear system which arises from the non-commutative equations.
NASA Astrophysics Data System (ADS)
Most, S.; Nowak, W.; Bijeljic, B.
2014-12-01
Transport processes in porous media are frequently simulated as particle movement. This process can be formulated as a stochastic process of particle position increments. At the pore scale, the geometry and micro-heterogeneities prohibit the commonly made assumption of independent and normally distributed increments to represent dispersion. Many recent particle methods seek to loosen this assumption. Recent experimental data suggest that we have not yet reached the end of the need to generalize, because particle increments show statistical dependency beyond linear correlation and over many time steps. The goal of this work is to better understand the validity regions of commonly made assumptions. We investigate after which transport distances we can observe: (1) a statistical dependence between increments, modelled as an order-k Markov process, reducing to order 1, which would be the Markovian distance for the process, where the validity of yet-unexplored non-Gaussian-but-Markovian random walks would start; (2) a bivariate statistical dependence that simplifies to a multi-Gaussian dependence based on simple linear correlation (validity of correlated PTRW); (3) complete absence of statistical dependence (validity of classical PTRW/CTRW). The approach is to derive a statistical model for pore-scale transport from a powerful experimental data set via copula analysis. The model is formulated as a non-Gaussian, mutually dependent Markov process of higher order, which allows us to investigate the validity ranges of simpler models.
Wang, Zheng-Xin; Hao, Peng; Yao, Pei-Yi
2017-01-01
The non-linear relationship between provincial economic growth and carbon emissions is investigated by using panel smooth transition regression (PSTR) models. The research indicates that, on the condition of separately taking Gross Domestic Product per capita (GDPpc), energy structure (Es), and urbanisation level (Ul) as transition variables, three models all reject the null hypothesis of a linear relationship, i.e., a non-linear relationship exists. The results show that the three models all contain only one transition function but different numbers of location parameters. The model taking GDPpc as the transition variable has two location parameters, while the other two models separately considering Es and Ul as the transition variables both contain one location parameter. The three models applied in the study all favourably describe the non-linear relationship between economic growth and CO2 emissions in China. It also can be seen that the conversion rate of the influence of Ul on per capita CO2 emissions is significantly higher than those of GDPpc and Es on per capita CO2 emissions. PMID:29236083
Fourier imaging of non-linear structure formation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brandbyge, Jacob; Hannestad, Steen, E-mail: jacobb@phys.au.dk, E-mail: sth@phys.au.dk
We perform a Fourier space decomposition of the dynamics of non-linear cosmological structure formation in ΛCDM models. From N-body simulations involving only cold dark matter we calculate 3-dimensional non-linear density, velocity divergence and vorticity Fourier realizations, and use these to calculate the fully non-linear mode coupling integrals in the corresponding fluid equations. Our approach allows for a reconstruction of the amount of mode coupling between any two wavenumbers as a function of redshift. With our Fourier decomposition method we identify the transfer of power from larger to smaller scales, the stable clustering regime, the scale where vorticity becomes important, and the suppression of the non-linear divergence power spectrum as compared to linear theory. Our results can be used to improve and calibrate semi-analytical structure formation models.
Weighted straight skeletons in the plane☆
Biedl, Therese; Held, Martin; Huber, Stefan; Kaaser, Dominik; Palfrader, Peter
2015-01-01
We investigate weighted straight skeletons from a geometric, graph-theoretical, and combinatorial point of view. We start with a thorough definition and shed light on some ambiguity issues in the procedural definition. We investigate the geometry, combinatorics, and topology of faces and the roof model, and we discuss in which cases a weighted straight skeleton is connected. Finally, we show that the weighted straight skeleton of even a simple polygon may be non-planar and may contain cycles, and we discuss under which restrictions on the weights and/or the input polygon the weighted straight skeleton still behaves similarly to its unweighted counterpart. In particular, we obtain a non-procedural description and a linear-time construction algorithm for the straight skeleton of strictly convex polygons with arbitrary weights. PMID:25648398
Acceleration and Velocity Sensing from Measured Strain
NASA Technical Reports Server (NTRS)
Pak, Chan-Gi; Truax, Roger
2015-01-01
A simple approach for computing the acceleration and velocity of a structure from the strain is proposed in this study. First, the deflection and slope of the structure are computed from the strain using a two-step theory. Frequencies of the structure are computed from the time histories of strain using a parameter estimation technique together with an autoregressive moving average model. From the deflection, slope, and frequencies of the structure, acceleration and velocity of the structure can be obtained using the proposed approach. Simple harmonic motion is assumed for the acceleration computations, and the central difference equation with a linear autoregressive model is used for the computations of velocity. A cantilevered rectangular wing model is used to validate the simple approach. The quality of the computed deflection, acceleration, and velocity values is independent of the number of fibers. The central difference equation with a linear autoregressive model proposed in this study follows the target response with reasonable accuracy. Therefore, the handicap of the backward difference equation, phase shift, is successfully overcome.
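The two computational ideas, acceleration from the simple-harmonic-motion assumption and velocity from a central difference of the deflection history, can be sketched as follows. A pure sine deflection stands in for the strain-derived deflection, and the autoregressive smoothing step is omitted; the modal frequency and time step are assumptions.

```python
import numpy as np

w = 2.0 * np.pi                    # rad/s, assumed modal frequency
dt = 1e-3
t = np.arange(0.0, 1.0, dt)
x = np.sin(w * t)                  # deflection history (stand-in for strain-derived)

# Simple harmonic motion assumption: acceleration follows deflection directly.
a = -w**2 * x

# Central difference for velocity on interior points (phase-neutral,
# unlike the backward difference), one-sided at the ends.
v = np.empty_like(x)
v[1:-1] = (x[2:] - x[:-2]) / (2.0 * dt)
v[0] = (x[1] - x[0]) / dt
v[-1] = (x[-1] - x[-2]) / dt

v_exact = w * np.cos(w * t)
err = float(np.max(np.abs(v[1:-1] - v_exact[1:-1])))  # O(dt^2) accurate
```

The central difference introduces no phase shift, which is the handicap of the backward difference the abstract refers to.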
NASA Astrophysics Data System (ADS)
You, Gexin; Liu, Xinsen; Chen, Xiri; Yang, Bo; Zhou, Xiuwen
2018-06-01
In this study, a two-element model consisting of a non-linear spring and a viscous dashpot was proposed to simulate the tensile curve of polyurethane fibers. The results showed that the two-element model simulates the tensile curve of the polyurethane fibers better than the existing three-element and four-element models, while remaining simpler to apply. The effects of the isocyanate index (R) on the hydrogen bond (H-bond) and the micro-phase separation of polyurethane fibers were investigated by Fourier transform infrared spectroscopy and X-ray pyrometer, respectively. The degree of H-bonding and micro-phase separation first increased and then decreased as the R value increased, reaching a maximum at R = 1.76, in good agreement with the viscosity coefficient η and the initial modulus c of the model.
NASA Astrophysics Data System (ADS)
Lemus-Mondaca, Roberto A.; Vega-Gálvez, Antonio; Zambra, Carlos E.; Moraga, Nelson O.
2017-01-01
A 3D model considering heat and mass transfer for food dehydration inside a direct contact dryer is studied. The k-ɛ model is used to describe turbulent air flow. The samples' thermophysical properties, such as density, specific heat, and thermal conductivity, are assumed to vary non-linearly with temperature. The FVM and the SIMPLE algorithm, implemented in a FORTRAN code, are used. Results for the unsteady velocity, temperature, moisture, kinetic energy, and dissipation rate of the air flow are presented, along with temperature and moisture values for the food. The validation procedure includes a comparison with experimental and numerical temperature and moisture content results obtained from experimental data, reaching a deviation of 7-10%. In addition, this turbulent k-ɛ model provided a better understanding of the transport phenomena inside the dryer and sample.
Godin, Bruno; Mayer, Frédéric; Agneessens, Richard; Gerin, Patrick; Dardenne, Pierre; Delfosse, Philippe; Delcarte, Jérôme
2015-01-01
The reliability of different models to predict the biochemical methane potential (BMP) of various plant biomasses using a multispecies dataset was compared. The most reliable prediction models of the BMP were those based on the near infrared (NIR) spectrum rather than those based on the chemical composition. The NIR predictions of local (specific regression and non-linear) models were able to estimate the BMP quantitatively, rapidly, cheaply, and easily. Such a model could be further used for biomethanation plant management and optimization. The predictions of non-linear models were more reliable than those of linear models. The presentation form (green-dried, silage-dried and silage-wet form) of biomasses to the NIR spectrometer did not influence the performance of the NIR prediction models. The accuracy of the BMP method should be improved to further enhance the BMP prediction models. Copyright © 2014 Elsevier Ltd. All rights reserved.
Dynamics of tokamak plasma surface current in 3D ideal MHD model
NASA Astrophysics Data System (ADS)
Galkin, Sergei A.; Svidzinski, V. A.; Zakharov, L. E.
2013-10-01
Interest in the surface current which can arise on a perturbed sharp plasma-vacuum interface in tokamaks was recently generated by a few papers (see references therein). In dangerous disruption events with plasma-touching-wall scenarios, the surface current can be shared with the wall, leading to strong, damaging forces acting on the wall. A relatively simple analytic definition of the δ-function surface current, proportional to the jump of the tangential component of the magnetic field, nevertheless leads to a complex computational problem on the moving plasma-vacuum interface, requiring the incorporation of non-linear 3D plasma dynamics even in one-fluid ideal MHD. The Disruption Simulation Code (DSC), which had recently been developed in a fully 3D toroidal geometry with adaptation to the moving plasma boundary, is an appropriate tool for accurate self-consistent δ-function surface current calculation. Progress on the DSC-3D development will be presented. Self-consistent surface current calculation under the non-linear dynamics of the low-m kink mode and VDE will be discussed. Work is supported by the US DOE SBIR grant #DE-SC0004487.
Selecting exposure measures in crash rate prediction for two-lane highway segments.
Qin, Xiao; Ivan, John N; Ravishanker, Nalini
2004-03-01
A critical part of any risk assessment is identifying how to represent exposure to the risk involved. Recent research shows that the relationship between crash count and traffic volume is non-linear; consequently, a simple crash rate computed as the ratio of crash count to volume is not proper for comparing the safety of sites with different traffic volumes. To solve this problem, we describe a new approach for relating traffic volume and crash incidence. Specifically, we disaggregate crashes into four types: (1) single-vehicle, (2) multi-vehicle same direction, (3) multi-vehicle opposite direction, and (4) multi-vehicle intersecting, and define candidate exposure measures for each that we hypothesize will be linear with respect to each crash type. This paper describes an initial investigation using crash and physical characteristics data for highway segments in Michigan from the Highway Safety Information System (HSIS). We use zero-inflated Poisson (ZIP) modeling to estimate models for predicting counts for each of the above crash types as a function of the daily volume, segment length, speed limit and roadway width. We found that the relationship between crashes and the daily volume (AADT) is non-linear and varies by crash type, and is significantly different from the relationship between crashes and segment length for all crash types. Our research will provide information to improve the accuracy of crash predictions and, thus, facilitate more meaningful comparison of the safety record of seemingly similar highway locations.
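A zero-inflated Poisson model with a power-law volume term captures the non-linear crash-volume relationship described above. The sketch below uses made-up coefficients, not the HSIS fits: the Poisson mean is log-linear in ln(AADT) with an exponent below 1, so crash counts grow sub-linearly with volume and a raw crash/volume ratio misleads.

```python
from math import exp, factorial, log

def zip_pmf(k, lam, p_zero):
    # ZIP: with probability p_zero the segment is in an "always-zero"
    # state; otherwise counts are Poisson with mean lam.
    poisson = exp(-lam) * lam**k / factorial(k)
    return p_zero + (1.0 - p_zero) * poisson if k == 0 else (1.0 - p_zero) * poisson

beta0, beta1 = -8.0, 0.7        # assumed coefficients (not the HSIS fits)
p_zero = 0.3                    # assumed zero-inflation probability
aadt = 10_000.0                 # average annual daily traffic
lam = exp(beta0 + beta1 * log(aadt))   # expected crashes, power law in AADT
mean_crashes = (1.0 - p_zero) * lam    # ZIP mean, deflated by p_zero

# Doubling the volume scales lam by 2**beta1 < 2: crash counts rise
# slower than volume, so the per-vehicle crash rate falls with AADT.
ratio = exp(beta0 + beta1 * log(2.0 * aadt)) / lam
```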
NASA Astrophysics Data System (ADS)
Zielnica, J.; Ziółkowski, A.; Cempel, C.
2003-03-01
The design and the theoretical and experimental investigation of vibroisolation pads with non-linear static and dynamic responses are the objectives of this paper. The analytical investigations are based on non-linear finite element analysis, where the load-deflection response is traced against the shape and material properties of the analysed model of the vibroisolation pad. A new model of vibroisolation pad of antisymmetrical type was designed and analysed by the finite element method based on the second-order theory (large displacements and strains) with the assumption of material non-linearities (Mooney-Rivlin model). The stability loss phenomenon was used in the design of the vibroisolators, and it was proved that it is possible to design a vibroisolator in the form of a continuous pad with the non-linear static and dynamic response typical of vibroisolation purposes. The materials used for the vibroisolator are rubber, elastomers, and similar materials. The results of the theoretical investigations were examined experimentally. A series of models made of soft rubber were designed for test purposes. The experimental investigations of the vibroisolation models, under static and dynamic loads, confirmed the results of the FEM analysis.
Supersymmetry and fermionic modes in an oscillon background
NASA Astrophysics Data System (ADS)
Correa, R. A. C.; Ospedal, L. P. R.; de Paula, W.; Helayël-Neto, J. A.
2018-05-01
The excitations referred to as oscillons are long-lived time-dependent field configurations which emerge dynamically from non-linear field theories. Such long-lived solutions are of interest in applications that include systems of Condensed Matter Physics, the Standard Model of Particle Physics, Lorentz-symmetry violating scenarios and Cosmology. In this work, we show how oscillons may be accommodated in a supersymmetric scenario. We adopt as our framework simple (N = 1) supersymmetry in D = 1 + 1 dimensions. We focus on the bosonic sector with oscillon configurations and their (classical) effects on the corresponding fermionic modes, (supersymmetric) partners of the oscillons. The particular model we adopt to pursue our investigation displays a cubic superfield term which, in the physical scalar sector, corresponds to the usual quartic self-coupling.
NASA Astrophysics Data System (ADS)
Vincenzo, F.; Matteucci, F.; Spitoni, E.
2017-04-01
We present a theoretical method for solving the chemical evolution of galaxies by assuming an instantaneous recycling approximation for chemical elements restored by massive stars and the delay time distribution formalism for delayed chemical enrichment by Type Ia Supernovae. The galaxy gas mass assembly history, together with the assumed stellar yields and initial mass function, represents the starting point of this method. We derive a simple and general equation, which closely relates the Laplace transforms of the galaxy gas accretion history and star formation history, which can be used to simplify the problem of retrieving these quantities in the galaxy evolution models assuming a linear Schmidt-Kennicutt law. We find that - once the galaxy star formation history has been reconstructed from our assumptions - the differential equation for the evolution of the chemical element X can be suitably solved with classical methods. We apply our model to reproduce the [O/Fe] and [Si/Fe] versus [Fe/H] chemical abundance patterns as observed at the solar neighbourhood by assuming a decaying exponential infall rate of gas and different delay time distributions for Type Ia Supernovae; we also explore the effect of assuming a non-linear Schmidt-Kennicutt law, with the index of the power law being k = 1.4. Although approximate, we conclude that our model with the single-degenerate scenario for Type Ia Supernovae provides the best agreement with the observed set of data. Our method can be used by other complementary galaxy stellar population synthesis models to predict also the chemical evolution of galaxies.
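The Laplace-transform relation referred to above can be sketched under assumed notation (I(t) the gas accretion history, ψ(t) the star formation history, R the return fraction of instantaneous recycling, S the linear Schmidt-Kennicutt constant, tildes denoting Laplace transforms). This is a reconstruction consistent with the abstract, not necessarily the paper's exact equation:

```latex
% Gas equation with instantaneous recycling and linear SK law:
\dot{M}_{\mathrm{gas}}(t) = I(t) - (1 - R)\,\psi(t),
\qquad
\psi(t) = S\, M_{\mathrm{gas}}(t)
% Laplace transform, assuming M_gas(0) = 0:
\frac{s\,\tilde{\psi}(s)}{S} = \tilde{I}(s) - (1 - R)\,\tilde{\psi}(s)
\quad\Longrightarrow\quad
\tilde{\psi}(s) = \frac{S\,\tilde{I}(s)}{s + (1 - R)\,S}
```

With the transform of an assumed infall law (for instance a decaying exponential) inserted for Ĩ(s), the star formation history follows by inversion, after which the equation for each element X can be solved by classical methods, as the abstract states.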
NASA Astrophysics Data System (ADS)
Millar, R.; Ingram, W.; Allen, M. R.; Lowe, J.
2013-12-01
Temperature and precipitation patterns are the climate variables with the greatest impacts on both natural and human systems. Due to the small spatial scales and the many interactions involved in the global hydrological cycle, representations of precipitation changes in general circulation models (GCMs) are subject to considerable uncertainty. Quantifying and understanding the causes of uncertainty (and identifying robust features of predictions) in both global and local precipitation change is an essential challenge of climate science. We have used the huge distributed computing capacity of the climateprediction.net citizen science project to examine parametric uncertainty in an ensemble of 20,000 perturbed-physics versions of the HadCM3 general circulation model. The ensemble has been selected to have a control climate in top-of-atmosphere energy balance [Yamazaki et al. 2013, J.G.R.]. We force this ensemble with several idealised climate-forcing scenarios including carbon dioxide step and transient profiles, solar radiation management geoengineering experiments with stratospheric aerosols, and short-lived climate forcing agents. We will present the results from several of these forcing scenarios under GCM parametric uncertainty. We examine the global mean precipitation energy budget to understand the robustness of a simple non-linear global precipitation model [Good et al. 2012, Clim. Dyn.] as a better explanation of precipitation changes in transient climate projections under GCM parametric uncertainty than a simple linear tropospheric energy balance model. We will also present work investigating robust conclusions about precipitation changes in a balanced ensemble of idealised solar radiation management scenarios [Kravitz et al. 2011, Atmos. Sci. Let.].
NASA Astrophysics Data System (ADS)
Karandish, Fatemeh; Šimůnek, Jiří
2016-12-01
Soil water content (SWC) is a key factor in optimizing the usage of water resources in agriculture since it provides information to make an accurate estimation of crop water demand. Methods for predicting SWC that have simple data requirements are needed to achieve an optimal irrigation schedule, especially for various water-saving irrigation strategies that are required to resolve both food and water security issues under conditions of water shortages. Thus, a two-year field investigation was carried out to provide a dataset to compare the effectiveness of HYDRUS-2D, a physically-based numerical model, with various machine-learning models, including Multiple Linear Regressions (MLR), Adaptive Neuro-Fuzzy Inference Systems (ANFIS), and Support Vector Machines (SVM), for simulating time series of SWC data under water stress conditions. SWC was monitored using TDRs during the maize growing seasons of 2010 and 2011. Eight combinations of six, simple, independent parameters, including pan evaporation and average air temperature as atmospheric parameters, cumulative growth degree days (cGDD) and crop coefficient (Kc) as crop factors, and water deficit (WD) and irrigation depth (In) as crop stress factors, were adopted for the estimation of SWCs in the machine-learning models. Having Root Mean Square Errors (RMSE) in the range of 0.54-2.07 mm, HYDRUS-2D ranked first for the SWC estimation, while the ANFIS and SVM models with input datasets of cGDD, Kc, WD and In ranked next with RMSEs ranging from 1.27 to 1.9 mm and mean bias errors of -0.07 to 0.27 mm, respectively. However, the MLR models did not perform well for SWC forecasting, mainly due to non-linear changes of SWCs under the irrigation process. The results demonstrated that despite requiring only simple input data, the ANFIS and SVM models could be favorably used for SWC predictions under water stress conditions, especially when there is a lack of data. 
However, process-based numerical models are undoubtedly a better choice for predicting SWCs with lower uncertainties when required data are available, and thus for designing water saving strategies for agriculture and for other environmental applications requiring estimates of SWCs.
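The comparison metrics used above, RMSE and mean bias error, can be sketched directly; the soil-water-content values below are invented for illustration, not data from the study:

```python
import numpy as np

def rmse(observed, predicted):
    """Root Mean Square Error between observed and predicted SWC series."""
    observed, predicted = np.asarray(observed), np.asarray(predicted)
    return float(np.sqrt(np.mean((predicted - observed) ** 2)))

def mean_bias_error(observed, predicted):
    """Mean Bias Error: positive values indicate systematic over-prediction."""
    observed, predicted = np.asarray(observed), np.asarray(predicted)
    return float(np.mean(predicted - observed))

# Illustrative soil-water-content series (mm), not from the study.
obs = [25.0, 27.5, 24.0, 26.0]
pred = [25.5, 27.0, 24.5, 26.5]
print(rmse(obs, pred), mean_bias_error(obs, pred))  # -> 0.5 0.25
```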
NASA Astrophysics Data System (ADS)
Kuroda, Daniel; Fulfer, Kristen
Lithium-ion batteries have become ubiquitous to the portable energy storage industry, but efficiency issues still remain. Currently, most technological and scientific efforts are focused on the electrodes with little attention on the electrolyte. For example, simple fundamental questions about the lithium ion solvation shell composition in commercially used electrolytes have not been answered. Using a combination of linear and non-linear IR spectroscopies and theoretical calculations, we have carried out a thorough investigation of the solvation structure and dynamics of the lithium ion in various linear and cyclic carbonates at common battery electrolyte concentrations. Our studies show that carbonates coordinate the lithium ion tetrahedrally. They also reveal that linear and cyclic carbonates have contrasting dynamics in which cyclic carbonates present the most ordered structure. Finally, our experiments demonstrate that simple structural modifications in the linear carbonates impact significantly the microscopic interactions of the system. The stark differences in the solvation structure and dynamics among different carbonates reveal previously unknown details about the molecular level picture of these systems.
Processing-optimised imaging of analog geological models by electrical capacitance tomography
NASA Astrophysics Data System (ADS)
Ortiz Alemán, C.; Espíndola-Carmona, A.; Hernández-Gómez, J. J.; Orozco del Castillo, M. G.
2017-06-01
In this work, the electrical capacitance tomography (ECT) technique is applied in monitoring internal deformation of geological analog models, which are used to study structural deformation mechanisms, in particular for simulating migration and emplacement of allochthonous salt bodies. A rectangular ECT sensor was used for internal visualization of analog geologic deformation. The monitoring of analog models consists of reconstructing permittivity images from the capacitance measurements obtained by introducing the model into the ECT sensor. A simulated annealing (SA) algorithm is used as the reconstruction method, and is optimized by taking full advantage of some special features in a linearized version of this inverse approach. In the second part of this work, our SA image reconstruction algorithm is applied to synthetic models, where its performance is evaluated in comparison with other commonly used algorithms, such as linear back-projection and iterative Landweber methods. Finally, the SA method is applied to visualize two simple geological analog models. Encouraging results were obtained in terms of the quality of the reconstructed images, as interfaces corresponding to the main geological units in the analog model were clearly distinguishable. We found the results reliable and quite useful for real-time, non-invasive monitoring of the internal deformation of analog geological models.
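As a generic illustration of the simulated annealing reconstruction strategy (minimize a data misfit while occasionally accepting uphill moves to escape local minima), here is a sketch on a toy objective; nothing below reflects the actual ECT forward model or its parameters:

```python
import numpy as np

rng = np.random.default_rng(2)

def simulated_annealing(cost, x0, steps=5000, t0=1.0, cooling=0.999, step=0.1):
    """Generic simulated annealing: perturb the model, always accept
    improvements, and accept worse moves with probability exp(-dE/T)."""
    x = np.asarray(x0, dtype=float)
    e = cost(x)
    best_x, best_e = x.copy(), e
    t = t0
    for _ in range(steps):
        cand = x + step * rng.standard_normal(x.shape)
        ec = cost(cand)
        if ec < e or rng.random() < np.exp(-(ec - e) / t):
            x, e = cand, ec
            if e < best_e:
                best_x, best_e = x.copy(), e
        t *= cooling  # geometric cooling schedule
    return best_x, best_e

# Toy "misfit": squared distance to a known vector (purely illustrative).
target = np.array([1.0, 3.0, 2.0])
x, e = simulated_annealing(lambda m: np.sum((m - target) ** 2), np.zeros(3))
print(e)  # near zero
```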
NASA Astrophysics Data System (ADS)
Bruno, Delia Evelina; Barca, Emanuele; Goncalves, Rodrigo Mikosz; de Araujo Queiroz, Heithor Alexandre; Berardi, Luigi; Passarella, Giuseppe
2018-01-01
In this paper, the Evolutionary Polynomial Regression data modelling strategy has been applied to study small scale, short-term coastal morphodynamics, given its capability for treating a wide database of known information non-linearly. Simple linear and multilinear regression models were also applied, to achieve a balance between the computational load and the reliability of estimations of the three models. In fact, even though it is easy to imagine that the more complex the model, the more the prediction improves, sometimes a "slight" worsening of estimations can be accepted in exchange for the time saved in data organization and computational load. The models' outcomes were validated through a detailed statistical error analysis, which revealed a slightly better estimation by the polynomial model with respect to the multilinear model, as expected. On the other hand, even though the data organization was identical for the two models, the multilinear one required a simpler simulation setting and had a faster run time. Finally, the most reliable evolutionary polynomial regression model was used to make some conjectures about how uncertainty increases with the extrapolation time of the estimation. The overlapping rate between the confidence band of the mean of the known coast position and the prediction band of the estimated position can be a good index of the weakness in producing reliable estimations when the extrapolation time increases too much. The proposed models and tests have been applied to a coastal sector located near Torre Colimena in the Apulia region, southern Italy.
Samuel, Michael D.; Richards, Bryan J.; Storm, Daniel J.; Rolley, Robert E.; Shelton, Paul; Keuler, Nicholas S.; Van Deelen, Timothy R.
2013-01-01
Host-parasite dynamics and strategies for managing infectious diseases of wildlife depend on the functional relationship between disease transmission rates and host density. However, the disease transmission function is rarely known for free-living wildlife, leading to uncertainty regarding the impacts of diseases on host populations and effective control actions. We evaluated the influence of deer density, landscape features, and soil clay content on transmission of chronic wasting disease (CWD) in young (<2-year-old) white-tailed deer (Odocoileus virginianus) in south-central Wisconsin, USA. We evaluated how frequency-dependent, density-dependent, and intermediate transmission models predicted CWD incidence rates in harvested yearling deer. An intermediate transmission model, incorporating both disease prevalence and density of infected deer, performed better than simple density- and frequency-dependent models. Our results indicate a combination of social structure, non-linear relationships between infectious contact and deer density, and distribution of disease among groups are important factors driving CWD infection in young deer. The landscape covariates % deciduous forest cover and forest edge density also were positively associated with infection rates, but soil clay content had no measurable influence on CWD transmission. Lack of strong density-dependent transmission rates indicates that controlling CWD by reducing deer density will be difficult. The consequences of non-linear disease transmission and aggregation of disease on cervid populations deserve further consideration.
Interpretation of commonly used statistical regression models.
Kasza, Jessica; Wolfe, Rory
2014-01-01
A review of some regression models commonly used in respiratory health applications is provided in this article. Simple linear regression, multiple linear regression, logistic regression and ordinal logistic regression are considered. The focus of this article is on the interpretation of the regression coefficients of each model, which are illustrated through the application of these models to a respiratory health research study. © 2013 The Authors. Respirology © 2013 Asian Pacific Society of Respirology.
Goeyvaerts, Nele; Leuridan, Elke; Faes, Christel; Van Damme, Pierre; Hens, Niel
2015-09-10
Biomedical studies often generate repeated measures of multiple outcomes on a set of subjects. It may be of interest to develop a biologically intuitive model for the joint evolution of these outcomes while assessing inter-subject heterogeneity. Even though it is common for biological processes to entail non-linear relationships, examples of multivariate non-linear mixed models (MNMMs) are still fairly rare. We contribute to this area by jointly analyzing the maternal antibody decay for measles, mumps, rubella, and varicella, allowing for a different non-linear decay model for each infectious disease. We present a general modeling framework to analyze multivariate non-linear longitudinal profiles subject to censoring, by combining multivariate random effects, non-linear growth and Tobit regression. We explore the hypothesis of a common infant-specific mechanism underlying maternal immunity using a pairwise correlated random-effects approach and evaluating different correlation matrix structures. The implied marginal correlation between maternal antibody levels is estimated using simulations. The mean duration of passive immunity was less than 4 months for all diseases with substantial heterogeneity between infants. The maternal antibody levels against rubella and varicella were found to be positively correlated, while little to no correlation could be inferred for the other disease pairs. For some pairs, computational issues occurred with increasing correlation matrix complexity, which underlines the importance of further developing estimation methods for MNMMs. Copyright © 2015 John Wiley & Sons, Ltd.
Ramo, Nicole L.; Puttlitz, Christian M.
2018-01-01
Compelling evidence that many biological soft tissues display both strain- and time-dependent behavior has led to the development of fully non-linear viscoelastic modeling techniques to represent the tissue’s mechanical response under dynamic conditions. Since the current stress state of a viscoelastic material is dependent on all previous loading events, numerical analyses are complicated by the requirement of computing and storing the stress at each step throughout the load history. This requirement quickly becomes computationally expensive, and in some cases intractable, for finite element models. Therefore, we have developed a strain-dependent numerical integration approach for capturing non-linear viscoelasticity that enables calculation of the current stress from a strain-dependent history state variable stored from the preceding time step only, which improves both fitting efficiency and computational tractability. This methodology was validated based on its ability to recover non-linear viscoelastic coefficients from simulated stress-relaxation (six strain levels) and dynamic cyclic (three frequencies) experimental stress-strain data. The model successfully fit each data set with average errors in recovered coefficients of 0.3% for stress-relaxation fits and 0.1% for cyclic. The results support the use of the presented methodology to develop linear or non-linear viscoelastic models from stress-relaxation or cyclic experimental data of biological soft tissues. PMID:29293558
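The single stored history-state idea resembles the standard recursive update used for exponential relaxation kernels, in which the hereditary integral is advanced one step at a time instead of re-integrating the full load history. The following generic sketch illustrates that recursion; it is not the authors' exact strain-dependent formulation, and all parameter values are invented:

```python
import numpy as np

def stress_history(strain, dt, tau, g):
    """Advance a viscoelastic hereditary integral with one stored state:
    h_{n+1} = exp(-dt/tau) * h_n + g * (eps_{n+1} - eps_n),
    so the full load history never needs to be stored."""
    decay = np.exp(-dt / tau)
    h = 0.0
    out = []
    eps_prev = strain[0]
    for eps in strain:
        h = decay * h + g * (eps - eps_prev)
        eps_prev = eps
        out.append(h)
    return np.array(out)

# Illustrative stress-relaxation test: a step of strain held constant.
strain = [0.0] + [0.1] * 50
h = stress_history(strain, dt=0.1, tau=1.0, g=1.0)
print(h.max())       # peak stress contribution at the loading step
print(h[-1] < h[1])  # relaxation: the stored state decays over time
```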
A simple theory of motor protein kinetics and energetics. II.
Qian, H
2000-01-10
A three-state stochastic model of motor protein [Qian, Biophys. Chem. 67 (1997) pp. 263-267] is further developed to illustrate the relationship between the external load on an individual motor protein in aqueous solution with various ATP concentrations and its steady-state velocity. A wide variety of dynamic motor behaviors is obtained from this simple model. For the particular case of free-load translocation being the most unfavorable step within the hydrolysis cycle, the load-velocity curve is quasi-linear, V/Vmax = (c^(F/Fmax) - c)/(1 - c), in contrast to the hyperbolic relationship proposed by A.V. Hill for macroscopic muscle. Significant deviation from linearity is expected when the velocity is less than 10% of its maximal (free-load) value--a situation under which the processivity of the motor diminishes and experimental observations are less certain. We then investigate the dependence of the load-velocity curve on ATP (ADP) concentration. It is shown that the free-load Vmax exhibits Michaelis-Menten-like behavior, and the isometric Fmax increases linearly with ln([ATP]/[ADP]). However, the quasi-linear region is independent of the ATP concentration, yielding an apparently ATP-independent maximal force below the true isometric force. Finally, the heat production as a function of ATP concentration and external load is calculated. Formulated in simple terms and solved with elementary algebra, the present model provides an integrated picture of the biochemical kinetics and mechanical energetics of motor proteins.
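Reading the quasi-linear relation as V/Vmax = (c^(F/Fmax) - c)/(1 - c) (a plausible reconstruction of the garbled superscript: it gives V/Vmax = 1 at zero load and 0 at F = Fmax), the curve is easy to evaluate; the parameter value below is illustrative, not a fitted motor-protein value:

```python
import numpy as np

def velocity_ratio(F, Fmax, c):
    """Quasi-linear load-velocity relation V/Vmax = (c**(F/Fmax) - c) / (1 - c).

    Returns 1 at zero load (F = 0) and 0 at the isometric force (F = Fmax)."""
    return (c ** (F / Fmax) - c) / (1.0 - c)

c = 0.1  # illustrative model parameter only
print(velocity_ratio(0.0, 1.0, c))  # -> 1.0  (free load)
print(velocity_ratio(1.0, 1.0, c))  # -> 0.0  (isometric)
```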
Quantum monodromy and quantum phase transitions in floppy molecules
NASA Astrophysics Data System (ADS)
Larese, Danielle
2012-10-01
A simple algebraic Hamiltonian has been used to explore the vibrational and rotational spectra of the skeletal bending modes of HCNO, BrCNO, NCNCS, and other "floppy" (quasi-linear or quasi-bent) molecules. These molecules have large-amplitude, low-energy bending modes and champagne-bottle potential surfaces, making them good candidates for observing quantum phase transitions (QPT). We describe the geometric phase transitions from bent to linear in these and other non-rigid molecules, quantitatively analyzing the spectroscopic signatures of ground state QPT, excited state QPT, and quantum monodromy. The algebraic framework is ideal for this work because of its small calculational effort yet robust results. Although these methods have historically found success with tri- and four-atomic molecules, we now address five-atomic and simple branched molecules such as CH3NCO and GeH3NCO. Extraction of potential functions is completed for several molecules, resulting in predictions of barriers to linearity and equilibrium bond angles.
Solid oxide fuel cell simulation and design optimization with numerical adjoint techniques
NASA Astrophysics Data System (ADS)
Elliott, Louie C.
This dissertation reports on the application of numerical optimization techniques as applied to fuel cell simulation and design. Due to the "multi-physics" inherent in a fuel cell, which results in a highly coupled and non-linear behavior, an experimental program to analyze and improve the performance of fuel cells is extremely difficult. This program applies new optimization techniques with computational methods from the field of aerospace engineering to the fuel cell design problem. After an overview of fuel cell history, importance, and classification, a mathematical model of solid oxide fuel cells (SOFC) is presented. The governing equations are discretized and solved with computational fluid dynamics (CFD) techniques including unstructured meshes, non-linear solution methods, numerical derivatives with complex variables, and sensitivity analysis with adjoint methods. Following the validation of the fuel cell model in 2-D and 3-D, the results of the sensitivity analysis are presented. The sensitivity derivative for a cost function with respect to a design variable is found with three increasingly sophisticated techniques: finite difference, direct differentiation, and adjoint. A design cycle is performed using a simple optimization method to improve the value of the implemented cost function. The results from this program could improve fuel cell performance and lessen the world's dependence on fossil fuels.
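One of the CFD ingredients named above, numerical derivatives with complex variables, is the complex-step method; a minimal sketch on an invented smooth cost function (not the SOFC model's):

```python
import numpy as np

def complex_step_derivative(f, x, h=1e-20):
    """Complex-step derivative: f'(x) ~ Im(f(x + i*h)) / h.

    Unlike finite differences there is no subtractive cancellation,
    so h can be taken extremely small for near machine-precision accuracy."""
    return np.imag(f(x + 1j * h)) / h

# Illustrative smooth cost function (not the fuel cell model's).
f = lambda x: np.exp(x) * np.sin(x)
x0 = 0.7
exact = np.exp(x0) * (np.sin(x0) + np.cos(x0))
print(abs(complex_step_derivative(f, x0) - exact))  # essentially machine zero
```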
Molenaar, Dylan; Bolsinova, Maria
2017-05-01
In generalized linear modelling of responses and response times, the observed response time variables are commonly transformed to make their distribution approximately normal. A normal distribution for the transformed response times is desirable as it justifies the linearity and homoscedasticity assumptions in the underlying linear model. Past research has, however, shown that the transformed response times are not always normal. Models have been developed to accommodate this violation. In the present study, we propose a modelling approach for responses and response times to test and model non-normality in the transformed response times. Most importantly, we distinguish between non-normality due to heteroscedastic residual variances, and non-normality due to a skewed speed factor. In a simulation study, we establish parameter recovery and the power to separate both effects. In addition, we apply the model to a real data set. © 2017 The Authors. British Journal of Mathematical and Statistical Psychology published by John Wiley & Sons Ltd on behalf of British Psychological Society.
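The rationale for transforming response times can be sketched with synthetic data: raw lognormal response times are right-skewed, and a log transform removes most of the skew, supporting the approximate normality the underlying linear model assumes (all values below are simulated, not study data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic response times: lognormal, hence right-skewed.
rt = rng.lognormal(mean=0.0, sigma=0.5, size=10_000)

def skewness(x):
    """Sample skewness: mean of standardized values cubed."""
    x = np.asarray(x)
    z = (x - x.mean()) / x.std()
    return float(np.mean(z ** 3))

print(skewness(rt))          # clearly positive (right-skewed)
print(skewness(np.log(rt)))  # near zero after the log transform
```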
Wu, Baolin
2006-02-15
Differential gene expression detection and sample classification using microarray data have received much research interest recently. Owing to the large number of genes p and small number of samples n (p > n), microarray data analysis poses big challenges for statistical analysis. An obvious problem owing to the 'large p small n' is over-fitting. Just by chance, we are likely to find some non-differentially expressed genes that can classify the samples very well. The idea of shrinkage is to regularize the model parameters to reduce the effects of noise and produce reliable inferences. Shrinkage has been successfully applied in microarray data analysis. The SAM statistics proposed by Tusher et al. and the 'nearest shrunken centroid' proposed by Tibshirani et al. are ad hoc shrinkage methods. Both methods are simple, intuitive and prove to be useful in empirical studies. Recently Wu proposed the penalized t/F-statistics with shrinkage by formally using L1-penalized linear regression models for two-class microarray data, showing good performance. In this paper we systematically discuss the use of penalized regression models for analyzing microarray data. We generalize the two-class penalized t/F-statistics proposed by Wu to multi-class microarray data. We formally derive the ad hoc shrunken centroid used by Tibshirani et al. using the L1-penalized regression models. And we show that the penalized linear regression models provide a rigorous and unified statistical framework for sample classification and differential gene expression detection.
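The L1 penalty behind these shrinkage statistics acts by soft-thresholding the per-gene centroid differences; a minimal sketch (the difference values and threshold below are invented):

```python
import numpy as np

def soft_threshold(d, delta):
    """Soft-thresholding operator arising from L1-penalized regression:
    shrinks each value toward zero by delta and sets small values exactly to zero."""
    return np.sign(d) * np.maximum(np.abs(d) - delta, 0.0)

# Illustrative standardized per-gene centroid differences, not real data.
d = np.array([2.5, -0.3, 0.8, -1.9, 0.1])
print(soft_threshold(d, 0.5))  # -> [ 2.   0.   0.3 -1.4  0. ]
```

Genes whose differences fall below the threshold are zeroed out, which is exactly how the nearest shrunken centroid discards uninformative genes.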
NASA Astrophysics Data System (ADS)
Arratia, Cristobal
2014-11-01
A simple construction will be shown, which reveals a general property satisfied by the evolution in time of a state vector composed by a superposition of orthogonal eigenmodes of a linear dynamical system. This property results from the conservation of the inner product between such state vectors evolving forward and backwards in time, and it can be simply evaluated from the state vector and its first and second time derivatives. This provides an efficient way to characterize, instantaneously along any specific phase-space trajectory of the linear system, the relevance of the non-normality of the linearized Navier-Stokes operator on the energy (or any other norm) gain or decay of small perturbations. Examples of this characterization applied to stationary or time dependent base flows will be shown. CONICYT, Concurso de Apoyo al Retorno de Investigadores del Extranjero, folio 821320055.
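The role of non-normality in transient energy growth, central to the characterization above, can be illustrated with a toy example (not the author's construction; the operator and values are invented): a linearly stable but non-normal 2×2 system can transiently amplify perturbation energy before it decays.

```python
import numpy as np

# Stable but non-normal operator A = [[-1, k], [0, -2]] (both eigenvalues
# negative); for this triangular A the matrix exponential has a closed form.
k = 20.0

def propagator(t):
    """expm(A t) for A = [[-1, k], [0, -2]], written in closed form."""
    e1, e2 = np.exp(-t), np.exp(-2.0 * t)
    return np.array([[e1, k * (e1 - e2)], [0.0, e2]])

def energy_gain(t, x0=np.array([0.0, 1.0])):
    """Perturbation energy |x(t)|^2 starting from a unit-energy state."""
    x = propagator(t) @ x0
    return float(x @ x)

print(energy_gain(0.0))   # -> 1.0
print(energy_gain(0.5))   # far above 1: transient growth despite stability
print(energy_gain(10.0))  # eventual decay toward zero
```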
Symmetry-preserving perturbations of the Bateman Lagrangian and dissipative systems
NASA Astrophysics Data System (ADS)
Campoamor-Stursberg, Rutwig
2017-03-01
Perturbations of the classical Bateman Lagrangian preserving a certain subalgebra of Noether symmetries are studied, and conservative perturbations are characterized by the Lie algebra sl(2, ℝ) ⊕ so(2). Non-conservative albeit integrable perturbations are determined by the simple Lie algebra sl(2,ℝ), showing further the relation of the corresponding non-linear systems with the notion of generalized Ermakov systems.
Prediction of wastewater treatment plants performance based on artificial fish school neural network
NASA Astrophysics Data System (ADS)
Zhang, Ruicheng; Li, Chong
2011-10-01
A reliable model for a wastewater treatment plant is essential in providing a tool for predicting its performance and forming a basis for controlling the operation of the process. This would minimize operation costs and assess the stability of the environmental balance. Given the multi-variable, uncertain, non-linear characteristics of the wastewater treatment system, an artificial fish school neural network prediction model is established based on actual operation data from the wastewater treatment system. The model overcomes several disadvantages of the conventional BP neural network. The results of the model calculation show that the predicted values match the measured values well, demonstrating good simulation and prediction performance and the ability to optimize the operation status. The established prediction model provides a simple and practical way for the operation and management of a wastewater treatment plant, and has good research and engineering practical value.
NASA Astrophysics Data System (ADS)
Bertolesi, Elisa; Milani, Gabriele; Poggi, Carlo
2016-12-01
Two FE modeling techniques are presented and critically discussed for the non-linear analysis of tuff masonry panels reinforced with FRCM and subjected to standard diagonal compression tests. The specimens, tested at the University of Naples (Italy), are unreinforced and FRCM-retrofitted walls. The extensive characterization of the constituent materials allowed the adoption of very sophisticated numerical modeling techniques. In particular, the results obtained by means of a micro-modeling strategy and a homogenization approach are compared. The first modeling technique is a tridimensional heterogeneous micro-modeling in which the constituent materials (bricks, joints, reinforcing mortar and reinforcing grid) are modeled separately. The second approach is based on a two-step homogenization procedure, previously developed by the authors, where the elementary cell is discretized by means of three-noded plane stress elements and non-linear interfaces. The non-linear structural analyses are performed replacing the homogenized orthotropic continuum with a rigid element and non-linear spring assemblage (RBSM). All the simulations presented here are performed using the commercial software Abaqus. Pros and cons of the two approaches are discussed with reference to their reliability in reproducing global force-displacement curves and crack patterns, as well as to the rather different computational effort required by the two strategies.
Renormalizing a viscous fluid model for large scale structure formation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Führer, Florian; Rigopoulos, Gerasimos, E-mail: fuhrer@thphys.uni-heidelberg.de, E-mail: gerasimos.rigopoulos@ncl.ac.uk
2016-02-01
Using the Stochastic Adhesion Model (SAM) as a simple toy model for cosmic structure formation, we study renormalization and the removal of the cutoff dependence from loop integrals in perturbative calculations. SAM shares the same symmetry with the full system of continuity+Euler equations and includes a viscosity term and a stochastic noise term, similar to the effective theories recently put forward to model CDM clustering. We show in this context that if the viscosity and noise terms are treated as perturbative corrections to the standard Eulerian perturbation theory, they are necessarily non-local in time. To ensure Galilean Invariance, higher order vertices related to the viscosity and the noise must then be added, and we explicitly show at one loop that these terms act as counter terms for vertex diagrams. The Ward Identities ensure that the non-local-in-time theory can be renormalized consistently. Another possibility is to include the viscosity in the linear propagator, resulting in exponential damping at high wavenumber. The resulting local-in-time theory is then renormalizable to one loop, requiring fewer free parameters for its renormalization.
Normalization of cell responses in cat striate cortex
NASA Technical Reports Server (NTRS)
Heeger, D. J.
1992-01-01
Simple cells in the striate cortex have been depicted as half-wave-rectified linear operators. Complex cells have been depicted as energy mechanisms, constructed from the squared sum of the outputs of quadrature pairs of linear operators. However, the linear/energy model falls short of a complete explanation of striate cell responses. In this paper, a modified version of the linear/energy model is presented in which striate cells mutually inhibit one another, effectively normalizing their responses with respect to stimulus contrast. This paper reviews experimental measurements of striate cell responses, and shows that the new model explains a significantly larger body of physiological data.
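The divisive-normalization idea described above (each cell's energy response divided by the pooled activity of the population plus a semi-saturation constant) can be sketched minimally; the constants and drive values are illustrative, not fitted to physiology:

```python
import numpy as np

def normalized_responses(drives, sigma=0.1):
    """Divisive normalization: each cell's squared driving input is divided
    by the pooled activity of all cells plus a semi-saturation constant sigma,
    so responses saturate with stimulus contrast instead of growing without bound."""
    drives = np.asarray(drives, dtype=float)
    energy = drives ** 2
    return energy / (sigma ** 2 + energy.sum())

low = normalized_responses([0.1, 0.05])   # low-contrast stimulus
high = normalized_responses([1.0, 0.5])   # same pattern at high contrast
print(low, high)
# The ratio between the two cells' responses is preserved across contrast,
# while the absolute responses saturate at high contrast.
```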
NASA Astrophysics Data System (ADS)
Ockenden, Mary C.; Tych, Wlodek; Beven, Keith J.; Collins, Adrian L.; Evans, Robert; Falloon, Peter D.; Forber, Kirsty J.; Hiscock, Kevin M.; Hollaway, Michael J.; Kahana, Ron; Macleod, Christopher J. A.; Villamizar, Martha L.; Wearing, Catherine; Withers, Paul J. A.; Zhou, Jian G.; Benskin, Clare McW. H.; Burke, Sean; Cooper, Richard J.; Freer, Jim E.; Haygarth, Philip M.
2017-12-01
Excess nutrients in surface waters, such as phosphorus (P) from agriculture, result in poor water quality, with adverse effects on ecological health and costs for remediation. However, understanding and prediction of P transfers in catchments have been limited by inadequate data and over-parameterised models with high uncertainty. We show that, with high temporal resolution data, we are able to identify simple dynamic models that capture the P load dynamics in three contrasting agricultural catchments in the UK. For a flashy catchment, a linear, second-order (two pathways) model for discharge gave high simulation efficiencies for short-term storm sequences and was useful in highlighting uncertainties in out-of-bank flows. A model with non-linear rainfall input was appropriate for predicting seasonal or annual cumulative P loads where antecedent conditions affected the catchment response. For second-order models, the time constant for the fast pathway varied between 2 and 15 h for all three catchments and for both discharge and P, confirming that high temporal resolution data are necessary to capture the dynamic responses in small catchments (10-50 km²). The models led to a better understanding of the dominant nutrient transfer modes, which will be helpful in determining phosphorus transfers following changes in precipitation patterns in the future.
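The linear, second-order (two-pathway) model described above can be read as two parallel first-order stores with fast and slow time constants; a hedged discrete-time sketch, with invented parameters rather than the identified ones:

```python
import numpy as np

def two_pathway_response(u, dt, tau_fast, tau_slow, g_fast, g_slow):
    """Simulate a linear second-order model built from two parallel
    first-order pathways: each store obeys x' = (-x + g*u)/tau, and the
    output is the sum of the fast and slow pathway states."""
    a_f, a_s = np.exp(-dt / tau_fast), np.exp(-dt / tau_slow)
    xf = xs = 0.0
    out = []
    for ui in u:
        xf = a_f * xf + (1 - a_f) * g_fast * ui  # fast pathway update
        xs = a_s * xs + (1 - a_s) * g_slow * ui  # slow pathway update
        out.append(xf + xs)
    return np.array(out)

# Illustrative storm pulse: rainfall input for 5 hours, then dry.
u = [1.0] * 5 + [0.0] * 45
y = two_pathway_response(u, dt=1.0, tau_fast=3.0, tau_slow=24.0,
                         g_fast=0.7, g_slow=0.3)
print(y.argmax())  # -> 4: the response peaks at the end of the input pulse
```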
González, Juan R; Carrasco, Josep L; Armengol, Lluís; Villatoro, Sergi; Jover, Lluís; Yasui, Yutaka; Estivill, Xavier
2008-01-01
Background: The MLPA method is a potentially useful semi-quantitative method to detect copy number alterations in targeted regions. In this paper, we propose a method for the normalization procedure based on a non-linear mixed model, as well as a new approach for determining the statistical significance of altered probes based on a linear mixed model. This method establishes a threshold by using different tolerance intervals that accommodate the specific random error variability observed in each test sample. Results: Through simulation studies we have shown that our proposed method outperforms two existing methods that are based on simple threshold rules or iterative regression. We have illustrated the method using a controlled MLPA assay in which targeted regions are variable in copy number in individuals suffering from disorders such as Prader-Willi, DiGeorge or autism, for which it showed the best performance. Conclusion: Using the proposed mixed model, we are able to determine thresholds to decide whether a region is altered. These thresholds are specific for each individual, incorporating experimental variability, resulting in improved sensitivity and specificity, as the examples with real data have revealed. PMID:18522760
Principal Curves on Riemannian Manifolds.
Hauberg, Soren
2016-09-01
Euclidean statistics are often generalized to Riemannian manifolds by replacing straight-line interpolations with geodesic ones. While these Riemannian models are familiar-looking, they are restricted by the inflexibility of geodesics, and they rely on constructions which are optimal only in Euclidean domains. We consider extensions of Principal Component Analysis (PCA) to Riemannian manifolds. Classic Riemannian approaches seek a geodesic curve passing through the mean that optimizes a criterion of interest. The requirements that the solution both is geodesic and must pass through the mean tend to imply that the methods only work well when the manifold is mostly flat within the support of the generating distribution. We argue that instead of generalizing linear Euclidean models, it is more fruitful to generalize non-linear Euclidean models. Specifically, we extend the classic Principal Curves from Hastie & Stuetzle to data residing on a complete Riemannian manifold. We show that for elliptical distributions in the tangent of spaces of constant curvature, the standard principal geodesic is a principal curve. The proposed model is simple to compute and avoids many of the pitfalls of traditional geodesic approaches. We empirically demonstrate the effectiveness of the Riemannian principal curves on several manifolds and datasets.
Non-linear dynamic analysis of geared systems, part 2
NASA Technical Reports Server (NTRS)
Singh, Rajendra; Houser, Donald R.; Kahraman, Ahmet
1990-01-01
A good understanding of the steady state dynamic behavior of a geared system is required in order to design reliable and quiet transmissions. This study focuses on a system containing a spur gear pair with backlash and periodically time-varying mesh stiffness, and rolling element bearings with clearance type non-linearities. A dynamic finite element model of the linear time-invariant (LTI) system is developed. Effects of several system parameters, such as torsional and transverse flexibilities of the shafts and prime mover/load inertias, on free and forced vibration characteristics are investigated. Several reduced order LTI models are developed and validated by comparing their eigensolutions with the finite element model results. Several key system parameters such as mean load and damping ratio are identified and their effects on the non-linear frequency response are evaluated quantitatively. Other fundamental issues such as the dynamic coupling between non-linear modes, dynamic interactions between component non-linearities and time-varying mesh stiffness, and the existence of subharmonic and chaotic solutions including routes to chaos have also been examined in depth.
Thermo-optical dynamics in an optically pumped Photonic Crystal nano-cavity.
Brunstein, M; Braive, R; Hostein, R; Beveratos, A; Robert-Philip, I; Sagnes, I; Karle, T J; Yacomotti, A M; Levenson, J A; Moreau, V; Tessier, G; De Wilde, Y
2009-09-14
Linear and non-linear thermo-optical dynamical regimes were investigated in a photonic crystal cavity. First, we have measured the thermal relaxation time in an InP-based nano-cavity with quantum dots in the presence of optical pumping. The experimental method presented here allows one to obtain the dynamics of temperature in a nanocavity based on reflectivity measurements of a cw probe beam coupled through an adiabatically tapered fiber. Characteristic times of 1.0 ± 0.2 μs and 0.9 ± 0.2 μs for the heating and the cooling processes were obtained. Finally, thermal dynamics were also investigated in a thermo-optical bistable regime. Switch-on/off times of 2 μs and 4 μs, respectively, were measured, which could be explained in terms of a simple non-linear dynamical representation.
Non-Linear Finite Element Modeling of THUNDER Piezoelectric Actuators
NASA Technical Reports Server (NTRS)
Taleghani, Barmac K.; Campbell, Joel F.
1999-01-01
A NASTRAN non-linear finite element model has been developed for predicting the dome heights of THUNDER (THin Layer UNimorph Ferroelectric DrivER) piezoelectric actuators. To analytically validate the finite element model, a comparison was made with a non-linear plate solution using von Kármán's approximation. A 500 V input was used to examine the actuator deformation. The NASTRAN finite element model was also compared with experimental results. Four groups of specimens were fabricated and tested. Four different input voltages, which included 120, 160, 200, and 240 Vp-p with a 0 V offset, were used for this comparison.
Non-Gaussian lineshapes and dynamics of time-resolved linear and nonlinear (correlation) spectra.
Dinpajooh, Mohammadhasan; Matyushov, Dmitry V
2014-07-17
Signatures of nonlinear and non-Gaussian dynamics in time-resolved linear and nonlinear (correlation) 2D spectra are analyzed in a model considering a linear plus quadratic dependence of the spectroscopic transition frequency on a Gaussian nuclear coordinate of the thermal bath (quadratic coupling). This new model is contrasted to the commonly assumed linear dependence of the transition frequency on the medium nuclear coordinates (linear coupling). The linear coupling model predicts equality between the Stokes shift and equilibrium correlation functions of the transition frequency and a time-independent spectral width. Both predictions are often violated, and we ask here whether a nonlinear solvent response and/or non-Gaussian dynamics are required to explain these observations. We find that correlation functions of spectroscopic observables calculated in the quadratic coupling model depend on the chromophore's electronic state and the spectral width gains time dependence, all in violation of the predictions of the linear coupling models. Lineshape functions of 2D spectra are derived assuming Ornstein-Uhlenbeck dynamics of the bath nuclear modes. The model predicts asymmetry of 2D correlation plots and bending of the center line. The latter is often used to extract two-point correlation functions from 2D spectra. The dynamics of the transition frequency are non-Gaussian. However, the effect of non-Gaussian dynamics is limited to the third-order (skewness) time correlation function, without affecting the time correlation functions of higher order. The theory is tested against molecular dynamics simulations of a model polar-polarizable chromophore dissolved in force-field water.
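The central mechanism above, a quadratic dependence of the transition frequency on a Gaussian bath coordinate, can be illustrated numerically: even though the Ornstein-Uhlenbeck coordinate is Gaussian, the quadratic term gives the frequency a nonzero skewness. A minimal sketch with illustrative coupling constants (not the paper's parameters):

```python
import numpy as np

# Euler-Maruyama simulation of an Ornstein-Uhlenbeck bath coordinate q(t)
# with unit stationary variance, then a linear-plus-quadratic mapping
# w = a*q + b*q**2 (quadratic coupling). a and b are illustrative.
rng = np.random.default_rng(0)
tau, dt, n = 1.0, 0.01, 200_000        # relaxation time, step, samples
q = np.empty(n)
q[0] = 0.0
for i in range(1, n):
    q[i] = q[i-1] - (q[i-1] / tau) * dt + np.sqrt(2 * dt / tau) * rng.standard_normal()

a, b = 1.0, 0.3
w = a * q + b * q**2                   # quadratic coupling makes w non-Gaussian
skew = np.mean((w - w.mean())**3) / np.std(w)**3
print(f"skewness of transition frequency: {skew:.2f}")
```

For q ~ N(0, 1) the skewness of w is (6 a² b + 8 b³) / (a² + 2 b²)^{3/2}, about 1.6 for these values, so the simulated estimate should be clearly nonzero.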
HOW GALACTIC ENVIRONMENT REGULATES STAR FORMATION
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meidt, Sharon E.
2016-02-10
In a new simple model I reconcile two contradictory views on the factors that determine the rate at which molecular clouds form stars—internal structure versus external, environmental influences—providing a unified picture for the regulation of star formation in galaxies. In the presence of external pressure, the pressure gradient set up within a self-gravitating turbulent (isothermal) cloud leads to a non-uniform density distribution. Thus the local environment of a cloud influences its internal structure. In the simple equilibrium model, the fraction of gas at high density in the cloud interior is determined simply by the cloud surface density, which is itself inherited from the pressure in the immediate surroundings. This idea is tested using measurements of the properties of local clouds, which are found to show remarkable agreement with the simple equilibrium model. The model also naturally predicts the star formation relation observed on cloud scales and at the same time provides a mapping between this relation and the closer-to-linear molecular star formation relation measured on larger scales in galaxies. The key is that pressure regulates not only the molecular content of the ISM but also the cloud surface density. I provide a straightforward prescription for the pressure regulation of star formation that can be directly implemented in numerical models. Predictions for the dense gas fraction and star formation efficiency measured on large scales within galaxies are also presented, establishing the basis for a new picture of star formation regulated by galactic environment.
NASA Astrophysics Data System (ADS)
Ermakov, Ilya; Crucifix, Michel; Munhoven, Guy
2013-04-01
Complex climate models impose a high computational burden. However, computational limitations may be avoided by using emulators. In this work we present several approaches for dynamical emulation (also called metamodelling) of the Multi-Box Model (MBM) coupled to the Model of Early Diagenesis in the Upper Sediment A (MEDUSA) that simulates the carbon cycle of the ocean and atmosphere [1]. We consider two experiments performed on the MBM-MEDUSA that explore the Basin-to-Shelf Transfer (BST) dynamics. In both experiments the sea level is varied according to a paleo sea level reconstruction. Such experiments are interesting because the BST is an important cause of CO2 variation and the dynamics is potentially nonlinear. The output that we are interested in is the variation of the carbon dioxide partial pressure in the atmosphere over the Pleistocene. The first experiment considers that the BST is held constant during the simulation. In the second experiment the BST is interactively adjusted according to the sea level, since the sea level is the primary control of the growth and decay of coral reefs and other shelf carbon reservoirs. The main aim of the present contribution is to create a metamodel of the MBM-MEDUSA using the Dynamic Emulation Modelling methodology [2] and compare the results obtained using linear and non-linear methods. The first step in the emulation methodology used in this work is to identify the structure of the metamodel. In order to select an optimal approach for emulation we compare the results of identification obtained by simple linear and more complex nonlinear models. To identify the metamodel in the first experiment, simple linear regression with the least-squares method is sufficient to obtain a 99.9% fit between the temporal outputs of the model and the metamodel. For the second experiment the MBM's output is highly nonlinear.
In this case we apply nonlinear models such as NARX, a Hammerstein model, and an ad hoc switching model. After the identification we perform the parameter mapping using spline interpolation and validate the emulator on a new set of parameters. References: [1] G. Munhoven, "Glacial-interglacial rain ratio changes: Implications for atmospheric CO2 and ocean-sediment interaction," Deep-Sea Res Pt II, vol. 54, pp. 722-746, 2007. [2] A. Castelletti et al., "A general framework for Dynamic Emulation Modelling in environmental problems," Environ Modell Softw, vol. 34, pp. 5-18, 2012.
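The linear identification step described above can be sketched in a few lines: fit a least-squares emulator of a model output driven by a forcing record and score it with the fit percentage common in system identification, fit% = 100·(1 − ‖y − ŷ‖/‖y − mean(y)‖). The forcing, coefficients and noise level below are illustrative stand-ins, not MBM-MEDUSA output:

```python
import numpy as np

# Ordinary least-squares emulator of a near-linear model output driven by a
# stand-in "sea level" forcing record (a random walk); all numbers illustrative.
rng = np.random.default_rng(1)
sea_level = rng.standard_normal(500).cumsum()                # stand-in forcing
y = 2.0 * sea_level + 5.0 + 0.01 * rng.standard_normal(500)  # near-linear output

X = np.column_stack([sea_level, np.ones_like(sea_level)])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
yhat = X @ coef
fit = 100 * (1 - np.linalg.norm(y - yhat) / np.linalg.norm(y - y.mean()))
print(f"fit = {fit:.1f}%")
```

When the model output is nearly linear in the forcing, as in the first experiment, fits above 99% are reached; a strongly nonlinear output (the second experiment) is where NARX-type structures become necessary.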
Barzi, Federica; Jones, Graham R D; Hughes, Jaquelyne T; Lawton, Paul D; Hoy, Wendy; O'Dea, Kerin; Jerums, George; MacIsaac, Richard J; Cass, Alan; Maple-Brown, Louise J
2018-03-01
Being able to estimate kidney decline accurately is particularly important in Indigenous Australians, a population at increased risk of developing chronic kidney disease and end stage kidney disease. The aim of this analysis was to explore the trend of decline in estimated glomerular filtration rate (eGFR) over a four-year period using multiple local creatinine measures, compared with estimates derived using centrally-measured enzymatic creatinine and with estimates derived using only two local measures. The eGFR study comprised a cohort of over 600 Aboriginal Australian participants recruited from over twenty sites in urban, regional and remote Australia across five strata of health, diabetes and kidney function. Trajectories of eGFR were explored in 385 participants with at least three local creatinine records using graphical methods that compared the linear trends fitted using linear mixed models with non-linear trends fitted using fractional polynomial equations. Temporal changes of local creatinine were also characterized using group-based modelling. Analyses were stratified by eGFR (<60; 60-89; 90-119 and ≥120 ml/min/1.73 m²) and albuminuria categories (<3 mg/mmol; 3-30 mg/mmol; >30 mg/mmol). Mean age of the participants was 48 years, 64% were female and the median follow-up was 3 years. Decline of eGFR was accurately estimated using simple linear regression models, and locally measured creatinine was as good as centrally measured creatinine at predicting kidney decline in people with an eGFR <60 and an eGFR 60-90 ml/min/1.73 m² with albuminuria. Analyses showed that one baseline and one follow-up locally measured creatinine may be sufficient to estimate short term (up to four years) kidney function decline. The greatest yearly decline was estimated in those with eGFR 60-90 and macro-albuminuria: -6.21 (-8.20, -4.23) ml/min/1.73 m².
Short term estimates of kidney function decline can be reliably derived using an easy-to-implement and simple-to-interpret linear mixed effect model. Locally measured creatinine did not differ from centrally measured creatinine, and is thus an accurate, cost-efficient and timely means of monitoring kidney function progression. Copyright © 2018 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
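The two-points-versus-many-points finding above has a simple arithmetic core: when decline is close to linear, the yearly slope from a baseline and one follow-up measurement matches the least-squares slope through every visit. A sketch with invented eGFR values (not study data):

```python
import numpy as np

# Illustrative per-subject eGFR trajectory over four years of follow-up.
years = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
egfr  = np.array([75.0, 69.0, 63.5, 56.5, 50.0])   # ml/min/1.73 m^2 (invented)

slope_all = np.polyfit(years, egfr, 1)[0]                   # every measurement
slope_two = (egfr[-1] - egfr[0]) / (years[-1] - years[0])   # baseline + follow-up only
print(f"all points: {slope_all:.2f}, two points: {slope_two:.2f} ml/min/1.73 m^2 per year")
```

With this near-linear trajectory the two estimates agree to the second decimal; strong non-linearity or noisy single measurements would break that agreement.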
Improved Simulation of Electrodiffusion in the Node of Ranvier by Mesh Adaptation.
Dione, Ibrahima; Deteix, Jean; Briffard, Thomas; Chamberland, Eric; Doyon, Nicolas
2016-01-01
In neural structures with complex geometries, numerical resolution of the Poisson-Nernst-Planck (PNP) equations is necessary to accurately model electrodiffusion. This formalism allows one to describe ionic concentrations and the electric field (even away from the membrane) with arbitrary spatial and temporal resolution, which is impossible to achieve with models relying on cable theory. However, solving the PNP equations on complex geometries involves handling intricate numerical difficulties related to the spatial discretization, the temporal discretization or the resolution of the linearized systems, often requiring large computational resources which have limited the use of this approach. In the present paper, we investigate the best ways to use the finite element method (FEM) to solve the PNP equations on domains with discontinuous properties (such as occur at the membrane-cytoplasm interface). 1) Using a simple 2D geometry to allow comparison with an analytical solution, we show that mesh adaptation is a very (if not the most) efficient way to obtain accurate solutions while limiting the computational effort; 2) we use mesh adaptation in a 3D model of a node of Ranvier to reveal details of the solution which are nearly impossible to resolve with other modelling techniques. For instance, we exhibit a non-linear distribution of the electric potential within the membrane due to the non-uniform width of the myelin and investigate its impact on the spatial profile of the electric field in the Debye layer.
NASA Astrophysics Data System (ADS)
Brown, R. A.
2005-08-01
This paper is adapted from a presentation at the session of the European Geophysical Society meeting in 2002 honouring Joost Businger. It documents the interaction of the non-linear planetary boundary-layer (PBL) model (UW-PBL) and satellite remote sensing of marine surface winds from verification and calibration studies for the sensor model function to the current state of verification of the model by satellite data. It is also a personal history where Joost Businger had seminal input to this research at several critical junctures. The first scatterometer in space was on SeaSat in 1978, while currently in orbit there are the QuikSCAT and ERS-2 scatterometers and the WindSat radiometer. The volume and detail of data from the scatterometers during the past decade are unprecedented, though the value of these data depends on a careful interpretation of the PBL dynamics. The model functions (algorithms) that relate surface wind to sensor signal have evolved from straight empirical correlation with simple surface-layer 10-m winds to satellite sensor model functions for surface pressure fields. A surface stress model function is also available. The validation data for the satellite model functions depended crucially on the PBL solution. The non-linear solution for the flow of fluid in the boundary layer of a rotating coordinate system was completed in 1969. The implications for traditional ways of measuring and modelling the PBL were huge and continue to this day. Unfortunately, this solution replaced an elegant one by Ekman with a stability/finite perturbation equilibrium solution. Consequently, there has been great reluctance to accept this solution. The verification of model predictions has been obtained from the satellite data.
Complex versus simple models: ion-channel cardiac toxicity prediction.
Mistry, Hitesh B
2018-01-01
There is growing interest in applying detailed mathematical models of the heart for ion-channel related cardiac toxicity prediction. However, there is debate as to whether such complex models are required. Here an assessment of the predictive performance of two established large-scale biophysical cardiac models and a simple linear model, Bnet, was conducted. Three ion-channel data-sets were extracted from the literature. Each compound was designated a cardiac risk category using two different classification schemes based on information within CredibleMeds. The predictive performance of each model within each data-set for each classification scheme was assessed via a leave-one-out cross validation. Overall the Bnet model performed as well as the leading cardiac models on two of the data-sets and outperformed both cardiac models on the third. These results highlight the importance of benchmarking complex versus simple models but also encourage the development of simple models.
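A leave-one-out cross validation of a simple linear risk score can be sketched in a few lines. The score below, hERG block minus the sum of inward-current (Na, CaL) blocks, only approximates the spirit of a net-block model like Bnet; the six "compounds", their block fractions and risk labels are invented for illustration:

```python
import numpy as np

# Invented ion-channel block fractions and torsade risk labels (1 = torsadogenic).
herg = np.array([0.80, 0.70, 0.60, 0.20, 0.10, 0.15])
na   = np.array([0.10, 0.00, 0.10, 0.30, 0.20, 0.25])
cal  = np.array([0.00, 0.10, 0.00, 0.30, 0.30, 0.20])
risk = np.array([1, 1, 1, 0, 0, 0])

score = herg - (na + cal)          # net repolarization block (assumed score)
correct = 0
for i in range(len(risk)):
    train = np.arange(len(risk)) != i
    # threshold fitted on the training fold: midpoint between class mean scores
    thr = 0.5 * (score[train][risk[train] == 1].mean()
                 + score[train][risk[train] == 0].mean())
    correct += int((score[i] > thr) == bool(risk[i]))
acc = correct / len(risk)
print(f"leave-one-out accuracy: {acc:.2f}")
```

The held-out compound never influences the threshold, which is what makes the accuracy estimate honest for such small data-sets.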
Narayanan, Neethu; Gupta, Suman; Gajbhiye, V T; Manjaiah, K M
2017-04-01
A carboxymethyl cellulose-nano organoclay (nano montmorillonite modified with 35-45 wt% dimethyl dialkyl (C14-C18) amine (DMDA)) composite was prepared by the solution intercalation method. The prepared composite was characterized by Fourier transform infrared spectroscopy (FTIR), X-ray diffraction (XRD) and scanning electron microscopy (SEM). The composite was evaluated for its sorption efficiency for the pesticides atrazine, imidacloprid and thiamethoxam. The sorption data were fitted to Langmuir and Freundlich isotherms using linear and non-linear methods. The linear regression method suggested best fitting of the sorption data to the Type II Langmuir and Freundlich isotherms. In order to avoid the bias resulting from linearization, seven different error parameters were also analyzed by the non-linear regression method. The non-linear error analysis suggested that the sorption data fitted well to the Langmuir model rather than the Freundlich model. The maximum sorption capacity, Q0 (μg/g), was highest for imidacloprid (2000), followed by thiamethoxam (1667) and atrazine (1429). The study suggests that the coefficient of determination of linear regression alone cannot be used for comparing the fit of the Langmuir and Freundlich models, and non-linear error analysis needs to be done to avoid inaccurate results. Copyright © 2017 Elsevier Ltd. All rights reserved.
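The linearization bias discussed above can be illustrated with synthetic data (not the paper's measurements): fit the Langmuir isotherm q = Q0·KL·C/(1 + KL·C) both through its linearized Type II form and by direct non-linear least squares. The non-linear fit is done here by a coarse grid search to keep the sketch dependency-free; Q0, KL and the 2% noise level are assumptions:

```python
import numpy as np

# Synthetic Langmuir sorption data with multiplicative noise (illustrative).
Q0_true, KL_true = 2000.0, 0.05                        # ug/g and L/ug
C = np.array([5.0, 10.0, 20.0, 50.0, 100.0, 200.0])    # equilibrium concentrations
rng = np.random.default_rng(2)
q = Q0_true * KL_true * C / (1 + KL_true * C) * (1 + 0.02 * rng.standard_normal(C.size))

# Linearized (Type II) Langmuir: C/q = C/Q0 + 1/(KL*Q0)
slope, intercept = np.polyfit(C, C / q, 1)
Q0_lin = 1.0 / slope

# Non-linear fit: minimize the sum of squared errors over a parameter grid
Q0s = np.linspace(500.0, 4000.0, 351)
KLs = np.linspace(0.005, 0.2, 391)
Q0g, KLg = np.meshgrid(Q0s, KLs, indexing="ij")
pred = Q0g[..., None] * KLg[..., None] * C / (1 + KLg[..., None] * C)
sse = ((q - pred) ** 2).sum(axis=-1)
i, j = np.unravel_index(np.argmin(sse), sse.shape)
Q0_nl, KL_nl = Q0s[i], KLs[j]
print(f"Q0 linearized: {Q0_lin:.0f} ug/g, Q0 non-linear: {Q0_nl:.0f} ug/g")
```

The two routes weight the residuals differently, which is exactly why the paper checks several non-linear error parameters rather than relying on the linearized R² alone.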
Nonlinear Modeling by Assembling Piecewise Linear Models
NASA Technical Reports Server (NTRS)
Yao, Weigang; Liou, Meng-Sing
2013-01-01
To preserve the nonlinearity of a full order system over a parameter range of interest, we propose a simple modeling approach that assembles a set of piecewise local solutions, including the first-order Taylor series terms expanded about some sampling states. The work by Rewienski and White inspired our use of piecewise linear local solutions. The assembly of these local approximations is accomplished by assigning nonlinear weights, through radial basis functions in this study. The efficacy of the proposed procedure is validated for a two-dimensional airfoil moving at different Mach numbers and pitching motions, under which the flow exhibits prominent nonlinear behaviors. All results confirm that our nonlinear model is accurate and stable for predicting not only aerodynamic forces but also detailed flowfields. Moreover, the model remains accurate and robust for inputs considerably different from the base trajectory in form and magnitude. This modeling preserves the nonlinearity of the problems considered in a rather simple and accurate manner.
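The assembly idea can be shown on a scalar toy problem: build first-order (Taylor) models at a few sampling states and blend them with normalized Gaussian radial-basis-function weights. The target function, number of centers and RBF width are illustrative stand-ins for the full-order system:

```python
import numpy as np

# Piecewise linear local models blended with normalized Gaussian RBF weights.
f  = np.sin            # stand-in "full order system"
df = np.cos            # its derivative at the sampling states
centers = np.linspace(0.0, np.pi, 7)   # sampling states
sigma = 0.5                            # RBF width (tuning assumption)

def blended(x):
    # local linear models: f(c_i) + f'(c_i) * (x - c_i)
    local = f(centers) + df(centers) * (x[:, None] - centers)
    w = np.exp(-((x[:, None] - centers) / sigma) ** 2)
    w /= w.sum(axis=1, keepdims=True)  # normalized weights
    return (w * local).sum(axis=1)

x = np.linspace(0.0, np.pi, 200)
err = np.max(np.abs(blended(x) - f(x)))
print(f"max error: {err:.3e}")
```

Each local model is exact at its own center, and the smooth weights hand prediction over between neighbors, which is how the assembly preserves the global nonlinearity while every ingredient stays linear.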
Mukherjee, Shalini; Yadav, Rajeev; Yung, Iris; Zajdel, Daniel P; Oken, Barry S
2011-10-01
To determine (1) whether heart rate variability (HRV) was a sensitive and reliable measure in mental effort tasks carried out by healthy seniors and (2) whether non-linear approaches to HRV analysis, in addition to traditional time and frequency domain approaches, were useful to study such effects. Forty healthy seniors performed two visual working memory tasks requiring different levels of mental effort, while ECG was recorded. They underwent the same tasks and recordings 2 weeks later. Traditional and 13 non-linear indices of HRV, including Poincaré, entropy and detrended fluctuation analysis (DFA), were determined. Time domain, especially mean R-R interval (RRI), frequency domain and, among non-linear parameters, Poincaré and DFA were the most reliable indices. Mean RRI, time domain and Poincaré were also the most sensitive to different mental effort task loads and had the largest effect size. Overall, linear measures were the most sensitive and reliable indices of mental effort. Among non-linear measures, Poincaré was the most reliable and sensitive, suggesting possible usefulness as an independent marker in cognitive function tasks in healthy seniors. A large number of HRV parameters were both reliable and sensitive indices of mental effort, although the simple linear methods were the most sensitive. Copyright © 2011 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
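The Poincaré descriptors used above reduce to two variances of the R-R series: SD1 (short-term variability, spread perpendicular to the identity line of the Poincaré plot) and SD2 (long-term variability, spread along it). A sketch on a synthetic R-R series (illustrative, not subject data):

```python
import numpy as np

# Stand-in R-R interval series: 800 ms mean with 30 ms white-noise variability.
rng = np.random.default_rng(3)
rri = 800 + 30 * rng.standard_normal(1000)     # ms

diff = np.diff(rri)
sd1 = np.sqrt(0.5 * np.var(diff))              # perpendicular to the identity line
sd2 = np.sqrt(2 * np.var(rri) - sd1 ** 2)      # along the identity line
print(f"SD1 = {sd1:.1f} ms, SD2 = {sd2:.1f} ms")
```

For an uncorrelated series SD1 ≈ SD2; real R-R data are autocorrelated, which pushes SD2 above SD1 and is what makes the ratio informative.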
Initial conditions for accurate N-body simulations of massive neutrino cosmologies
NASA Astrophysics Data System (ADS)
Zennaro, M.; Bel, J.; Villaescusa-Navarro, F.; Carbone, C.; Sefusatti, E.; Guzzo, L.
2017-04-01
The set-up of the initial conditions in cosmological N-body simulations is usually implemented by rescaling the desired low-redshift linear power spectrum to the required starting redshift consistently with the Newtonian evolution of the simulation. The implementation of this practical solution requires more care in the context of massive neutrino cosmologies, mainly because of the non-trivial scale-dependence of the linear growth that characterizes these models. In this work, we consider a simple two-fluid, Newtonian approximation for cold dark matter and massive neutrino perturbations that can reproduce the cold matter linear evolution predicted by Boltzmann codes such as CAMB or CLASS with 0.1 per cent accuracy or better for all redshifts relevant to non-linear structure formation. We use this description, in the first place, to quantify the systematic errors induced by several approximations often assumed in numerical simulations, including the typical set-up of the initial conditions for massive neutrino cosmologies adopted in previous works. We then take advantage of the flexibility of this approach to rescale the late-time linear power spectra to the simulation initial redshift, in order to be as consistent as possible with the dynamics of the N-body code and the approximations it assumes. We implement our method in a public code (REPS, rescaled power spectra for initial conditions with massive neutrinos; https://github.com/matteozennaro/reps) providing the initial displacements and velocities for cold dark matter and neutrino particles that will allow accurate, i.e. 1 per cent level, numerical simulations for this cosmological scenario.
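The rescaling step itself is a division of the target spectrum by the squared growth accumulated between the starting and target redshifts, P_ini(k) = P_target(k)/D(k)². A hedged sketch: the toy spectrum and the toy growth function (a mild suppression below an assumed neutrino free-streaming scale) only mimic the qualitative behaviour; real runs would take D(k, z) from the two-fluid solver or a Boltzmann code:

```python
import numpy as np

# Toy z=0 linear spectrum and a scale-dependent growth factor with small-scale
# suppression; all shapes and amplitudes are illustrative assumptions.
k = np.logspace(-3, 1, 100)                       # h/Mpc
P_target = 1e4 * k / (1 + (k / 0.02) ** 3)        # toy target spectrum

k_fs, D_large = 0.1, 50.0                         # free-streaming scale, large-scale growth
D = D_large * (1 - 0.05 * k**2 / (k**2 + k_fs**2))  # suppressed growth at high k

P_ini = P_target / D**2                           # spectrum to realize at z_ini
print(f"growth at k=10 relative to k=0.001: {D[-1] / D[0]:.3f}")
```

Because D(k) is smaller at high k, the initial spectrum must carry relatively more small-scale power than a scale-independent rescaling would give, which is the systematic error the paper quantifies.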
Modeling the effect of orientation on the shock response of a damageable composite material
NASA Astrophysics Data System (ADS)
Lukyanov, Alexander A.
2012-10-01
The shock response of a carbon fiber-epoxy composite (CFEC) in the through-thickness orientation differs significantly from that in the fiber directions. The hydrostatic pressure inside anisotropic materials depends on deviatoric strain components as well as volumetric strain. Non-linear effects, such as shock effects, can be incorporated through the volumetric straining in the material. Thus, a new basis is required to couple the anisotropic material stiffness and strength with anisotropic shock effects, associated energy dependence, and the damage softening process. This article presents such constitutive equations for shock wave modeling of a damageable carbon fiber-epoxy composite. Modeling the effect of fiber orientation on the shock response of a CFEC has been performed using a generalized decomposition of the stress tensor [A. A. Lukyanov, Int. J. Plast. 24, 140 (2008)] and Mie-Grüneisen extrapolation of high-pressure shock Hugoniot states to other thermodynamic states for shocked CFEC materials. The three-wave structure (non-linear anisotropic, fracture, and isotropic elastic waves) that accompanies the damage softening process is also proposed in this work for describing CFEC behavior under shock loading, which makes it possible to remove the discontinuities observed in the linear case in the relation between shock velocities and particle velocities [A. A. Lukyanov, Eur. Phys. J. B 74, 35 (2010)]. Different Hugoniot stress levels are obtained when the material is impacted in different directions; their good agreement with experiment demonstrates that the anisotropic equation of state, strength, and damage model are adequate for the simulation of shock wave propagation within damageable CFEC material. Remarkably, in the through-thickness orientation the material behaves similarly to a simple polymer, whereas in the fiber direction the model proposed in this paper explains an initial ramp followed, at sufficiently high stresses, by a much faster rising shock.
The numerical results for shock wave modeling using the proposed constitutive equations are presented and discussed, and future studies are outlined.
Non-Linear Dynamics of Saturn's Rings
NASA Astrophysics Data System (ADS)
Esposito, L. W.
2015-12-01
Non-linear processes can explain why Saturn's rings are so active and dynamic. Some of this non-linearity is captured in a simple predator-prey model: periodic forcing from the moon causes streamline crowding; this damps the relative velocity and allows aggregates to grow. About a quarter phase later, the aggregates stir the system to higher relative velocity and the limit cycle repeats each orbit, with relative velocity ranging from nearly zero to a multiple of the orbit average: 2-10x is possible. Summary of halo results: a predator-prey model for ring dynamics produces transient structures like 'straw' that can explain the halo structure and spectroscopy: cyclic velocity changes cause perturbed regions to reach higher collision speeds at some orbital phases, which preferentially removes small regolith particles; surrounding particles diffuse back too slowly to erase the effect: this gives the halo morphology. This requires energetic collisions (v ≈ 10 m/s, with throw distances about 200 km, implying objects of scale R ≈ 20 km); we propose 'straw', as observed by Cassini cameras. Transform to Duffing equation: with the coordinate transformation z = M^(2/3), the predator-prey equations can be combined to form a single second-order differential equation with harmonic resonance forcing. Ring dynamics and history implications: moon-triggered clumping at perturbed regions in Saturn's rings creates both high velocity dispersion and large aggregates at these distances, explaining both the small and large particles observed there. This confirms the triple architecture of ring particles: a broad size distribution of particles; these aggregate into temporary rubble piles; coated by a regolith of dust. We calculate the stationary size distribution using a cell-to-cell mapping procedure that converts the phase-plane trajectories to a Markov chain.
Approximating the Markov chain as an asymmetric random walk with reflecting boundaries allows us to determine the power law index from results of numerical simulations in the tidal environment surrounding Saturn. Aggregates can explain many dynamic aspects of the rings and can renew rings by shielding and recycling the material within them, depending on how long the mass is sequestered. We can ask: Are Saturn's rings a chaotic non-linear driven system?
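The limit-cycle behaviour described above can be illustrated with a toy predator-prey integration. This is NOT the authors' exact equations: aggregate mass M grows when the velocity dispersion V is low and is disrupted when V is high, stirring by large aggregates pumps V back up, and a weak periodic term stands in for the moon forcing; all coefficients and initial conditions are illustrative:

```python
import numpy as np

# Forward-Euler integration of a forced Lotka-Volterra-style predator-prey pair.
dt, n = 1e-3, 60_000
t = np.arange(n) * dt
M, V = np.empty(n), np.empty(n)
M[0], V[0] = 2.0, 1.0                  # start off-equilibrium to see the cycle
for i in range(1, n):
    forcing = 0.05 * np.sin(2 * np.pi * t[i - 1])        # stand-in moon forcing
    M[i] = M[i - 1] + dt * M[i - 1] * (1.0 - V[i - 1])   # growth damped by stirring
    V[i] = V[i - 1] + dt * (V[i - 1] * (M[i - 1] - 1.0) + forcing)
vmin, vmax = V[n // 2:].min(), V[n // 2:].max()
print(f"velocity dispersion cycles between {vmin:.2f} and {vmax:.2f}")
```

Even this crude sketch reproduces the qualitative claim: the relative velocity repeatedly swings from well below to well above its mean over each cycle rather than settling to a fixed point.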
Xiaoqiu Zuo; Urs Buehlmann; R. Edward Thomas
2004-01-01
Solving the least-cost lumber grade mix problem allows dimension mills to minimize the cost of dimension part production. This problem, due to its economic importance, has attracted much attention from researchers and industry in the past. Most solutions used linear programming models and assumed that a simple linear relationship existed between lumber grade mix and...
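The least-cost grade-mix problem above is a linear program: choose lumber quantities per grade to satisfy part-yield requirements at minimum cost. A hedged toy instance (grades, yields, costs and demands are invented), solved exactly by brute-force vertex enumeration, which suffices for two variables:

```python
import numpy as np
from itertools import combinations

# Toy LP: minimize c.x subject to A x >= b, x >= 0.
c = np.array([0.90, 0.55])        # $/bdft of high- and low-grade lumber (assumed)
A = np.array([[0.60, 0.30],       # long-part yield per bdft of each grade
              [0.20, 0.40]])      # short-part yield per bdft of each grade
b = np.array([600.0, 400.0])      # required part quantities

# Candidate vertices: intersections of any two of {A x = b rows, x_i = 0}.
rows = list(A) + [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
rhs  = list(b) + [0.0, 0.0]
best = None
for i, j in combinations(range(4), 2):
    Msys = np.array([rows[i], rows[j]])
    if abs(np.linalg.det(Msys)) < 1e-12:
        continue                                   # parallel lines: no vertex
    x = np.linalg.solve(Msys, np.array([rhs[i], rhs[j]]))
    if np.all(x >= -1e-9) and np.all(A @ x >= b - 1e-9):   # feasible vertex
        cost = c @ x
        if best is None or cost < best[0]:
            best = (cost, x)
cost, x = best
print(f"optimal mix: {x[0]:.0f} bdft high grade, {x[1]:.0f} bdft low grade, ${cost:.2f}")
```

Production-scale instances with many grades and part sizes would use a proper LP solver, but the structure (cost vector, yield matrix, demand vector) is the same.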
Beyond δ: Tailoring marked statistics to reveal modified gravity
NASA Astrophysics Data System (ADS)
Valogiannis, Georgios; Bean, Rachel
2018-01-01
Models which attempt to explain the accelerated expansion of the universe through large-scale modifications to General Relativity (GR) must satisfy the stringent experimental constraints of GR in the solar system. Viable candidates invoke a “screening” mechanism, that dynamically suppresses deviations in high density environments, making their overall detection challenging even for ambitious future large-scale structure surveys. We present methods to efficiently simulate the non-linear properties of such theories, and consider how a series of statistics that reweight the density field to accentuate deviations from GR can be applied to enhance the overall signal-to-noise ratio in differentiating the models from GR. Our results demonstrate that the cosmic density field can yield additional, invaluable cosmological information, beyond the simple density power spectrum, that will enable surveys to more confidently discriminate between modified gravity models and ΛCDM.
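A concrete example of such a density reweighting is a mark function that up-weights low-density regions, where screening is weakest. The functional form below, m(δ) = [(1 + δ_s)/(1 + δ_s + δ)]^p, is one commonly used choice (a White-type mark) rather than necessarily the paper's; δ_s and p are tuning assumptions:

```python
import numpy as np

# Mark function that boosts underdense cells and suppresses overdense ones.
def mark(delta, delta_s=1.0, p=2.0):
    return ((1 + delta_s) / (1 + delta_s + delta)) ** p

delta = np.array([-0.8, 0.0, 5.0, 50.0])   # underdense ... highly overdense
print(mark(delta))                          # weights fall as density rises
```

Multiplying the density field by such marks before computing power spectra concentrates statistical weight where modified-gravity deviations survive screening.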
Strong second harmonic generation in two-dimensional ferroelectric IV-monochalcogenides
NASA Astrophysics Data System (ADS)
Panday, Suman Raj; Fregoso, Benjamin M.
2017-11-01
The two-dimensional ferroelectrics GeS, GeSe, SnS and SnSe are expected to have large spontaneous in-plane electric polarization and enhanced shift-current response. Using density functional methods, we show that these materials also exhibit the largest effective second harmonic generation reported so far. It can reach magnitudes up to 10 nm V⁻¹, which is about an order of magnitude larger than that of prototypical GaAs. To rationalize this result we model the optical response with a simple one-dimensional two-band model along the spontaneous polarization direction. Within this model the second-harmonic generation tensor is proportional to the shift-current response tensor. The large shift current and second harmonic responses of GeS, GeSe, SnS and SnSe make them promising non-linear materials for optoelectronic applications.
Prediction on carbon dioxide emissions based on fuzzy rules
NASA Astrophysics Data System (ADS)
Pauzi, Herrini; Abdullah, Lazim
2014-06-01
There are several ways to predict air quality, varying from simple regression to models based on artificial intelligence. Most of the conventional methods are not sufficiently able to provide good forecasting performance due to the problems of non-linearity, uncertainty and complexity in the data. Artificial intelligence techniques have been used successfully to model air quality in order to cope with these problems. This paper describes a fuzzy inference system (FIS) to predict CO2 emissions in Malaysia. Furthermore, an adaptive neuro-fuzzy inference system (ANFIS) is used to compare the prediction performance. Data on five variables: energy use, gross domestic product per capita, population density, combustible renewables and waste, and CO2 intensity are employed in this comparative study. The results from the two proposed models are compared, and it is clearly shown that ANFIS outperforms FIS in CO2 prediction.
NASA Astrophysics Data System (ADS)
Perez, R. J.; Shevalier, M.; Hutcheon, I.
2004-05-01
Gas solubility is of considerable interest, not only for the theoretical understanding of vapor-liquid equilibria, but also due to extensive applications in combined geochemical, engineering, and environmental problems, such as greenhouse gas sequestration. Reliable models for gas solubility calculations in salt waters and hydrocarbons are also valuable when evaluating fluid inclusions saturated with gas components. We have modeled the solubility of methane, ethane, hydrogen, carbon dioxide, hydrogen sulfide, and five other gases in a water-brine-hydrocarbon system by solving a non-linear system of equations composed of modified Henry's Law Constants (HLC), gas fugacities, and assuming binary mixtures. HLCs are a function of pressure, temperature, brine salinity, and hydrocarbon density. Experimental data on vapor pressures and mutual solubilities of binary mixtures provide the basis for the calibration of the proposed model. It is demonstrated that, by using the Setchenow equation, only a relatively simple modification of the pure water model is required to assess the solubility of gases in brine solutions. Henry's Law constants for gases in hydrocarbons are derived using regular solution theory and Ostwald coefficients available from the literature. We present a set of two-parameter polynomial expressions, which allow simple computation and formulation of the model. Our calculations show that solubility predictions using modified HLCs are acceptable within 0 to 250 °C, 1 to 150 bars, salinity up to 5 molar, and gas concentrations up to 4 molar. Our model is currently being used in the IEA Weyburn CO2 monitoring and storage project.
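The Setchenow correction mentioned above reduces the pure-water solubility exponentially with salt molality, log10(S_water/S_brine) = k_s·m. A minimal sketch with round illustrative CO2 numbers, not the paper's calibration:

```python
# Setchenow salting-out: brine solubility from the pure-water value.
def brine_solubility(s_water, k_setchenow, molality):
    return s_water * 10 ** (-k_setchenow * molality)

s_water = 0.030   # mol/kg, assumed CO2 solubility in pure water
k_s = 0.12        # assumed Setchenow coefficient for NaCl
for m in (0.0, 1.0, 5.0):
    print(f"{m:.0f} molal brine: {brine_solubility(s_water, k_s, m):.4f} mol/kg")
```

This is why only the pure-water model needs detailed pressure-temperature calibration; the brine extension follows from a single salting-out coefficient per gas-salt pair.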
NASA Astrophysics Data System (ADS)
Birkel, Christian; Broder, Tanja; Biester, Harald
2017-04-01
Peat soils act as important carbon sinks, but they also release large amounts of dissolved organic carbon (DOC) to the aquatic system. The DOC export is strongly tied to the export of soluble heavy metals. The accumulation of potentially toxic substances due to anthropogenic activities, and their natural export from peat soils to the aquatic system, is an important health and environmental issue. However, limited knowledge exists as to how much of these substances is mobilized, by which flow pathways, and under which hydrometeorological conditions. In this study, we report on a combined experimental and modelling effort to provide greater process understanding from a small, lead (Pb) and arsenic (As) contaminated upland peat catchment in northwestern Germany. We developed a minimally parameterized, but process-based, coupled hydrology-biogeochemistry model applied to simulate detailed hydrometric and biogeochemical data. The model was based on an initial data mining analysis, in combination with regression relationships of discharge, DOC and element export. We assessed the internal model DOC-processing based on stream-DOC hysteresis patterns and 3-hourly time step groundwater level and soil DOC data (not used for calibration, as an independent model test) for two consecutive summer periods in 2013 and 2014. We found that Pb and As mobilization can be efficiently predicted from DOC transport alone, but Pb showed a significant non-linear relationship with DOC, while As was linearly related to DOC. The relatively parsimonious model (nine calibrated parameters in total) showed the importance of non-linear and rapid near-surface runoff-generation mechanisms that caused around 60% of the simulated DOC load. The total load was high even though these pathways were only activated during storm events, on average 30% of the monitoring time, as also shown by the experimental data.
Overall, the drier 2013 period showed increased nonlinearity but exported less DOC (115 ± 11 kg C ha⁻¹ yr⁻¹) compared to the equivalent but wetter 2014 period (189 ± 38 kg C ha⁻¹ yr⁻¹). The exceedance of a critical water table threshold (-10 cm) triggered a rapid near-surface runoff response with associated higher DOC transport connecting all available DOC pools, with subsequent dilution. We conclude that combining detailed experimental work with relatively simple, coupled hydrology-biogeochemistry models not only allowed the model to be internally constrained, but also provided important insight into how DOC and tightly coupled heavy metals are mobilized.
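The water-table threshold behaviour described above can be caricatured with a toy two-pathway export model. All rates, concentrations and the -10 cm threshold below are illustrative assumptions, not the calibrated model from the study:

```python
import numpy as np

def doc_load(water_table_cm, q_slow=0.5, q_fast=5.0,
             c_slow=5.0, c_fast=40.0, threshold=-10.0):
    """Toy two-pathway DOC export model (illustrative only).

    water_table_cm : array of daily water-table depths (cm, negative below
    surface). When the water table exceeds the threshold, a rapid
    near-surface pathway with a high DOC concentration is activated in
    addition to the slow baseflow pathway.
    Returns (total load, fraction of load carried by the fast pathway)."""
    wt = np.asarray(water_table_cm, dtype=float)
    slow = q_slow * c_slow * np.ones_like(wt)              # always-active baseflow
    fast = np.where(wt > threshold, q_fast * c_fast, 0.0)  # event pathway
    total = (slow + fast).sum()
    return total, fast.sum() / total

rng = np.random.default_rng(0)
dry = -25 + 10 * rng.standard_normal(365)  # drier year: threshold rarely crossed
wet = -12 + 10 * rng.standard_normal(365)  # wetter year: threshold often crossed
load_dry, frac_dry = doc_load(dry)
load_wet, frac_wet = doc_load(wet)
```

Even this caricature reproduces the qualitative finding: the wetter series activates the fast pathway more often and exports more DOC, with most of the load carried by the event pathway.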
Structural Dynamic Analyses And Test Predictions For Spacecraft Structures With Non-Linearities
NASA Astrophysics Data System (ADS)
Vergniaud, Jean-Baptiste; Soula, Laurent; Newerla, Alfred
2012-07-01
The overall objective of the mechanical development and verification process is to ensure that the spacecraft structure is able to sustain the mechanical environments encountered during launch. In general, spacecraft structures are a priori assumed to behave linearly, i.e. the responses to a static load or dynamic excitation will increase or decrease proportionally to the amplitude of the load or excitation induced. However, past experience has shown that various non-linearities might exist in spacecraft structures, and the consequences of their dynamic effects can significantly affect the development and verification process. Current processes are mainly adapted to linear spacecraft structural behaviour. No clear rules exist for dealing with major structural non-linearities. They are handled outside the process by individual analysis and margin policy, and by analyses after tests to justify the CLA coverage. Non-linearities primarily affect the current spacecraft development and verification process in two respects. Prediction of flight loads by launcher/satellite coupled loads analyses (CLA): only linear satellite models are delivered for performing CLA, and no well-established rules exist for properly linearizing a model when non-linearities are present. The potential impact of the linearization on the results of the CLA has not yet been properly analyzed, so it is difficult to ensure that CLA results cover actual flight levels. Management of satellite verification tests: the CLA results generated with a linear satellite FEM are assumed flight representative. If internal non-linearities are present in the tested satellite, it may be difficult to determine which input level must be applied to cover satellite internal loads. The non-linear behaviour can also disturb the shaker control, putting the satellite at risk by potentially imposing levels that are too high.
This paper presents the results of a test campaign performed in the frame of an ESA TRP study [1]. A bread-board including typical non-linearities has been designed, manufactured and tested through a typical spacecraft dynamic test campaign. The study has demonstrated the capability to perform non-linear dynamic test predictions on a flight-representative spacecraft, the good correlation of test results with Finite Element Model (FEM) predictions, and the possibility to identify modal behaviour and to characterize non-linearities from test results. As a synthesis of this study, overall guidelines have been derived for the mechanical verification process to improve the level of expertise for tests involving spacecraft with non-linearities.
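The core difficulty, that a non-linear structure's response is not proportional to the excitation level, can be illustrated with a Duffing oscillator, a standard textbook stand-in for structural non-linearity. The parameters below are illustrative, not from the bread-board:

```python
import numpy as np

def duffing_amplitude(force, beta=1.0, zeta=0.05, omega=1.0,
                      drive=1.0, dt=0.01, t_end=300.0):
    """Steady-state response amplitude of a Duffing oscillator
    x'' + 2*zeta*omega*x' + omega**2*x + beta*x**3 = force*cos(drive*t),
    integrated with a fixed-step RK4 scheme (illustrative parameters)."""
    def f(t, y):
        x, v = y
        return np.array([v, force*np.cos(drive*t)
                         - 2*zeta*omega*v - omega**2*x - beta*x**3])
    y = np.zeros(2)
    n = int(t_end/dt)
    xs = np.empty(n)
    t = 0.0
    for i in range(n):
        k1 = f(t, y); k2 = f(t+dt/2, y+dt/2*k1)
        k3 = f(t+dt/2, y+dt/2*k2); k4 = f(t+dt, y+dt*k3)
        y = y + dt/6*(k1 + 2*k2 + 2*k3 + k4)
        t += dt
        xs[i] = y[0]
    return np.abs(xs[int(0.8*n):]).max()   # amplitude over the last 20%

a_small = duffing_amplitude(0.1)   # low-level excitation
a_large = duffing_amplitude(1.0)   # 10x excitation
ratio = a_large / a_small          # far from 10 for a non-linear structure
```

For a linear structure the ratio would be exactly 10; the hardening cubic stiffness makes it much smaller, which is precisely why a low-level test run cannot simply be scaled to predict the full-level response.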
Essa, Khalid S
2014-01-01
A new fast least-squares method is developed to estimate the shape factor (q-parameter) of a buried structure using normalized residual anomalies obtained from gravity data. The problem of shape factor estimation is transformed into the problem of finding a solution of a non-linear equation of the form f(q) = 0 by defining the anomaly value at the origin and at different points on the profile (N-value). Procedures are also formulated to estimate the depth (z-parameter) and the amplitude coefficient (A-parameter) of the buried structure. The method is simple and rapid for estimating the parameters that produce gravity anomalies. This technique is used for a class of geometrically simple anomalous bodies, including the semi-infinite vertical cylinder, the infinitely long horizontal cylinder, and the sphere. The technique is tested and verified on theoretical models with and without random errors. It is also successfully applied to real data sets from Senegal and India, and the inverted parameters are in good agreement with the known actual values.
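The f(q) = 0 idea can be sketched as follows, assuming the widely used simple-body expression g(x) = A·z/(x² + z²)^q and two normalized anomaly values; the point locations and units are hypothetical, and the bracketing root search stands in for the paper's least-squares procedure:

```python
import numpy as np
from scipy.optimize import brentq

def shape_factor(x1, n1, x2, n2):
    """Recover the shape factor q from two normalized anomaly values
    N(x) = g(x)/g(0), assuming the simple-body form
    g(x) = A*z/(x**2 + z**2)**q (sphere: q=1.5, horizontal cylinder:
    q=1.0, semi-infinite vertical cylinder: q=0.5).

    For each trial q, the first point fixes the depth via
    z**2 = x1**2 * n1**(1/q) / (1 - n1**(1/q)); the residual at the
    second point then defines f(q) = 0, solved by bracketing."""
    def f(q):
        r = n1 ** (1.0 / q)
        z2 = x1**2 * r / (1.0 - r)          # depth squared implied by point 1
        return n2 - (z2 / (x2**2 + z2)) ** q  # residual at point 2
    return brentq(f, 0.3, 2.0)

# Synthetic check: a sphere (q = 1.5) at depth z = 10 (hypothetical units).
q_true, z, A = 1.5, 10.0, 100.0
g = lambda x: A * z / (x**2 + z**2) ** q_true
x1, x2 = 5.0, 15.0
q_est = shape_factor(x1, g(x1)/g(0.0), x2, g(x2)/g(0.0))
```

On noise-free synthetic data the root search recovers the shape factor exactly; with noisy data one would fit many profile points in a least-squares sense, as the paper does.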
Connectotyping: Model Based Fingerprinting of the Functional Connectome
Miranda-Dominguez, Oscar; Mills, Brian D.; Carpenter, Samuel D.; Grant, Kathleen A.; Kroenke, Christopher D.; Nigg, Joel T.; Fair, Damien A.
2014-01-01
A better characterization of how an individual’s brain is functionally organized will likely bring dramatic advances to many fields of study. Here we show a model-based approach toward characterizing resting state functional connectivity MRI (rs-fcMRI) that is capable of identifying a so-called “connectotype”, or functional fingerprint in individual participants. The approach rests on a simple linear model that proposes the activity of a given brain region can be described by the weighted sum of its functional neighboring regions. The resulting coefficients correspond to a personalized model-based connectivity matrix that is capable of predicting the timeseries of each subject. Importantly, the model itself is subject specific and has the ability to predict an individual at a later date using a limited number of non-sequential frames. While we show that there is a significant amount of shared variance between models across subjects, the model’s ability to discriminate an individual is driven by unique connections in higher order control regions in frontal and parietal cortices. Furthermore, we show that the connectotype is present in non-human primates as well, highlighting the translational potential of the approach. PMID:25386919
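The modelling idea, each region's activity expressed as a weighted sum of the other regions' timeseries, can be sketched with ordinary least squares on synthetic data (the authors' actual pipeline, regularisation and parcellation may differ):

```python
import numpy as np

# Sketch of the "connectotype" idea on synthetic data: fit, per region,
# the weights that predict its timeseries from all the other regions.
rng = np.random.default_rng(42)
n_regions, n_frames = 10, 500
data = rng.standard_normal((n_frames, n_regions))
# give the data some shared structure so the model has something to learn
data[:, 1] += 0.8 * data[:, 0]
data[:, 2] += 0.5 * data[:, 0]

betas = np.zeros((n_regions, n_regions))  # row i: weights predicting region i
preds = np.empty_like(data)
for i in range(n_regions):
    others = np.delete(np.arange(n_regions), i)
    coef, *_ = np.linalg.lstsq(data[:, others], data[:, i], rcond=None)
    betas[i, others] = coef               # diagonal stays zero by construction
    preds[:, i] = data[:, others] @ coef  # model-predicted timeseries
```

The fitted coefficient matrix `betas` plays the role of the personalized model-based connectivity matrix; applied to a later session of the same subject, it should predict that subject's timeseries better than another subject's matrix does.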
A Quasi-Steady Lifting Line Theory for Insect-Like Hovering Flight
Nabawy, Mostafa R. A.; Crowther, William J.
2015-01-01
A novel lifting line formulation is presented for the quasi-steady aerodynamic evaluation of insect-like wings in hovering flight. The approach allows accurate estimation of aerodynamic forces from geometry and kinematic information alone and provides for the first time quantitative information on the relative contribution of induced and profile drag associated with lift production for insect-like wings in hover. The main adaptation to the existing lifting line theory is the use of an equivalent angle of attack, which enables capture of the steady non-linear aerodynamics at high angles of attack. A simple methodology to include non-ideal induced effects due to wake periodicity and effective actuator disc area within the lifting line theory is included in the model. Low Reynolds number effects as well as the edge velocity correction required to account for different wing planform shapes are incorporated through appropriate modification of the wing section lift curve slope. The model has been successfully validated against measurements from revolving wing experiments and high order computational fluid dynamics simulations. Model predicted mean lift to weight ratio results have an average error of 4% compared to values from computational fluid dynamics for eight different insect cases. Application of an unmodified linear lifting line approach leads on average to a 60% overestimation in the mean lift force required for weight support, with most of the discrepancy due to use of linear aerodynamics. It is shown that on average for the eight insects considered, the induced drag contributes 22% of the total drag based on the mean cycle values and 29% of the total drag based on the mid half-stroke values. PMID:26252657
Non-Abelian Bremsstrahlung and Azimuthal Asymmetries in High Energy p+A Reactions
Gyulassy, Miklos; Vitev, Ivan Mateev; Levai, Peter; ...
2014-09-25
Here we apply the GLV reaction operator solution to the Vitev-Gunion-Bertsch (VGB) boundary conditions to compute the all-order in nuclear opacity non-Abelian gluon bremsstrahlung of event-by-event fluctuating beam jets in nuclear collisions. We evaluate analytically azimuthal Fourier moments of single gluon, v_n^M{1}, and even number 2ℓ gluon, v_n^M{2ℓ}, inclusive distributions in high energy p+A reactions as a function of harmonic n, target recoil cluster number M, and gluon number 2ℓ, at RHIC and LHC. Multiple resolved clusters of recoiling target beam jets together with the projectile beam jet form Color Scintillation Antenna (CSA) arrays that lead to characteristic boost non-invariant trapezoidal rapidity distributions in asymmetric B+A nuclear collisions. The intrinsically azimuthally anisotropic and long range in η nature of the non-Abelian bremsstrahlung leads to v_n moments that are similar to results from hydrodynamic models, but due entirely to non-Abelian wave interference phenomena sourced by the fluctuating CSA. Our analytic non-flow solutions are similar to recent numerical saturation model predictions but differ by predicting a simple power-law hierarchy of both even and odd v_n without invoking k_T factorization. A test of the CSA mechanism is the predicted nearly linear η rapidity dependence of the v_n(k_T, η). Non-Abelian beam jet bremsstrahlung may thus provide a simple analytic solution to the Beam Energy Scan (BES) puzzle of the near √s independence of v_n(p_T) moments observed down to 10 AGeV, where large-x valence quark beam jets dominate inelastic dynamics. Recoil bremsstrahlung from multiple independent CSA clusters could also provide a partial explanation for the unexpected similarity of v_n in p(D)+A and non-central A+A at the same dN/dη multiplicity as observed at RHIC and LHC.
Investigation of ODE integrators using interactive graphics. [Ordinary Differential Equations
NASA Technical Reports Server (NTRS)
Brown, R. L.
1978-01-01
Two FORTRAN programs using an interactive graphic terminal to generate accuracy and stability plots for given multistep ordinary differential equation (ODE) integrators are described. The first treats the fixed stepsize linear case with complex variable solutions, and generates plots to show accuracy and error response to step driving function of a numerical solution, as well as the linear stability region. The second generates an analog to the stability region for classes of non-linear ODE's as well as accuracy plots. Both systems can compute method coefficients from a simple specification of the method. Example plots are given.
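The linear stability regions such programs plot can be generated with the standard boundary-locus method, which computes method coefficients into a stability boundary exactly as described. The sketch below uses the two-step Adams-Bashforth method as an example; it is not the original FORTRAN code:

```python
import numpy as np

def boundary_locus(rho, sigma, n=400):
    """Boundary of the linear stability region of a multistep method,
    via the boundary-locus method: for the test equation y' = lam*y, a
    characteristic root w = exp(i*theta) lies on the unit circle exactly
    when h*lam = rho(w)/sigma(w)."""
    theta = np.linspace(0.0, 2.0*np.pi, n, endpoint=False)
    w = np.exp(1j*theta)
    return rho(w) / sigma(w)

# Two-step Adams-Bashforth: y_{n+2} = y_{n+1} + h*(3*f_{n+1} - f_n)/2,
# i.e. rho(w) = w^2 - w and sigma(w) = (3*w - 1)/2.
locus = boundary_locus(lambda w: w**2 - w, lambda w: (3*w - 1)/2)
leftmost = locus.real.min()   # AB2's real-axis stability interval is [-1, 0]
```

Plotting `locus.real` against `locus.imag` reproduces the familiar lens-shaped AB2 stability region; the same function works for any method once its characteristic polynomials rho and sigma are supplied.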
Effects of non-tidal atmospheric loading on a Kalman filter-based terrestrial reference frame
NASA Astrophysics Data System (ADS)
Abbondanza, C.; Altamimi, Z.; Chin, T. M.; Collilieux, X.; Dach, R.; Heflin, M. B.; Gross, R. S.; König, R.; Lemoine, F. G.; MacMillan, D. S.; Parker, J. W.; van Dam, T. M.; Wu, X.
2013-12-01
The International Terrestrial Reference Frame (ITRF) adopts a piece-wise linear model to parameterize regularized station positions and velocities. The space-geodetic (SG) solutions from VLBI, SLR, GPS and DORIS global networks used as input in the ITRF combination process account for tidal loading deformations, but ignore the non-tidal part. As a result, the non-linear signal observed in the time series of SG-derived station positions in part reflects non-tidal loading displacements not introduced in the SG data reduction. In this analysis, the effect of non-tidal atmospheric loading (NTAL) corrections on the TRF is assessed adopting a Remove/Restore approach: (i) Focusing on the a-posteriori approach, the NTAL model derived from the National Center for Environmental Prediction (NCEP) surface pressure is removed from the SINEX files of the SG solutions used as inputs to the TRF determinations. (ii) Adopting a Kalman-filter based approach, a linear TRF is estimated combining the 4 SG solutions free from NTAL displacements. (iii) Linear fits to the NTAL displacements removed at step (i) are restored to the linear reference frame estimated at (ii). The velocity fields of the (standard) linear reference frame in which the NTAL model has not been removed and the one in which the model has been removed/restored are compared and discussed.
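The Remove/Restore logic of steps (i)-(iii) can be sketched on a synthetic station-height series; the numbers below are hypothetical and a single linear fit stands in for the Kalman-filter frame combination:

```python
import numpy as np

# Sketch of the Remove/Restore idea on one synthetic coordinate component.
rng = np.random.default_rng(1)
t = np.arange(0.0, 10.0, 1/52)                 # 10 years, weekly solutions
velocity_true = 3.0                            # mm/yr secular station motion
ntal = 4.0 * np.sin(2*np.pi*t)                 # annual NTAL displacement, mm
obs = velocity_true*t + ntal + 0.5*rng.standard_normal(t.size)

# (i) Remove: subtract the NTAL model from the observed series.
reduced = obs - ntal
# (ii) Estimate the linear frame (here: a plain linear fit stands in for
#      the Kalman-filter combination of the four SG solutions).
vel_est, offset = np.polyfit(t, reduced, 1)
# (iii) Restore: add the linear fit of the removed NTAL displacements
#       back to the linear frame.
ntal_trend = np.polyfit(t, ntal, 1)
restored_velocity = vel_est + ntal_trend[0]
```

Because the seasonal NTAL signal has almost no linear trend over an (approximately) integer number of years, the restored velocity differs only slightly from the reduced-series estimate, while the non-linear seasonal signal has been kept out of the frame estimation itself.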
Flexible Modes Control Using Sliding Mode Observers: Application to Ares I
NASA Technical Reports Server (NTRS)
Shtessel, Yuri B.; Hall, Charles E.; Baev, Simon; Orr, Jeb S.
2010-01-01
The launch vehicle dynamics affected by bending and sloshing modes are considered. Attitude measurement data that are corrupted by flexible modes could yield instability of the vehicle dynamics. Flexible body and sloshing modes are reconstructed by sliding mode observers. The resultant estimates are used to remove the undesirable dynamics from the measurements, and the direct effects of sloshing and bending modes on the launch vehicle are compensated by means of a controller that is designed without taking the bending and sloshing modes into account. A linearized mathematical model of Ares I launch vehicle was derived based on FRACTAL, a linear model developed by NASA/MSFC. The compensated vehicle dynamics with a simple PID controller were studied for the launch vehicle model that included two bending modes, two slosh modes and actuator dynamics. A simulation study demonstrated stable and accurate performance of the flight control system with the augmented simple PID controller without the use of traditional linear bending filters.
A simple white noise analysis of neuronal light responses.
Chichilnisky, E J
2001-05-01
A white noise technique is presented for estimating the response properties of spiking visual system neurons. The technique is simple, robust, efficient and well suited to simultaneous recordings from multiple neurons. It provides a complete and easily interpretable model of light responses even for neurons that display a common form of response nonlinearity that precludes classical linear systems analysis. A theoretical justification of the technique is presented that relies only on elementary linear algebra and statistics. Implementation is described with examples. The technique and the underlying model of neural responses are validated using recordings from retinal ganglion cells, and in principle are applicable to other neurons. Advantages and disadvantages of the technique relative to classical approaches are discussed.
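A common realisation of such a white-noise technique is the spike-triggered average. The following minimal sketch simulates a neuron whose firing probability is a rectified function of a filtered white-noise stimulus and recovers its filter; the model and all parameters are assumptions for illustration, not the paper's recordings:

```python
import numpy as np

rng = np.random.default_rng(7)
n, klen = 200_000, 15
true_filter = np.exp(-np.arange(klen)/4.0) * np.sin(np.arange(klen)/2.0)
stim = rng.standard_normal(n)                          # white-noise stimulus
drive = np.convolve(stim, true_filter, mode="full")[:n]
rate = np.maximum(drive, 0.0) * 0.1                    # rectifying non-linearity
spikes = rng.random(n) < np.clip(rate, 0.0, 1.0)       # Bernoulli spiking

# Spike-triggered average: mean stimulus segment preceding each spike,
# reversed so index k corresponds to lag k before the spike.
idx = np.nonzero(spikes)[0]
idx = idx[idx >= klen]
sta = np.mean([stim[i-klen+1:i+1] for i in idx], axis=0)[::-1]
```

For a Gaussian white-noise stimulus the spike-triggered average is proportional to the underlying linear filter even though the spiking non-linearity precludes classical linear systems analysis, which is the central point of the technique.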
Amplification of perpendicular and parallel magnetic fields by cosmic ray currents
NASA Astrophysics Data System (ADS)
Matthews, J. H.; Bell, A. R.; Blundell, K. M.; Araudo, A. T.
2017-08-01
Cosmic ray (CR) currents through magnetized plasma drive strong instabilities producing amplification of the magnetic field. This amplification helps explain the CR energy spectrum as well as observations of supernova remnants and radio galaxy hotspots. Using magnetohydrodynamic simulations, we study the behaviour of the non-resonant hybrid (NRH) instability (also known as the Bell instability) in the case of CR currents perpendicular and parallel to the initial magnetic field. We demonstrate that extending simulations of the perpendicular case to 3D reveals a different character to the turbulence from that observed in 2D. Despite these differences, in 3D the perpendicular NRH instability still grows exponentially far into the non-linear regime with a similar growth rate to both the 2D perpendicular and 3D parallel situations. We introduce some simple analytical models to elucidate the physical behaviour, using them to demonstrate that the transition to the non-linear regime is governed by the growth of thermal pressure inside dense filaments at the edges of the expanding loops. We discuss our results in the context of supernova remnants and jets in radio galaxies. Our work shows that the NRH instability can amplify magnetic fields to many times their initial value in parallel and perpendicular shocks.
Engineering double-well potentials with variable-width annular Josephson tunnel junctions
NASA Astrophysics Data System (ADS)
Monaco, Roberto
2016-11-01
Long Josephson tunnel junctions are non-linear transmission lines that allow propagation of current vortices (fluxons) and electromagnetic waves and are used in various applications within superconductive electronics. Recently, the Josephson vortex has been proposed as a new superconducting qubit. We describe a simple method to create a double-well potential for an individual fluxon trapped in a long elliptic annular Josephson tunnel junction characterized by an intrinsic non-uniform width. The distance between the potential wells and the height of the inter-well potential barrier are controlled by the strength of an in-plane magnetic field. The manipulation of the vortex states can be achieved by applying a proper current ramp across the junction. The read-out of the state is accomplished by measuring the vortex depinning current in a small magnetic field. An accurate one-dimensional sine-Gordon model for this strongly non-linear system is presented, from which we calculate the position-dependent fluxon rest-mass, its Hamiltonian density and the corresponding trajectories in the phase space. We examine the dependence of the potential properties on the annulus eccentricity and its electrical parameters and address the requirements for observing quantum-mechanical effects, as discrete energy levels and tunneling, in this two-state system.
Quantitative Assessment of Arrhythmia Using Non-linear Approach: A Non-invasive Prognostic Tool
NASA Astrophysics Data System (ADS)
Chakraborty, Monisha; Ghosh, Dipak
2017-12-01
An accurate prognostic tool to identify the severity of Arrhythmia is yet to be established, owing to the complexity of the ECG signal. In this paper, we have shown that quantitative assessment of Arrhythmia is possible using a non-linear technique based on "Hurst Rescaled Range Analysis". Although the concept of applying "non-linearity" to the study of various cardiac dysfunctions is not entirely new, the novel objective of this paper is to identify the severity of the disease, to monitor different medicines and their doses, and to assess the efficiency of different medicines. The approach presented in this work is simple, which in turn will help doctors in efficient disease management. In this work, Arrhythmia ECG time series are collected from the MIT-BIH database. Normal ECG time series are acquired using the POLYPARA system. Both time series are analyzed in the light of a non-linear approach following the method of "Rescaled Range Analysis". The quantitative parameter, "Fractal Dimension" (D), is obtained from both types of time series. The major finding is that Arrhythmia ECG shows lower values of D compared to normal. Further, this information can be used to assess the severity of Arrhythmia quantitatively, which is a new direction for prognosis; adequate software may also be developed for use in medical practice.
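A minimal version of the rescaled-range calculation, with the fractal dimension taken as D = 2 − H, can be sketched as follows; the window sizes and the white-noise test series are illustrative, not the ECG data:

```python
import numpy as np

def hurst_rs(x, window_sizes=(16, 32, 64, 128, 256)):
    """Estimate the Hurst exponent H of a 1-D series by classical
    rescaled-range (R/S) analysis; the fractal dimension of the series
    graph is then commonly taken as D = 2 - H."""
    x = np.asarray(x, dtype=float)
    log_n, log_rs = [], []
    for n in window_sizes:
        rs_vals = []
        for start in range(0, x.size - n + 1, n):
            seg = x[start:start + n]
            dev = np.cumsum(seg - seg.mean())     # cumulative deviations
            r = dev.max() - dev.min()             # range of deviations
            s = seg.std()                         # standard deviation
            if s > 0:
                rs_vals.append(r / s)
        log_n.append(np.log(n))
        log_rs.append(np.log(np.mean(rs_vals)))
    h = np.polyfit(log_n, log_rs, 1)[0]           # slope = Hurst exponent
    return h, 2.0 - h                             # (H, fractal dimension D)

rng = np.random.default_rng(3)
h_noise, d_noise = hurst_rs(rng.standard_normal(4096))  # white noise: H near 0.5
```

A lower D (equivalently, a higher H) indicates a smoother, more persistent series; the paper's finding is that Arrhythmia ECG yields lower D than normal ECG.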
Linear and non-linear perturbations in dark energy models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Escamilla-Rivera, Celia; Casarini, Luciano; Fabris, Júlio C.
2016-11-01
In this work we discuss observational aspects of three time-dependent parameterisations of the dark energy equation of state w(z). In order to determine the dynamics associated with these models, we calculate their background evolution and perturbations in a scalar field representation. After performing a complete treatment of linear perturbations, we also show that the non-linear contribution of the selected w(z) parameterisations to the matter power spectra is almost the same for all scales, with no significant difference from the predictions of the standard ΛCDM model.
Energetics of slope flows: linear and weakly nonlinear solutions of the extended Prandtl model
NASA Astrophysics Data System (ADS)
Güttler, Ivan; Marinović, Ivana; Večenaj, Željko; Grisogono, Branko
2016-07-01
The Prandtl model succinctly combines the 1D stationary boundary-layer dynamics and thermodynamics of simple anabatic and katabatic flows over uniformly inclined surfaces. It assumes a balance between the along-the-slope buoyancy component and adiabatic warming/cooling, and the turbulent mixing of momentum and heat. In this study, the energetics of the Prandtl model is addressed in terms of the total energy (TE) concept. Furthermore, since the authors recently developed a weakly nonlinear version of the Prandtl model, the TE approach is also exercised on this extended model version, which includes an additional nonlinear term in the thermodynamic equation. Hence, the interplay among diffusion, dissipation and temperature-wind interaction of the mean slope flow is further explored. The TE of the nonlinear Prandtl model is assessed in an ensemble of solutions where the Prandtl number, the slope angle and the nonlinearity parameter are perturbed. It is shown that nonlinear effects have the lowest impact on variability in the ensemble of solutions of the weakly nonlinear Prandtl model when compared to the other two governing parameters. The general behavior of the nonlinear solution is similar to the linear solution, except that the maximum of the along-the-slope wind speed in the nonlinear solution reduces for larger slopes. Also, the dominance of potential energy (PE) near the sloped surface, and the elevated maximum of kinetic energy (KE), found in the linear and nonlinear energetics of the extended Prandtl model, are seen in the PASTEX-94 measurements. The corresponding level where KE > PE most likely marks the bottom of the sublayer subject to shear-driven instabilities. Finally, possible limitations of the weakly nonlinear solutions of the extended Prandtl model are raised. In linear solutions, the local TE storage term is zero, reflecting the stationarity of solutions by definition.
However, in nonlinear solutions, the diffusion, dissipation and interaction terms (where the height of the maximum interaction is proportional to the height of the low-level jet by the factor ≈4/9) do not balance and the local storage of TE attains non-zero values. In order to examine the issue of non-stationarity, the inclusion of velocity-pressure covariance in the momentum equation is suggested for future development of the extended Prandtl model.
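The linear Prandtl profiles underlying this energetics discussion can be sketched as follows, treating the height and velocity scales as given parameters. The values are illustrative; in the full model these scales follow from the slope angle, the stratification and the eddy diffusivities:

```python
import numpy as np

def prandtl_profiles(z, theta0=-5.0, lam=50.0, mu=2.0):
    """Classical linear Prandtl-model profiles for a katabatic flow:
    theta(z) = theta0 * exp(-z/lam) * cos(z/lam)   (temperature deficit)
    u(z)     = -mu * theta0 * exp(-z/lam) * sin(z/lam)  (along-slope wind)
    lam (height scale, m) and mu (velocity scale per K) are treated as
    given, illustrative parameters here."""
    decay = np.exp(-z/lam)
    theta = theta0 * decay * np.cos(z/lam)
    u = -mu * theta0 * decay * np.sin(z/lam)
    return theta, u

z = np.linspace(0.0, 500.0, 2001)
theta, u = prandtl_profiles(z)
# potential and kinetic energy densities, up to constant factors:
pe, ke = theta**2, u**2
z_ke_max = z[np.argmax(ke)]   # elevated KE maximum at the low-level jet
```

Even this sketch shows the two features the measurements confirm: PE dominates at the sloped surface (where the wind vanishes but the temperature deficit is largest), while KE peaks aloft at the low-level-jet height z = (π/4)·lam.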
NASA Astrophysics Data System (ADS)
Jordan, Jared Williams; Dvorak, Steven L.; Sternberg, Ben K.
2010-10-01
In this paper, we develop a technique for designing high-power, non-linear, transmitting rod-core antennas by using simple modified scale factors rather than running labor-intensive numerical models. By using modified scale factors, a designer can predict changes in magnetic moment, inductance, core series loss resistance, etc. We define modified scale factors as the case when all physical dimensions of the rod antenna are scaled by p, except for the cross-sectional area of the individual wires or strips that are used to construct the core. This allows one to make measurements on a scaled-down version of the rod antenna using the same core material that will be used in the final antenna design. The modified scale factors were derived from prolate spheroidal analytical expressions for a finite-length rod antenna and were verified with experimental results. The modified scaling factors can only be used if the magnetic flux densities within the two scaled cores are the same. With the magnetic flux density constant, the two scaled cores will operate with the same complex permeability, thus changing the non-linear problem to a quasi-linear problem. We also demonstrate that by holding the number of turns times the drive current constant, while changing the number of turns, the inductance and core series loss resistance change by the number of turns squared. Experimental measurements were made on rod cores made from varying diameters of black oxide, low carbon steel wires and different widths of Metglas foil. Furthermore, we demonstrate that the modified scale factors work even in the presence of eddy currents within the core material.
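The turns-squared rule at constant ampere-turns can be captured in a few lines; this is a sketch of the stated scaling law with hypothetical numbers, not the full modified-scale-factor analysis:

```python
def scaled_winding(l1, r1, n1, i1, n2):
    """Predicted inductance and core series loss resistance when the turn
    count changes from n1 to n2 while the ampere-turns product n*i is held
    constant (so the core flux density, and hence the operating complex
    permeability, is unchanged): both scale as (n2/n1)**2."""
    k = (n2 / n1) ** 2
    return l1 * k, r1 * k, i1 * n1 / n2  # new L, new R_loss, new drive current

# e.g. doubling the turns at constant ampere-turns quadruples L and R_loss
# while halving the required drive current (all values hypothetical):
l2, r2, i2 = scaled_winding(l1=1e-3, r1=0.5, n1=100, i1=2.0, n2=200)
```

Holding n·i constant is the key condition: it keeps the two scaled cores at the same flux density, which is what turns the non-linear core problem into the quasi-linear one the scale factors require.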
NASA Astrophysics Data System (ADS)
Vespe, Francesco; Benedetto, Catia
2013-04-01
The huge amount of GPS Radio Occultation (RO) observations currently available thanks to space missions like COSMIC, CHAMP, GRACE, TERRASAR-X etc. has greatly encouraged research into new algorithms suitable for extracting humidity, temperature and pressure profiles of the atmosphere with ever greater precision. Concerning humidity profiles, in recent years two different approaches have been widely tested and applied: the "Simple" and the 1DVAR methods. The Simple methods essentially determine dry refractivity profiles from temperature analysis profiles and the hydrostatic equation. The dry refractivity is then subtracted from the RO refractivity to obtain the wet component, from which humidity is finally derived. The 1DVAR approach combines RO observations with profiles given by background models, with both terms weighted by the inverse of the covariance matrix. The advantage of the "Simple" methods is that they are not affected by bias due to the background models. We have proposed in the past the BPV approach to retrieve humidity. Our approach can be classified among the "Simple" methods. The BPV approach works with dry atmospheric CIRA-Q models, which depend on latitude, DoY and height. The dry CIRA-Q refractivity profile is selected by estimating the involved parameters in a non-linear least-squares fashion, achieved by fitting RO observed bending angles through the stratosphere. The BPV approach, like all the other "Simple" methods, has the drawback that unphysical negative "humidity" values can occur. We therefore propose to apply a modulated weighting of the fit residuals to minimize the effects of this problem. After proper tuning of the approach, we plan to present the results of the validation.
Design and analysis of linear cascade DNA hybridization chain reactions using DNA hairpins
NASA Astrophysics Data System (ADS)
Bui, Hieu; Garg, Sudhanshu; Miao, Vincent; Song, Tianqi; Mokhtar, Reem; Reif, John
2017-01-01
DNA self-assembly has been employed unconventionally to construct nanoscale structures and dynamic nanoscale machines. The technique of hybridization chain reactions by triggered self-assembly has been shown to form various interesting nanoscale structures, ranging from simple linear DNA oligomers to dendritic DNA structures. Inspired by earlier triggered self-assembly work, we present a system for the controlled self-assembly of linear cascade DNA hybridization chain reactions using nine distinct DNA hairpins. NUPACK is employed to assist in designing DNA sequences, and Matlab has been used to simulate DNA hairpin interactions. Gel electrophoresis and ensemble fluorescence reaction kinetics data indicate strong evidence of linear cascade DNA hybridization chain reactions. The half-completion time of the proposed linear cascade reactions shows a linear dependence on the number of hairpins.
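Treating each hairpin-triggered step as an identical, irreversible first-order reaction (a crude kinetic assumption, not the paper's Matlab model) already reproduces the linear growth of the half-completion time with cascade length:

```python
import numpy as np

def cascade_half_time(n_steps, k=1.0, dt=1e-3, t_max=60.0):
    """Half-completion time of a linear cascade of n_steps identical,
    irreversible first-order reactions, integrated by forward Euler:
    dx1/dt = -k*x1,  dxi/dt = k*(x_{i-1} - x_i),  dP/dt = k*x_n.
    Returns the time at which the final product P reaches 0.5."""
    x = np.zeros(n_steps + 1)
    x[0] = 1.0                      # initiator; x[-1] is the final product
    t = 0.0
    while x[-1] < 0.5 and t < t_max:
        dx = np.empty_like(x)
        dx[0] = -k * x[0]
        dx[1:-1] = k * (x[:-2] - x[1:-1])
        dx[-1] = k * x[-2]
        x += dt * dx
        t += dt
    return t

t_half_6 = cascade_half_time(6)   # e.g. a six-hairpin cascade
```

The completion time of such a cascade is Erlang-distributed, whose median grows almost exactly linearly with the number of steps, matching the observed linear dependence on the number of hairpins.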
Monte Carlo simulations of lattice models for single polymer systems
NASA Astrophysics Data System (ADS)
Hsu, Hsiao-Ping
2014-10-01
Single linear polymer chains in dilute solutions under good solvent conditions are studied by Monte Carlo simulations with the pruned-enriched Rosenbluth method up to chain length N ∼ O(10^4). Based on the standard simple cubic lattice model (SCLM) with fixed bond length and the bond fluctuation model (BFM) with bond lengths in a range between 2 and √10, we investigate the conformations of polymer chains described by self-avoiding walks on the simple cubic lattice, and by random walks and non-reversible random walks in the absence of excluded volume interactions. In addition to flexible chains, we also extend our study to semiflexible chains with different stiffness controlled by a bending potential. The persistence lengths of the chains, extracted from the orientational correlations, are estimated for all cases. We show that chains based on the BFM are more flexible than those based on the SCLM for a fixed bending energy. The microscopic differences between these two lattice models are discussed, and the theoretical predictions of scaling laws given in the literature are checked and verified. Our simulations clarify that a different mapping ratio between the coarse-grained models and the atomistically realistic description of polymers is required in a coarse-graining approach due to the different crossovers to the asymptotic behavior.
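The difference between the simple and non-reversible random walks on the cubic lattice can be illustrated directly. This is a stripped-down sketch, not the pruned-enriched Rosenbluth method, and the chain lengths are kept small:

```python
import numpy as np

rng = np.random.default_rng(11)
STEPS = np.array([[1,0,0],[-1,0,0],[0,1,0],[0,-1,0],[0,0,1],[0,0,-1]])
# step indices are paired so that the reversal of step k is step k^1

def mean_sq_end_to_end(n_steps, n_chains, non_reversible=False):
    """Mean squared end-to-end distance of lattice walks; the
    non-reversible variant forbids immediate back-steps."""
    r2 = 0.0
    for _ in range(n_chains):
        pos = np.zeros(3)
        last = -1
        for _ in range(n_steps):
            while True:
                k = rng.integers(6)
                if not (non_reversible and last >= 0 and k == (last ^ 1)):
                    break
            pos += STEPS[k]
            last = k
        r2 += pos @ pos
    return r2 / n_chains

rw = mean_sq_end_to_end(50, 2000)                         # expect ~N = 50
nrrw = mean_sq_end_to_end(50, 2000, non_reversible=True)  # expect ~1.5*N
```

The simple walk obeys ⟨R²⟩ = N, while forbidding back-steps induces a step-step correlation of 1/5, inflating ⟨R²⟩ by the factor (1 + 1/5)/(1 − 1/5) = 1.5, a small taste of how local chain rules change the prefactor but not the random-walk scaling exponent.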
Multi-temperature state-dependent equivalent circuit discharge model for lithium-sulfur batteries
NASA Astrophysics Data System (ADS)
Propp, Karsten; Marinescu, Monica; Auger, Daniel J.; O'Neill, Laura; Fotouhi, Abbas; Somasundaram, Karthik; Offer, Gregory J.; Minton, Geraint; Longo, Stefano; Wild, Mark; Knap, Vaclav
2016-10-01
Lithium-sulfur (Li-S) batteries are described extensively in the literature, but existing computational models aimed at scientific understanding are too complex for use in applications such as battery management. Computationally simple models are vital for exploitation. This paper proposes a non-linear state-of-charge dependent Li-S equivalent circuit network (ECN) model for a Li-S cell under discharge. Li-S batteries are fundamentally different to Li-ion batteries, and require chemistry-specific models. A new Li-S model is obtained using a 'behavioural' interpretation of the ECN model; as Li-S exhibits a 'steep' open-circuit voltage (OCV) profile at high states-of-charge, identification methods are designed to take into account OCV changes during current pulses. The prediction-error minimization technique is used. The model is parameterized from laboratory experiments using a mixed-size current pulse profile at four temperatures from 10 °C to 50 °C, giving linearized ECN parameters for a range of states-of-charge, currents and temperatures. These are used to create a nonlinear polynomial-based battery model suitable for use in a battery management system. When the model is used to predict the behaviour of a validation data set representing an automotive NEDC driving cycle, the terminal voltage predictions are judged accurate with a root mean square error of 32 mV.
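The behavioural ECN idea can be sketched as a state-of-charge-dependent open-circuit voltage minus an ohmic drop. The OCV and resistance shapes below are invented for illustration (including a steep high-SoC region reminiscent of Li-S); they are not the paper's identified parameters:

```python
import numpy as np

def ocv(soc):
    """Illustrative OCV curve with a steep region at high state-of-charge."""
    return 2.1 + 0.25*soc + 0.35*np.maximum(soc - 0.8, 0.0)/0.2

def r0(soc):
    """Illustrative series resistance, growing as the cell empties."""
    return 0.05 + 0.08*(1.0 - soc)**2

def simulate_discharge(capacity_ah=3.4, current_a=1.7, dt_s=1.0):
    """Constant-current discharge of a zeroth-order ECN:
    v = ocv(soc) - i*r0(soc), with coulomb-counted state of charge.
    Returns an array of (time_s, soc, terminal_voltage) rows."""
    soc, t, log = 1.0, 0.0, []
    while soc > 0.05:
        v = ocv(soc) - current_a * r0(soc)
        log.append((t, soc, v))
        soc -= current_a * dt_s / (capacity_ah * 3600.0)  # coulomb counting
        t += dt_s
    return np.array(log)

trace = simulate_discharge()
```

A battery-management implementation would add one or more RC pairs for relaxation dynamics and interpolate the identified parameter tables over SoC, current and temperature, as the paper does; the steep OCV region is also why the identification procedure must account for OCV changes during current pulses.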
Testing particle filters on convective scale dynamics
NASA Astrophysics Data System (ADS)
Haslehner, Mylene; Craig, George. C.; Janjic, Tijana
2014-05-01
Particle filters have been developed in recent years to deal with the highly nonlinear dynamics and non-Gaussian error statistics that characterize data assimilation on convective scales. In this work we explore the use of the efficient particle filter (van Leeuwen, 2011) for convective-scale data assimilation. The method is tested in an idealized setting, on two stochastic models. The models were designed to reproduce some of the properties of convection, for example the rapid development and decay of convective clouds. The first model is a simple one-dimensional, discrete-state birth-death model of clouds (Craig and Würsch, 2012). For this model, the efficient particle filter that includes nudging the variables shows significant improvement compared to the ensemble Kalman filter and the sequential importance resampling (SIR) particle filter. The success of the combination of nudging and resampling, measured as RMS error with respect to the 'true state', is proportional to the nudging intensity. Significantly, even a very weak nudging intensity brings notable improvement over SIR. The second model is a modified version of a stochastic shallow water model (Würsch and Craig, 2013), which contains more realistic dynamical characteristics of convective-scale phenomena. Using the efficient particle filter and different combinations of observations of the three field variables (wind, water 'height' and rain) allows the particle filter to be evaluated in comparison to a regime where only nudging is used. Sensitivity to the properties of the model error covariance is also considered. Finally, criteria are identified under which the efficient particle filter outperforms nudging alone. References: Craig, G. C. and M. Würsch, 2012: The impact of localization and observation averaging for convective-scale data assimilation in a simple stochastic model. Q. J. R. Meteorol. Soc., 139, 515-523. Van Leeuwen, P. J., 2011: Efficient non-linear data assimilation in geophysical fluid dynamics. Computers and Fluids, doi:10.1016/j.compfluid.2010.11.011. Würsch, M. and G. C. Craig, 2013: A simple dynamical model of cumulus convection for data assimilation research, submitted to Met. Zeitschrift.
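A minimal sketch of the SIR particle filter used as a baseline above, applied to a toy stochastic model with decay and random positive 'bursts' (illustrative only, not the Craig and Würsch birth-death model):

```python
import numpy as np

rng = np.random.default_rng(1)

def sir_step(particles, obs, fwd, obs_std, rng):
    """One cycle of a sequential importance resampling (SIR) particle filter:
    propagate, weight by the Gaussian observation likelihood, then resample."""
    particles = fwd(particles, rng)                       # stochastic forecast
    w = np.exp(-0.5 * ((obs - particles) / obs_std) ** 2)
    w /= w.sum()
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return particles[idx]

# Toy dynamics (hypothetical): state decays and receives Poisson 'bursts'.
def fwd(x, rng):
    return 0.9 * x + rng.poisson(0.5, size=x.shape)

truth = 5.0
parts = rng.uniform(0, 10, size=1000)
for _ in range(50):
    truth = 0.9 * truth + rng.poisson(0.5)
    obs = truth + rng.normal(0, 1.0)       # noisy observation of the truth
    parts = sir_step(parts, obs, fwd, 1.0, rng)

err = abs(parts.mean() - truth)            # posterior-mean error
```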
An Application to the Prediction of LOD Change Based on General Regression Neural Network
NASA Astrophysics Data System (ADS)
Zhang, X. H.; Wang, Q. J.; Zhu, J. J.; Zhang, H.
2011-07-01
Traditional prediction of the LOD (length of day) change was based on linear models, such as the least-squares model and the autoregressive technique. Due to the complex non-linear features of the LOD variation, the performance of the linear model predictors is not fully satisfactory. This paper applies a non-linear neural network, the general regression neural network (GRNN) model, to forecast the LOD change, and the results are analyzed and compared with those obtained with the back propagation neural network and other models. The comparison shows that the GRNN model is efficient and feasible for the prediction of the LOD change.
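At its core, a GRNN (Specht) prediction is a Gaussian-kernel weighted average of the training targets. A minimal sketch on a synthetic non-linear series (not real LOD data):

```python
import numpy as np

def grnn_predict(x_train, y_train, x_query, sigma):
    """General regression neural network: a Gaussian-kernel Nadaraya-Watson
    weighted average of the training targets, with bandwidth sigma."""
    d2 = (x_query[:, None] - x_train[None, :]) ** 2
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    return (w @ y_train) / w.sum(axis=1)

# Toy stand-in for a non-linear series (hypothetical, not real LOD data):
x = np.linspace(0, 10, 200)
y = np.sin(x) + 0.1 * np.sin(5 * x)
pred = grnn_predict(x, y, np.array([2.0, 7.5]), sigma=0.2)
```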
Some simple solutions of Schrödinger's equation for a free particle or for an oscillator
NASA Astrophysics Data System (ADS)
Andrews, Mark
2018-05-01
For a non-relativistic free particle, we show that the evolution of some simple initial wave functions made up of linear segments can be expressed in terms of Fresnel integrals. Examples include the square wave function and the triangular wave function. The method is then extended to wave functions made from quadratic elements. The evolution of all these initial wave functions can also be found for the harmonic oscillator by a transformation of the free evolutions.
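The free evolution described above can be checked numerically: in momentum space the free-particle propagator (with ħ = m = 1) is exp(-i k² t / 2), so an FFT evolves any initial wave function exactly on a periodic grid. A sketch for a square-wave initial state (a numerical companion to the Fresnel-integral expressions, not a reproduction of them):

```python
import numpy as np

# Grid and momentum values for a periodic box of length L.
n, L = 4096, 80.0
x = (np.arange(n) - n // 2) * (L / n)
k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)

# Normalized square-wave initial state of half-width 1.
psi0 = np.where(np.abs(x) < 1.0, 1.0, 0.0).astype(complex)
psi0 /= np.sqrt(np.sum(np.abs(psi0) ** 2) * (L / n))

# Exact free evolution in momentum space: psi_k(t) = psi_k(0) exp(-i k^2 t/2).
t = 0.5
psi_t = np.fft.ifft(np.fft.fft(psi0) * np.exp(-1j * k ** 2 * t / 2))

norm_t = np.sum(np.abs(psi_t) ** 2) * (L / n)          # should stay 1
spread0 = np.sum(x ** 2 * np.abs(psi0) ** 2) * (L / n)  # <x^2> at t = 0
spread_t = np.sum(x ** 2 * np.abs(psi_t) ** 2) * (L / n)
```

For a real initial wave function the spatial variance grows monotonically under free evolution, which the last two lines verify.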
An Application of the H-Function to Curve-Fitting and Density Estimation.
1983-12-01
equations into a model that is linear in its coefficients. Nonlinear least squares estimation is a relatively new area developed to accommodate models which...to converge on a solution (10:9-10). For the simple linear model and when general assumptions are made, the Gauss-Markov theorem states that the...distribution. For example, if the analyst wants to model the time between arrivals to a queue for a computer simulation, he infers the true probability
Wave kinetics of random fibre lasers
Churkin, D. V.; Kolokolov, I. V.; Podivilov, E. V.; Vatnik, I. D.; Nikulin, M. A.; Vergeles, S. S.; Terekhov, I. S.; Lebedev, V. V.; Falkovich, G.; Babin, S. A.; Turitsyn, S. K.
2015-01-01
Traditional wave kinetics describes the slow evolution of systems with many degrees of freedom to equilibrium via numerous weak non-linear interactions, and fails for a very important class of dissipative (active) optical systems with cyclic gain and losses, such as lasers with non-linear intracavity dynamics. Here we introduce a conceptually new class of cyclic wave systems, characterized by non-uniform double-scale dynamics with strong periodic changes of the energy spectrum and slow evolution from cycle to cycle to a statistically steady state. Taking a practically important example—random fibre laser—we show that a model describing such a system is close to the integrable non-linear Schrödinger equation and needs a new formalism of wave kinetics, developed here. We derive a non-linear kinetic theory of the laser spectrum, generalizing the seminal linear model of Schawlow and Townes. Experimental results agree with our theory. The work has implications for describing kinetics of cyclical systems beyond photonics. PMID:25645177
NASA Astrophysics Data System (ADS)
Ren, Diandong; Karoly, David J.
2008-03-01
Observations from seven Central Asian glaciers (35-55°N; 70-95°E) are used, together with regional temperature data, to infer uncertain parameters for a simple linear model of the glacier length variations. The glacier model is based on first-order glacier dynamics and requires knowledge of reference states of forcing and glacier perturbation magnitude. An adjoint-based variational method is used to optimally determine the glacier reference states in 1900 and the uncertain glacier model parameters. The simple glacier model is then used to estimate the glacier length variations until 2060 using regional temperature projections from an ensemble of climate model simulations for a future climate change scenario (SRES A2). For the period 2000-2060, all glaciers are projected to experience substantial further shrinkage, especially those with gentle slopes (e.g., Glacier Chogo Lungma retreats ∼4 km). Although some small glaciers will lose nearly one-third of their year-2000 length, the existence of the glaciers studied here is not threatened by the year 2060. The differences between the individual glacier responses are large. No straightforward relationship is found between glacier size and the projected fractional change of its length.
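A minimal sketch of the first-order linear glacier dynamics underlying such a model, dL'/dt = -(L' + c T')/τ, where L' is the length anomaly and T' the temperature anomaly. The response time τ and climate sensitivity c below are hypothetical, not the paper's calibrated values:

```python
import numpy as np

def glacier_length(t_anom, dt, tau, c, l0=0.0):
    """Forward-Euler integration of the first-order linear glacier model
    dL'/dt = -(L' + c*T')/tau. Parameters here are illustrative only."""
    l = l0
    hist = []
    for T in t_anom:
        l += dt * (-(l + c * T) / tau)
        hist.append(l)
    return np.array(hist)

# A 1 K warming step held for 300 years, with c = 2 km/K and tau = 50 yr:
# the length anomaly relaxes toward the equilibrium retreat -c*T = -2 km.
L = glacier_length(np.full(300, 1.0), 1.0, tau=50.0, c=2.0)
```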
DOE Office of Scientific and Technical Information (OSTI.GOV)
Patel, Sajan; Petty, Clayton W.; Krafcik, Karen Lee
Electrostatic modes of atomic force microscopy have been shown to be non-destructive and relatively simple methods for imaging conductors embedded in insulating polymers. Here we use electrostatic force microscopy to image the dispersion of carbon nanotubes in a latex-based conductive composite, which brings forth features not observed in previously studied systems employing linear polymer films. A fixed-potential model of the probe-nanotube electrostatics is presented which in principle gives access to the conductive nanoparticle's depth and radius, and the polymer film dielectric constant. Comparing this model to the data results in nanotube depths that appear to be slightly above the film–air interface. Furthermore, this result suggests that water-mediated charge build-up at the film–air interface may be the source of electrostatic phase contrast in ambient conditions.
Scaling laws and fluctuations in the statistics of word frequencies
NASA Astrophysics Data System (ADS)
Gerlach, Martin; Altmann, Eduardo G.
2014-11-01
In this paper, we combine statistical analysis of written texts and simple stochastic models to explain the appearance of scaling laws in the statistics of word frequencies. The average vocabulary of an ensemble of fixed-length texts is known to scale sublinearly with the total number of words (Heaps' law). Analyzing the fluctuations around this average in three large databases (Google-ngram, English Wikipedia, and a collection of scientific articles), we find that the standard deviation scales linearly with the average (Taylor's law), in contrast to the prediction of decaying fluctuations obtained using simple sampling arguments. We explain both scaling laws (Heaps' and Taylor's) by modeling the usage of words using a Poisson process with a fat-tailed distribution of word frequencies (Zipf's law) and topic-dependent frequencies of individual words (as in topic models). Considering topical variations leads to quenched averages, turns the vocabulary size into a non-self-averaging quantity, and explains the empirical observations. For the numerous practical applications relying on estimations of vocabulary size, our results show that uncertainties remain large even for long texts. We show how to account for these uncertainties in measurements of lexical richness of texts with different lengths.
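The Heaps' law part of the argument can be reproduced in a few lines: draw fixed-length texts from a Zipfian word-frequency distribution and count distinct words. This is a minimal sketch; the paper's topic-dependent Poisson model is richer:

```python
import numpy as np

rng = np.random.default_rng(2)

# Zipfian word frequencies p_r ~ r^(-alpha) over a finite vocabulary.
alpha, n_types = 1.0, 50_000
p = 1.0 / np.arange(1, n_types + 1) ** alpha
p /= p.sum()

def vocabulary_sizes(text_len, n_texts, rng):
    """Draw texts as multinomial samples; vocabulary = distinct words used."""
    counts = rng.multinomial(text_len, p, size=n_texts)
    return (counts > 0).sum(axis=1)

v_small = vocabulary_sizes(1_000, 50, rng)
v_large = vocabulary_sizes(10_000, 50, rng)

# Heaps' law: mean vocabulary grows sublinearly, so a 10x longer text
# has substantially less than 10x the vocabulary.
ratio = v_large.mean() / v_small.mean()
```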
Combined tension and bending testing of tapered composite laminates
NASA Astrophysics Data System (ADS)
O'Brien, T. Kevin; Murri, Gretchen B.; Hagemeier, Rick; Rogers, Charles
1994-11-01
A simple beam element used at Bell Helicopter was incorporated in the Computational Mechanics Testbed (COMET) finite element code at the Langley Research Center (LaRC) to analyze the response of tapered laminates typical of flexbeams in composite rotor hubs. This beam element incorporated the influence of membrane loads on the flexural response of the tapered laminate configurations modeled and tested in a combined axial tension and bending (ATB) hydraulic load frame designed and built at LaRC. The moments generated from the finite element model were used in a tapered laminated plate theory analysis to estimate axial stresses on the surface of the tapered laminates due to combined bending and tension loads. Surface strains were calculated and compared to surface strains measured using strain gages mounted along the laminate length. The strain distributions correlated reasonably well with the analysis. The analysis was then used to examine the surface strain distribution in a non-linear tapered laminate, where a similarly good correlation was obtained. Results indicate that simple finite element beam models may be used to identify tapered laminate configurations best suited for simulating the response of a composite flexbeam in a full-scale rotor hub.
Three-dimensional earthquake analysis of roller-compacted concrete dams
NASA Astrophysics Data System (ADS)
Kartal, M. E.
2012-07-01
The effect of ground motion on roller-compacted concrete (RCC) dams in earthquake zones should be taken into account for the most critical conditions. This study presents the three-dimensional earthquake response of an RCC dam considering geometrical non-linearity. Material and connection non-linearities are also taken into consideration in the time-history analyses. Bilinear and multilinear kinematic hardening material models are utilized in the materially non-linear analyses for concrete and foundation rock, respectively. The contraction joints inside the dam blocks and the dam-foundation-reservoir interaction are modeled by contact elements. The hydrostatic and hydrodynamic pressures of the reservoir water are modeled with fluid finite elements based on the Lagrangian approach. The gravity and hydrostatic pressure effects are applied as initial conditions before the strong ground motion. In the earthquake analyses, viscous dampers are defined in the finite element model to represent infinite boundary conditions. According to the numerical solutions, horizontal displacements increase under hydrodynamic pressure, and they also increase in the materially non-linear analyses of the dam. In addition, while the principal stress components increase under the hydrodynamic pressure of the reservoir water, they decrease in the materially non-linear time-history analyses.
Non-linearities in Holocene floodplain sediment storage
NASA Astrophysics Data System (ADS)
Notebaert, Bastiaan; Broothaerts, Nils; Berger, Jean-François; Verstraeten, Gert
2013-04-01
Floodplain sediment storage is an important part of the sediment cascade model, buffering sediment delivery between hillslopes and oceans, which is hitherto not fully quantified in contrast to other global sediment budget components. Quantification and dating of floodplain sediment storage is data and financially demanding, limiting contemporary estimates for larger spatial units to simple linear extrapolations from a number of smaller catchments. In this paper we present non-linearities in both space and time for floodplain sediment budgets in three different catchments. Holocene floodplain sediments of the Dijle catchment in the Belgian loess region show a clear distinction between morphological stages: early Holocene peat accumulation, followed by mineral floodplain aggradation from the start of the agricultural period on. Contrary to previous assumptions, detailed dating of this morphological change at different locations shows an important non-linearity in geomorphologic changes of the floodplain, both between and within cross sections. A second example comes from the Pre-Alpine French Valdaine region, where non-linearities and complex system behavior exist between (temporal) patterns of soil erosion and floodplain sediment deposition. In this region Holocene floodplain deposition is characterized by different cut-and-fill phases. The quantification of these different phases shows a complicated image of increasing and decreasing floodplain sediment storage, which complicates the picture of steadily increasing sediment accumulation over time. Although fill stages may correspond with large quantities of deposited sediment, and traditionally calculated sedimentation rates for such stages are high, they do not necessarily correspond with a long-term net increase in floodplain deposition. A third example is based on the floodplain sediment storage in the Amblève catchment, located in the Belgian Ardennes uplands. 
Detailed floodplain sediment quantification for this catchment shows that a strong multifractality is present in the scaling relationship between sediment storage and catchment area, depending on geomorphic landscape properties. Extrapolation of data from one spatial scale to another inevitably leads to large errors: when only the data of the upper floodplains are considered, a regression analysis results in an overestimation of total floodplain deposition for the entire catchment of circa 115%. This example demonstrates multifractality and related non-linearity in scaling relationships, which influences extrapolations beyond the initial range of measurements. These different examples indicate how traditional extrapolation techniques and assumptions in sediment budget studies can be challenged by field data, further complicating our understanding of these systems. Although simplifications are often necessary when working on large spatial scales, such non-linearities pose challenges for a better understanding of system behavior.
NASA Astrophysics Data System (ADS)
Cisneros, Sophia
2013-04-01
We present a new, heuristic, two-parameter model for predicting the rotation curves of disc galaxies. The model is tested on 22 randomly chosen galaxies, represented in 35 data sets. This Lorentz Convolution [LC] model is derived from a non-linear, relativistic solution of a Kerr-type wave equation, where small changes in the photon frequencies, resulting from the curved spacetime, are convolved into a sequence of Lorentz transformations. The LC model is parametrized with only the diffuse, luminous stellar and gaseous masses reported with each data set of observations used. The LC model predicts observed rotation curves across a wide range of disk galaxies. The LC model was constructed to occupy the same place in the explanation of rotation curves that dark matter does, so that a simple investigation of the relation between luminous and dark matter might be made via a parameter (a). We find the parameter (a) to demonstrate interesting structure. We compare the new model prediction to both the NFW model and MOND fits when available.
Gamma/x-ray linear pushbroom stereo for 3D cargo inspection
NASA Astrophysics Data System (ADS)
Zhu, Zhigang; Hu, Yu-Chi
2006-05-01
For evaluating the contents of trucks, containers, cargo, and passenger vehicles by a non-intrusive gamma-ray or X-ray imaging system to determine the possible presence of contraband, three-dimensional (3D) measurements can provide more information than 2D measurements. In this paper, a linear pushbroom scanning model is built for such a commonly used gamma-ray or X-ray cargo inspection system. Accurate 3D measurements of the objects inside a cargo container can be obtained by using two such scanning systems with different scanning angles to construct a pushbroom stereo system. A simple but robust calibration method is proposed to find the important parameters of the linear pushbroom sensors. Then, a fast and automated stereo matching algorithm based on free-form deformable registration is developed to obtain 3D measurements of the objects under inspection. A user interface is designed for 3D visualization of the objects of interest. Experimental results of sensor calibration, stereo matching, 3D measurements and visualization of a 3D cargo container and the objects inside are presented.
Linear regression metamodeling as a tool to summarize and present simulation model results.
Jalal, Hawre; Dowd, Bryan; Sainfort, François; Kuntz, Karen M
2013-10-01
Modelers lack a tool to systematically and clearly present complex model results, including those from sensitivity analyses. The objective was to propose linear regression metamodeling as a tool to increase transparency of decision analytic models and better communicate their results. We used a simplified cancer cure model to demonstrate our approach. The model computed the lifetime cost and benefit of 3 treatment options for cancer patients. We simulated 10,000 cohorts in a probabilistic sensitivity analysis (PSA) and regressed the model outcomes on the standardized input parameter values in a set of regression analyses. We used the regression coefficients to describe measures of sensitivity analyses, including threshold and parameter sensitivity analyses. We also compared the results of the PSA to deterministic full-factorial and one-factor-at-a-time designs. The regression intercept represented the estimated base-case outcome, and the other coefficients described the relative parameter uncertainty in the model. We defined simple relationships that compute the average and incremental net benefit of each intervention. Metamodeling produced outputs similar to traditional deterministic 1-way or 2-way sensitivity analyses but was more reliable since it used all parameter values. Linear regression metamodeling is a simple, yet powerful, tool that can assist modelers in communicating model characteristics and sensitivity analyses.
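A minimal sketch of the metamodeling idea: run a PSA on a toy (hypothetical) net-benefit model, then regress the outcome on the standardized inputs. The intercept estimates the base-case outcome and the coefficients rank parameter importance:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical decision model (not the paper's cancer cure model):
# net benefit as a function of cure probability, cost, and utility.
def net_benefit(p_cure, cost, utility):
    return 1000 * p_cure * utility - cost

# Probabilistic sensitivity analysis: sample inputs, run the model.
n = 10_000
X = np.column_stack([
    rng.beta(8, 2, n),         # p_cure, mean 0.8
    rng.normal(500, 50, n),    # cost
    rng.normal(0.8, 0.05, n),  # utility
])
y = net_benefit(X[:, 0], X[:, 1], X[:, 2])

# Regress the outcome on standardized inputs: the intercept is the estimated
# base-case outcome; each coefficient is an importance measure.
Z = (X - X.mean(axis=0)) / X.std(axis=0)
A = np.column_stack([np.ones(n), Z])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
intercept, betas = coef[0], coef[1:]
```

Here the analytic base case is 1000·0.8·0.8 − 500 = 140, and the standardized cost coefficient should be close to −50 (slope −1 times the cost standard deviation of 50).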
Comparison of heaving buoy and oscillating flap wave energy converters
NASA Astrophysics Data System (ADS)
Abu Bakar, Mohd Aftar; Green, David A.; Metcalfe, Andrew V.; Najafian, G.
2013-04-01
Waves offer an attractive source of renewable energy, with relatively low environmental impact, for communities reasonably close to the sea. Two types of simple wave energy converters (WECs), the heaving buoy WEC and the oscillating flap WEC, are studied. Both WECs are considered simple energy converters because they can be modelled, to a first approximation, as single-degree-of-freedom linear dynamic systems. In this study, we estimate the response of both WECs to typical wave inputs, wave height for the buoy and the corresponding wave surge for the flap, using spectral methods. A nonlinear model of the oscillating flap WEC that includes the drag force, modelled by the Morison equation, is also considered. The response to a surge input is estimated by discrete time simulation (DTS), using central difference approximations to derivatives. This is compared with the response of the linear model obtained by DTS and also validated using the spectral method. Bendat's nonlinear system identification (BNLSI) technique is used to analyze the nonlinear dynamic system, since spectral analysis is only suitable for linear dynamic systems. The effects of including the nonlinear term are quantified.
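A minimal sketch of the central-difference DTS described above for a single-degree-of-freedom model, with an optional Morison-style quadratic drag term (all coefficients illustrative):

```python
import numpy as np

def sdof_response(force, dt, m, c, k, cd=0.0):
    """Central-difference time stepping of m*x'' + c*x' + cd*|x'|*x' + k*x = f.
    cd = 0 recovers the linear single-degree-of-freedom model; cd > 0 adds a
    Morison-style quadratic drag term."""
    n = len(force)
    x = np.zeros(n)
    for i in range(1, n - 1):
        v = (x[i] - x[i - 1]) / dt                      # backward velocity
        drag = cd * abs(v) * v
        a = (force[i] - c * v - drag - k * x[i]) / m    # acceleration
        x[i + 1] = 2 * x[i] - x[i - 1] + dt ** 2 * a
    return x

# Sinusoidal surge forcing at the natural frequency: the quadratic drag
# strongly limits the resonant response relative to the linear model.
dt, n = 0.01, 20_000
t = np.arange(n) * dt
f = np.sin(1.0 * t)
x_lin = sdof_response(f, dt, m=1.0, c=0.05, k=1.0, cd=0.0)
x_non = sdof_response(f, dt, m=1.0, c=0.05, k=1.0, cd=0.5)
```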
Alteration in non-classicality of light on passing through a linear polarization beam splitter
NASA Astrophysics Data System (ADS)
Shukla, Namrata; Prakash, Ranjana
2016-06-01
We observe polarization squeezing in the mixture of a two-mode squeezed vacuum and simple coherent light through a linear polarization beam splitter. Although the squeezed vacuum is not itself squeezed in polarization, it generates polarization-squeezed light when superposed with coherent light. All three Stokes parameters of the light produced at the output port of the polarization beam splitter are found to be squeezed, and the squeezing factor depends upon the parameters of the coherent light.
Application of General Regression Neural Network to the Prediction of LOD Change
NASA Astrophysics Data System (ADS)
Zhang, Xiao-Hong; Wang, Qi-Jie; Zhu, Jian-Jun; Zhang, Hao
2012-01-01
Traditional methods for predicting the change in length of day (LOD change) are mainly based on linear models, such as the least-squares model and the autoregression model. However, the LOD change comprises complicated non-linear factors, and the prediction performance of the linear models is often unsatisfactory. Thus, a non-linear method, the general regression neural network (GRNN) model, is applied to the prediction of the LOD change, and the result is compared with the predicted results obtained with the BP (back propagation) neural network model and other models. The comparison shows that the application of the GRNN to the prediction of the LOD change is highly effective and feasible.
Effect of lensing non-Gaussianity on the CMB power spectra
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lewis, Antony; Pratten, Geraint, E-mail: antony@cosmologist.info, E-mail: geraint.pratten@gmail.com
2016-12-01
Observed CMB anisotropies are lensed, and the lensed power spectra can be calculated accurately assuming the lensing deflections are Gaussian. However, the lensing deflections are actually slightly non-Gaussian due to both non-linear large-scale structure growth and post-Born corrections. We calculate the leading correction to the lensed CMB power spectra from the non-Gaussianity, which is determined by the lensing bispectrum. Assuming no primordial non-Gaussianity, the lowest-order result gives ∼ 0.3% corrections to the BB and EE polarization spectra on small scales. However, we show that the effect on EE is reduced by about a factor of two by higher-order Gaussian lensing smoothing, rendering the total effect safely negligible for the foreseeable future. We give a simple analytic model for the signal expected from skewness of the large-scale lensing field; the effect is similar to a net demagnification and hence a small change in acoustic scale (and therefore out of phase with the dominant lensing smoothing that predominantly affects the peaks and troughs of the power spectrum).
NASA Astrophysics Data System (ADS)
Åkesson, Henning; Nisancioglu, Kerim H.; Giesen, Rianne H.; Morlighem, Mathieu
2016-04-01
Glacier and ice cap volume changes currently amount to half of the total cryospheric contribution to sea-level rise and are projected to remain substantial throughout the 21st century. To simulate glacier behavior on centennial and longer time scales, models rely on simplified dynamics and tunable parameters for processes not well understood. Model calibration is often done using present-day observations, even though the relationship between parameters and parametrized processes may be altered for significantly different glacier states. In this study, we simulate the Hardangerjøkulen ice cap in southern Norway since the mid-Holocene, through the Little Ice Age (LIA) and into the future. We run an ensemble for both calibration and transient experiments, using a two-dimensional ice flow model with mesh refinement. For the Holocene, we apply a simple mass balance forcing based on climate reconstructions. For the LIA until 1962, we use geomorphological evidence and measured outlet glacier positions to find a mass balance history, while we use direct mass balance measurements from 1963 until today. Given a linear climate forcing, we show that Hardangerjøkulen grew from ice-free conditions in the mid-Holocene to its maximum LIA extent in a highly non-linear fashion. We relate this to local bed topography and demonstrate that volume and area of some but not all outlet glaciers, as well as the entire ice cap, become decoupled for several centuries during our simulation of the late Holocene, before co-varying approaching the LIA. Our model is able to simulate most recorded ice cap and outlet glacier changes from the LIA until today. We show that present-day Hardangerjøkulen is highly sensitive to mass balance changes, and estimate that the ice cap will melt completely by the year 2100.
NASA Astrophysics Data System (ADS)
Kumar, Devendra; Singh, Jagdev; Baleanu, Dumitru
2018-02-01
The mathematical model of breaking of non-linear dispersive water waves with memory effect is very important in mathematical physics. In the present article, we examine a novel fractional extension of the non-linear Fornberg-Whitham equation occurring in wave breaking. We consider the most recent theory of differentiation involving the non-singular kernel based on the extended Mittag-Leffler-type function to modify the Fornberg-Whitham equation. We examine the existence of the solution of the non-linear Fornberg-Whitham equation of fractional order. Further, we show the uniqueness of the solution. We obtain the numerical solution of the new arbitrary order model of the non-linear Fornberg-Whitham equation with the aid of the Laplace decomposition technique. The numerical outcomes are displayed in the form of graphs and tables. The results indicate that the Laplace decomposition algorithm is a very user-friendly and reliable scheme for handling such type of non-linear problems of fractional order.
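For reference, the classical (integer-order) Fornberg-Whitham equation that the fractional model generalizes is commonly written as

```latex
\frac{\partial u}{\partial t}
  - \frac{\partial^{3} u}{\partial x^{2}\,\partial t}
  + \frac{\partial u}{\partial x}
  = u\,\frac{\partial^{3} u}{\partial x^{3}}
  - u\,\frac{\partial u}{\partial x}
  + 3\,\frac{\partial u}{\partial x}\,\frac{\partial^{2} u}{\partial x^{2}},
```

with the fractional version replacing the time derivative by a non-singular-kernel derivative of Mittag-Leffler type, as described in the abstract.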
NASA Astrophysics Data System (ADS)
Baasch, Benjamin; Müller, Hendrik; von Dobeneck, Tilo; Oberle, Ferdinand K. J.
2017-05-01
The electric conductivity and magnetic susceptibility of sediments are fundamental parameters in environmental geophysics. Both can be derived from marine electromagnetic profiling, a novel, fast and non-invasive seafloor mapping technique. Here we present statistical evidence that electric conductivity and magnetic susceptibility can help to determine physical grain-size characteristics (size, sorting and mud content) of marine surficial sediments. Electromagnetic data acquired with the bottom-towed electromagnetic profiler MARUM NERIDIS III were analysed and compared with grain-size data from 33 samples across the NW Iberian continental shelf. A negative correlation between mean grain size and conductivity (R=-0.79) as well as mean grain size and susceptibility (R=-0.78) was found. Simple and multiple linear regression analyses were carried out to predict mean grain size, mud content and the standard deviation of the grain-size distribution from conductivity and susceptibility. The comparison of both methods showed that multiple linear regression models predict the grain-size distribution characteristics better than the simple models. This exemplary study demonstrates that electromagnetic benthic profiling is capable of estimating mean grain size, sorting and mud content of marine surficial sediments at a very high significance level. Transfer functions can be calibrated using grain-size data from a few reference samples and extrapolated along shelf-wide survey lines. This study suggests that electromagnetic benthic profiling should play a larger role in coastal zone management, seafloor contamination and sediment provenance studies in worldwide continental shelf systems.
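A minimal sketch of the simple-versus-multiple linear regression comparison on synthetic calibration data (illustrative values, not the NW Iberian samples):

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic calibration data (hypothetical): mean grain size (phi units)
# decreasing with both conductivity and susceptibility, plus noise.
n = 33
cond = rng.uniform(0.1, 1.5, n)   # conductivity, S/m
susc = rng.uniform(10, 300, n)    # susceptibility, 1e-6 SI
phi = 6.0 - 2.0 * cond - 0.01 * susc + rng.normal(0, 0.2, n)

def fit_lstsq(X, y):
    """Ordinary least squares with an intercept column."""
    A = np.column_stack([np.ones(len(y)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

def r2(X, y, coef):
    pred = np.column_stack([np.ones(len(y)), X]) @ coef
    return 1 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)

# Simple (conductivity only) versus multiple (both predictors) regression:
coef_simple = fit_lstsq(cond[:, None], phi)
coef_multi = fit_lstsq(np.column_stack([cond, susc]), phi)
r2_simple = r2(cond[:, None], phi, coef_simple)
r2_multi = r2(np.column_stack([cond, susc]), phi, coef_multi)
```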
Performance Metrics, Error Modeling, and Uncertainty Quantification
NASA Technical Reports Server (NTRS)
Tian, Yudong; Nearing, Grey S.; Peters-Lidard, Christa D.; Harrison, Kenneth W.; Tang, Ling
2016-01-01
A common set of statistical metrics has been used to summarize the performance of models or measurements- the most widely used ones being bias, mean square error, and linear correlation coefficient. They assume linear, additive, Gaussian errors, and they are interdependent, incomplete, and incapable of directly quantifying uncertainty. The authors demonstrate that these metrics can be directly derived from the parameters of the simple linear error model. Since a correct error model captures the full error information, it is argued that the specification of a parametric error model should be an alternative to the metrics-based approach. The error-modeling methodology is applicable to both linear and nonlinear errors, while the metrics are only meaningful for linear errors. In addition, the error model expresses the error structure more naturally, and directly quantifies uncertainty. This argument is further explained by highlighting the intrinsic connections between the performance metrics, the error model, and the joint distribution between the data and the reference.
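The paper's central point, that the common metrics follow from the parameters of a linear error model y = a + b·x + ε, can be illustrated directly:

```python
import numpy as np

rng = np.random.default_rng(5)

# Linear, additive, Gaussian error model: y = a + b*x + eps,
# with illustrative parameter values.
a, b, sig_eps = 0.5, 0.9, 1.0
x = rng.normal(10.0, 2.0, 200_000)          # reference ("truth")
y = a + b * x + rng.normal(0, sig_eps, x.size)

# The usual metrics, computed empirically:
bias = np.mean(y - x)
mse = np.mean((y - x) ** 2)
corr = np.corrcoef(x, y)[0, 1]

# The same quantities predicted directly from the error-model parameters
# and the moments of x (mu = 10, var = 4):
mu, var = 10.0, 4.0
bias_pred = a + (b - 1) * mu
mse_pred = bias_pred ** 2 + (b - 1) ** 2 * var + sig_eps ** 2
corr_pred = b * np.sqrt(var) / np.sqrt(b ** 2 * var + sig_eps ** 2)
```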
A position-aware linear solid constitutive model for peridynamics
Mitchell, John A.; Silling, Stewart A.; Littlewood, David J.
2015-11-06
A position-aware linear solid (PALS) peridynamic constitutive model is proposed for isotropic elastic solids. The PALS model addresses problems that arise, in ordinary peridynamic material models such as the linear peridynamic solid (LPS), due to incomplete neighborhoods near the surface of a body. We improved model behavior in the vicinity of free surfaces through the application of two influence functions that correspond, respectively, to the volumetric and deviatoric parts of the deformation. Furthermore, the model is position-aware in that the influence functions vary over the body and reflect the proximity of each material point to free surfaces. Demonstration calculations on simple benchmark problems show a sharp reduction in error relative to the LPS model.
Response statistics of rotating shaft with non-linear elastic restoring forces by path integration
NASA Astrophysics Data System (ADS)
Gaidai, Oleg; Naess, Arvid; Dimentberg, Michael
2017-07-01
Extreme statistics of random vibrations are studied for a Jeffcott rotor under uniaxial white noise excitation. The restoring force is modelled as non-linear elastic; a comparison is made with a linearized restoring force to assess the effect of the force non-linearity on the response statistics. While analytical solutions and stability conditions are available for the linear model, this is generally not the case for the non-linear system, except in some special cases. The statistics of the non-linear case are studied by applying the path integration (PI) method, which is based on the Markov property of the coupled dynamic system. The Jeffcott rotor response statistics can be obtained by solving the Fokker-Planck (FP) equation of the 4D dynamic system. An efficient implementation of the PI algorithm is applied, in which the fast Fourier transform (FFT) is used to simulate the additive noise of the dynamic system; this significantly reduces computational time compared to classical PI. The excitation is modelled as Gaussian white noise; however, white noise with any distribution can be implemented with the same PI technique, and multidirectional Markov noise can be modelled with PI in the same way as unidirectional noise. PI is accelerated by using a Monte Carlo (MC) estimate of the joint probability density function (PDF) as the initial input, and the symmetry of the dynamic system is utilized to afford higher mesh resolution. Both internal (rotating) and external damping are included in the mechanical model of the rotor. The main advantage of PI over MC is that PI offers high accuracy in the tail of the probability distribution, which is of critical importance for, e.g., extreme value statistics, system reliability, and first-passage probability.
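The linear/non-linear comparison above can be illustrated with a minimal Monte Carlo baseline (not the authors' path-integration code; the oscillator, damping ratio and noise intensity are assumed for the example): a single-degree-of-freedom system with a cubic non-linear restoring force under Gaussian white noise, integrated with the Euler-Maruyama scheme.

```python
import numpy as np

# Illustrative Monte Carlo baseline for response statistics (NOT the
# authors' PI code; all parameter values here are assumed): a single-DOF
# oscillator with a cubic (Duffing-type) restoring force driven by
# Gaussian white noise, integrated with the Euler-Maruyama scheme.
def simulate(n_paths=1000, n_steps=10000, dt=1e-3, zeta=0.2, eps=0.5, D=0.2, seed=0):
    rng = np.random.default_rng(seed)
    x = np.zeros(n_paths)
    v = np.zeros(n_paths)
    for _ in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt), n_paths)
        a = -2.0 * zeta * v - x - eps * x**3   # linear + cubic restoring force
        x = x + v * dt
        v = v + a * dt + np.sqrt(2.0 * D) * dW
    return x

x = simulate()
print(x.mean(), x.std())  # sample moments of the near-stationary response
```

The cubic term stiffens the restoring force, so the sampled variance falls below the linearized prediction D/(2ζ); resolving how this changes the distribution tails is exactly where PI outperforms plain MC.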
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sun, Xiang; Geva, Eitan
2016-06-28
In this paper, we test the accuracy of the linearized semiclassical (LSC) expression for the equilibrium Fermi’s golden rule rate constant for electronic transitions in the presence of non-Condon effects. We do so by performing a comparison with the exact quantum-mechanical result for a model where the donor and acceptor potential energy surfaces are parabolic and identical except for shifts in the equilibrium energy and geometry, and the coupling between them is linear in the nuclear coordinates. Since non-Condon effects may or may not give rise to conical intersections, both possibilities are examined by considering: (1) a modified Garg-Onuchic-Ambegaokar model for charge transfer in the condensed phase, where the donor-acceptor coupling is linear in the primary mode coordinate, and for which non-Condon effects do not give rise to a conical intersection; (2) the linear vibronic coupling model for electronic transitions in gas phase molecules, where non-Condon effects give rise to conical intersections. We also present a comprehensive comparison between the linearized semiclassical expression and a progression of more approximate expressions. The comparison is performed over a wide range of frictions and temperatures for model (1) and over a wide range of temperatures for model (2). The linearized semiclassical method is found to reproduce the exact quantum-mechanical result remarkably well for both models over the entire range of parameters under consideration. In contrast, more approximate expressions are observed to deviate considerably from the exact result in some regions of parameter space.
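For orientation, the textbook correlation-function form of the equilibrium Fermi's golden rule rate constant in the Condon limit is

```latex
k_{D \to A} \;=\; \frac{|\Gamma_{DA}|^{2}}{\hbar^{2}}
\int_{-\infty}^{\infty} \mathrm{d}t \;
\operatorname{Tr}\!\left[\hat{\rho}_{D}\,
e^{\,i\hat{H}_{D}t/\hbar}\, e^{-i\hat{H}_{A}t/\hbar}\right],
```

where $\hat{H}_{D}$ and $\hat{H}_{A}$ are the nuclear Hamiltonians on the donor and acceptor surfaces and $\hat{\rho}_{D}$ is the equilibrium donor density operator. In the non-Condon case studied here, the constant coupling $\Gamma_{DA}$ is replaced by a coordinate-dependent operator inside the trace, which is what the linearized semiclassical expression approximates.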
NASA Astrophysics Data System (ADS)
Sun, Jingliang; Liu, Chunsheng
2018-01-01
In this paper, the problem of intercepting a manoeuvring target within a fixed final time is posed in a non-linear constrained zero-sum differential game framework. The Nash equilibrium solution is found by solving the finite-horizon constrained differential game problem via an adaptive dynamic programming technique. In addition, a suitable non-quadratic functional is utilised to encode the control constraints into the differential game problem. A single critic network with constant weights and time-varying activation functions is constructed to approximate the solution of the associated time-varying Hamilton-Jacobi-Isaacs equation online. To properly satisfy the terminal constraint, an additional error term is incorporated in a novel weight-updating law such that the terminal constraint error is also minimised over time. By utilising Lyapunov's direct method, the closed-loop differential game system and the weight estimation error of the critic network are proved to be uniformly ultimately bounded. Finally, the effectiveness of the proposed method is demonstrated using a simple non-linear system and a non-linear missile-target interception system, assuming first-order dynamics for the interceptor and target.
van den Boer, Cindy; Muller, Sara H; Vincent, Andrew D; Züchner, Klaus; van den Brekel, Michiel W M; Hilgers, Frans J M
2013-09-01
Breathing through a tracheostomy results in insufficient warming and humidification of inspired air. This loss of air-conditioning can be partially compensated for with the application of a heat and moisture exchanger (HME) over the tracheostomy. In vitro (International Organization for Standardization [ISO] standard 9360-2:2001) and in vivo measurements of the effects of an HME are complex and technically challenging. The aim of this study was to develop a simple method to measure ex vivo HME performance comparable with previous in vitro and in vivo results. HMEs were weighed at the end of inspiration and at the end of expiration at different breathing volumes. Four HMEs (Atos Medical, Hörby, Sweden) with known in vivo humidity and in vitro water loss values were tested. The associations between weight change, volume, and absolute humidity were determined using both linear and non-linear mixed effects models. The ranking of the 4 HMEs by weighing correlated with previous intra-tracheal measurements (R(2) = 0.98) and with the ISO standard (R(2) = 0.77). Assessment of the weight change between the end of inhalation and the end of exhalation is a valid and simple method of measuring the water exchange performance of an HME.
Conservative Estimation of Whole-body Average SAR in Infant Model for 0.3-6GHz Far-Field Exposure
NASA Astrophysics Data System (ADS)
Hirata, Akimasa; Nagaya, Yoshio; Ito, Naoki; Fujiwara, Osamu; Nagaoka, Tomoaki; Watanabe, Soichi
From an anatomically based Japanese model of a three-year-old child with a resolution of 1 mm, we developed a nine-month-old Japanese infant model by linear shrinking. With these models, we calculated the whole-body average specific absorption rate (WBA-SAR) for plane-wave exposure from 0.1 to 6 GHz. A conservative estimate of the WBA-SAR was also investigated by using three kinds of simple-shaped models: a cuboid, an ellipsoid and a spheroid, whose parameters were determined based on the above three-year-old child model. As a result, the cuboid and ellipsoid were found to provide an overestimate of the WBA-SAR compared to the realistic model, whereas the spheroid provides an underestimate. Based on these findings for different body models, we have specified the incident power density required to produce a WBA-SAR of 0.08 W/kg, which is the basic restriction for public exposure in the guidelines of the International Commission on Non-Ionizing Radiation Protection.
NASA Technical Reports Server (NTRS)
Schuecker, Clara; Davila, Carlos G.; Rose, Cheryl A.
2010-01-01
Five models for matrix damage in fiber reinforced laminates are evaluated for matrix-dominated loading conditions under plane stress and are compared both qualitatively and quantitatively. The emphasis of this study is on a comparison of the response of embedded plies subjected to a homogeneous stress state. Three of the models are specifically designed for modeling the non-linear response due to distributed matrix cracking under homogeneous loading, and also account for non-linear (shear) behavior prior to the onset of cracking. The remaining two models are localized damage models intended for predicting local failure at stress concentrations. The modeling approaches of distributed vs. localized cracking as well as the different formulations of damage initiation and damage progression are compared and discussed.
Cer, Regina Z; Herrera-Galeano, J Enrique; Anderson, Joseph J; Bishop-Lilly, Kimberly A; Mokashi, Vishwesh P
2014-01-01
Understanding the biological roles of microRNAs (miRNAs) is an active area of research that has produced a surge of publications in PubMed, particularly in cancer research. Along with this increasing interest, many open-source bioinformatics tools to identify existing and/or discover novel miRNAs in next-generation sequencing (NGS) reads have become available. While miRNA identification and discovery tools have improved significantly, the development of miRNA differential expression analysis tools, especially for temporal studies, remains substantially challenging. Further, the installation of currently available software is non-trivial, and the steps of testing with example datasets, trying one's own dataset, and interpreting the results require notable expertise and time. Consequently, there is a strong need for a tool that allows scientists to normalize raw data, perform statistical analyses, and obtain intuitive results without having to invest significant effort. We have developed miRNA Temporal Analyzer (mirnaTA), a bioinformatics package to identify differentially expressed miRNAs in temporal studies. mirnaTA is written in Perl and R (version 2.13.0 or later) and can be run across multiple platforms, such as Linux, Mac and Windows. In the current version, mirnaTA requires users to provide a simple, tab-delimited matrix file containing miRNA names and count data from a minimum of two to a maximum of 20 time points and three replicates. To recalibrate data and remove technical variability, raw data are normalized using Normal Quantile Transformation (NQT), and a linear regression model is used to locate any miRNAs which are differentially expressed in a linear pattern. Subsequently, remaining miRNAs which do not fit a linear model are further analyzed with two different non-linear methods: 1) cumulative distribution function (CDF) or 2) analysis of variance (ANOVA).
After both linear and non-linear analyses are completed, statistically significant miRNAs (P < 0.05) are plotted as heat maps using hierarchical cluster analysis and Euclidean distance matrix computation. mirnaTA is an open-source bioinformatics tool to aid scientists in identifying differentially expressed miRNAs which can be further mined for biological significance. It is expected to provide researchers with a means of going from raw data to statistical summaries in a fast and intuitive manner.
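The linear screening stage described above can be sketched as follows (a hypothetical simplification: mirnaTA itself applies NQT normalization and uses regression p-values rather than this illustrative R² cutoff).

```python
import numpy as np

# Hypothetical sketch of the linear screening step: fit a straight line
# to each miRNA's counts over time and flag the series whose variation
# is well explained by the linear model. The names and the R^2 cutoff
# are illustrative, not mirnaTA's actual criteria.
def screen_linear(counts, times, r2_cutoff=0.9):
    flags = {}
    t = np.asarray(times, dtype=float)
    A = np.vstack([t, np.ones_like(t)]).T        # design matrix [t, 1]
    for name, y in counts.items():
        y = np.asarray(y, dtype=float)
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ coef
        ss_res = float(resid @ resid)
        ss_tot = float(((y - y.mean()) ** 2).sum())
        r2 = 1.0 - ss_res / ss_tot if ss_tot > 0 else 0.0
        flags[name] = r2 >= r2_cutoff            # True -> linear trend
    return flags

flags = screen_linear(
    {"miR-up": [1, 2, 3, 4, 5], "miR-flat": [3, 1, 4, 1, 5]},
    times=[0, 1, 2, 3, 4],
)
print(flags)  # miR-up is flagged linear; miR-flat is not
```

Series that fail this screen would then move on to the non-linear (CDF or ANOVA) stage.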
NASA Astrophysics Data System (ADS)
Attal, M.; Hobley, D.; Cowie, P. A.; Whittaker, A. C.; Tucker, G. E.; Roberts, G. P.
2008-12-01
Prominent convexities in channel long profiles, or knickzones, are an expected feature of bedrock rivers responding to a change in the rate of base level fall driven by tectonic processes. In response to a change in relative uplift rate, the simple stream power model, which is characterized by a slope exponent equal to unity, predicts that knickzone retreat velocity is independent of uplift rate and that channel slope and uplift rate are linearly related along the reaches which have re-equilibrated with respect to the new uplift condition (i.e., downstream of the profile convexity). However, a threshold for erosion has been shown to introduce non-linearity between slope and uplift rate when associated with stochastic rainfall variability. We present field data regarding the height and retreat rates of knickzones in rivers upstream of active normal faults in the central Apennines, Italy, where excellent constraints exist on the temporal and spatial history of fault movement. The knickzones developed in response to an independently-constrained increase in fault throw rate 0.75 Ma. Channel characteristics and Shields stress values suggest that these rivers lie close to the detachment-limited end-member, but the knickzone retreat velocity (calculated from the time since fault acceleration) has been found to scale systematically with the known fault throw rates, even after accounting for differences in drainage area. In addition, the relationship between measured channel slope and relative uplift rate is non-linear, suggesting that a threshold for erosion might be effective in this setting. We use the Channel-Hillslope Integrated Landscape Development (CHILD) model to quantify the effect of such a threshold on river long profile development and knickzone retreat in response to tectonic perturbation.
In particular, we investigate the evolution of three Italian catchments of different sizes characterized by contrasting degrees of tectonic perturbation, using physically realistic threshold values based on sediment grain-size measurements along the studied rivers. We show that the threshold alone cannot account for field observations of the size, position and retreat rate of profile convexities, and that other factors neglected by the simple stream power law (e.g. the role of sediment) have to be invoked to explain the discrepancy between field observations and modeled topographies.
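The threshold effect invoked above can be made concrete with a minimal sketch of the detachment-limited stream-power law (all symbols and parameter values illustrative, not the CHILD implementation).

```python
# Detachment-limited stream-power incision with an erosion threshold:
# E = max(0, K * A**m * S**n - theta). Below the threshold nothing
# erodes, and the steady-state slope-uplift relation becomes non-linear.
# K, m, n, theta are illustrative values, not fitted to the Apennine data.
def incision_rate(A, S, K=1e-5, m=0.5, n=1.0, theta=0.0):
    return max(0.0, K * A**m * S**n - theta)

# At steady state (E == U) the channel slope follows by inversion; with
# a threshold the required slope is steeper than the threshold-free one.
def steady_slope(U, A, K=1e-5, m=0.5, n=1.0, theta=0.0):
    return ((U + theta) / (K * A**m)) ** (1.0 / n)

print(steady_slope(1e-3, 1e6))              # threshold-free
print(steady_slope(1e-3, 1e6, theta=5e-4))  # with threshold: steeper
```

Because the threshold enters additively, the slope-uplift curve flattens at high uplift rates, which is the non-linearity the field data are tested against.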
NASA Astrophysics Data System (ADS)
Oruganti, Pradeep Sharma; Krak, Michael D.; Singh, Rajendra
2018-01-01
Recently, Krak and Singh (2017) proposed a scientific experiment that examined vibro-impacts in a torsional system under a step-down excitation and provided preliminary measurements and limited non-linear model studies. A major goal of this article is to extend that prior work, focusing on the vibro-impact phenomena observed under step responses in a torsional system with one, two or three controlled clearances. First, new measurements are made at several locations with a higher sampling frequency. Measured angular accelerations are examined in both the time and time-frequency domains. Minimal-order non-linear models of the experiment are successfully constructed using piecewise linear stiffness and Coulomb friction elements; eight cases of the generic system are examined, though only three are experimentally studied. Measured and predicted responses for single and dual clearance configurations exhibit double-sided impacts, and their time-varying periods suggest softening trends under the step-down torque. The non-linear models are experimentally validated by comparing results with the new measurements and with those previously reported. Several metrics are utilized to quantify and compare the measured and predicted responses (including peak-to-peak accelerations). Eigensolutions and step responses of the corresponding linearized models are utilized to better understand the nature of the non-linear dynamic system. Finally, the effect of step amplitude on the non-linear responses is examined for several configurations, and hardening trends are observed in the torsional system with three clearances.
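A piecewise linear stiffness element of the kind used in these minimal-order models can be sketched as follows (a generic symmetric clearance of half-width b; the parameter values are assumed, not the experiment's).

```python
# Illustrative piecewise linear restoring torque with a symmetric
# clearance (backlash) of half-width b: inside the clearance the elastic
# torque vanishes, outside it grows linearly with stiffness k.
# Values of k and b are assumed for the example.
def clearance_torque(theta, k=1.0, b=0.1):
    if theta > b:
        return k * (theta - b)
    if theta < -b:
        return k * (theta + b)
    return 0.0   # inside the clearance: no elastic torque

print(clearance_torque(0.05))   # inside the dead zone -> 0.0
print(clearance_torque(0.3))    # beyond the clearance -> k*(0.3 - 0.1)
```

Chaining two or three such elements with Coulomb friction terms yields the one-, two- and three-clearance configurations compared against the measurements.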
Evaluation of a Linear Cumulative Damage Failure Model for Epoxy Adhesive
NASA Technical Reports Server (NTRS)
Richardson, David E.; Batista-Rodriquez, Alicia; Macon, David; Totman, Peter; McCool, Alex (Technical Monitor)
2001-01-01
Recently, a significant amount of work has been conducted to provide more complex and accurate material models for use in the evaluation of adhesive bondlines. Some of this has been prompted by recent studies into the effects of residual stresses on the integrity of bondlines. Several techniques have been developed for the analysis of bondline residual stresses. Key to these analyses is the criterion used for predicting failure. Residual stress loading of an adhesive bondline can occur over the life of the component; for many bonded systems, this can be several years. It is impractical to directly characterize failure of adhesive bondlines under a constant load for several years, so alternative approaches for predicting bondline failures are required. In the past, cumulative damage failure models have been developed, ranging from very simple to very complex. This paper documents the generation and evaluation of some of the simplest linear damage accumulation tensile failure models for an epoxy adhesive. It shows how several variations on the failure model were generated and presents an evaluation of the accuracy of these failure models in predicting creep failure of the adhesive. The paper shows that a simple failure model can be generated from short-term failure data for accurate predictions of long-term adhesive performance.
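A linear damage-accumulation rule of the simple kind evaluated here can be sketched with Miner-style bookkeeping (the stress levels and failure times below are hypothetical, not the paper's fitted epoxy data).

```python
# Miner-style linear damage accumulation: each dwell at a stress level
# contributes time-at-load divided by time-to-failure at that level, and
# failure is predicted when the accumulated damage reaches 1.
# The stress labels and failure times are hypothetical examples.
def accumulated_damage(history, time_to_failure):
    """history: list of (stress_level, duration); time_to_failure: dict."""
    return sum(dt / time_to_failure[level] for level, dt in history)

ttf = {"low": 1000.0, "high": 10.0}            # hypothetical failure times
D = accumulated_damage([("low", 250.0), ("high", 5.0)], ttf)
print(D, D >= 1.0)   # 0.75, not yet predicted to fail
```

The appeal of such a rule is exactly what the abstract notes: the per-level failure times come from short-term tests, yet the sum extrapolates to multi-year load histories.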
Treatment of systematic errors in land data assimilation systems
NASA Astrophysics Data System (ADS)
Crow, W. T.; Yilmaz, M.
2012-12-01
Data assimilation systems are generally designed to minimize the influence of random error on the estimation of system states. Yet, experience with land data assimilation systems has also revealed the presence of large systematic differences between model-derived and remotely-sensed estimates of land surface states. Such differences are commonly resolved prior to data assimilation through implementation of a pre-processing rescaling step whereby observations are scaled (or non-linearly transformed) to somehow "match" comparable predictions made by an assimilation model. While the rationale for removing systematic differences in means (i.e., bias) between models and observations is well-established, relatively little theoretical guidance is currently available to determine the appropriate treatment of higher-order moments during rescaling. This talk presents a simple analytical argument to define an optimal linear-rescaling strategy for observations prior to their assimilation into a land surface model. While a technique based on triple collocation theory is shown to replicate this optimal strategy, commonly-applied rescaling techniques (e.g., so-called "least-squares regression" and "variance matching" approaches) are shown to represent only sub-optimal approximations to it. Since the triple collocation approach is likely infeasible in many real-world circumstances, general advice for deciding between various feasible (yet sub-optimal) rescaling approaches will be presented, with an emphasis on the implications of this work for the case of directly assimilating satellite radiances. While the bulk of the analysis will deal with linear rescaling techniques, its extension to nonlinear cases will also be discussed.
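The "variance matching" strategy named above can be sketched in a few lines (illustrative data; the talk shows this is only a sub-optimal approximation to the triple-collocation-based optimum).

```python
import numpy as np

# Variance matching: linearly rescale the observations so their mean and
# standard deviation equal those of the model climatology before
# assimilation. The two series below are made-up illustrative samples.
def variance_match(obs, model):
    obs, model = np.asarray(obs, float), np.asarray(model, float)
    return (obs - obs.mean()) * (model.std() / obs.std()) + model.mean()

obs = np.array([0.1, 0.2, 0.3, 0.4])       # e.g. satellite soil moisture
model = np.array([0.25, 0.30, 0.35, 0.40])  # e.g. model soil moisture
scaled = variance_match(obs, model)
print(scaled.mean(), scaled.std())  # matches the model mean and std
```

Matching the first two moments removes bias and variance mismatch, but it treats all observation variance as signal, which is why it can deviate from the optimal linear rescaling derived in the talk.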
A squeezed light source operated under high vacuum
Wade, Andrew R.; Mansell, Georgia L.; Chua, Sheon S. Y.; Ward, Robert L.; Slagmolen, Bram J. J.; Shaddock, Daniel A.; McClelland, David E.
2015-01-01
Non-classical squeezed states of light are becoming increasingly important to a range of metrology and other quantum optics applications in cryptography, quantum computation and biophysics. Applications such as improving the sensitivity of advanced gravitational wave detectors and the development of space-based metrology and quantum networks will require robust deployable vacuum-compatible sources. To date non-linear photonics devices operated under high vacuum have been simple single pass systems, testing harmonic generation and the production of classically correlated photon pairs for space-based applications. Here we demonstrate the production under high-vacuum conditions of non-classical squeezed light with an observed 8.6 dB of quantum noise reduction down to 10 Hz. Demonstration of a resonant non-linear optical device, for the generation of squeezed light under vacuum, paves the way to fully exploit the advantages of in-vacuum operations, adapting this technology for deployment into new extreme environments. PMID:26657616
NASA Astrophysics Data System (ADS)
Tang, T. F.; Chong, S. H.
2017-06-01
This paper presents a practical controller design method for ultra-precision positioning of pneumatic artificial muscle actuator stages. Pneumatic artificial muscle (PAM) actuators are safe to use and have numerous advantages, which have brought them to wide application. However, PAM exhibits strongly non-linear characteristics that lead to low controllability and limit its application. In practice, the non-linear characteristics of the PAM mechanism are difficult to model precisely, and doing so accurately is time consuming. The purpose of the present study is to clarify a practical controller design method that emphasizes a simple design procedure that does not require plant parameter modeling, yet is able to demonstrate ultra-precision positioning performance for a PAM-driven stage. The practical control approach adopts continuous motion nominal characteristic trajectory following (CM NCTF) control as the feedback controller. The constructed PAM-driven stage has a low damping characteristic, which causes severe residual vibration that deteriorates the motion accuracy of the system. Therefore, the idea of increasing the damping characteristic by adding acceleration feedback compensation to the plant is proposed. The effectiveness of the proposed controller was verified experimentally and compared with a classical PI controller in point-to-point motion. The experimental results proved that the CM NCTF controller demonstrates better positioning performance, with smaller motion error than the PI controller. Overall, the CM NCTF controller successfully reduced the motion error to 3 µm, 88.7% smaller than that of the PI controller.
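The point-to-point benchmark used for the comparison can be illustrated with a discrete PI position loop on an assumed lightly damped second-order plant (not the PAM stage or the CM NCTF controller itself; the gains and plant parameters are hypothetical).

```python
# Illustrative point-to-point step test: a discrete PI position loop on
# an assumed second-order plant  x'' + 2*zeta*wn*x' + wn^2*x = u.
# All gains and plant parameters are hypothetical, for demonstration only.
def run_pi(kp=50.0, ki=30.0, dt=1e-3, n=20000, ref=1.0, wn=10.0, zeta=0.5):
    x = v = integ = 0.0
    for _ in range(n):
        e = ref - x                               # position error
        integ += e * dt                           # integral of error
        u = kp * e + ki * integ                   # PI control effort
        a = u - 2.0 * zeta * wn * v - wn**2 * x   # plant acceleration
        v += a * dt
        x += v * dt
    return x

print(run_pi())  # settles near the 1.0 reference
```

A PI loop like this has no explicit damping injection; the acceleration feedback compensation proposed in the paper addresses precisely the residual vibration such a loop leaves behind on a low-damping stage.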
Linear and non-linear dynamic models of a geared rotor-bearing system
NASA Technical Reports Server (NTRS)
Kahraman, Ahmet; Singh, Rajendra
1990-01-01
A three-degree-of-freedom non-linear model of a geared rotor-bearing system with gear backlash and radial clearances in rolling element bearings is proposed here. This reduced-order model can be used to describe the transverse-torsional motion of the system. It is justified by comparing the eigensolutions yielded by the corresponding linear model with finite element method results. The nature of the nonlinearities in the bearings is examined and two approximate nonlinear stiffness functions are proposed. These approximate bearing models are verified by comparing their frequency responses with the results given by the exact form of the nonlinearity. The proposed nonlinear dynamic model of the geared rotor-bearing system can be used to investigate its dynamic behavior and chaos.
Non-Linear System Identification for Aeroelastic Systems with Application to Experimental Data
NASA Technical Reports Server (NTRS)
Kukreja, Sunil L.
2008-01-01
Representation and identification of a non-linear aeroelastic pitch-plunge system as a model of the NARMAX class is considered. A non-linear difference equation describing this aircraft model is derived theoretically and shown to be of the NARMAX form. Identification methods for NARMAX models are applied to aeroelastic dynamics and their properties demonstrated via continuous-time simulations of experimental conditions. Simulation results show that (i) the outputs of the NARMAX model match closely those generated using continuous-time methods and (ii) NARMAX identification methods applied to aeroelastic dynamics provide accurate discrete-time parameter estimates. Application of NARMAX identification to experimental pitch-plunge dynamics data gives a high percent fit for cross-validated data.
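The identification step can be illustrated with a toy NARMAX-class example (an assumed first-order bilinear system, not the pitch-plunge model): the model is linear in its parameters, so least squares over the non-linear regressors recovers them.

```python
import numpy as np

# Toy NARMAX-style identification sketch on an assumed bilinear system
# (NOT the paper's pitch-plunge model): simulate
#   y(k) = a*y(k-1) + b*u(k-1) + c*y(k-1)*u(k-1)
# and recover (a, b, c) by linear least squares over the regressors.
rng = np.random.default_rng(1)
a, b, c = 0.5, 1.0, -0.1
u = rng.normal(scale=0.5, size=400)   # input sequence
y = np.zeros(400)
for k in range(1, 400):
    y[k] = a * y[k - 1] + b * u[k - 1] + c * y[k - 1] * u[k - 1]

Phi = np.column_stack([y[:-1], u[:-1], y[:-1] * u[:-1]])  # regressor matrix
theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
print(theta)  # noise-free data: recovers [0.5, 1.0, -0.1]
```

With noisy data the same regression gives the discrete-time parameter estimates whose accuracy the simulations above assess.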
Soft tissue modelling through autowaves for surgery simulation.
Zhong, Yongmin; Shirinzadeh, Bijan; Alici, Gursel; Smith, Julian
2006-09-01
Modelling of soft tissue deformation is of great importance to virtual reality based surgery simulation. This paper presents a new methodology for simulation of soft tissue deformation by drawing an analogy between autowaves and soft tissue deformation. The potential energy stored in a soft tissue as a result of a deformation caused by an external force is propagated among mass points of the soft tissue by non-linear autowaves. The novelty of the methodology is that (i) autowave techniques are established to describe the potential energy distribution of a deformation for extrapolating internal forces, and (ii) non-linear materials are modelled with non-linear autowaves rather than through geometric non-linearity. Integration with a haptic device has been achieved to simulate soft tissue deformation with force feedback. The proposed methodology not only deals with large-range deformations, but also accommodates isotropic, anisotropic and inhomogeneous materials by simply changing diffusion coefficients.
Simpson, G; Fisher, C; Wright, D K
2001-01-01
Continuing earlier studies into the relationship between the residual limb, liner and socket in transtibial amputees, we describe a geometrically accurate non-linear model simulating the donning of a liner and then a socket. The socket is rigid and rectified and the liner is a polyurethane geltype which is accurately described using non-linear (Mooney-Rivlin) material properties. The soft tissue of the residual limb is modelled as homogeneous, non-linear and hyperelastic and the bone structure within the residual limb is taken as rigid. The work gives an indication of how the stress induced by the process of donning the rigid socket is redistributed by the liner. Ultimately we hope to understand how the liner design might be modified to reduce discomfort. The ANSYS finite element code, version 5.6 is used.
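For reference, the two-parameter Mooney-Rivlin strain-energy function used for such gel liners has the standard incompressible form

```latex
W \;=\; C_{10}\,(\bar{I}_{1} - 3) \;+\; C_{01}\,(\bar{I}_{2} - 3),
```

where $\bar{I}_{1}$ and $\bar{I}_{2}$ are the first and second invariants of the isochoric left Cauchy-Green deformation tensor and $C_{10}$, $C_{01}$ are material constants fitted per material (the specific values used in the liner model are not stated here).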
NASA Astrophysics Data System (ADS)
Desai, Priyanka Subhash
Rheology properties are sensitive indicators of molecular structure and dynamics. The relationship between rheology and polymer dynamics is captured in the constitutive model, which, if accurate and robust, would greatly aid molecular design and polymer processing. This dissertation is thus focused on building accurate and quantitative constitutive models that can help predict linear and non-linear viscoelasticity. In this work, we have used a multi-pronged approach based on the tube theory, coarse-grained slip-link simulations, and advanced polymeric synthetic and characterization techniques, to confront some of the outstanding problems in entangled polymer rheology. First, we modified simple tube based constitutive equations in extensional rheology and developed functional forms to test the effect of Kuhn segment alignment on a) tube diameter enlargement and b) monomeric friction reduction between subchains. We, then, used these functional forms to model extensional viscosity data for polystyrene (PS) melts and solutions. We demonstrated that the idea of reduction in segmental friction due to Kuhn alignment is successful in explaining the qualitative difference between melts and solutions in extension as revealed by recent experiments on PS. Second, we compiled literature data and used it to develop a universal tube model parameter set and prescribed their values and uncertainties for 1,4-PBd by comparing linear viscoelastic G' and G" mastercurves for 1,4-PBds of various branching architectures. The high frequency transition region of the mastercurves superposed very well for all the 1,4-PBds irrespective of their molecular weight and architecture, indicating universality in high frequency behavior. Therefore, all three parameters of the tube model were extracted from this high frequency transition region alone. 
Third, we compared predictions of two versions of the tube model, Hierarchical model and BoB model against linear viscoelastic data of blends of 1,4-PBd star and linear melts. The star was carefully synthesized and characterized. We found massive failures of tube models to predict the terminal relaxation behavior of the star/linear blends. In addition, these blends were also tested against a coarse-grained slip-link model, the "Cluster Fixed Slip-link Model (CFSM)" of Schieber and coworkers. The CFSM with only two parameters gave excellent agreement with all experimental data for the blends.
Modelling acceptance of sunlight in high and low photovoltaic concentration
NASA Astrophysics Data System (ADS)
Leutz, Ralf
2014-09-01
A simple model incorporating linear radiation characteristics, along with the optical trains and geometrical concentration ratios of solar concentrators is presented with performance examples for optical trains of HCPV, LCPV and benchmark flat-plate PV.
Deconvolution of interferometric data using interior point iterative algorithms
NASA Astrophysics Data System (ADS)
Theys, C.; Lantéri, H.; Aime, C.
2016-09-01
We address the problem of deconvolution of astronomical images that could be obtained with future large interferometers in space. The presentation is made in two complementary parts. The first part gives an introduction to image deconvolution with linear and nonlinear algorithms. The emphasis is on nonlinear iterative algorithms that enforce the constraints of non-negativity and constant flux. The Richardson-Lucy algorithm appears there as a special case for photon counting conditions. More generally, the algorithm published recently by Lanteri et al. (2015) is based on scale invariant divergences without assumptions on the statistical model of the data. The two proposed algorithms are interior-point algorithms, the latter being more efficient in terms of speed of calculation. These algorithms are applied to the deconvolution of simulated images corresponding to an interferometric system of 16 diluted telescopes in space. Two non-redundant configurations, one arranged around a circle and the other on a hexagonal lattice, are compared for their effectiveness on a simple astronomical object. The comparison is made in the direct and Fourier spaces. Raw "dirty" images have many artifacts due to replicas of the original object. Linear methods cannot remove these replicas, while iterative methods clearly show their efficacy in these examples.
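As a minimal illustration of the Richardson-Lucy special case mentioned above (1-D with a circulant blur via FFT; the PSF, scene and iteration count are assumed), note how the multiplicative update preserves non-negativity and total flux.

```python
import numpy as np

# Minimal 1-D Richardson-Lucy sketch with a circulant (FFT) blur model.
# The PSF, test scene and iteration count are illustrative only.
# Starting from a positive guess, the multiplicative update keeps the
# estimate non-negative and, with a normalized PSF, conserves total flux.
def richardson_lucy(y, psf, n_iter=50):
    psf = psf / psf.sum()
    H = np.fft.rfft(psf, n=y.size)
    conv = lambda v: np.fft.irfft(np.fft.rfft(v) * H, n=v.size)          # H x
    corr = lambda v: np.fft.irfft(np.fft.rfft(v) * np.conj(H), n=v.size)  # H^T x
    x = np.full_like(y, y.mean())            # flat positive starting guess
    for _ in range(n_iter):
        x = x * corr(y / np.maximum(conv(x), 1e-12))
    return x

y_true = np.zeros(64); y_true[20] = 1.0; y_true[40] = 0.5   # two point sources
psf = np.zeros(64); psf[:3] = [0.25, 0.5, 0.25]             # small blur kernel
blurred = np.fft.irfft(np.fft.rfft(y_true) * np.fft.rfft(psf / psf.sum()), n=64)
x = richardson_lucy(blurred, psf)
print(x.sum())  # total flux conserved
```

The interior-point algorithms discussed in the paper enforce the same two constraints explicitly, rather than through the structure of the update.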
The surface-induced spatial-temporal structures in confined binary alloys
NASA Astrophysics Data System (ADS)
Krasnyuk, Igor B.; Taranets, Roman M.; Chugunova, Marina
2014-12-01
This paper examines surface-induced ordering in confined binary alloys. A hyperbolic initial boundary value problem (IBVP) is used to describe a scenario of spatiotemporal ordering in a disordered phase for the concentration of one component of the binary alloy and the order parameter, with non-linear dynamic boundary conditions. The hyperbolic model consists of two coupled second-order differential equations for the order parameter and concentration. It also takes into account “memory” effects on the ordering of atoms and on their densities in the alloy. The boundary conditions characterize the surface rates of change of the order parameter and concentration, which are due to surface (super)cooling at the walls confining the binary alloy. It is shown that for large times there are three classes of dynamic non-linear boundary conditions, which lead to three different types of attractor elements for the IBVP. Namely, the elements of the attractor are limit periodic simple shock waves with fronts of “discontinuities” Γ. If Γ is finite, the attractor contains spatiotemporal functions of relaxation type; if Γ is infinite and countable, we observe functions of pre-turbulent type; and if Γ is infinite and uncountable, we obtain functions of turbulent type.
Polarization Properties of A Broadband Multi-Moded Concentrator
NASA Technical Reports Server (NTRS)
Kogut, Alan; Fixsen, Dale J.; Hill, Robert S.
2015-01-01
We present the design and performance of a non-imaging concentrator for use in broad-band polarimetry at millimeter through submillimeter wavelengths. A rectangular geometry preserves the input polarization state as the concentrator couples f/2 incident optics to a 2pi sr detector. Measurements of the co-polar and cross-polar beams in both the few-mode and highly over-moded limits agree with a simple model based on mode truncation. The measured co-polar beam pattern is nearly independent of frequency in both linear polarizations. The cross-polar beam pattern is dominated by a uniform term corresponding to polarization efficiency 94%. After correcting for efficiency, the remaining cross-polar response is -18 dB.
Thermodynamic signatures for the existence of Dirac electrons in ZrTe5
Nair, Nityan L.; Dumitrescu, Philipp T.; Channa, Sanyum; ...
2017-09-12
We combine transport, magnetization, and torque magnetometry measurements to investigate the electronic structure of ZrTe5 and its evolution with temperature. At fields beyond the quantum limit, we observe a magnetization reversal from paramagnetic to diamagnetic response, which is characteristic of a Dirac semi-metal. We also observe a strong non-linearity in the magnetization that suggests the presence of additional low-lying carriers from other low-energy bands. Finally, we observe a striking sensitivity of the magnetic reversal to temperature that is not readily explained by simple band-structure models, but may be connected to a temperature-dependent Lifshitz transition proposed to exist in this material.
A nested observation and model approach to non linear groundwater surface water interactions.
NASA Astrophysics Data System (ADS)
van der Velde, Y.; Rozemeijer, J. C.; de Rooij, G. H.
2009-04-01
Surface water quality measurements in The Netherlands are scattered in time and space. Therefore, water quality status and its variations and trends are difficult to determine. In order to reach the water quality goals of the European Water Framework Directive, we need to improve our understanding of the dynamics of surface water quality and the processes that affect it. In heavily drained lowland catchments, groundwater influences the discharge towards the surface water network in many complex ways. In particular, a strongly seasonal contracting and expanding system of discharging ditches and streams affects discharge and solute transport. At a tube-drained field site, the tube drain flux and the combined flux of all other flow routes toward a 45 m stretch of surface water were measured for a year. Groundwater levels at various locations in the field and the discharge at two nested catchment scales were also monitored. The distinct responses of individual flow routes to rainfall events at the field site allowed us to separate the discharge at a 4 ha catchment and at a 6 km2 catchment into flow route contributions. The results of this nested experimental setup, combined with the results of a distributed hydrological model, have led to the formulation of a process model approach that focuses on the spatial variability of discharge generation driven by temporal and spatial variations in groundwater levels. The main idea of this approach is that discharge is not generated by catchment-average storages or groundwater heads, but mainly by point-scale extremes, i.e. extremely low permeability, extremely high groundwater heads or extremely low surface elevations, all leading to catchment discharge. We focused on describing the spatial extremes in point-scale storages, and this led to a simple and measurable expression that governs the non-linear groundwater-surface water interaction.
We will present the analysis of the field site data to demonstrate the potential of nested-scale, high frequency observations. The distributed hydrological model results will be used to show transient catchment scale relations between groundwater levels and discharges. These analyses lead to a simple expression that can describe catchment scale groundwater surface water interactions.
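The catchment-scale expression itself is not reproduced in this abstract. Purely as a hypothetical illustration of the "point-scale extremes" idea, one can integrate a spatial distribution of local drainage thresholds and obtain a smooth but strongly non-linear storage-discharge relation (the threshold distribution and conductance below are invented):

```python
import numpy as np

# Hypothetical illustration: local drainage thresholds vary across the
# catchment (normally distributed); a point contributes discharge only
# when the groundwater level h exceeds its local threshold.
rng = np.random.default_rng(0)
thresholds = rng.normal(loc=0.0, scale=0.3, size=10_000)  # m, illustrative
k = 2.0  # hypothetical point-scale conductance, 1/day

def catchment_discharge(h):
    # Sum of linear point-scale responses above threshold: the aggregate
    # is a smooth, strongly non-linear catchment-scale relation.
    excess = np.maximum(h - thresholds, 0.0)
    return k * excess.mean()

levels = np.linspace(-1.0, 1.0, 9)
q = np.array([catchment_discharge(h) for h in levels])
```

Even though every point responds linearly, the catchment-scale curve is convex: discharge is negligible at low groundwater levels and grows faster than linearly as more point-scale extremes are activated.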
NASA Astrophysics Data System (ADS)
Auger-Méthé, Marie; Field, Chris; Albertsen, Christoffer M.; Derocher, Andrew E.; Lewis, Mark A.; Jonsen, Ian D.; Mills Flemming, Joanna
2016-05-01
State-space models (SSMs) are increasingly used in ecology to model time series such as animal movement paths and population dynamics. This type of hierarchical model is often structured to account for two levels of variability: biological stochasticity and measurement error. SSMs are flexible. They can model linear and nonlinear processes using a variety of statistical distributions. Recent ecological SSMs are often complex, with a large number of parameters to estimate. Through a simulation study, we show that even simple linear Gaussian SSMs can suffer from parameter- and state-estimation problems. We demonstrate that these problems occur primarily when measurement error is larger than biological stochasticity, the condition that often drives ecologists to use SSMs. Using an animal movement example, we show how these estimation problems can affect ecological inference. Biased parameter estimates of an SSM describing the movement of polar bears (Ursus maritimus) result in overestimating their energy expenditure. We suggest potential solutions, but show that it often remains difficult to estimate parameters. While SSMs are powerful tools, they can give misleading results and we urge ecologists to assess whether the parameters can be estimated accurately before drawing ecological conclusions from their results.
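A minimal sketch of why this estimation is hard (synthetic data and simple moment-based estimators, not the article's likelihood machinery): in a random walk observed with error, the two variances are identified only through the autocovariance of first differences, so the process variance is recovered as a small difference of large quantities when measurement error dominates:

```python
import numpy as np

# A minimal linear Gaussian SSM (random walk observed with error):
#   x_t = x_{t-1} + eta_t,  eta ~ N(0, q)   (biological stochasticity)
#   y_t = x_t + eps_t,      eps ~ N(0, r)   (measurement error)
rng = np.random.default_rng(1)
q_true, r_true, n = 0.1, 1.0, 500   # measurement error >> process noise
x = np.cumsum(rng.normal(0, np.sqrt(q_true), n))
y = x + rng.normal(0, np.sqrt(r_true), n)

# Moment-based identification from first differences of y:
#   Var(dy) = q + 2r,   Cov(dy_t, dy_{t+1}) = -r
dy = np.diff(y)
r_hat = -np.cov(dy[:-1], dy[1:])[0, 1]
q_hat = dy.var() - 2 * r_hat
```

With r ten times q, the estimate of q inherits the sampling noise of two much larger quantities; it is imprecise and can even come out negative, a toy version of the estimation problems the study documents.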
NASA Astrophysics Data System (ADS)
Wawerzinek, B.; Ritter, J. R. R.; Roy, C.
2013-08-01
We analyse travel times of shear waves, which were recorded at the MAGNUS network, to determine the 3D shear wave velocity (vS) structure underneath Southern Scandinavia. The travel time residuals are corrected for the known crustal structure of Southern Norway and weighted to account for data quality and pick uncertainties. The resulting residual pattern of subvertically incident waves is very uniform and simple. It shows delayed arrivals underneath Southern Norway compared to fast arrivals underneath the Oslo Graben and the Baltic Shield. The 3D upper mantle vS structure underneath the station network is determined by performing non-linear travel time tomography. As expected from the residual pattern, the resulting tomographic model shows a simple and continuous vS perturbation pattern: a negative vS anomaly is visible underneath Southern Norway relative to the Baltic Shield in the east, with a contrast of up to 4% vS and a sharp W-E dipping transition zone. Reconstruction tests reveal, besides some vertical smearing, good lateral reconstruction of the dipping vS transition zone and suggest that a deep-seated anomaly at 330-410 km depth is real and not an inversion artefact. The upper part of the reduced vS anomaly underneath Southern Norway (down to 250 km depth) might be due to an increase in lithospheric thickness from the Caledonian Southern Scandes in the west towards the Proterozoic Baltic Shield in Sweden in the east. The deeper-seated negative vS anomaly (330-410 km depth) could be caused by a temperature anomaly, possibly combined with effects due to fluids or hydrous minerals. The determined simple 3D vS structure underneath Southern Scandinavia indicates that mantle processes might influence and contribute to a Neogene uplift of Southern Norway.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abolfath, R; Bronk, L; Titt, U.
2016-06-15
Purpose: Recent clonogenic cell survival and γH2AX studies suggest proton relative biological effectiveness (RBE) may be a non-linear function of linear energy transfer (LET) in the distal edge of the Bragg peak and beyond. We sought to develop a multiscale model to account for non-linear response phenomena to aid in the optimization of intensity-modulated proton therapy. Methods: The model is based on first-principle simulations of proton track structures, including secondary ions, and an analytical derivation of the dependence on particle LET of the linear-quadratic (LQ) model parameters α and β. The derived formulas are an extension of the microdosimetric kinetic (MK) model that captures dissipative track structures and the non-Poissonian distribution of DNA damage at the distal edge of the Bragg peak and beyond. Monte Carlo simulations were performed to confirm the non-linear dose-response characteristics arising from the non-Poisson distribution of initial DNA damage. Results: In contrast to low-LET segments of the proton depth dose, from the beam entrance to the Bragg peak, strong deviations from non-dissipative track structures and the Poisson distribution of ionization events in the Bragg peak distal edge govern the non-linear cell response and result in the transformation α = (1 + c_1 L)α_x + 2(c_0 L + c_2 L^2)(1 + c_1 L)β_x and β = (1 + c_1 L)^2 β_x. Here L is the charged-particle LET, and c_0, c_1, and c_2 are functions of microscopic parameters that can serve as fitting parameters for cell-survival data. In the low-LET limit c_1 and c_2 are negligible, hence the linear model proposed by Wilkens-Oelfke and used in proton treatment planning systems can be retrieved. The present model fits well the clonogenic survival data recently measured by our group at MDACC. Conclusion: The present hybrid method provides higher accuracy in calculating the RBE-weighted dose in the target and normal tissues.
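The LET transformation above is straightforward to evaluate numerically. In this sketch the reference photon parameters and the c-coefficients are illustrative placeholders, not values fitted by the authors:

```python
import numpy as np

# Hedged sketch of the abstract's LET-dependent LQ transformation:
#   alpha(L) = (1 + c1*L)*alpha_x + 2*(c0*L + c2*L**2)*(1 + c1*L)*beta_x
#   beta(L)  = (1 + c1*L)**2 * beta_x
# All parameter values below are illustrative, not from the paper.
alpha_x, beta_x = 0.15, 0.05     # photon reference LQ parameters (Gy^-1, Gy^-2)
c0, c1, c2 = 0.01, 0.02, 0.001   # hypothetical fit coefficients

def lq_params(L):
    alpha = (1 + c1 * L) * alpha_x + 2 * (c0 * L + c2 * L**2) * (1 + c1 * L) * beta_x
    beta = (1 + c1 * L) ** 2 * beta_x
    return alpha, beta

def survival(dose, L):
    # LQ cell survival S = exp(-(alpha*D + beta*D^2))
    a, b = lq_params(L)
    return np.exp(-(a * dose + b * dose**2))

# In the low-LET limit the photon parameters are recovered exactly.
a0, b0 = lq_params(0.0)
```

As the abstract states, setting L to zero (negligible c_1, c_2 terms) recovers the photon LQ parameters, while increasing L raises both α and β and lowers survival at fixed dose.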
Niroomandi, S; Alfaro, I; Cueto, E; Chinesta, F
2012-01-01
Model reduction techniques have been shown to constitute a valuable tool for real-time simulation in surgical environments and other fields. However, some limitations imposed by real-time constraints have not yet been overcome. One such limitation is the severe time constraint (a resolution frequency of 500 Hz) that precludes the use of Newton-like schemes for solving non-linear models such as those usually employed for modeling biological tissues. In this work we present a technique able to deal with geometrically non-linear models, based on model reduction techniques together with an efficient non-linear solver. Examples of the performance of the technique are given. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
Nonlinear Dynamic Models in Advanced Life Support
NASA Technical Reports Server (NTRS)
Jones, Harry
2002-01-01
To facilitate analysis, ALS systems are often assumed to be linear and time invariant, but they usually have important nonlinear and dynamic aspects. Nonlinear dynamic behavior can be caused by time varying inputs, changes in system parameters, nonlinear system functions, closed loop feedback delays, and limits on buffer storage or processing rates. Dynamic models are usually cataloged according to the number of state variables. The simplest dynamic models are linear, using only integration, multiplication, addition, and subtraction of the state variables. A general linear model with only two state variables can produce all the possible dynamic behavior of linear systems with many state variables, including stability, oscillation, or exponential growth and decay. Linear systems can be described using mathematical analysis. Nonlinear dynamics can be fully explored only by computer simulations of models. Unexpected behavior is produced by simple models having only two or three state variables with simple mathematical relations between them. Closed loop feedback delays are a major source of system instability. Exceeding limits on buffer storage or processing rates forces systems to change operating mode. Different equilibrium points may be reached from different initial conditions. Instead of one stable equilibrium point, the system may have several equilibrium points, oscillate at different frequencies, or even behave chaotically, depending on the system inputs and initial conditions. The frequency spectrum of an output oscillation may contain harmonics and the sums and differences of input frequencies, but it may also contain a stable limit cycle oscillation not related to input frequencies. We must investigate the nonlinear dynamic aspects of advanced life support systems to understand and counter undesirable behavior.
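As a concrete instance of the two-state behaviour described above, a Van der Pol oscillator (a standard textbook non-linear system, not an ALS model) settles onto a stable limit cycle whose amplitude is set by the system itself rather than by the input or the initial condition:

```python
import numpy as np

# Van der Pol oscillator, a classic two-state non-linear model:
#   x' = v,   v' = mu*(1 - x^2)*v - x
# integrated with explicit forward Euler from a small perturbation.
mu, dt, steps = 1.0, 0.001, 60_000
x, v = 0.1, 0.0
xs = np.empty(steps)
for i in range(steps):
    # both right-hand sides use the old (x, v), as explicit Euler requires
    x, v = x + dt * v, v + dt * (mu * (1 - x * x) * v - x)
    xs[i] = x

# The trajectory grows from the 0.1 start and locks onto a limit cycle
# of amplitude close to 2, regardless of the initial condition.
amp_late = np.abs(xs[-10_000:]).max()
```

This is the "stable limit cycle oscillation not related to input frequencies" of the abstract in its simplest form: only two state variables and simple relations between them, yet a qualitatively non-linear outcome that no linear two-state model can produce.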
High Fidelity Modeling of Field Reversed Configuration (FRC) Thrusters
2017-04-22
signatures which can be used for direct, non-invasive, comparison with experimental diagnostics can be produced. This research will be directly... experimental campaign is critical to developing general design philosophies for low-power plasmoid formation, the complexity of non-linear plasma processes... advanced space propulsion. The work consists of numerical method development, physical model development, and systematic studies of the non-linear
Colbourn, E A; Roskilly, S J; Rowe, R C; York, P
2011-10-09
This study investigated the utility and potential advantages of gene expression programming (GEP)--a new development in evolutionary computing for modelling data and automatically generating equations that describe the cause-and-effect relationships in a system--for four types of pharmaceutical formulation, and compared the models with those generated by neural networks, a technique now widely used in formulation development. Both methods were capable of discovering subtle and non-linear relationships within the data, with no requirement for the user to specify the functional forms to be used. Although the neural networks rapidly developed models with higher ANOVA R(2) values, these were black-box models that provided little insight into the key relationships. However, GEP, although significantly slower at developing models, generated relatively simple equations describing the relationships that could be interpreted directly. The results indicate that GEP can be considered an effective and efficient modelling technique for formulation data. Copyright © 2011 Elsevier B.V. All rights reserved.
Non-contact thrust stand calibration method for repetitively pulsed electric thrusters.
Wong, Andrea R; Toftul, Alexandra; Polzin, Kurt A; Pearson, J Boise
2012-02-01
A thrust stand calibration technique for use in testing repetitively pulsed electric thrusters for in-space propulsion has been developed and tested using a modified hanging pendulum thrust stand. In the implementation of this technique, current pulses are applied to a solenoid to produce a pulsed magnetic field that acts against a permanent magnet mounted to the thrust stand pendulum arm. The force on the magnet is applied in this non-contact manner, with the entire pulsed force transferred to the pendulum arm through a piezoelectric force transducer to provide a time-accurate force measurement. Modeling of the pendulum arm dynamics reveals that after an initial transient in thrust stand motion the quasi-steady average deflection of the thrust stand arm away from the unforced or "zero" position can be related to the average applied force through a simple linear Hooke's law relationship. Modeling demonstrates that this technique is universally applicable except when the pulsing period is increased to the point where it approaches the period of natural thrust stand motion. Calibration data were obtained using a modified hanging pendulum thrust stand previously used for steady-state thrust measurements. Data were obtained for varying impulse bit at constant pulse frequency and for varying pulse frequency. The two data sets exhibit excellent quantitative agreement with each other. The overall error on the linear regression fit used to determine the calibration coefficient was roughly 1%.
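The quasi-steady Hooke's-law relation described above reduces calibration to a linear fit of mean applied force against mean deflection. The numbers in this sketch are synthetic stand-ins, not measured thrust stand data:

```python
import numpy as np

# Synthetic calibration data: mean deflection responds linearly to mean
# applied force, x_mean = F_mean / k_eff, plus small measurement noise.
k_true = 35.0                                   # N/m, hypothetical stiffness
f_mean = np.array([0.5, 1.0, 1.5, 2.0, 2.5])    # N, mean applied force
rng = np.random.default_rng(2)
x_mean = f_mean / k_true + rng.normal(0, 1e-4, f_mean.size)  # m, deflection

# The calibration coefficient is the slope of the force-deflection line.
k_hat, offset = np.polyfit(x_mean, f_mean, 1)
```

The fitted slope recovers the effective stiffness, and a near-zero intercept is a useful sanity check on the "zero" (unforced) position, mirroring the ~1% regression error quoted in the abstract.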
Predicting protein decomposition: the case of aspartic-acid racemization kinetics.
Collins, M J; Waite, E R; van Duin, A C
1999-01-01
The increase in the proportion of the non-biological (D-) isomer of aspartic acid (Asp) relative to the L-isomer has been widely used in archaeology and geochemistry as a tool for dating. The method has proved controversial, particularly when used for bones. The non-linear kinetics of Asp racemization have prompted a number of suggestions as to the underlying mechanism(s) and have led to the use of mathematical transformations which linearize the increase in D-Asp with respect to time. Using one example, the suggestion that the initial rapid phase of Asp racemization is due to a contribution from asparagine (Asn), we demonstrate how a simple model of the degradation and racemization of Asn can be used to predict the observed kinetics. A more complex model of peptide-bound Asx (Asn + Asp) racemization, which occurs via the formation of a cyclic succinimide (Asu), can be used to correctly predict Asx racemization kinetics in proteins at high temperatures (95-140 degrees C). The model fails to predict racemization kinetics in dentine collagen at 37 degrees C. The reason for this is that Asu formation is highly conformation dependent and is predicted to occur extremely slowly in triple-helical collagen. As conformation strongly influences the rate of Asu formation and hence Asx racemization, the use of extrapolation from high temperatures to estimate racemization kinetics of Asx in proteins below their denaturation temperature is called into question. In the case of archaeological bone, we argue that the D:L ratio of Asx reflects the proportion of non-helical to helical collagen, overlain by the effects of leaching of more soluble (and conformationally unconstrained) peptides. Thus, racemization kinetics in bone are potentially unpredictable, and the proposed use of Asx racemization to estimate the extent of DNA depurination in archaeological bones is challenged. PMID:10091247
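The linearizing transformation mentioned above can be made concrete for ideal first-order reversible racemization, and adding a second, faster pool (as Asn-derived Asp would provide) breaks that linearity. The rate constants and pool fractions below are invented for the illustration:

```python
import numpy as np

# Ideal reversible racemization L <-> D with rate k gives D/L = tanh(k*t),
# so the classic transform ln((1 + D/L) / (1 - D/L)) = 2*k*t is linear.
k = 0.002                         # 1/year, hypothetical rate constant
t = np.linspace(0.0, 500.0, 11)
dl_single = np.tanh(k * t)
transform_single = np.log((1 + dl_single) / (1 - dl_single))

# A two-pool mixture (a fast Asn-derived component plus a slow bulk pool,
# fractions invented) reproduces an initial rapid phase, and its transform
# is no longer linear in time.
dl_mixed = 0.2 * np.tanh(20 * k * t) + 0.8 * np.tanh(k * t)
transform_mixed = np.log((1 + dl_mixed) / (1 - dl_mixed))
```

For the single pool the transform is exactly 2kt; for the mixture the early slope greatly exceeds the late slope, which is the qualitative signature of the observed non-linear Asp kinetics.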
Nitrogen in the Baltic Sea--policy implications of stock effects.
Hart, Rob; Brady, Mark
2002-09-01
We develop an optimal control model for cost-effective management of pollution, including two state variables, pollution stock and ecosystem quality. We apply it to Baltic Sea pollution by nitrogen leachates from agriculture. We present a sophisticated, non-linear model of leaching abatement costs, and a simple model of nitrogen stocks. We find that significant abatement is achievable at reasonable cost, despite the countervailing effects of existing agricultural policies such as price supports. Successful abatement should lead to lower nitrogen stocks in the sea in 5 years or less. However, the rate of ecosystem recovery is less certain. The results are highly dependent on the rate of self-cleaning of the Baltic Sea, and less so on the discount rate. Choice of target has a radical effect on the abatement path chosen. Cost-effectiveness demands such a choice, and should therefore be used with care when stock effects are present.
A Membrane Model from Implicit Elasticity Theory
Freed, A. D.; Liao, J.; Einstein, D. R.
2014-01-01
A Fungean solid is derived for membranous materials as a body defined by isotropic response functions whose mathematical structure is that of a Hookean solid where the elastic constants are replaced by functions of state derived from an implicit, thermodynamic, internal-energy function. The theory utilizes Biot’s (1939) definitions for stress and strain that, in 1-dimension, are the stress/strain measures adopted by Fung (1967) when he postulated what is now known as Fung’s law. Our Fungean membrane model is parameterized against a biaxial data set acquired from a porcine pleural membrane subjected to three, sequential, proportional, planar extensions. These data support an isotropic/deviatoric split in the stress and strain-rate hypothesized by our theory. These data also demonstrate that the material response is highly non-linear but, otherwise, mechanically isotropic. These data are described reasonably well by our otherwise simple, four-parameter, material model. PMID:24282079
Minimal string theories and integrable hierarchies
NASA Astrophysics Data System (ADS)
Iyer, Ramakrishnan
Well-defined, non-perturbative formulations of the physics of string theories in specific minimal or superminimal model backgrounds can be obtained by solving matrix models in the double scaling limit. They provide us with the first examples of completely solvable string theories. Despite being relatively simple compared to higher dimensional critical string theories, they furnish non-perturbative descriptions of interesting physical phenomena such as geometrical transitions between D-branes and fluxes, tachyon condensation and holography. The physics of these theories in the minimal model backgrounds is succinctly encoded in a non-linear differential equation known as the string equation, along with an associated hierarchy of integrable partial differential equations (PDEs). The bosonic string in (2,2m-1) conformal minimal model backgrounds and the type 0A string in (2,4 m) superconformal minimal model backgrounds have the Korteweg-de Vries system, while type 0B in (2,4m) backgrounds has the Zakharov-Shabat system. The integrable PDE hierarchy governs flows between backgrounds with different m. In this thesis, we explore this interesting connection between minimal string theories and integrable hierarchies further. We uncover the remarkable role that an infinite hierarchy of non-linear differential equations plays in organizing and connecting certain minimal string theories non-perturbatively. We are able to embed the type 0A and 0B (A,A) minimal string theories into this single framework. The string theories arise as special limits of a rich system of equations underpinned by an integrable system known as the dispersive water wave hierarchy. We find that there are several other string-like limits of the system, and conjecture that some of them are type IIA and IIB (A,D) minimal string backgrounds. We explain how these and several other string-like special points arise and are connected. 
In some cases, the framework endows the theories with a non-perturbative definition for the first time. Notably, we discover that the Painleve IV equation plays a key role in organizing the string theory physics, joining its siblings, Painleve I and II, whose roles have previously been identified in this minimal string context. We then present evidence that the conjectured type II theories have smooth non-perturbative solutions, connecting two perturbative asymptotic regimes, in a 't Hooft limit. Our technique also demonstrates evidence for new minimal string theories that are not apparent in a perturbative analysis.
Testing hypotheses for differences between linear regression lines
Stanley J. Zarnoch
2009-01-01
Five hypotheses are identified for testing differences between simple linear regression lines. The distinctions between these hypotheses are based on a priori assumptions and illustrated with full and reduced models. The contrast approach is presented as an easy and complete method for testing for overall differences between the regressions and for making pairwise...
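The full-and-reduced-model idea can be sketched with synthetic data. The hypothesis tested here is "one common line" against "two separate lines"; the group sizes, coefficients, and noise level are invented for the example:

```python
import numpy as np

# Two groups of (x, y) data generated from lines with different
# intercepts and slopes, plus Gaussian noise.
rng = np.random.default_rng(3)
x1 = np.linspace(0, 10, 30); y1 = 1.0 + 2.0 * x1 + rng.normal(0, 0.5, 30)
x2 = np.linspace(0, 10, 30); y2 = 3.0 + 2.5 * x2 + rng.normal(0, 0.5, 30)

def rss(X, y):
    # residual sum of squares of an ordinary least-squares fit
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return ((y - X @ beta) ** 2).sum()

x = np.concatenate([x1, x2]); y = np.concatenate([y1, y2])
g = np.r_[np.zeros(30), np.ones(30)]                 # group indicator
reduced = np.column_stack([np.ones(60), x])          # one common line
full = np.column_stack([np.ones(60), x, g, g * x])   # separate lines
rss_r, rss_f = rss(reduced, y), rss(full, y)
df1, df2 = 2, 60 - 4                                 # 2 restrictions tested
F = ((rss_r - rss_f) / df1) / (rss_f / df2)
```

A large F statistic (compared against an F distribution with df1 and df2 degrees of freedom) rejects the reduced model, i.e. the two regressions differ; the same full/reduced construction extends to testing equal slopes only or equal intercepts only.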
Predicting musically induced emotions from physiological inputs: linear and neural network models.
Russo, Frank A; Vempala, Naresh N; Sandstrom, Gillian M
2013-01-01
Listening to music often leads to physiological responses. Do these physiological responses contain sufficient information to infer the emotion induced in the listener? The current study explores this question by attempting to predict judgments of "felt" emotion from physiological responses alone, using linear and neural network models. We measured five channels of peripheral physiology from 20 participants: heart rate (HR), respiration, galvanic skin response, and activity in the corrugator supercilii and zygomaticus major facial muscles. Using valence and arousal (VA) dimensions, participants rated their felt emotion after listening to each of 12 classical music excerpts. After extracting features from the five channels, we examined their correlation with VA ratings, and then performed multiple linear regression to see if a linear combination of the physiological responses could account for the ratings. Although linear models predicted a significant amount of variance in arousal ratings, they were unable to do so with valence ratings. We then used a neural network to provide a non-linear account of the ratings. The network was trained on the mean ratings of eight of the 12 excerpts and tested on the remainder. The performance of the neural network confirms that physiological responses alone can be used to predict musically induced emotion. The non-linear model derived from the neural network was more accurate than the linear models derived from multiple linear regression, particularly along the valence dimension. A secondary analysis allowed us to quantify the relative contributions of inputs to the non-linear model. The study represents a novel approach to understanding the complex relationship between physiological responses and musically induced emotion.
NASA Astrophysics Data System (ADS)
Glass, Alexis; Fukudome, Kimitoshi
2004-12-01
A sound recording of a plucked string instrument is encoded and resynthesized using two stages of prediction. In the first stage, a simple physical model of a plucked string is estimated and the instrument excitation is obtained. The second stage compensates for the simplicity of the model in the first stage by encoding either the instrument excitation or the model error using warped linear prediction. These two methods of compensation are compared with each other and with single-stage warped linear prediction; adjustments are introduced, and their applications to instrument synthesis and MPEG-4 audio compression within the structured audio format are discussed.
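A minimal plucked-string physical model of the kind used in such a first prediction stage can be sketched in the Karplus-Strong style. This is a generic textbook model, not the authors' estimated one; the residual between a recording and such a model's output is what a second, warped-LP stage would encode:

```python
import numpy as np

def karplus_strong(n_samples, period, rng):
    # Delay line seeded with a noise burst (the "pluck"); the two-point
    # averaging loop filter damps the vibration, high frequencies first.
    buf = rng.uniform(-1.0, 1.0, period)
    out = np.empty(n_samples)
    for i in range(n_samples):
        out[i] = buf[i % period]
        buf[i % period] = 0.5 * (buf[i % period] + buf[(i + 1) % period])
    return out

rng = np.random.default_rng(6)
tone = karplus_strong(8000, 100, rng)   # ~441 Hz tone at a 44.1 kHz rate
```

The synthetic tone decays like a plucked string: later samples carry far less energy than the initial burst, while the pitch is fixed by the delay-line length.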
ERIC Educational Resources Information Center
van der Linden, Wim J.
Latent class models for mastery testing differ from continuum models in that they do not postulate a latent mastery continuum but conceive mastery and non-mastery as two latent classes, each characterized by different probabilities of success. Several researchers use a simple latent class model that is basically a simultaneous application of the…
Adding flexibility to the search for robust portfolios in non-linear water resource planning
NASA Astrophysics Data System (ADS)
Tomlinson, James; Harou, Julien
2017-04-01
To date, robust optimisation of water supply systems has sought to find portfolios or strategies that are robust to a range of uncertainties or scenarios. The search for a single portfolio that is robust in all scenarios is necessarily suboptimal compared to portfolios optimised for a single deterministic future scenario. By contrast, establishing a separate portfolio for each future scenario is unhelpful to the planner, who must make a single decision today under deep uncertainty. In this work we show that a middle ground is possible by allowing a small number of different portfolios to be found, each robust to a different subset of the global scenarios. We use evolutionary algorithms and a simple water resource system model to demonstrate this approach. The primary contribution is to demonstrate that flexibility can be added to the search for portfolios in complex non-linear systems, at the expense of complete robustness across all future scenarios. In this context we define flexibility as the ability to design a portfolio in which some decisions are delayed, but those decisions that are not delayed are themselves shown to be robust to the future. We recognise that some decisions in our portfolio are more important than others. An adaptive portfolio is found by allowing no flexibility for these near-term "important" decisions, while maintaining flexibility in the remaining longer-term decisions. In this sense we create an effective two-stage decision process for a non-linear water resource supply system. We show how this reduces a measure of regret versus the inflexible robust solution for the same system.
Analysis of Nonlinear Dynamics in Linear Compressors Driven by Linear Motors
NASA Astrophysics Data System (ADS)
Chen, Liangyuan
2018-03-01
The analysis of the dynamic characteristics of the mechatronic system is of great significance for linear motor design and control. Steady-state nonlinear response characteristics of a linear compressor are investigated theoretically based on linearized and nonlinear models. First, the influence factors, considering the nonlinear gas force load, were analyzed. Then, a simple linearized model was set up to analyze the influence on the stroke and resonance frequency. Finally, the nonlinear model was set up to analyze the effects of piston mass, spring stiffness and driving force as examples of design parameter variation. The simulation results show that the stroke can be controlled by adjusting the excitation amplitude, frequency and other parameters, that the equilibrium position can be adjusted through the DC input, and that for the most efficient operation the operating frequency must always equal the resonance frequency.
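The linearized stroke analysis can be illustrated with the steady-state amplitude of a driven mass-spring-damper, a stand-in for the compressor's moving assembly (all parameter values below are arbitrary illustrations, not compressor data):

```python
import numpy as np

# Driven mass-spring-damper: steady-state stroke amplitude
#   X(w) = F0 / sqrt((k - m*w^2)^2 + (c*w)^2)
m, c, k = 0.5, 2.0, 5000.0   # kg, N*s/m, N/m (illustrative values)
F0 = 10.0                    # N, drive force amplitude
w0 = np.sqrt(k / m)          # undamped natural (resonance) frequency, rad/s

def stroke(w):
    return F0 / np.sqrt((k - m * w**2) ** 2 + (c * w) ** 2)

w = np.linspace(0.5 * w0, 1.5 * w0, 201)
amps = stroke(w)
```

The amplitude curve peaks essentially at the resonance frequency for this lightly damped case, which is the linearized version of the abstract's conclusion that efficient operation requires driving at resonance; the stroke also scales directly with the excitation amplitude F0.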
Bounded Linear Stability Analysis - A Time Delay Margin Estimation Approach for Adaptive Control
NASA Technical Reports Server (NTRS)
Nguyen, Nhan T.; Ishihara, Abraham K.; Krishnakumar, Kalmanje Srinivas; Bakhtiari-Nejad, Maryam
2009-01-01
This paper presents a method for estimating time delay margin for model-reference adaptive control of systems with almost linear structured uncertainty. The bounded linear stability analysis method seeks to represent the conventional model-reference adaptive law by a locally bounded linear approximation within a small time window using the comparison lemma. The locally bounded linear approximation of the combined adaptive system is cast in a form of an input-time-delay differential equation over a small time window. The time delay margin of this system represents a local stability measure and is computed analytically by a matrix measure method, which provides a simple analytical technique for estimating an upper bound of time delay margin. Based on simulation results for a scalar model-reference adaptive control system, both the bounded linear stability method and the matrix measure method are seen to provide a reasonably accurate and yet not too conservative time delay margin estimation.
A penalized framework for distributed lag non-linear models.
Gasparrini, Antonio; Scheipl, Fabian; Armstrong, Ben; Kenward, Michael G
2017-09-01
Distributed lag non-linear models (DLNMs) are a modelling tool for describing potentially non-linear and delayed dependencies. Here, we illustrate an extension of the DLNM framework through the use of penalized splines within generalized additive models (GAM). This extension offers built-in model selection procedures and the possibility of accommodating assumptions on the shape of the lag structure through specific penalties. In addition, this framework includes, as special cases, simpler models previously proposed for linear relationships (DLMs). Alternative versions of penalized DLNMs are compared with each other and with the standard unpenalized version in a simulation study. Results show that this penalized extension to the DLNM class provides greater flexibility and improved inferential properties. The framework exploits recent theoretical developments of GAMs and is implemented using efficient routines within freely available software. Real-data applications are illustrated through two reproducible examples in time series and survival analysis. © 2017 The Authors Biometrics published by Wiley Periodicals, Inc. on behalf of International Biometric Society.
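As a much-simplified sketch of the DLM special case mentioned above (the article's penalized-spline machinery lives in GAM software), a distributed lag model with the lag-coefficient curve constrained to a low-order polynomial basis can be fitted by ordinary least squares on synthetic data:

```python
import numpy as np

# Synthetic exposure series with a smooth decaying lag effect over L lags.
rng = np.random.default_rng(4)
n, L = 400, 10
xs = rng.normal(size=n + L)
true_curve = np.exp(-np.arange(L + 1) / 3.0)   # lag-coefficient curve
X = np.column_stack([xs[L - l : n + L - l] for l in range(L + 1)])
y = X @ true_curve + rng.normal(0, 0.5, n)

# Constrain the lag curve to a cubic polynomial basis B: fit theta in
# y ~ (X @ B) theta, then recover the smooth curve as B @ theta.
B = np.vander(np.arange(L + 1), 4, increasing=True)
theta, *_ = np.linalg.lstsq(X @ B, y, rcond=None)
curve_hat = B @ theta
```

The basis constraint trades a few smoothness assumptions for far fewer parameters (4 instead of 11 here); the penalized DLNMs of the article replace this fixed polynomial basis with penalized splines and let the data choose the effective smoothness.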
Plant uptake of elements in soil and pore water: field observations versus model assumptions.
Raguž, Veronika; Jarsjö, Jerker; Grolander, Sara; Lindborg, Regina; Avila, Rodolfo
2013-09-15
Contaminant concentrations in various edible plant parts transfer hazardous substances from polluted areas to animals and humans. Thus, the accurate prediction of plant uptake of elements is of significant importance. The processes involved contain many interacting factors and are, as such, complex. In contrast, the most common way to currently quantify element transfer from soils into plants is relatively simple, using an empirical soil-to-plant transfer factor (TF). This practice is based on theoretical assumptions that have previously been shown not to be generally valid. Using field data on concentrations of 61 basic elements in spring barley, soil and pore water at four agricultural sites in mid-eastern Sweden, we quantify element-specific TFs. Our aim is to investigate to what extent observed element-specific uptake is consistent with TF model assumptions and to what extent TFs can be used to predict observed differences in concentrations between different plant parts (root, stem and ear). Results show that for most elements, plant-ear concentrations are not linearly related to bulk soil concentrations, which is congruent with previous studies. This behaviour violates a basic TF model assumption of linearity. However, substantially better linear correlations are found when weighted average element concentrations in whole plants are used for TF estimation. The highest number of linearly-behaving elements was found when relating average plant concentrations to soil pore-water concentrations. In contrast to other elements, essential elements (micronutrients and macronutrients) exhibited relatively small differences in concentration between different plant parts. Generally, the TF model was shown to work reasonably well for micronutrients, whereas it did not for macronutrients. The results also suggest that plant uptake of elements from sources other than the soil compartment (e.g. from air) may be non-negligible. Copyright © 2013 Elsevier Ltd. All rights reserved.
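The TF linearity assumption tested in this study can be sketched numerically. The concentrations below are hypothetical, not the paper's measurements; the point is only that a constant TF implies a linear (proportional) relation between plant and soil concentrations.

```python
import numpy as np

def transfer_factor(c_plant, c_soil):
    """Empirical soil-to-plant transfer factor; assumes C_plant = TF * C_soil."""
    return np.asarray(c_plant) / np.asarray(c_soil)

# Hypothetical concentrations (mg/kg) of one element at four sites
c_soil = np.array([10.0, 20.0, 40.0, 80.0])
c_plant = np.array([1.1, 1.9, 4.2, 7.8])
tf = transfer_factor(c_plant, c_soil)
# If the linear TF assumption holds, the per-site TFs are nearly constant
# and plant vs soil concentrations correlate strongly.
r = np.corrcoef(c_soil, c_plant)[0, 1]
```

For elements violating the assumption (as the study found for most elements when using ear concentrations), the per-site TFs scatter widely and the correlation degrades.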
Tseng, Huan-Chang; Wu, Jiann-Shing; Chang, Rong-Yeu
2010-04-28
Small-amplitude oscillatory shear flows exhibiting the classic phase-shift characteristic were studied using non-equilibrium molecular dynamics simulations of n-hexadecane fluids. In a suitable range of strain amplitude, the fluid exhibits significant linear viscoelastic behavior. Non-linear viscoelastic behavior of strain thinning, in which the dynamic moduli decrease monotonically with increasing strain amplitude, was found at extreme strain amplitudes. Under isobaric conditions, temperature strongly affected both the range of linear viscoelasticity and the slope of strain thinning. The fluid's phase states, comprising solid-, liquid- and gel-like states, can be distinguished through a criterion based on the viscoelastic spectrum. As a result, a particular condition was obtained under which the viscoelastic behavior of n-hexadecane molecules approaches that of the Rouse chain. More importantly, evidence was presented that the material is thermorheologically simple, in that the relaxation modulus obeys the time-temperature superposition principle. Accordingly, using shift factors from the time-temperature superposition principle, the estimated Arrhenius flow activation energy was in good agreement with related experimental values. Furthermore, a single relaxation-modulus master curve clearly exhibited both the transition and terminal zones. In particular, for non-equilibrium thermodynamic states, variations in density with respect to frequency were revealed.
Forecasting Pell Program Applications Using Structural Aggregate Models.
ERIC Educational Resources Information Center
Cavin, Edward S.
1995-01-01
Demand for Pell Grant financial aid has become difficult to predict when using the current microsimulation model. This paper proposes an alternative model that uses aggregate data (based on individuals' microlevel decisions and macrodata on family incomes, college costs, and opportunity wages) and avoids some limitations of simple linear models.…
Synthetic data sets for the identification of key ingredients for RNA-seq differential analysis.
Rigaill, Guillem; Balzergue, Sandrine; Brunaud, Véronique; Blondet, Eddy; Rau, Andrea; Rogier, Odile; Caius, José; Maugis-Rabusseau, Cathy; Soubigou-Taconnat, Ludivine; Aubourg, Sébastien; Lurin, Claire; Martin-Magniette, Marie-Laure; Delannoy, Etienne
2018-01-01
Numerous statistical pipelines are now available for the differential analysis of gene expression measured with RNA-sequencing technology. Most of them are based on similar statistical frameworks after normalization, differing primarily in the choice of data distribution, mean and variance estimation strategy and data filtering. We propose an evaluation of the impact of these choices when few biological replicates are available through the use of synthetic data sets. This framework is based on real data sets and allows the exploration of various scenarios differing in the proportion of non-differentially expressed genes. Hence, it provides an evaluation of the key ingredients of the differential analysis, free of the biases associated with the simulation of data using parametric models. Our results show the relevance of a proper modeling of the mean by using linear or generalized linear modeling. Once the mean is properly modeled, the impact of the other parameters on the performance of the test is much less important. Finally, we propose to use the simple visualization of the raw P-value histogram as a practical evaluation criterion of the performance of differential analysis methods on real data sets. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
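The suggested diagnostic, the raw P-value histogram, can be sketched with simulated values. The mixture proportions and the Beta distribution for the differentially expressed genes are illustrative assumptions, not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(42)
# Hypothetical mix: 90% non-DE genes (uniform P-values), 10% DE (P-values near 0)
p_null = rng.uniform(size=9000)
p_de = rng.beta(0.2, 5.0, size=1000)   # concentrated near zero
p = np.concatenate([p_null, p_de])
counts, edges = np.histogram(p, bins=20, range=(0.0, 1.0))
# A healthy analysis yields a flat histogram with a spike in the first bin;
# mis-modelled means or variances visibly distort this shape.
first_bin_excess = counts[0] / counts[1:].mean()
```

A histogram that dips near zero, slopes upward, or is U-shaped signals a mis-specified test, which is why the authors propose this simple visual check on real data sets.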
NASA Astrophysics Data System (ADS)
Luu, Keurfon; Noble, Mark; Gesret, Alexandrine; Belayouni, Nidhal; Roux, Pierre-François
2018-04-01
Seismic traveltime tomography is an optimization problem that requires large computational efforts. Therefore, linearized techniques are commonly used for their low computational cost. These local optimization methods are likely to get trapped in a local minimum as they critically depend on the initial model. On the other hand, global optimization methods based on Markov chain Monte Carlo (MCMC) are insensitive to the initial model but turn out to be computationally expensive. Particle Swarm Optimization (PSO) is a rather new global optimization approach with few tuning parameters that has shown excellent convergence rates and is straightforwardly parallelizable, allowing a good distribution of the workload. However, while it can traverse several local minima of the evaluated misfit function, classical implementations of PSO can get trapped in local minima at later iterations as particle inertia dims. We propose a Competitive PSO (CPSO) to help particles escape from local minima with a simple implementation that improves the swarm's diversity. The model space can be sampled by running the optimizer multiple times and keeping all the models explored by the swarms in the different runs. A traveltime tomography algorithm based on CPSO is successfully applied to a real 3D data set in the context of induced seismicity.
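A standard gbest PSO, the baseline that CPSO modifies, can be sketched as follows. The competitive, diversity-preserving step of CPSO is not reproduced here; the objective function, bounds and tuning parameters are illustrative choices, not the tomography misfit.

```python
import numpy as np

def pso(f, bounds, n_particles=30, n_iter=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Standard 1D gbest PSO; the paper's competitive variant adds a
    diversity-preserving mechanism on top of this loop."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, n_particles)
    v = np.zeros(n_particles)
    pbest, pbest_f = x.copy(), f(x)
    gbest = pbest[np.argmin(pbest_f)]
    for _ in range(n_iter):
        r1, r2 = rng.random(n_particles), rng.random(n_particles)
        # Inertia term w*v is what "dims" at later iterations, risking stagnation
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        fx = f(x)
        improved = fx < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], fx[improved]
        gbest = pbest[np.argmin(pbest_f)]
    return gbest

def rastrigin(x):
    """Multimodal test function with global minimum at x = 0."""
    return x**2 + 10.0 * (1.0 - np.cos(2.0 * np.pi * x))

best = pso(rastrigin, (-5.12, 5.12))
```

Running the optimizer with several seeds and pooling all visited models is the sampling strategy described in the abstract.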
NASA Astrophysics Data System (ADS)
Mitra, Anindita; Li, Y.-F.; Shimizu, T.; Klämpfl, Tobias; Zimmermann, J. L.; Morfill, G. E.
2012-10-01
Cold Atmospheric Plasma (CAP) is a fast, low-cost, simple, easy-to-handle technology for biological applications. Our group has developed a number of different CAP devices using microwave technology and surface micro discharge (SMD) technology. In this study, FlatPlaSter2.0 is used for microbial inactivation at different treatment times (0.5 to 5 min). There is a continuous demand for the deactivation of microorganisms associated with raw foods/seeds without losing their properties. This research focuses on the kinetics of CAP-induced inactivation of naturally occurring surface microorganisms on seeds. The data were assessed against log-linear and non-log-linear models for survivor curves as a function of time. The Weibull model showed the best fitting performance, and no shoulder or tail was observed. The models are expressed in terms of the number of log cycles of reduction rather than classical D-values, with statistical measures of fit. The viability of seeds was not affected for CAP treatment times up to 3 min with our device. The optimum result was observed at 1 min, with the germination percentage increasing from 60.83% to 89.16% compared to the control. These results suggest the advantages and promising role of CAP in the food industry.
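The Weibull survivor model favoured by this study can be sketched and fitted by linearization. The treatment times and parameter values below are hypothetical, not the paper's measurements.

```python
import numpy as np

def weibull_log_reduction(t, delta, p):
    """Weibull survivor model: log10(N0/N) = (t/delta)^p.
    p < 1 gives concave (tailing-like) curves, p > 1 convex ones."""
    return (t / delta) ** p

# Hypothetical survivor data (treatment time in minutes vs log10 reductions)
t = np.array([0.5, 1.0, 2.0, 3.0, 5.0])
delta_true, p_true = 1.2, 0.8
reductions = weibull_log_reduction(t, delta_true, p_true)
# Linearize: log(R) = p*log(t) - p*log(delta), then fit by least squares
slope, intercept = np.polyfit(np.log(t), np.log(reductions), 1)
p_hat = slope
delta_hat = np.exp(-intercept / slope)
```

Here `delta` plays the role the abstract describes: the time for the first log cycle of reduction, replacing the classical D-value of strictly log-linear kinetics.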
Review of Statistical Methods for Analysing Healthcare Resources and Costs
Mihaylova, Borislava; Briggs, Andrew; O'Hagan, Anthony; Thompson, Simon G
2011-01-01
We review statistical methods for analysing healthcare resource use and costs, their ability to address skewness, excess zeros, multimodality and heavy right tails, and their ease for general use. We aim to provide guidance on analysing resource use and costs focusing on randomised trials, although methods often have wider applicability. Twelve broad categories of methods were identified: (I) methods based on the normal distribution, (II) methods following transformation of data, (III) single-distribution generalized linear models (GLMs), (IV) parametric models based on skewed distributions outside the GLM family, (V) models based on mixtures of parametric distributions, (VI) two (or multi)-part and Tobit models, (VII) survival methods, (VIII) non-parametric methods, (IX) methods based on truncation or trimming of data, (X) data components models, (XI) methods based on averaging across models, and (XII) Markov chain methods. Based on this review, our recommendations are that, first, simple methods are preferred in large samples where the near-normality of sample means is assured. Second, in somewhat smaller samples, relatively simple methods, able to deal with one or two of above data characteristics, may be preferable but checking sensitivity to assumptions is necessary. Finally, some more complex methods hold promise, but are relatively untried; their implementation requires substantial expertise and they are not currently recommended for wider applied work. Copyright © 2010 John Wiley & Sons, Ltd. PMID:20799344
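Category (VI), the two-part model, can be sketched on simulated cost data with excess zeros and a heavy right tail. The cost-generating distribution and its parameters are illustrative assumptions, not data from any trial.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
# Hypothetical cost data: excess zeros plus a skewed right tail (log-normal)
is_user = rng.random(n) < 0.3                  # 30% of patients incur any cost
costs = np.where(is_user, rng.lognormal(mean=6.0, sigma=1.0, size=n), 0.0)

# Two-part model: P(cost > 0) times the mean of the positive costs
p_any = (costs > 0).mean()
mean_pos = costs[costs > 0].mean()
expected_cost = p_any * mean_pos
# This equals the overall sample mean by construction; the value of the
# decomposition is that each part can be modelled on its own scale
# (e.g. logistic regression for p_any, a GLM for the positive costs).
naive_mean = costs.mean()
```

In practice each part is a regression on covariates, which is what lets the two-part family handle the zeros and the skewness separately.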
Data mining for the analysis of hippocampal zones in Alzheimer's disease
NASA Astrophysics Data System (ADS)
Ovando Vázquez, Cesaré M.
2012-02-01
In this work, a methodology to classify people with Alzheimer's Disease (AD), Healthy Controls (HC) and people with Mild Cognitive Impairment (MCI) is presented. This methodology consists of an ensemble of Support Vector Machines (SVMs) with hippocampal boxes (HBs) as input data; these hippocampal zones are extracted from Magnetic Resonance Imaging (MRI) and Positron Emission Tomography (PET) images. Two ways of constructing this ensemble are presented: the first uses linear SVM models and the second non-linear SVM models. Results demonstrate that the linear models classify HBs between HC and MCI more accurately than the non-linear models, and that there are no differences between HC and AD.
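The per-box ensemble idea can be sketched with simple stand-ins. Least-squares linear classifiers replace the linear SVMs here to keep the sketch dependency-free, and the feature data are synthetic, not MRI/PET-derived.

```python
import numpy as np

rng = np.random.default_rng(7)
n_per_class, n_boxes, d = 100, 3, 10
# Hypothetical features for two groups (e.g. HC vs MCI), one block per "box"
X0 = rng.normal(0.0, 1.0, (n_per_class, n_boxes * d))
X1 = rng.normal(0.6, 1.0, (n_per_class, n_boxes * d))
X = np.vstack([X0, X1])
y = np.r_[-np.ones(n_per_class), np.ones(n_per_class)]

# One linear classifier per box (least-squares stand-in for a linear SVM),
# combined by majority vote across boxes
votes = np.zeros(len(y))
for b in range(n_boxes):
    Xb = X[:, b * d:(b + 1) * d]
    A = np.column_stack([Xb, np.ones(len(y))])   # add intercept column
    w, *_ = np.linalg.lstsq(A, y, rcond=None)
    votes += np.sign(A @ w)
accuracy = (np.sign(votes) == y).mean()
```

Swapping each box classifier for a linear or kernel SVM gives the two ensemble variants compared in the paper.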
Brane SUSY breaking and the gravitino mass
NASA Astrophysics Data System (ADS)
Kitazawa, Noriaki
2018-04-01
Supergravity models with spontaneously broken supersymmetry have been widely investigated over the years, together with some notable non-linear limits. Although in these models the gravitino naturally becomes massive by absorbing the degrees of freedom of a Nambu-Goldstone fermion, there are cases in which the naive counting of degrees of freedom does not apply, in particular because of the absence of explicit gravitino mass terms in the unitary gauge. The corresponding models require non-trivial de Sitter-like backgrounds, and it becomes of interest to clarify the fate of their Nambu-Goldstone modes. We elaborate on the fact that these non-trivial backgrounds can accommodate, consistently, gravitino fields carrying a number of degrees of freedom that is intermediate between those of massless and massive fields in a flat spacetime. For instance, in a simple supergravity model of this type with a de Sitter background, the overall degrees of freedom of the gravitino are as many as for a massive spin-3/2 field in flat spacetime, while the gravitino remains massless in the sense that it undergoes null-cone propagation in the stereographic picture. On the other hand, in the ten-dimensional USp(32) Type I Sugimoto model with "brane SUSY breaking", which requires a more complicated background, the degrees of freedom of the gravitino are half as many as those of a massive one, and yet it somehow behaves again as a massless one.
Hidden Connections between Regression Models of Strain-Gage Balance Calibration Data
NASA Technical Reports Server (NTRS)
Ulbrich, Norbert
2013-01-01
Hidden connections between regression models of wind tunnel strain-gage balance calibration data are investigated. These connections become visible whenever balance calibration data is supplied in its design format and both the Iterative and Non-Iterative Method are used to process the data. First, it is shown how the regression coefficients of the fitted balance loads of a force balance can be approximated by using the corresponding regression coefficients of the fitted strain-gage outputs. Then, data from the manual calibration of the Ames MK40 six-component force balance is chosen to illustrate how estimates of the regression coefficients of the fitted balance loads can be obtained from the regression coefficients of the fitted strain-gage outputs. The study illustrates that load predictions obtained by applying the Iterative or the Non-Iterative Method originate from two related regression solutions of the balance calibration data as long as balance loads are given in the design format of the balance, gage outputs behave highly linearly, strict statistical quality metrics are used to assess regression models of the data, and regression model term combinations of the fitted loads and gage outputs can be obtained by a simple variable exchange.
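The connection between the two regression solutions can be sketched for an idealized, highly linear two-component balance. The sensitivity matrix, load range and noise level below are hypothetical, not the MK40 calibration data.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 50
# Hypothetical 2-component balance: loads F produce near-linear gage outputs
C = np.array([[2.0, 0.3],
              [0.1, 1.5]])          # assumed gage sensitivity matrix
F = rng.uniform(-1.0, 1.0, (n, 2))  # applied calibration loads
R = F @ C + 1e-6 * rng.normal(size=(n, 2))  # gage outputs, tiny noise

# Fit outputs from loads (design direction) and loads from outputs (use direction)
C_hat, *_ = np.linalg.lstsq(F, R, rcond=None)
B_hat, *_ = np.linalg.lstsq(R, F, rcond=None)
# For highly linear gages the two regression solutions are related by matrix
# inversion: B_hat is approximately the inverse of C_hat, so the fitted-load
# coefficients can be recovered from the fitted-output coefficients.
```

With higher-order terms in the regression models, the exchange is more involved than a plain inverse, which is the situation the paper analyzes; the linear case above shows the underlying connection.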
Quantum networks in divergence-free circuit QED
NASA Astrophysics Data System (ADS)
Parra-Rodriguez, A.; Rico, E.; Solano, E.; Egusquiza, I. L.
2018-04-01
Superconducting circuits are among the leading platforms for quantum technologies. With growing system complexity, it is of crucial importance to develop scalable circuit models that contain the minimum information required to predict the behaviour of the physical system. Based on microwave engineering methods, divergent and non-divergent Hamiltonian models in circuit quantum electrodynamics have been proposed to explain the dynamics of superconducting quantum networks coupled to infinite-dimensional systems, such as transmission lines and general impedance environments. Here, we study systematically common linear coupling configurations between networks and infinite-dimensional systems. The main result is that the simple Lagrangian models for these configurations present an intrinsic natural length that provides a natural ultraviolet cutoff. This length is due to the unavoidable dressing of the environment modes by the network. In this manner, the coupling parameters between their components correctly manifest their natural decoupling at high frequencies. Furthermore, we show the requirements to correctly separate infinite-dimensional coupled systems in local bases. We also compare our analytical results with other analytical and approximate methods available in the literature. Finally, we propose several applications of these general methods to analogue quantum simulation of multi-spin-boson models in non-perturbative coupling regimes.
Stability and Interaction of Coherent Structure in Supersonic Reactive Wakes
NASA Technical Reports Server (NTRS)
Menon, Suresh
1983-01-01
A theoretical formulation and analysis is presented for a study of the stability and interaction of coherent structure in reacting free shear layers. The physical problem under investigation is a premixed hydrogen-oxygen reacting shear layer in the wake of a thin flat plate. The coherent structure is modeled as a periodic disturbance and its stability is determined by the application of linearized hydrodynamic stability theory, which results in a generalized eigenvalue problem for reactive flows. A detailed stability analysis of the reactive wake for neutral, symmetrical and antisymmetrical disturbances is presented. The reactive stability criteria are shown to be quite different from classical non-reactive stability. The interaction between the mean flow, coherent structure and fine-scale turbulence is theoretically formulated using the von Kármán integral technique. Both time-averaging and conditional phase averaging are necessary to separate the three types of motion. The resulting integro-differential equations can then be solved subject to initial conditions with appropriate shape functions. In the laminar-flow transition region of interest, the spatial interaction between the mean motion and coherent structure is calculated for both non-reactive and reactive conditions and compared with experimental data wherever available. The fine-scale turbulent motion is determined by the application of integral analysis to the fluctuation equations. Since at present this turbulence model is still untested, turbulence is modeled in the interaction problem by a simple algebraic eddy viscosity model. The applicability of the integral turbulence model formulated here is studied parametrically by integrating these equations for the simple case of self-similar mean motion with assumed shape functions. The effect of the motion of the coherent structure is studied, and very good agreement is obtained with previous experimental and theoretical works for non-reactive flow. For the reactive case, a lack of experimental data made direct comparison difficult. It was determined that the growth rate of the disturbance amplitude is lower for the reactive case. The results indicate that the reactive flow stability is in qualitative agreement with experimental observation.
Model predictive control of non-linear systems over networks with data quantization and packet loss.
Yu, Jimin; Nan, Liangsheng; Tang, Xiaoming; Wang, Ping
2015-11-01
This paper studies a model predictive control (MPC) approach for non-linear systems in a networked environment where both data quantization and packet loss may occur. The non-linear controlled plant in the networked control system (NCS) is represented by a Takagi-Sugeno (T-S) model. The sensed data and control signal are quantized in both links and described as sector-bound uncertainties by applying the sector bound approach. The quantized data are then transmitted over the communication network and may suffer from packet losses, which are modeled as a Bernoulli process. A fuzzy predictive controller which guarantees the stability of the closed-loop system is obtained by solving a set of linear matrix inequalities (LMIs). A numerical example is given to illustrate the effectiveness of the proposed method. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
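The sector-bound description of quantization can be sketched with a logarithmic quantizer, for which |q(v) - v| <= delta*|v|, combined with Bernoulli packet loss. The quantization density and loss probability below are illustrative choices; the controller synthesis via LMIs is not reproduced.

```python
import numpy as np

def log_quantizer(v, delta=0.2):
    """Logarithmic quantizer with levels rho^i, rho = (1-delta)/(1+delta).
    Satisfies the sector bound |q(v) - v| <= delta * |v|."""
    v = np.asarray(v, dtype=float)
    rho = (1.0 - delta) / (1.0 + delta)
    out = np.zeros_like(v)
    nz = v != 0
    a = np.abs(v[nz])
    # Choose level index i so that rho^i lies in [v*(1-delta), v*(1+delta))
    i = np.floor((np.log(a) + np.log(1.0 - delta)) / np.log(rho))
    out[nz] = np.sign(v[nz]) * rho**i
    return out

rng = np.random.default_rng(5)
v = rng.uniform(-10.0, 10.0, 1000)
q = log_quantizer(v)
sector_ok = np.all(np.abs(q - v) <= 0.2 * np.abs(v) * (1.0 + 1e-9))
# Bernoulli packet loss: each quantized sample arrives with probability 0.9,
# a lost packet is received as zero (one common modelling convention)
received = np.where(rng.random(1000) < 0.9, q, 0.0)
```

The sector bound is what lets the quantization error be absorbed into the LMI conditions as a norm-bounded uncertainty, and the Bernoulli indicator is the standard packet-loss model in the abstract.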
Mukherjee, Shalini; Yadav, Rajeev; Yung, Iris; Zajdel, Daniel P.; Oken, Barry S.
2011-01-01
Objectives To determine 1) whether heart rate variability (HRV) is a sensitive and reliable measure in mental effort tasks carried out by healthy seniors and 2) whether non-linear approaches to HRV analysis, in addition to traditional time and frequency domain approaches, are useful for studying such effects. Methods Forty healthy seniors performed two visual working memory tasks requiring different levels of mental effort while ECG was recorded. They underwent the same tasks and recordings two weeks later. Traditional indices and 13 non-linear indices of HRV, including Poincaré, entropy and detrended fluctuation analysis (DFA) measures, were determined. Results Time domain (especially the mean R-R interval, RRI), frequency domain and, among the non-linear parameters, Poincaré and DFA were the most reliable indices. Mean RRI, time domain and Poincaré indices were also the most sensitive to different mental effort task loads and had the largest effect sizes. Conclusions Overall, linear measures were the most sensitive and reliable indices of mental effort. Among non-linear measures, Poincaré was the most reliable and sensitive, suggesting possible usefulness as an independent marker in cognitive function tasks in healthy seniors. Significance A large number of HRV parameters were both reliable and sensitive indices of mental effort, although the simple linear methods were the most sensitive. PMID:21459665
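The Poincaré descriptors found to be reliable here can be computed directly from an R-R series using the standard SD1/SD2 formulas. The simulated series below is a hypothetical stand-in for recorded ECG intervals.

```python
import numpy as np

def poincare_sd(rr):
    """Poincare plot descriptors from successive R-R intervals (ms):
    SD1 captures short-term (beat-to-beat) variability,
    SD2 captures long-term variability."""
    rr = np.asarray(rr, dtype=float)
    d = np.diff(rr)
    sd1 = np.sqrt(np.var(d, ddof=1) / 2.0)
    sd2 = np.sqrt(max(2.0 * np.var(rr, ddof=1) - np.var(d, ddof=1) / 2.0, 0.0))
    return sd1, sd2

rng = np.random.default_rng(11)
# Hypothetical R-R series: mean 800 ms, SD 50 ms (uncorrelated for simplicity)
rr = 800.0 + 50.0 * rng.standard_normal(500)
sd1, sd2 = poincare_sd(rr)
```

For an uncorrelated series SD1 and SD2 both approach the overall SD; real HRV shows SD2 > SD1, and changes in the SD1/SD2 ratio are what track effort-related autonomic shifts.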
Combining global and local approximations
NASA Technical Reports Server (NTRS)
Haftka, Raphael T.
1991-01-01
A method based on a linear approximation to a scaling factor, designated the 'global-local approximation' (GLA) method, is presented and shown to be capable of extending the range of usefulness of derivative-based approximations to a more refined model. The GLA approach refines the conventional scaling factor by means of a linearly varying, rather than constant, scaling factor. The capabilities of the method are demonstrated for a simple beam example with a crude and a more refined finite element model.
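The linearly varying scaling factor can be sketched in one dimension. The crude and refined models below are arbitrary stand-ins, not the beam example from the paper.

```python
import numpy as np

def f_refined(x):
    return np.sin(x) + 0.05 * x**2   # "expensive" refined model

def f_crude(x):
    return np.sin(x)                 # cheap crude model

x0, h = 1.0, 1e-6
beta0 = f_refined(x0) / f_crude(x0)
# Derivative of the scaling factor at x0 via a finite difference
beta1 = (f_refined(x0 + h) / f_crude(x0 + h) - beta0) / h

def gla(x):
    """Global-local approximation: crude model times a linearly varying scale."""
    return f_crude(x) * (beta0 + beta1 * (x - x0))

def constant_scaling(x):
    """Conventional approach: crude model times a constant scale factor."""
    return f_crude(x) * beta0

x_test = 1.5
err_gla = abs(gla(x_test) - f_refined(x_test))
err_const = abs(constant_scaling(x_test) - f_refined(x_test))
```

Away from the matching point the linearly varying factor tracks the refined model noticeably better than the constant factor, which is the extension of range the abstract claims.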
Tackling non-linearities with the effective field theory of dark energy and modified gravity
NASA Astrophysics Data System (ADS)
Frusciante, Noemi; Papadomanolakis, Georgios
2017-12-01
We present the extension of the effective field theory framework to mildly non-linear scales. The effective field theory approach has been successfully applied to the late-time cosmic acceleration phenomenon and has been shown to be a powerful method for obtaining predictions about cosmological observables on linear scales. However, mildly non-linear scales need to be consistently considered when testing gravity theories because a large part of the data comes from those scales. Thus, non-linear corrections to predictions on observables coming from the linear analysis can help in discriminating among different gravity theories. We proceed firstly by identifying the operators which need to be included in the effective field theory Lagrangian in order to go beyond linear order in perturbations, and then we construct the corresponding non-linear action. Moreover, we present the complete recipe to map any single-field dark energy and modified gravity model into the non-linear effective field theory framework by considering a general action in the Arnowitt-Deser-Misner formalism. To illustrate this recipe we map the beyond-Horndeski theory and low-energy Hořava gravity into the effective field theory formalism. As a final step we derive the fourth-order action in terms of the curvature perturbation. This allows us to identify the non-linear contributions coming from the linear-order perturbations, which at the next order act like source terms. Moreover, we confirm that the stability requirements, ensuring the positivity of the kinetic term and the speed of propagation of the scalar mode, are automatically satisfied once the viability of the theory is demanded at the linear level. The approach we present here will allow one to construct, in a model-independent way, all the relevant predictions for observables at mildly non-linear scales.